Augmented and Virtual Reality in Social Learning
Augmented and Virtual Reality
Edited by Vishal Jain
Volume 3
Augmented and Virtual Reality in Social Learning Technological Impacts and Challenges Edited by Rajendra Kumar, Vishal Jain, Ahmed A. Elngar and Ahed Al-Haraizah
Editors Dr. Rajendra Kumar Sharda University 32, 34 APJ Abdul Kalam Rd Greater Noida 201310 Uttar Pradesh India [email protected] Dr. Vishal Jain Associate Professor, Department of Computer Science and Engineering, School of Engineering and Technology, Sharda University, Greater Noida, U.P., India [email protected]
Dr. Ahmed A. Elngar Faculty of Computers and Artificial Intelligence Beni-Suef University Salah Salem Str 62511 Beni-Suef City Egypt [email protected] Dr. Ahed Al-Haraizah Oman College P.O. Box 680 320 Halban Barka Al Batinah South Oman [email protected]
ISBN 978-3-11-099492-6 e-ISBN (PDF) 978-3-11-098144-5 e-ISBN (EPUB) 978-3-11-098149-0 Library of Congress Control Number: 2023940906 Bibliographic information published by the Deutsche Nationalbibliothek The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the internet at http://dnb.dnb.de. © 2024 Walter de Gruyter GmbH, Berlin/Boston Cover image: Thinkhubstudio/iStock/Getty Images Plus Typesetting: Integra Software Services Pvt. Ltd. Printing and binding: CPI books GmbH, Leck www.degruyter.com
Preface

This book focuses on the design, development, and analysis of augmented and virtual reality (AR/VR)-based systems, along with their technological impacts and challenges in social learning. AR is an enhanced view of real physical-world objects, achieved by applying digitally projected elements, sound, or other sensory stimuli delivered using information technology infrastructure. VR, on the other hand, can be defined as a computer-simulated experience used to visualize real-world things. The major applications of VR include gaming and entertainment, training, education, and business (as used by Tommy Hilfiger, Coach, and Gap). For example, using VR, a 360-degree experience of products attracts customers, who can try on clothes virtually. Several such VR applications exist on social media: Facebook has launched Horizon Worlds, a VR world of avatars, and Meta envisions a virtual world that connects digital avatars for work, entertainment, and so on using VR headsets.

AR is a pleasant way to increase engagement and interaction by providing a deeper user experience. Recent research has revealed that AR increases the perceived value of products. An attractive implementation of AR conveys innovation and responsiveness and encourages customers to think about a product. AR makes new ways of storytelling and creative expression possible, with experiences related to our homes, offices, and public spaces. AR facilitates the superimposition and integration of digital information into our physical environments. For example, Google's AR search on a smartphone can bring the outside-world experience home through a virtual tour with three-dimensional objects, such as animals, trees, and hills, in our living room, and can be used to collaborate with the avatars of remote friends and relatives. The use of AR has a big scope in social media: Facebook uses AR camera effects to let people interact with products in the feed. AR, a powerful visualization tool, is no longer just a technology; rather, it is about defining how we wish to live in the physical world with this platform and how we can create significant experiences to enrich humanity.

The contributors of this book aim to provide their findings and analysis on AR/VR systems and to explore the ongoing practices, new ways of design, and sustainable implementations of AR/VR systems, along with their impacts on society and the associated challenges. The contributors also focus on the robust support of AR/VR systems by state-of-the-art technologies like the internet of things (IoT), blockchain, big data, 5G, and Li-Fi.

The book contains 14 chapters discussing the basic fundamentals of social learning in AR and VR environments, including supporting technologies, healthcare, tourism, and case studies.

Chapter 1 presents VR applications and challenges in social learning. The applications presented belong to the metaverse, social media, neuropathy and psychology, training, and distance education fields. Each application is covered with references to the most recent research in the field.
https://doi.org/10.1515/9783110981445-202
Chapter 2 presents the integration of blockchain techniques in AR and VR. It examines hybrid VR/AR/blockchain technologies that have helped businesses and academic researchers in a wide range of application fields to address issues affecting conventional services and goods.

Chapter 3 presents a comparative study of Li-Fi over Wi-Fi and the application of Li-Fi in the field of AR and VR. Li-Fi has many benefits compared to Wi-Fi, but it also has certain disadvantages. The chapter outlines the expected impact of Li-Fi over Wi-Fi in the coming years and how Li-Fi can use the features and applications of AR/VR to make communication faster and easier.

Chapter 4 presents AR in cross-domain applications. The research places particular focus on advanced driver assistance systems (ADAS), which are utilized at every level, from forecasting the weather to security, even while the machine is operating the vehicle.

Chapter 5 discusses healthcare services in society at their fifth stage, Health 5.0, which gives flexibility to patients and consumers with the support of emerging information technologies. The chapter gives a comprehensive overview of advanced healthcare technologies, emphasizing Health 5.0, its birth, and the transition to advanced healthcare practice.

Chapter 6 presents a systematic literature review of current research in the field of the metaverse and examines its key features, enabling technologies, and applications in socializing, working, and gaming. The chapter investigates how the VR-enabled metaverse supports digital collaboration among enterprises.

Chapter 7 explores in detail AR and VR in the healthcare sector, and the case study at the end of the chapter makes it very useful for researchers working in the same field. The technologies presented provide practical answers to the healthcare system's many issues, as well as diverse opportunities for their deployment in a variety of fields, such as general diagnostics and medical education.

Chapter 8 presents an exploratory study of the parental perception of social learning among school-aged children based on AR and VR. The idea goes beyond conventional behavioral suppositions, which only consider validation as a factor in behavior; instead, it addresses the prominent and central role that many internal dynamics play in the maturing individual.

Chapter 9 presents an AR-based application for school education, intended for typical students as well as students with intellectual disabilities. This application can be used in rural schools, where physical subject-wise models are scarce or unavailable. Without such models it is difficult for students to grasp detailed information about a subject; through this application, the relevant model can be shown to the students and the subject explained in detail.

Chapter 10 presents AR- and VR-based visitor experiences. Virtual tours offer alternatives to the pandemic melancholy by enabling users to see locations they have
always desired to visit while remaining in their own place. A case study of heritage tourism sites in Rajasthan (India) is presented in this chapter.

Chapter 11 elucidates prevailing and innovative AR travel applications and their use cases. Various location-based, marker-based, and simultaneous localization and mapping applications are discussed in this chapter, and their advantages as well as their future scope are highlighted.

Chapter 12 discusses the infrastructure requirements, in terms of blockchain and the IoT, for simulating various things in AR and VR environments. The chapter includes simulation examples in the domains of e-games, smart cities, amusement, and theme parks. The presented simulations are 6G supported, and the real/virtual objects are captured by cameras equipped with IoT sensors.

Chapter 13 investigates whether VR tourism can replace traditional tourism or represents a distinct subset of the tourist industry. Tourism is placed and contextualized within the VR domain, and the origins and evolution of VR are examined in order to assess this astounding change in an ancient industry. The chapter also investigates the validity of the VR tourist experience and its pros and cons.

Chapter 14 presents real-time weed detection and classification using deep learning models and IoT-based edge computing for social learning applications. It compares and analyzes the various deep learning models, preprocessing and feature extraction techniques, and performance metrics used in existing studies on weed detection in precision agriculture.

Dr. Rajendra Kumar, Dr. Vishal Jain, Dr. Ahmed A. Elngar, Dr. Ahed Al-Haraizah
Contents

Preface V

List of contributors XI

Himani Mittal
1 VR in social learning: applications and challenges 1

Shishir Singh Chauhan, Gouri Sankar Mishra, Gauri Shanker Gupta, Yadvendra Pratap Singh
2 Integration of blockchain techniques in augmented and virtual reality 11

Shreejita Mukherjee, Shubhasri Roy, Sanchita Ghosh, Sandip Mandal
3 A comparative study of Li-Fi over Wi-Fi and the application of Li-Fi in the field of augmented reality and virtual reality 27

Ajay Sudhir Bale, Salna Joy, Baby Chithra R., Rithish Revan S., Vinay N.
4 Augmented reality in cross-domain applications 43

P. K. Paul
5 Advanced ICT and intelligent systems in sophisticated Healthcare 5.0 practice in modern social and healthcare transformation: an overview 63

Jyoti Singh Kirar, Purvi Gupta, Aashish Khilnani
6 A study on enterprise collaboration in metaverse 81

Pawan Whig, Shama Kouser, Ankit Sharma, Ashima Bhatnagar Bhatia, Rahul Reddy Nadikattu
7 Exploring the synergy between augmented and virtual reality in healthcare and social learning 99

Nafees Akhter Farooqui, Madhu Pandey, Rupali Mirza, Saquib Ali, Ahmad Neyaz Khan
8 Exploratory study of the parental perception of social learning among school-aged children based on augmented and virtual reality 117

J. P. Patra, Manoj Kumar Singh, Yogesh Kumar Rathore, Deepak Khadatkar
9 An innovative application using augmented reality to enhance the teaching-learning process in school education 141

Vikas Gupta
10 How do augmented and virtual reality influences visitor experiences: a case of heritage tourism sites in Rajasthan 159

Gnanasankaran Natarajan, Subashini Bose, Sundaravadivazhagan Balasubramanian, Ayyallu Madangopal Hema
11 Scope of virtual reality and augmented reality in tourism and its innovative applications 175

Aman Anand, Rajendra Kumar, Praveen Pachauri, Vishal Jain, Khar Thoe Ng
12 6G and IoT-supported augmented and virtual reality–enabled simulation environment 199

Raj Gaurang Tiwari, Abeer A. Aljohani, Rajat Bhardwaj, Ambuj Kumar Agarwal
13 Virtual reality in tourism: assessing the authenticity, advantages, and disadvantages of VR tourism 215

Jyoti Verma, Manish Snehi, Isha Kansal, Raj Gaurang Tiwari, Devendra Prasad
14 Real-time weed detection and classification using deep learning models and IoT-based edge computing for social learning applications 241

Editors' biography 269

Index 271
List of contributors

Chapter 1
Himani Mittal, Goswami Ganesh Dutta Sanatan Dharma College, Sector 32, Chandigarh, India, [email protected]

Chapter 2
Shishir Singh Chauhan, Manipal University Jaipur, Rajasthan, India-303007, [email protected]
Gouri Sankar Mishra, Department of Computer Science & Engineering, Sharda University, Greater Noida, India-201310, [email protected]
Gauri Shanker Gupta, BIT Mesra, Ranchi, Jharkhand, India, [email protected]
Yadvendra Pratap Singh, Manipal University Jaipur, Rajasthan, India-303007, [email protected]

Chapter 3
Shreejita Mukherjee, Institute of Engineering & Management, West Bengal, India, [email protected]
Shubhasri Roy, Institute of Engineering & Management, West Bengal, India, [email protected]
Sanchita Ghosh, Institute of Engineering & Management, West Bengal, India, [email protected]
Sandip Mandal, University of Engineering & Management, Jaipur, India, [email protected]

Chapter 4
Ajay Sudhir Bale, Department of ECE, New Horizon College of Engineering, Bengaluru, India, [email protected]
Salna Joy, Department of ECE, New Horizon College of Engineering, Bengaluru, India
Baby Chithra R., Department of ECE, New Horizon College of Engineering, Bengaluru, India
Rithish Revan S., Department of ECE, New Horizon College of Engineering, Bengaluru, India
Vinay N., Independent Researcher, Vinoba Nagar, KNS Post, Kolar-563101, Karnataka, India

Chapter 5
P. K. Paul, Executive Director (MCIS), Asst. Prof. & Head/Coordinator, Department of CIS, Raiganj University, West Bengal, India, [email protected]

Chapter 6
Jyoti Singh Kirar, Banaras Hindu University, Varanasi, India, [email protected]
Purvi Gupta, Banaras Hindu University, Varanasi, India, [email protected]
Aashish Khilnani, Banaras Hindu University, Varanasi, India, [email protected]

Chapter 7
Pawan Whig, Vivekananda Institute of Professional Studies-TC, New Delhi, India, [email protected]
Shama Kouser, Department of Computer Science, Jazan University, Saudi Arabia
Ankit Sharma, Vivekananda Institute of Professional Studies-TC, New Delhi, India
Ashima Bhatnagar Bhatia, Vivekananda Institute of Professional Studies-TC, New Delhi, India
Rahul Reddy Nadikattu, Senior IEEE Member, University of Cumberland, USA

Chapter 8
Nafees Akhter Farooqui, School of Computer Applications, BBD University, Lucknow, India, [email protected]
Madhu Pandey, School of Liberal Arts, Era University, Lucknow, India, [email protected]
Rupali Mirza, School of Liberal Arts, Era University, Lucknow, India, [email protected]
Saquib Ali, School of Basic Science, BBD University, Lucknow, India, [email protected]
Ahmad Neyaz Khan, Faculty of Engineering and IT, Integral University, India, [email protected]

Chapter 9
J. P. Patra, Shri Shankaracharya Institute of Professional Management and Technology, India, [email protected]
Manoj Kumar Singh, Shri Shankaracharya Institute of Professional Management and Technology, India
Yogesh Kumar Rathore, Shri Shankaracharya Institute of Professional Management and Technology, India
Deepak Khadatkar, Shri Shankaracharya Institute of Professional Management and Technology, India

Chapter 10
Vikas Gupta, Lecturer in Discipline of Tourism and Hospitality, The University of the South Pacific, Laucala Campus, Suva, Fiji Islands, [email protected], [email protected]

Chapter 11
Gnanasankaran Natarajan, Assistant Professor, Department of Computer Science, Thiagarajar College, Madurai, Tamilnadu, India, [email protected]
Subashini Bose, Assistant Professor, Department of Computer Science, Thiagarajar College, Madurai, Tamilnadu, India, [email protected]
Sundaravadivazhagan Balasubramanian, Faculty of IT, Department of Information Technology, University of Technology and Applied Sciences, Al Mussanah, Oman, [email protected]
Ayyallu Madangopal Hema, Associate Professor, Department of Computer Science, Thiagarajar College, Madurai, Tamilnadu, India, [email protected]

Chapter 12
Aman Anand, School of Engineering & Technology, Sharda University, Greater Noida, India
Rajendra Kumar, School of Engineering & Technology, Sharda University, Greater Noida, India, [email protected]
Praveen Pachauri, UP Institute of Design, Noida, India
Vishal Jain, School of Engineering & Technology, Sharda University, Greater Noida, India
Khar Thoe Ng, Wawasan Open University, George Town, Malaysia

Chapter 13
Raj Gaurang Tiwari, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India, [email protected]
Abeer A. Aljohani, Computer Science Department, Applied College, Taibah University, Saudi Arabia, [email protected]
Rajat Bhardwaj, Assistant Professor, School of Computer Science and Engineering, RV University, Bengaluru, India, [email protected]
Ambuj Kumar Agarwal, Department of Computer Science and Engineering, Sharda University, Greater Noida, India, [email protected]

Chapter 14
Jyoti Verma, Punjabi University, Patiala, Punjab, India, [email protected]
Manish Snehi, Punjabi University, Patiala, Punjab, India, [email protected]
Isha Kansal, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India, [email protected]
Raj Gaurang Tiwari, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India, [email protected]
Devendra Prasad, Panipat Institute of Engineering and Technology, Panipat, Haryana, India, [email protected]

https://doi.org/10.1515/9783110981445-204
Himani Mittal
1 VR in social learning: applications and challenges

Abstract: The concept of social learning has evolved over the years. As discussed in the work of Laland, social learning is a technique used by all animals and humans to learn from the environment. The new generation of students is raised in a world of information and communication technology. The wealth of knowledge is at their disposal in the form of the Internet. As a result, facilitating social learning is the primary consideration in the design of learning systems. The new social learning systems must assist the learner rather than teach him. The three key functions required are (1) assist in locating the appropriate content; (2) assist learners in connecting with the appropriate people; and (3) motivate/incentivize people to learn. Augmented reality/virtual reality (VR) can be used to improve social learning among people. The interaction of people with a simulated environment made using computer technology, typically including computer graphics and artificial intelligence, is known as virtual reality. Virtual is something that does not exist in reality. In VR, a person can hold animated or virtual items created using computer graphics and artificial intelligence. This chapter discusses several VR applications for social learning. These applications are in the metaverse, social media, neuropathy and psychology, training, and distance education fields. Each application is covered with references to the most recent research in the field. Additionally highlighted are the social and ethical concerns with using VR for social learning. The conclusions are offered at the end.

Keywords: Social learning, augmented reality, virtual reality, metaverse
Himani Mittal, Goswami Ganesh Dutta Sanatan Dharma College, Sector 32, Chandigarh, e-mail: [email protected]
https://doi.org/10.1515/9783110981445-001

1 Introduction

The concept of social learning has evolved over the years. As discussed in the work of Laland [1], social learning is a technique used by all animals and humans to learn from the environment. The author lists three types of persons in society: nonlearners, social learners, and asocial learners. Nonlearners do not learn; social learners learn from others; and asocial learners learn on their own by making efforts and using their cognitive skills. He calls the asocial learners producers of information, and the social learners scroungers, or information consumers. He further discusses social learning as the act of copying. The copying can happen during events (when) and from specific persons (who). The author's study empirically evaluates the evidence of learning and concludes that when all types of learning fail, innovation happens. This view has changed in the later literature.

Reed [2] describes social learning as a process that brings about social change. This change is facilitated by people learning from each other in ways that can benefit wider social-ecological systems, and the outcomes of social learning are described as favorable. The authors note that social learning may be both a process of people learning from one another and an outcome, that is, the learning that occurs as a result of these social interactions; it is often defined by the wide range of additional potential outcomes it may have. These include, for example, improved management of social-ecological systems, enhanced trust, adaptive capacity, attitudinal and behavioral change, stakeholder empowerment, strengthening of social networks, and so on. This chapter is based on the second opinion about social learning, namely that it is a positive act.

Vassileva [3] discusses the evolving nature of social learning. The new generation of students is raised in a world of information and communication technology. The wealth of knowledge is at their disposal in the form of the Internet. As a result, facilitating social learning is the primary consideration in the design of learning systems. The new social learning systems must assist the learner rather than teach him. The three key functions required are (1) assist in locating the appropriate content; (2) assist learners in connecting with the appropriate people; and (3) motivate/incentivize people to learn. In the design of such environments, other disciplines, including social psychology, economic theory, game theory, and multiagent systems, are essential sources of tools and methods. Their research shows how new and emerging technologies, such as social tagging versus ontologies, exploratory search versus self-managed social recommendations, reputation and trust mechanisms, mechanism design, and social visualization, can be leveraged to develop social learning systems.

Social learning can be enhanced among people with the use of augmented reality (AR)/virtual reality (VR). VR [4] is the interaction of humans with a simulated environment created using computing technology, primarily computer graphics and artificial intelligence. The word "virtual" means imaginary and "reality" means truth. In VR, humans can hold virtual or animated objects. VR-created 3D landscapes are utilized for a variety of fascinating purposes, such as gaming, simulators for learning to drive cars and fly planes, and many more. VR is often confused with AR. AR is the blending of virtual things into a real environment, whereas VR involves placing people in a virtual 3D environment where they take in the sound and sight of the scene, giving the impression that it is really happening. VR gear can be used for this. Different hardware and software are needed to give a sense of immersion to the user. The user can interact with the virtual world via input devices. VR is used in a variety of fields, including the military, healthcare, education, scientific visualization, and entertainment. As the sector grows swiftly, new, enhanced applications that make use of cutting-edge technology are being introduced to the market. There is a demand in the
market for VR-based products. However, there are several ethical, privacy, and health concerns with VR.

AR [5], on the other hand, is the reverse: animated or virtual characters are added to real-world images and videos. AR is an extension of the real world in which digital visual elements, sounds, and other sensory stimuli are delivered via technology. No separate hardware or software gear is required to experience AR; however, enhanced graphics algorithms and AI-enabled digital cameras are required to generate it. The applications of AR lie in the fields of medicine, military, manufacturing, entertainment, robotics, visualization, education, navigation, tourism, path planning, shopping, construction, design, and many more. AR is the mixing of virtual objects into a real environment, for example, adding stickers on Snapchat. VR is the introduction of humans into a simulated 3D environment in which they experience the sound and sight of the 3D scene, creating an illusion that it is reality.

This chapter is organized as follows: Section 2 reviews the applications of AR/VR technologies in social learning along with their implementation challenges; Section 3 discusses the social and ethical issues in AR/VR; Section 4 presents a discussion; and Section 5 offers the conclusions.
2 Review of applications of AR/VR in social learning

The use of VR in social learning can be studied in applications where the learning dynamics are involved. How do the behaviors and tendencies of persons change when they interact with virtual technology or virtual platforms supporting single or multiple users? The following applications use the VR 3D environment, simulation, gamification, and other parameters to promote social learning.
2.1 Distance education

In distance education through virtual learning platforms, VR technology is extremely helpful in giving people the experience of a virtual classroom. The use of 3D virtual worlds in settings involving experiential learning may be ideal, according to Jarmon [6]. Their research experimentally examines the actual instructional effectiveness of Second Life (SL) as an immersive learning environment for interdisciplinary communication, using several research approaches, including survey, focus group, and journal content analysis. According to the focus group participants, many aspects of Second Life encouraged learning. The 3D virtual environment promoted experiential learning and supported the application of ideas and strategies in the actual world. The participants believed that the only limitation of the SL experience was the user's ingenuity. The SL
promoted exploration, inventiveness, and creativity. However, the survey's sample size was minuscule.

Scavarelli et al. [7] explore VR and AR within social learning spaces, such as classrooms and museums. They consider factors such as the social interaction found within reality-based and immersive VR technology, as well as factors related to social learning spaces, namely constructivism, social cognitive theory, connectivism, and activity theory, that are relevant to building VR technology for education. They review several VR/AR examples for learning and call for further research with a greater focus on accessibility and on the interplay between physical and virtual environments, and they suggest an updated learning theory foundation.

Akçapınar [8] states that engagement is an important factor that maintains students' interest in online learning, where the teacher is not physically present to enforce discipline; low engagement leads to a high dropout rate and academic failure. The author notes that learning analytics dashboards (LADs) are one way to promote engagement, but on their own they have shown limited effect, and therefore proposes combining gamification with LADs. Thirty-one students enrolled in an elective Visual Programming course, held remotely, participated in a quasi-experimental study, and their course interactions before and after using the gamified dashboard were monitored. The results show that adding gamification elements to the LAD increased student engagement considerably.

Self-regulated learning (SRL), according to Viberg [9], can predict academic performance, and SRL is even more necessary in online learning, where it is challenging for students. Learning analytics (LA) can be utilized to enhance learning and assist students in developing their SRL. Viberg reviews fifty-four works of empirical LA research, using Zimmerman's (2002) model to analyze the SRL stages. The review examined LA along four dimensions: improving learning outcomes, supporting learning and teaching, deployment, and ethical use. It found that whereas LA supported the planning and performance phases of SRL, it offered less support for reflection. Evidence was scant, with 20% of studies showing better learning outcomes and 22% showing better teaching and learning assistance; the deployment of LA was also sparse, and only 15% of studies adopted an ethical approach to their research. The LA research reviewed was more concerned with measuring SRL than with fostering it.

In discussion forums that are part of MOOCs (which consist of video lectures and problem assignments), Doleck [10] assesses the reliability of an algorithm for gauging the effectiveness of social learning networks. The algorithm models social learning as arising from users' needs to gather and disseminate knowledge; by bringing information seekers and disseminators together, it offers a basis for optimization.

According to the articles discussed above, VR supports engagement, self-regulated learning, and improved performance in online learning by utilizing learning analytics and gamification. This improvement can be attributed to the social learning hypothesis, which states that competition with peers can lead to feelings of fulfillment and accomplishment.
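The gamified dashboards examined in these studies are not publicly available, but the underlying mechanism, converting raw course-interaction logs into points, badges, and a peer leaderboard, can be illustrated with a short sketch. The event names, point weights, and badge thresholds below are invented for illustration and do not come from any of the cited systems.

```python
from collections import Counter

# Hypothetical point weights for common LMS events (illustrative only).
POINTS = {"video_watched": 2, "quiz_submitted": 5, "forum_post": 3, "forum_reply": 1}
BADGES = [(50, "gold"), (25, "silver"), (10, "bronze")]  # (threshold, badge name)

def engagement_score(events):
    """Sum weighted points over one student's interaction events."""
    counts = Counter(events)
    return sum(POINTS.get(event, 0) * n for event, n in counts.items())

def badge_for(score):
    """Return the highest badge whose threshold the score clears, if any."""
    for threshold, name in BADGES:
        if score >= threshold:
            return name
    return None

def leaderboard(course_log):
    """course_log: dict mapping student id -> list of logged event names."""
    rows = [(sid, engagement_score(events)) for sid, events in course_log.items()]
    rows.sort(key=lambda row: row[1], reverse=True)
    return [(sid, score, badge_for(score)) for sid, score in rows]

# Example: three students in a remotely taught programming course.
log = {
    "s01": ["video_watched"] * 8 + ["quiz_submitted"] * 4 + ["forum_post"] * 2,
    "s02": ["video_watched"] * 3 + ["forum_reply"] * 5,
    "s03": ["quiz_submitted"] * 2,
}
for sid, score, badge in leaderboard(log):
    print(sid, score, badge)
```

A dashboard built on such scores makes peer standing visible, which is the mechanism the studies above associate with higher engagement in online courses.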
2.2 Military training

Military training requires personnel to be trained in sophisticated weapons, arms and ammunition, and to receive psychological training. Schatz [11] says that VR-enabled training promotes ubiquitous, learner-centric, technology-enabled instruction; builds upon the foundations of data-driven learning; fosters a learning culture at the organizational level; encourages and empowers social learning; and draws upon deliberate practice and the evidence-based body of knowledge from learning science. All this is facilitated by learner-centric technology tools and personal learning assistants enabled by VR. It promotes learning through transmedia learning, which enables nonlinear learning; live/virtual/constructive (LVC) modeling and simulation; and mobile learning, which makes instruction available anytime, anywhere. After training, it is necessary to test and generate performance scores, and massive amounts of human-performance data are generated by the VR tools used for training, which makes it possible to compare peer performance and even capture latent variables of learning that cannot be observed otherwise. These platforms support social learning. Schatz discusses five essentials of military learning required in VR platforms and reports that none of the available platforms supports all five components.

A game-based learning environment for staff is suggested by Bhagat [12], supported by VR technology and social learning theory. The objective was to create a teaching and scoring system for firing ranges that would maintain achievement scores. The performance of the students was assessed in terms of their motivation and learning accomplishment to determine how effective the design was. The system was tested on 160 high school students, and it was found to improve learning outcomes.

Greenwood [13] studies the role of VR in aviation training. Massively multiplayer online games facilitate learning in a virtual, immersive, and collaborative context. Smartphones, smartwatches, and tablets are ready tools that promote learning on the go and the sharing of data by texting, screenshots, and other readily available media (imagery, video, and so on). Learning in this context is both active and constructive, with the learner creating subjective representations of reality. These promote social interaction and social learning, with inter- and intra-psychological effects on learners. The environment includes more knowledgeable entities, such as teachers and mentors, as 3D avatars, and it has methods to estimate the gap between what a student can do alone and what the student can do under an expert's guidance. All these features of VR-based online platforms that support social learning in aviation training are used by the armed forces.
2.3 Psychology and neural therapy

In the work by Dirksen et al. [14], the applications of VR in psychology include enabling behavior change, building empathy, experiencing consequences, future projection, feedback, and emotional self-regulation. They point out that three factors are important for bringing about behavior change: capability, motivation, and opportunity. The VR
environment designed for behavior change should have all three factors, which the authors further classify as physical and psychological. Such a VR environment can be used for panic disorder treatment, medical education, treatment of psychosis patients, creating empathy for the less fortunate, digital pain reduction, digital phobia reduction, and so on. The VR environment should also provide feedback to the user. The most common feedback types are progress, informational/educational, motivational/inspirational, changing attitudes and beliefs, providing social/emotional support, offering social norms and comparisons, providing personal risk/protective factor information, and promoting engagement.

Pamparău et al. [15] have developed a mixed reality HoloLens application for implicit social learning, to examine how social learning happens and influences human behavioral, cognitive, and emotional functioning. They used animations to design virtual avatars for implicit social learning, developed in consultation with psychology experts. The limitation of the developed system, as discussed by the authors, is that only one avatar is actively shown on screen at a time. It uses cues such as gestures and recorded sounds. The other limitations pointed out by the authors are more technical and related to hardware.

In the work of Kang et al. [16], the ethical decision-making of subjects is examined with the help of a virtual trolley dilemma scenario. It helps determine whether a subject will harm others or accept harm themselves to save others. Such games are of importance in psychology as well as in the training of personnel for special positions that require public service.
2.4 Social media

According to Deaton [17], social media has a positive impact on social learning, as it involves increased attention, better memory, less harassment, motivation, healthy competition, esteem building, and much more. Social media helps in monitoring the student context, lesson context, and faculty context of the learning process. In the study reported in [18], in-service teachers were encouraged to make use of social media platforms along with regular face-to-face teaching. It was found that the teachers had a very positive experience as regards announcements and better dissemination of information.
2.5 Metaverse

Kim et al. [19], Duan et al. [20], Maharg and Owen [21], Jovanović and Milosavljević [22], and Park and Kim [23] show the use of the metaverse for education and social learning. The metaverse is purely an AR/VR environment that facilitates the use of technology for performing almost everything that can be thought of under the sun. The papers listed here use the metaverse for educational purposes in law, gamified learning approaches, and much more. In [19], the authors discuss the increasing role of the metaverse in distance
education. The metaverse (avatar) face used in the environment improves the attention of students; a real face in a metaverse environment might improve the student's experience, but its impact on attention was unknown. There were 38 undergraduate participants in the study, divided into two groups, with and without real face provision. The findings demonstrate that actual facial appearance had no bearing on attentiveness or social presence in the online learning environment [20]. While conceptions of race, gender, and even physical disability would be diminished, metaverse societies are being designed to be more realistic and to provide true experiences to people, which would be extremely advantageous for society [21–23].

Although simulation is one of the most popular uses of the Internet for instruction and entertainment, higher education and formative education have so far paid little attention to it. In the work of Mustafa [24], the author mentions that the first use of VR in the digital world for education was as an avatar; later this role developed into video games, and now into immersive VR and AR collaborative spaces. The author discusses the feasibility and applicability of the metaverse (a much more advanced form of VR) to education and finds it to be a useful aid from both teacher and student perspectives. Jun [25] discusses the use of VR and the metaverse for the creation of a virtual church and whether it is acceptable; this is a unique application of the metaverse that connects people through shared virtual churches. In the work of Dincelli [26], the authors discuss the five factors that immersive VR brings to the metaverse, namely embodiment, interactivity, navigability, sense-ability, and create-ability. Beck et al. [27] discuss the suitability of the metaverse for education, as it helps in better concept building and decision-making.
3 Ethical and social issues in AR/VR in social learning

Willson [28] points out that people value togetherness and community feelings. In today's world, physical togetherness is not possible all the time, so people look at virtual togetherness as a solution. But this has many ethical issues and political implications that people should be aware of before using it. The paper points out that the virtual world affords anonymity and promotes actions that one would not otherwise take in a physical setting. Willson also notes that it takes more effort to build a community in the physical world than in the virtual world.

In the work of Bhadauria et al. [29], the authors discuss how the evolution of IT and globalization, with increased use of ICT, has given rise to several terms, namely universal access, digital divide, e-content, e-democracy, hacking, cybercrime, and cyberwar. These terms capture both the advantages and the drawbacks and problems. Similarly, these terms can be applied to the use of VR and its implications for society. It
particularly points out that netiquette, cultural differences, e-business, and privacy are some of the primary concerns in the use of ICT.

In the work published in 2017 [30], the authors discuss that in e-business the role of AI is more pronounced than that of VR. However, the user behavior captured using AI- and VR-based games and other platforms is only indicative of how a person might behave. The rationale and decision-making of a person in a game are different from those in reality: in the game, the user knows that the harm is not real, but in real-life situations, the user is fully aware of the consequences and behaves more responsibly. Another aspect is that AI and VR model the behavior of a person using past decisions and judgments, but the past is not always indicative of future behavior, as people tend to change. The authors refer to this as human distinctiveness.

Jia and Chen [31] categorize the ethical dilemmas of VR in entertainment into morality (keeping track of the generation gap), mentality (minding the state of mind), responsibility (addressed through punishment), and human rights (concerning privacy). The distance VR creates between real communities can affect family life and create a generation gap between people of the same family. It can create addiction to the virtual world. VR tends to change the behavior of people, which can be positive or negative depending on the content they watch. The work also discusses the damage the virtual world can cause to the real world and the punishment involved.
4 Discussion

The wide range of VR applications discussed in Section 2 shows the promising prospects of VR in the coming years. With time, VR technology has evolved from a simple avatar on the screen to immersive and now metaverse-based worlds, such as virtual churches and virtual universities, with virtual avatars of people visiting these places and interacting with others. The ethical and social challenges in the use of VR cannot be ignored, though. The virtual experience can make up for infrequent physical activity but cannot replace physical activity completely. The benefits a person derives from actual physical connectivity and the behavior of people in the actual physical world are different. People who are otherwise introverts can be really loud on a social network behind their avatars. Real accomplishments happen in the real world, and virtual achievements cannot match them. Take, for example, the simple game of snooker: a person who has a high score in virtual snooker may perform poorly in a real game, as they do not have the required motor skills.
5 Conclusions

The uses of VR in social learning are manifold, and many applications are possible, as discussed in this chapter, namely distance education, training, neurotherapy, social media, and the metaverse. VR technology developed using the concepts of social learning can help in building content and experiences that greatly support the development of social practices among communities. The ethics involved in the development of VR technology remain a challenge. People need to be trained and sufficiently empowered to make correct use of VR technology and enhance their social learning experiences. However, the virtual can never replace the real experience, and it should therefore be used only to complement it.
References

[1] Laland, Kevin N. 2004. "Social Learning Strategies." Animal Learning and Behavior 32(1): 4–14.
[2] Reed, M. S., A. C. Evely, G. Cundill, I. Fazey, J. Glass, A. Laing, J. Newig, B. Parrish, C. Prell, C. Raymond, and L. C. Stringer. 2010. "What Is Social Learning?" Ecology and Society 15(4): r1. [online] URL: http://www.ecologyandsociety.org/vol15/iss4/resp1/
[3] Vassileva, Julita. 2008. "Toward Social Learning Environments." IEEE Transactions on Learning Technologies 1(4): 199–214.
[4] Mittal, H. 2020. "Virtual Reality: An Overview." CSI Communications: Knowledge Digest for IT Community 44(4): 9.
[5] Mekni, Mehdi, and Andre Lemieux. 2014. "Augmented Reality: Applications, Challenges and Future Trends." Applied Computational Science 20: 205–214.
[6] Jarmon, Leslie, Tomoko Traphagan, Michael Mayrath, and Avani Trivedi. 2009. "Virtual World Teaching, Experiential Learning, and Assessment: An Interdisciplinary Communication Course in Second Life." Computers and Education 53(1): 169–182.
[7] Scavarelli, Anthony, Ali Arya, and Robert J. Teather. 2021. "Virtual Reality and Augmented Reality in Social Learning Spaces: A Literature Review." Virtual Reality 25: 257–277.
[8] Akçapinar, Gökhan, and Çiğdem Uz Bilgin. 2020. "Öğrenme analitiklerine dayalı oyunlaştırılmış gösterge paneli kullanımının öğrencilerin çevrimiçi öğrenme ortamındaki bağlılıklarına etkisi." Kastamonu Eğitim Dergisi 28(4): 1892–1901.
[9] Viberg, Olga, Mohammad Khalil, and Martine Baars. 2020. "Self-regulated Learning and Learning Analytics in Online Learning Environments: A Review of Empirical Research." In Proceedings of the Tenth International Conference on Learning Analytics & Knowledge, pp. 524–533.
[10] Doleck, Tenzin, David John Lemay, and Christopher G. Brinton. 2021. "Evaluating the Efficiency of Social Learning Networks: Perspectives for Harnessing Learning Analytics to Improve Discussions." Computers and Education 164: 104124.
[11] Schatz, Sae, David Fautua, Julian Stodd, and Emilie Reitz. 2015. "The Changing Face of Military Learning." In Proceedings of the I/ITSEC.
[12] Bhagat, Kaushal Kumar, Wei-Kai Liou, and Chun-Yen Chang. 2016. "A Cost-effective Interactive 3D Virtual Reality System Applied to Military Live Firing Training." Virtual Reality 20: 127–140.
[13] Greenwood, Andrew T., and Michael P. O'Neil. 2016. "Harnessing the Potential of Augmented and Virtual Reality for Military Education." In Intelligent Environments 2016, pp. 249–254. Netherlands: IOS Press.
[14] Dirksen, Julie, Dustin DiTommaso, and Cindy Plunkett. 2019. "Augmented and Virtual Reality for Behavior Change." The ELearning Guild.
[15] Pamparău, Cristian, Radu-Daniel Vatavu, Andrei R. Costea, Răzvan Jurchiş, and Adrian Opre. 2021. "MR4ISL: A Mixed Reality System for Psychological Experiments Focused on Social Learning and Social Interactions." In Companion of the 2021 ACM SIGCHI Symposium on Engineering Interactive Computing Systems, pp. 26–31.
[16] Kang, Sinhwa, Jake Chanenson, Pranav Ghate, Peter Cowal, Madeleine Weaver, and David M. Krum. 2019. "Advancing Ethical Decision Making in Virtual Reality." In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 1008–1009. IEEE.
[17] Deaton, Shannon. 2015. "Social Learning Theory in the Age of Social Media: Implications for Educational Practitioners." Journal of Educational Technology 12(1): 1–6.
[18] Doğan, Dilek, and Yasemin Gülbahar. 2018. "Using Facebook as Social Learning Environment." Informatics in Education 17(2): 207–228.
[19] Kim, Kukhyeon, Yuseon Jeong, and Jeeheon Ryu. 2022. "Does the Real Face Provision Improve the Attention and Social Presence in Metaverse as Learning Environments?" In Society for Information Technology & Teacher Education International Conference, pp. 1760–1765. Association for the Advancement of Computing in Education (AACE).
[20] Duan, Haihan, Li Jiaye, Sizheng Fan, Zhonghao Lin, Wu Xiao, and Wei Cai. 2021. "Metaverse for Social Good: A University Campus Prototype." In Proceedings of the 29th ACM International Conference on Multimedia, pp. 153–161.
[21] Maharg, Paul, and Martin Owen. 2007. "Simulations, Learning and the Metaverse: Changing Cultures in Legal Education." Journal of Information, Law, Technology 1: 1–28.
[22] Jovanović, Aleksandar, and Aleksandar Milosavljević. 2022. "VoRtex Metaverse Platform for Gamified Collaborative Learning." Electronics 11(3): 317.
[23] Park, Sungjin, and Sangkyun Kim. 2022. "Identifying World Types to Deliver Gameful Experiences for Sustainable Learning in the Metaverse." Sustainability 14(3): 1361.
[24] Mustafa, Bahaa. 2022. "Analyzing Education Based on Metaverse Technology." Technium Social Sciences Journal 32: 278.
[25] Jun, Guichun. 2020. "Virtual Reality Church as a New Mission Frontier in the Metaverse: Exploring Theological Controversies and Missional Potential of Virtual Reality Church." Transformation 37(4): 297–305.
[26] Dincelli, Ersin, and Alper Yayla. 2022. "Immersive Virtual Reality in the Age of the Metaverse: A Hybrid-narrative Review Based on the Technology Affordance Perspective." The Journal of Strategic Information Systems 31(2): 101717.
[27] Beck, Dennis, Leonel Morgado, and Patrick O'Shea. 2023. "Educational Practices and Strategies with Immersive Learning Environments: Mapping of Reviews for Using the Metaverse." IEEE Transactions on Learning Technologies 1–23.
[28] Willson, Michele. 1997. "Community in the Abstract: A Political and Ethical Dilemma." In D. Holmes (Ed.) Virtual Politics: Identity and Community in Cyberspace, pp. 145–162. London: Sage.
[29] Bhadauria, Sarita Singh, Vishnu Sharma, and Ratnesh Litoriya. 2010. "Empirical Analysis of Ethical Issues in the Era of Future Information Technology." In 2010 2nd International Conference on Software Technology and Engineering, vol. 2, pp. V2–31. IEEE.
[30] Kiruthika, Jay, and Souheil Khaddaj. 2017. "Impact and Challenges of Using of Virtual Reality & Artificial Intelligence in Businesses." In 2017 16th International Symposium on Distributed Computing and Applications to Business, Engineering and Science (DCABES), pp. 165–168. IEEE.
[31] Jia, Jingdong, and Wenchao Chen. 2017. "The Ethical Dilemmas of Virtual Reality Application in Entertainment." In 2017 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC), vol. 1, pp. 696–699. IEEE.
Shishir Singh Chauhan✶, Gouri Sankar Mishra, Gauri Shanker Gupta, Yadvendra Pratap Singh
2 Integration of blockchain techniques in augmented and virtual reality

Abstract: Unquestionably, the public is interested in modern, cutting-edge technologies like virtual reality (VR), augmented reality (AR), and blockchain, and many foreign investors are paying attention too. Although VR, AR, and blockchain, originally intended for entertainment, at first appeared to have nothing in common, over the past few years a variety of use cases have shown how these concepts may be combined successfully, and this chapter takes the currently available options into account. It examines hybrid VR/AR/blockchain technologies that have helped businesses and academic researchers in a wide range of application fields address issues affecting conventional services and goods.

Keywords: Augmented reality (AR), virtual reality (VR), blockchain
✶ Corresponding author: Shishir Singh Chauhan, Manipal University Jaipur 303007, Rajasthan, India, e-mail: [email protected]
Gouri Sankar Mishra, Department of Computer Science and Engineering, Sharda University, Greater Noida 201310, India, e-mail: [email protected]
Gauri Shanker Gupta, BIT Mesra, Ranchi, Jharkhand, India, e-mail: [email protected]
Yadvendra Pratap Singh, Manipal University, Jaipur 303007, Rajasthan, India, e-mail: [email protected]
https://doi.org/10.1515/9783110981445-002

1 Introduction

The term "cryptocurrency" has been used more and more often in professional and academic settings in recent years. Bitcoin, one of the most well-known cryptocurrencies, had a market capitalization of more than USD 10 billion in 2016 [1]. Blockchain is an important technological advancement that was used to construct Bitcoin; it was originally put forth in 2008 and put into use in 2009 [2]. Thanks to a unique data storage structure, transactions on the Bitcoin network can take place without the involvement of a third party. All completed transactions are recorded in the public ledger of the blockchain, which is made up of a series of blocks, and this chain expands as new blocks are continuously added to it. For user security and ledger consistency, asymmetric cryptography and consensus mechanisms are used. Decentralization, durability, secrecy, and traceability are the four core properties of blockchains. Because of these characteristics, blockchain could significantly reduce costs and increase efficiency.
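To make the ledger structure concrete, the following minimal sketch (illustrative only, not the implementation of any particular cryptocurrency) chains blocks by storing each predecessor's SHA-256 hash in the next block, so that tampering with any recorded transaction breaks the verification of every later block. Real systems add consensus protocols, digital signatures, and Merkle trees on top of this basic structure; the field names below are assumptions made for the example.

```python
import hashlib
import json
import time

def block_hash(block):
    """Deterministic SHA-256 over a block's serialized contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(prev_block, transactions):
    """Append-only growth: each block records the hash of its predecessor."""
    return {
        "index": prev_block["index"] + 1,
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": block_hash(prev_block),
    }

def verify(chain):
    """Ledger consistency check: every stored prev_hash must still match."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

genesis = {"index": 0, "timestamp": 0.0, "transactions": [], "prev_hash": ""}
chain = [genesis]
chain.append(new_block(chain[-1], [{"from": "A", "to": "B", "amount": 5}]))
chain.append(new_block(chain[-1], [{"from": "B", "to": "C", "amount": 2}]))
print(verify(chain))                        # True: the ledger is consistent
chain[1]["transactions"][0]["amount"] = 50  # tamper with recorded history
print(verify(chain))                        # False: every later block is invalidated
```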
Virtual reality (VR) and augmented reality (AR) are largely considered as the most inventive technologies of the twenty-first century. By stimulating our perceptions with false pictures, they are able to temporarily persuade our brains to accept VR/AR as a credible substitute for reality [3–7]. For a variety of purposes, VR and AR provide potent 3D interactive visual experiences. At the International Tourismus-Börse 2017 (Europe), several tourism technology providers displayed VR/AR content that featured real-time views of popular tourist locations and attractions. These businesses frequently produce content for hotels, tourist attractions, and other destination providers. Some of these businesses also give travel agencies access to content management systems (CMS), which they may use to develop unique, immersive VR experiences for their clients.
2 Virtual reality and augmented reality

Techniques for VR have long been studied [11], and a variety of these products have been created and are offered to the general public [12]. There are many definitions of VR, but perhaps the following is the most thorough and inclusive one: VR is described as a real or simulated environment in which a perceiver feels telepresence, according to Jonathan Steuer's article [13]. This description was chosen because it isolates the technological implications, allowing us to focus on methods and applications rather than identifying specific head-mounted displays (HMDs) or gloves in an effort to predict the route the technology will take. AR, like VR, is a technique for superimposing additional information on the actual world. With this idea, we may focus on technical advancement while identifying approaches and applications without having to talk about specific hardware [14]. These words are part of the "virtuality continuum" as defined by Paul Milgram and Fumio Kishino [15]. This expression alludes to a continuum that extends from genuine reality to manufactured VR. The virtuality continuum contains the subset of mixed reality, which has been defined as everything between reality and a fully virtual world (Figure 1).
Figure 1: Virtuality continuum. The continuum runs from the real environment at one end, through augmented reality and augmented virtuality, to a fully virtual environment at the other; mixed reality covers the span between the two extremes.

2.1 Evolution

In the past 60 years, a variety of immersive technologies have appeared, providing us with many hours of entertainment and exploration. These developments provide us with novel feelings by utilizing artificially enhanced environments, sometimes referred to as VR. Many claim that these artificial surroundings alter our experience of true space, ultimately leading to confusion about what is genuine. Academic and industry professionals' attempts to categorize the enormous and 3D components found in headgear, computers, and mobile phones have only served to increase the complexity. These evolving standards, which cover anything from digital to AR, may be something you are already aware of.
2.2 The creator of movies and pioneer of virtual reality

There is a lot of mystery and complication in the AR/VR market. The first cracks in reality were created in the 1950s by a filmmaker. Ironically, Morton Heilig was neither an engineer nor a computer scientist. He was a Hollywood cinematographer who wanted to figure out how to give the audience the impression that they were part of the movie themselves. As an inventor, he experimented with and created a three-dimensional (3D) film viewing apparatus, which at the time required mounting numerous 35 mm cameras to a cinematographer. According to sources, Morton was extremely excited about the potential.
2.3 The beginning of virtual and augmented reality

It wasn't until 1989 that a standard name for the environment that computers generate was decided upon. Jaron Lanier, CEO of VPL Research Inc., created gloves and goggles to interact with what he initially referred to as "virtual reality" [25, 26].
A see-through head-mounted display (HUDSET) was first proposed by Thomas P. Caudell as a way to integrate dynamic images into a factory worker’s actual range of vision in the 1990s. His goal was to use a better interface to increase industrial activities’ precision. Caudell further distinguished between using a HUDSET for AR and VR on the grounds that AR required accurate registration with the real environment. Although Louis Rosenberg developed the first true AR system in 1992, when he developed virtual fixtures at the USAF Armstrong Labs, Caudell is credited with coining the term. Virtual fixtures demonstrated improved human performance in realistic dexterity tasks, such as instructing Air Force and other military sector pilots, using specialized binocular magnifiers and robot arms.
2.4 The state of immersive technology today

Currently, a variety of "extending" technologies are used in the AR/VR experience, with 360° space predominating. Users can easily access 360° video on YouTube, which enables them to view images taken from the vicinity of a camera's actual position. VR goes one step further by enabling users to interact with the 360° environment through a headset, letting them block out the outside world and inhabit virtual spaces. Everyone can experience VR now that equipment like the Oculus Quest 2 is more widely available and more affordable. The modern application of augmented reality allows digital images to be overlaid on the real world. Users of the well-known app Pokemon GO can capture animated Pokemon in actual locations by using their phones. Other examples are the IKEA app, which lets users visually arrange furniture in their homes using a mobile device, and the Instagram and Snapchat filters, among others.
2.5 Social learning spaces

With the aid of augmented and virtual reality (AR/VR) technology, students can interact with dynamic, engaging environments, which can significantly improve social learning. Personalized learning, collaborative learning, gamification, simulation-based learning, and real-time feedback are a few examples of AR and VR uses in social learning. AR/VR also offers people a range of social learning environments, such as educational settings (classrooms) and museums, and work is expanding to incorporate pertinent concepts about interpersonal interactions found in more socially immersive and reality-based media frameworks. It is crucial to consider this for any social VR system, since it clarifies how social context may both benefit and hinder learning. Overall, AR/VR technologies have the
potential to greatly improve social learning by immersing students in engaging environments that promote teamwork, individualized learning, and skill development.
2.6 Applications of blockchain The systems benefit greatly from blockchain since it reduces transaction time and cost. Blockchain technology is used across a wide range of businesses.
2.6.1 Use in financial sectors Financial processes are demanding in terms of logistical processing time, and businesses commonly conduct transactions using the blockchain technique. A Santander bank investigation estimates that eliminating middlemen from transactions by utilizing blockchain technology will result in savings of USD 20 billion. Financial institutions built on blockchain reduce the time needed for deals. Benefits of the blockchain for the banking sector include: – Increased regulatory compliance: A selected regulator can view a shared ledger that is transparently distributed among financial institutions. – Risk management: Due to the quicker settlement times, it can boost liquidity and lower balance sheet risk. – Cost reduction: Post-trade processing efficiencies like reconciliation, settlement, and redundancy elimination lead to cost savings.
2.6.2 Healthcare implementation of blockchain New models for storing and sharing medical data have been developed as a result of blockchain’s potential to provide security and trust while lowering the time, money, and resources needed by traditional healthcare infrastructure. While systems like the All-Payer Claims Database and the Health Information Exchange have proved inadequate, the use of Keyless Signature Infrastructure has clearly increased [8]. In order to use blockchain technology responsibly, Healthbank, Netcetera (Switzerland), and Noser (Germany) have started an initiative.
2.6.3 Applications in many industries The application of blockchain technology is widespread in industry. As an illustration, consider the following:
– Decentralized autonomous organizations (DAO): DAOs build trust mechanisms between people and computers and employ autonomous agents to carry out specialized jobs [9].
– Digital contracts: Smart contracts enable self-implementation of transactions without the involvement of any third party by utilizing embedded information, such as execution rules and specified terms [10] (a minimal illustrative sketch follows this list).
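To make the self-execution idea concrete, here is a minimal, purely illustrative Python sketch (not tied to any real blockchain platform; the EscrowContract class, its fields, and its single rule are hypothetical) of a digital contract that releases a payment automatically once its embedded condition is met, with no third party deciding the outcome.

```python
# Minimal illustration of a self-executing "smart contract"; hypothetical,
# not an implementation for any real blockchain platform.
from dataclasses import dataclass, field

@dataclass
class EscrowContract:
    buyer: str
    seller: str
    amount: float
    delivered: bool = False            # condition embedded in the contract
    paid_out: bool = False
    log: list = field(default_factory=list)

    def confirm_delivery(self) -> None:
        """A participant (or oracle) reports that the agreed condition is met."""
        self.delivered = True
        self._execute()

    def _execute(self) -> None:
        """Self-implementation: the payout rule runs without an intermediary."""
        if self.delivered and not self.paid_out:
            self.paid_out = True
            self.log.append(f"{self.amount} transferred from {self.buyer} to {self.seller}")

contract = EscrowContract(buyer="Alice", seller="Bob", amount=10.0)
contract.confirm_delivery()
print(contract.log)   # ['10.0 transferred from Alice to Bob']
```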
2.6.4 Cryptocurrency Blockchains underpin cryptocurrencies by eliminating middlemen and enabling transactions between multiple parties in a trusted environment. Many companies are testing distributed ledger technology in post-trade procedures like cash, clearing, and custody management.
3 Technology for augmented reality and virtual reality This section addresses the technologies that enable users to view and engage with VR/AR applications. Technology for VR and AR is still evolving swiftly: software is becoming more sophisticated and faster, graphical representation is more complete, and hardware is becoming more affordable and smaller. The enhanced user experience that these advancements bring cannot be overlooked. Still, the fundamental components have not changed much since Ivan Sutherland, known as the “father of computer graphics,” and his students built some of the underpinnings of modern computer graphics in the 1960s, even though AR requires more sophisticated technology than VR. The following sections provide a quick description of the relevant AR/VR technology.
3.1 Display technologies for virtual reality The most crucial factor to take into account when choosing a VR device is usability, which includes being simple to wear, flexible in use, and offering immersive graphics that give the user a fun and engaging VE experience [27, 28]. This discussion is based on the most recent technical specifications available at the time of writing; VR technology may well have advanced further by the time this is printed. Various VR hardware and software developers are competing intensely. This section examines the underpinnings of an effective user experience. With the aid of these suggestions, you may assess any VR
display and choose the best technology while asking the proper questions. The guidelines listed below will aid in the success of VR display technology.
– Stereoscopic imaging: Some objects appear to be nearby, while others appear to be far away. An HMD gives the user a more or less realistic 3D experience by showing each eye a slightly different image so that the two views overlap; if the stereo overlap is inaccurate, the viewer sees an out-of-focus double image (see the sketch after this list).
– Interpupillary distance (IPD): The IPD is the separation between the pupils of the two eyes. Stereo images demand a face display with an adjustable IPD, since each end user will see a clear, sharp image at a slightly different IPD setting.
– Field of view (FOV): While a human’s FOV typically ranges from 180° to 360°, HMDs are currently unable to match this. Modern HMDs offer a field of vision between 60° and 150°. A bigger field of vision results in a greater sense of immersion and realism, allowing the user to interact more successfully and gain a better understanding of their surroundings.
– On-board computing and system software: Wireless HMDs, commonly referred to as “smart goggles,” allow programs to run locally on the HMD without any external device, thanks to on-board hardware and software such as Android. Even though there are now a number of early systems that link the HMD with “rucksacks” that house the processing platform and power packs, the challenge is keeping the HMD construction as light as is practical.
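As a small illustration of how stereoscopic imaging and the IPD interact, the sketch below (a simplification under assumed conventions; the eye_positions helper and the 0.064 m default are ours, not any headset vendor’s API) derives left- and right-eye camera positions by shifting a single head position by half the IPD to each side; rendering the scene once from each position yields the two slightly different, overlapping images discussed above.

```python
import numpy as np

def eye_positions(head_pos, right_dir, ipd_m=0.064):
    """Offset one head position into two eye positions.

    head_pos : (3,) head/centre-eye position in metres
    right_dir: (3,) vector pointing to the user's right
    ipd_m    : interpupillary distance in metres (~0.064 m is a typical average)
    """
    head_pos = np.asarray(head_pos, dtype=float)
    right_dir = np.asarray(right_dir, dtype=float)
    right_dir = right_dir / np.linalg.norm(right_dir)
    half = 0.5 * ipd_m
    return head_pos - half * right_dir, head_pos + half * right_dir

# Each eye gets its own camera; rendering the scene twice from these two
# positions produces the image pair that creates the sense of stereo depth.
left, right = eye_positions(head_pos=[0.0, 1.7, 0.0], right_dir=[1.0, 0.0, 0.0])
print(left, right)   # [-0.032  1.7  0.  ] [ 0.032  1.7  0.  ]
```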
3.2 Technology for augmented reality displays AR is made possible by recent technology developments including computer vision, object detection, micro-gyroscopes, GPS, and the solid-state compass [28]. The following are the essential conditions for the AR display technologies [29] required to offer the AR experience:
– Marker-based AR: This type of AR makes use of a camera and specific visual markers, such as QR/2D codes, that only produce results when read by a suitable reader. The underlying technology is image recognition: marker-based apps use the camera to tell the marker apart from other objects in the real world. Most markers are based on basic, easy-to-read patterns that require little computational work. The marker’s position and orientation are also established, and specific information or content is overlaid on the marker’s location (a detection sketch follows this list).
– Markerless AR: Markerless AR, sometimes referred to as location-based, position-based, or GPS-based AR, displays information based on the precise location and orientation of the device using miniature versions of a GPS, a digital compass, and an accelerometer that are incorporated into the device. The adoption of markerless AR technology is mostly being driven by the availability of smartphones
with location-based services. They are mainly used for location-based smartphone apps, such as finding local businesses, getting directions, and using maps.
– Projection-based AR: Surfaces are illuminated artificially by projector-based AR. By projecting light onto a physical surface and tracking when someone touches it, projection-based AR apps enable human interaction with the technology. The user’s behavior is identified by distinguishing between the anticipated (or known) projection and the projection altered by the user’s involvement. Commercial applications are still in the early stages of development even though early demonstrations were launched in 2014. One wearable-technology prototype shows an interactive 3D hologram using laser plasma technology.
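As a rough sketch of the recognition step in the marker-based pipeline just described, the snippet below uses OpenCV’s ArUco module (assuming OpenCV 4.7 or later with the contrib modules installed; the image file name is a placeholder) to detect a fiducial marker in a camera frame, on which the virtual overlay would then be anchored.

```python
import cv2

# Marker-based AR, step 1: recognise the fiducial marker in the camera image.
# Assumes OpenCV >= 4.7 (opencv-contrib-python); "frame.jpg" is a placeholder.
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())

frame = cv2.imread("frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

corners, ids, rejected = detector.detectMarkers(gray)
if ids is not None:
    # The corner coordinates give the marker's position and orientation in the
    # image; the AR content would be drawn over these coordinates.
    print("Detected marker IDs:", ids.ravel())
    cv2.aruco.drawDetectedMarkers(frame, corners, ids)
else:
    print("No marker found")
```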
3.3 Tracking sensors In order to change their perspective inside AR/VR and continuously update their position in the virtual world, users need tracking that follows their movement in the real world. Tracking is therefore acknowledged as one of the crucial parts of VR/AR systems. To convey the user’s perspective, these devices essentially interface with a system processing component. VR/AR systems can use sensors to track a user’s position, speed, and motion in any direction [20–22]. The following three ideas are related to tracking for VR configurations:
1. A rigid body has six degrees of freedom (DOF) for movement detection: forward/backward, left/right, up/down, yaw, pitch, and roll in 3D space (these six values are combined in the sketch below).
2. Orientation: The yaw, roll, and pitch of a 3D object are used to ascertain its orientation.
3. Coordinates: The locations of objects along the X, Y, and Z axes are used to describe their position.
These three ideas shape how tracking for HMDs is designed. Any tracking system includes a device that produces a signal that a sensor can detect. The entire VR/AR unit handles the processing of this signal, the transmission of information to the central processing unit, and the graphics processing. Different sensors may produce signals that are electromagnetic, optical, mechanical, or acoustic in nature, and different tracking systems use these particular signal types.
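To tie the three tracking ideas together, here is a small illustrative sketch (the Z-Y-X rotation order and metre units are assumed conventions, not a specific tracking system’s definition) that combines an orientation given as yaw, pitch, and roll with an X/Y/Z position into the single 4 × 4 pose that a VR/AR runtime would update every frame.

```python
import numpy as np

def pose_matrix(yaw, pitch, roll, x, y, z):
    """Build a 4x4 rigid-body pose from the six degrees of freedom.

    yaw, pitch, roll : the three rotational DOF, in radians
    x, y, z          : the three positional DOF, in metres
    """
    cy, sy = np.cos(yaw),   np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll),  np.sin(roll)

    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll

    pose = np.eye(4)
    pose[:3, :3] = rz @ ry @ rx          # orientation (3 DOF), Z-Y-X order
    pose[:3, 3] = [x, y, z]              # position (3 DOF)
    return pose

# Example: head turned 30 degrees, worn at 1.7 m height.
print(pose_matrix(np.radians(30), 0.0, 0.0, 0.0, 1.7, 0.0))
```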
4 AR and VR technology comparison based on user experience The use of technology in social interactions is widespread, and computer science has advanced quickly. The use of VR and AR technology is one of the primary application directions and is being adopted ever more widely. Rather than using the usual human-machine interface (HMI), “virtual reality” (VR) refers to the idea that viewers are immersed in a 3D world created by a computer [23, 24]. VR therefore integrates three-dimensional computer technologies, including artificial intelligence, sensing, and graphics. AR, which builds on VR and uses computer animation and display technologies, lets users add and locate virtual objects or data. The digital content is then precisely “placed” into the real world using sensing technology, which can properly merge virtual and real items through associated equipment. As a result, the actual world and the virtual world are combined, resulting in a thrilling experience. AR and VR technology is widely used in a variety of fields, including entertainment, construction, medicine, and education. The information service sector is growing swiftly, and the software systems for these two technologies are fairly diversified across many different industries. Information service providers must create a satisfying user experience in order to reach their target audience and foster brand loyalty. With a focus on the concept of user experience (UX), this section examines the application of AR and VR technologies in the area of real estate presentation, using a variety of quantitative techniques and evaluation indices.
4.1 The process comparison of UX In a typical real estate display, the primary components are the regional map, sand table, site plan, flat layout model, prototype house, and so on. Customers can only obtain the necessary information through the verbal explanations of salespeople, and the only way to gain first-hand experience is by visiting a model home. When VR and AR technology are used, the experience differs from how traditional real estate is presented (Figures 2 and 3).
Figure 2: The method and nature of user interaction in conventional real estate displays.
Figure 3: The method and nature of a VR-based user experience.
4.2 Choosing quantitative techniques and evaluation metrics In the real estate display based on AR technology, the user replaces the sand table, layout plan, housing configuration prototype, home decorating materials, and household items with virtual elements placed into the real environment, while the estate showcase based on VR uses a realistic reproduction of the scene as the carrier. In the property display sector, this section compares these two methodologies using a single assessment criterion and benchmark. Then, based on a mix of objective and subjective standards, we select three features [16] of well-known VR and AR real estate display products now offered on the market: technical merit, utilitarian purpose, and artistic sensibility, as represented in Table 1.
Table 1: Evaluation index.
Evaluation index – Index description
Technical functions:
– Widely used (WU): well received, simple to use, and with a high rate of information access.
– Unusual use (UU): some consumers like how easily they can get information.
– Personal common (PC): personality groups are welcomed and given the knowledge they require.
Utility value:
– Procedure (P): processes required, Pti.
– Time (T): processing time, Tti.
– Information receiving frequency: P/T.
– Situation of information receiving: Pti/Tti.
Aesthetic sensibility:
– Satisfaction (C): subjective score for satisfaction.
– Emotion (E): score for subjective emotional responses.
– Aesthetic sensibility (A): score for aesthetic response.
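Reading the utility-value rows of the table, the frequency indices are simple ratios of processes to time (P/T overall, Pti/Tti per item). A minimal sketch of that bookkeeping, with session data invented purely for illustration:

```python
# Hypothetical logged sessions: (processes completed, time taken in minutes).
sessions = {
    "traditional display": (6, 30.0),
    "AR display": (9, 18.0),
    "VR display": (8, 20.0),
}

for name, (p, t) in sessions.items():
    frequency = p / t                      # information receiving frequency P/T
    print(f"{name}: P={p}, T={t} min, P/T={frequency:.2f} per minute")
```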
5 Discussion According to recent studies, AR and VR technologies will be crucial in managing construction projects in the future. To increase productivity, the majority of construction management software now incorporates AR or VR. As previously indicated, construction management heavily utilizes the possibilities of AR and VR [17–19]. AR is used in the present construction process to schedule projects and monitor their progress; both scheduling and effective progress tracking benefit from AR and VR technology. AR and VR are tried-and-true technologies that let project participants communicate more effectively and quickly. The platform they provide is ideal for discussing and disseminating crucial information in real time without the need for a physical medium. AR and VR are also used to manage quality and defects in construction projects. While each technology plays a vital and valuable role, researchers favor AR over VR for the administration of QA/QC, so the potential of AR and VR for quality control should be considered in any construction project. For many years, AR and VR technology has been extensively used in worker training and safety management in the construction
industry. Numerous studies have shown that AR and VR are among the best training and safety management tools currently on the market. Another remarkable feature of AR and VR technology is the ability to walk through a project’s parametric model before construction actually starts, giving the user a sense of being in the real world. From the design phase to the maintenance phase, VR has significantly aided the visualization of building projects. Many construction companies already make effective use of VR and AR, and many more are preparing to do so for project visualization, training, safety, and progress tracking. Although AR and VR technologies appear to be crucial tools for the construction industry, they also have a number of drawbacks, and there are several limitations to their use in the building industry. Many academics believe that these challenges and limitations will be resolved quickly by succeeding generations of the technology. Assuming these technologies continue to advance in terms of safety, quality, visualization, labor management, and time management, it is highly likely that AR and VR will play even more prominent roles in construction in the years to come. To maximize the benefits of modern construction management and get the most out of these technologies, readers and the relevant authorities can apply AR and VR technology to their building projects.
6 Conclusion Construction is one of the largest industries in the world, and there have been significant advancements throughout its history. AR and VR are among the many technological advances driving a remarkable transformation in construction management. The major objective of this study is to investigate the potential applications of AR and VR in construction management and to examine how these technologies might help with some of the issues that have beset the sector over the past three decades. According to the survey, the construction industry is being significantly affected by these developments in AR and VR technology in a number of different ways. This review highlights a number of uses for VR and AR technologies and discusses the prospects and benefits of using them to address construction management challenges. Future readers may find it useful to understand the potential uses of AR and VR in construction management and to observe how projects have benefited from them.
References
[1] State of Blockchain Q1 2016: Blockchain Funding Overtakes Bitcoin. 2016. [Online]. http://www.coindesk.com/state-of-blockchain-q1-2016.
[2] Nakamoto, S. 2008. “Bitcoin: A Peer-to-Peer Electronic Cash System.” [Online]. https://bitcoin.org/bitcoin.pdf.
[3] Burdea Grigore, C., and P. Coiffet. 1994. Virtual Reality Technology. London: Wiley-Interscience.
[4] Hale, Kelly S., and Kay M. Stanney, eds. 2014. Handbook of Virtual Environments: Design, Implementation, and Applications. CRC Press.
[5] Schmalstieg, Dieter, and Tobias Hollerer. 2016. Augmented Reality: Principles and Practice. Addison-Wesley Professional.
[6] Jerald, Jason. 2015. The VR Book: Human-Centered Design for Virtual Reality. Morgan & Claypool.
[7] Blascovich, J., and J. Bailenson. 2011. Infinite Reality: The Hidden Blueprint of Our Virtual Lives. New York: HarperCollins Publishers.
[8] Mylrea, M., S. N. G. Gourisetti, R. Bishop, and M. Johnson. Apr. 2018. “Keyless Signature Blockchain Infrastructure: Facilitating NERC CIP Compliance and Responding to Evolving Cyber Threats and Vulnerabilities to Energy Infrastructure.” In IEEE/PES Transmission and Distribution Conference and Exposition (T&D), vol. 2018, pp. 1–9. IEEE Publications.
[9] Singh, M., and S. Kim. 2019. “Blockchain Technology for Decentralized Autonomous Organizations.” Advances in Computers, vol. 115. Elsevier. https://doi.org/10.1016/bs.adcom.2019.06.001.
[10] Wang, S., L. Ouyang, Y. Yuan, X. Ni, X. Han, and F. Y. Wang. 2019. “Blockchain-Enabled Smart Contracts: Architecture, Applications, and Future Trends.” IEEE Transactions on Systems, Man and Cybernetics: Systems 49(11): 2266–2277. https://doi.org/10.1109/TSMC.2019.2895123.
[11] Pucihar, K. C., and P. Coulton. 2013. “Exploring the Evolution of Mobile Augmented Reality for Future Entertainment Systems.” Computers in Entertainment (CIE) 11: 1.
[12] Paavilainen, J., H. Korhonen, K. Alha, J. Stenros, E. Koskinen, and F. Mayra. May 6–11, 2017. “The Pokémon GO Experience: A Location-Based Augmented Reality Mobile Game Goes Mainstream.” In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, US, pp. 2493–2498.
[13] Milgram, P., and F. Kishino. 1994. “A Taxonomy of Mixed Reality Visual Displays.” IEICE Transactions on Information and Systems 77: 1321–1329.
[14] Nincarean, D., M. B. Alia, N. D. A. Halim, and M. H. A. Rahman. 2013. “Mobile Augmented Reality: The Potential for Education.” Procedia: Social and Behavioral Sciences 103: 657–664.
[15] Milgram, P., and F. Kishino. 1994. “A Taxonomy of Mixed Reality Visual Displays.” IEICE Transactions on Information and Systems 77: 1321–1329.
[16] Zhibo, Yin, and Yang Ying. 2008. “Research Methods of Quantifying User Experience II” (in Chinese). The Paper Volume of the Fourth Harmonious Human-Computer Joint Academic Conference.
[17] Davila Delgado, Juan Manuel, P. Demian, and T. Beach. Aug. 2020. “A Research Agenda for Augmented and Virtual Reality in Architecture, Engineering and Construction.” Advanced Engineering Informatics 45: 101122. Elsevier BV. https://doi.org/10.1016/j.aei.2020.101122.
[18] Tai, Nan-Ching. Jul. 2022. “Applications of Augmented Reality and Virtual Reality on Computer-Assisted Teaching for Analytical Sketching of Architectural Scene and Construction.” Journal of Asian Architecture and Building Engineering. Informa UK Limited. https://doi.org/10.1080/13467581.2022.2097241.
[19] Ben Ghida, D. 2020. “Augmented Reality and Virtual Reality: A 360 Immersion into Western History of Architecture.” International Journal 8(9).
[20] Ashwinin, K., N. P. Preethi, and R. Savitha. 2020. “Tracking Methods in Augmented Reality – Explore the Usage of Marker-Based Tracking.” SSRN Electronic Journal. Elsevier BV. https://doi.org/10.2139/ssrn.3734851.
[21] Satria, Bagus, and Prihandoko Prihandoko. Jan. 2018. “Implementasi Metode Marker Based Tracking pada Aplikasi Bangun Ruang Berbasis Augmented Reality.” Sebatik 19(1): 1–5. STMIK Widya Cipta Dharma. https://doi.org/10.46984/sebatik.v191.88.
[22] Afrillia, Yesy, and Dasril. Apr. 2021. “Implementation of Augmented Reality to the Seven World Using Marker Based Tracking Method (MBT) Android Based.” Jurnal Teknovasi 8(01): 37–44. Politeknik LP3I Medan. https://doi.org/10.55445/jt.v8i01.26.
[23] Lacoche, Jérémy, et al. Jul. 2022. “Evaluating Usability and User Experience of AR Applications in VR Simulation.” Frontiers in Virtual Reality 3. Frontiers Media SA. https://doi.org/10.3389/frvir.2022.881318.
[24] Skarbez, Richard, M. Smith, A. Sadagic, and M. C. Whitton. Aug. 2022. “Editorial: Presence and Beyond: Evaluating User Experience in AR/MR/VR.” Frontiers in Virtual Reality 3. Frontiers Media SA. https://doi.org/10.3389/frvir.2022.983694.
[25] Cipresso, Pietro, Irene Alice Chicchi Giglioli, Mariano Alcañiz Raya, and Giuseppe Riva. Nov. 2018. “The Past, Present, and Future of Virtual and Augmented Reality Research: A Network and Cluster Analysis of Literature.” Frontiers in Psychology 9: 2086. Frontiers Media SA. https://doi.org/10.3389/fpsyg.2018.02086.
[26] Irfan, Ahmer. Apr. 2020. “Virtual and Augmented Reality in Surgical Specialties. Past, Present and Future.” Journal of the College of Physicians & Surgeons – Pakistan 30(4): 455–456. College of Physicians and Surgeons Pakistan. https://doi.org/10.29271/jcpsp.2020.04.455.
[27] Xiong, Jianghao, et al. Oct. 2021. “Augmented Reality and Virtual Reality Displays: Emerging Technologies and Future Perspectives.” Light: Science and Applications 10(1). Springer Science and Business Media LLC. https://doi.org/10.1038/s41377-021-00658-8.
[28] Yin, Kun, E. Hsiang, J. Zou, Y. Li, Z. Yang, Q. Yang, P. Lai, C. Lin, and S. Wu. May 2022. “Advanced Liquid Crystal Devices for Augmented Reality and Virtual Reality Displays: Principles and Applications.” Light: Science and Applications 11(1). Springer Science and Business Media LLC. https://doi.org/10.1038/s41377-022-00851-3.
[29] Xiong, Jianghao, and Shin-Tson Wu. Jun. 2021. “Planar Liquid Crystal Polarization Optics for Augmented Reality and Virtual Reality: From Fundamentals to Applications.” eLight 1(1). https://doi.org/10.1186/s43593-021-00003-x.
Shreejita Mukherjee, Shubhasri Roy, Sanchita Ghosh, Sandip Mandal
3 A comparative study of Li-Fi over Wi-Fi and the application of Li-Fi in the field of augmented reality and virtual reality
Abstract: Augmented reality (AR) and virtual reality (VR) are new terminologies in the technical field, and AR and VR technology has made a social impact on our lives. Li-Fi (light fidelity) is likewise a new terminology in the wireless communication field. We use Wi-Fi (wireless fidelity) to communicate with each other through radio signals; in Li-Fi, light is used instead to send and transfer data. In this chapter we discuss the impact Li-Fi may have over Wi-Fi in the coming days and also how Li-Fi can use the features and applications of AR/VR to make communication faster and easier. Li-Fi has many benefits compared to Wi-Fi, but it also has certain disadvantages. To achieve a high data rate and meet smart city requirements, Li-Fi is a strong solution. Li-Fi can accommodate the huge number of users needed to connect everything to the so-called Internet of things (IoT). Li-Fi consumes less energy than Wi-Fi and has a high-speed transmission rate. It is a bidirectional wireless system that transmits data via LED or infrared light. If Li-Fi can be used in the field of AR/VR, this will be a revolution in science and technology. We will see how this low-cost, broadcast, less harmful technology could take over from Wi-Fi in the near future.
Keywords: Augmented reality, virtual reality, light fidelity, wireless fidelity, bidirectional, broadcast
Shreejita Mukherjee, Institute of Engineering & Management, West Bengal, India, e-mail: [email protected]
Shubhasri Roy, Institute of Engineering and Management, West Bengal, India, e-mail: [email protected]
Sanchita Ghosh, Institute of Engineering and Management, West Bengal, India, e-mail: [email protected]
Sandip Mandal, University of Engineering and Management, Jaipur, India, e-mail: [email protected]
https://doi.org/10.1515/9783110981445-003
1 Introduction Nowadays, wherever we go, we always check whether the place has a wireless fidelity (Wi-Fi) connection or not. It has become almost as important for survival as our food and clothes. Whether we travel by train, check in at a hotel, go to school or college, or work in an organization, we tend to judge the place by its Wi-Fi. If the Wi-Fi connection is good, then we say this hotel is
very good, or that this place has a very good Wi-Fi connection. In a single day, each of us transfers huge amounts of data: text, files, images, audio, video, and more. Without Wi-Fi, we cannot imagine getting through the day; if for any reason the Wi-Fi connection goes down, life becomes miserable. From sending files to the printer to turning lights and fans on and off wirelessly, Wi-Fi comes into the picture every time [1]. The problem we generally face with Wi-Fi is a drop in bandwidth. If someone’s personal Wi-Fi is on, other people may try to hack it for their own use, and when many people use the same network at the same time, the bandwidth or data rate automatically decreases. Another concern is that Wi-Fi uses radio frequencies for data transmission, which may affect our health slowly over time. Here comes light fidelity (Li-Fi), which can overcome these problems of Wi-Fi. In the following section, we discuss Li-Fi.
1.1 What is Li-Fi? Li-Fi is a combination of light and Wi-Fi: it acts like Wi-Fi but uses light as the data transmission medium instead of radio frequency (RF) [2]. Li-Fi was first introduced by Professor Harald Haas in 2011. It can give a data rate of more than 100 Gbps and has reached speeds of 224 Gbps [3, 4]. It is a wireless communication technology that uses the infrared and visible light spectrum for high-speed data transmission. Li-Fi also helps resolve security issues: if a user wants the connection to be usable only inside his own room, with no one else able to access it, Li-Fi makes this possible. Both the security and bandwidth issues are thus addressed.
1.2 What is VLC? Visible light communication (VLC) is a data communication method that is a subset of optical wireless communication [5]. VLC has grown largely due to its applications in the field of communication [6]. Li-Fi uses VLC as the communication medium. Because LEDs are efficient, durable, have a long life span, and allow fast data transmission, they are used as the source of communication [7]. In this technology, LED lights are used together with sensors that capture the signals. VLC uses visible light between 400 and 800 THz, with wavelengths of roughly 780–375 nm [8]. VLC is practical as a communication medium because light-producing devices are used everywhere (such as indoor lamps, street lights [9], car lights, and commercial displays) [10]. Li-Fi modulates the light intensity faster than the human eye can follow in order to send and receive data. Gamma rays [11], X-rays, and ultraviolet rays are all dangerous for the human body [12, 13]. Infrared [11], which has a wavelength of 700 nm–1 mm, consumes low power and is safe for the eye, which is why infrared is used as a signal source. Radio waves with
a wavelength of 1 m or more can penetrate through walls, so they raise a question of security in data transmission. Infrared, however, cannot penetrate through walls, so the data transmission is highly secure. VLC could replace RF in the near future [14, 15]. Figure 1 shows where visible light sits in the electromagnetic spectrum [16].
Figure 1: Visible light is only a small portion of the electromagnetic spectrum.
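The frequency and wavelength figures quoted for VLC are linked by λ = c/f; the short arithmetic sketch below checks the approximate endpoints of the visible band (values rounded, for illustration only):

```python
# Relate the VLC frequency band to wavelength via lambda = c / f.
C = 3.0e8  # speed of light in m/s

for f_thz in (400, 800):                 # approximate ends of the visible band
    wavelength_nm = C / (f_thz * 1e12) * 1e9
    print(f"{f_thz} THz  ->  {wavelength_nm:.0f} nm")
# 400 THz -> 750 nm (red end), 800 THz -> 375 nm (violet end)
```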
The block diagram of visible light communication is shown in Figure 2 [17].
Figure 2: Block diagram of VLC system.
The block diagram in Figure 2 shows how the VLC system works. VLC uses two parts: a transmitter and a receiver. The transmitter in VLC uses one of two LED types: a single-color LED or a white LED [17].
The input message, in binary form, is sent to the LED driver, where high-illumination LEDs transmit the modulated light signal to the photodiodes [18]; at the diode, the signal is amplified by the amplifier, the original message is decoded, and the user receives it as the data output.
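A minimal sketch of the same pipeline in code (pure illustration using simple on-off keying, one light sample per bit, and a fixed threshold; it does not follow any specific VLC standard): the message is converted to bits, “transmitted” as LED on/off intensity levels, and recovered on the photodiode side by thresholding and decoding.

```python
def encode(message):
    """Text -> bits: each LED sample will be 1 (light on) or 0 (light off)."""
    return [int(b) for ch in message.encode("utf-8") for b in format(ch, "08b")]

def transmit(bits, on_level=1.0, off_level=0.05):
    """LED driver: map each bit to a light intensity sample."""
    return [on_level if b else off_level for b in bits]

def receive(samples, threshold=0.5):
    """Photodiode + comparator: threshold intensities back to bits, then decode."""
    bits = [1 if s > threshold else 0 for s in samples]
    chars = [int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)]
    return bytes(chars).decode("utf-8")

light_samples = transmit(encode("Li-Fi"))
print(receive(light_samples))   # Li-Fi
```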
2 Working principle of Li-Fi Li-Fi architecture needs an LED light as the transmission source and a good silicon photodiode as the receiver. It requires high-intensity white LED lights. The same configuration used for VLC is used in Li-Fi. In an area, LED lights act as transmitters. The data is passed through LED-enabled circuits and captured by the photodiode [28], and the modulated signal is demodulated to recover the actual message sent from the sender to the receiver [19]. Figure 3 shows the working structure of Li-Fi.
Figure 3: The block diagram of Li-Fi architecture.
3 Comparison between Wi-Fi and Li-Fi We are familiar with Wi-Fi, and we use it every day; we cannot imagine a single day without the internet. If the internet goes down at work for a day, it is like a power cut: everything stops. Sending and receiving documents and files stops, and we are unable to do anything for the
whole day. Wi-Fi is one of the basic needs of our daily life. Li-Fi offers the same capability as Wi-Fi: it has been introduced as a new technology, and we can say it is the future of wireless communication. Wi-Fi has some challenges to deal with, and Li-Fi can be used to overcome them [20–23]. Table 1 summarizes the differences between Wi-Fi and Li-Fi.
Table 1: Comparison between Wi-Fi and Li-Fi.
1. Definition – Wi-Fi: stands for wireless fidelity; a technology used to connect electronic devices and transmit data over the internet. Li-Fi: stands for light fidelity; a bidirectional wireless system used to connect devices for wireless data transmission.
2. First introduced by and the year – Wi-Fi: many researchers were involved in its invention, though Victor “Vic” Hayes is called the “Father of Wi-Fi” []. Li-Fi: Professor Harald Haas, in 2011 [].
3. Source of the signal – Wi-Fi: radio waves. Li-Fi: visible light spectrum and infrared radiation.
4. Connector type – Wi-Fi: N-type. Li-Fi: visible light communication.
5. Components – Wi-Fi: routers, modems, and access points. Li-Fi: LED bulb, sensors, chip, and photodiode.
6. Working process – Wi-Fi: uses radio waves to transmit data between the user device and the router over radio frequencies. Li-Fi: uses LED light as the data transmission source, with sensors, chips, and photodiodes to receive the light signals.
7. Coverage – Wi-Fi: reaches considerably farther, both indoors and outdoors []. Li-Fi: a range of approximately 10 m.
8. Bandwidth – Wi-Fi: operates in the 2.4 GHz and 5 GHz bands, each supporting hundreds to a few thousand Mbps. Li-Fi: uses the far wider visible-light band.
9. Speed – Wi-Fi: on the order of a few Gbps. Li-Fi: up to 224 Gbps.
10. IEEE standards – Wi-Fi: the IEEE 802.11 family (e.g., 802.11ac). Li-Fi: IEEE 802.15.7™ [].
11. Manufacturing cost – Wi-Fi: more. Li-Fi: less.
12. Installation cost – Wi-Fi: less. Li-Fi: more.
13. Network topology – Wi-Fi: point-to-multipoint. Li-Fi: point-to-point.
14. Interference – Wi-Fi: the connection can be interrupted by several things, such as microwaves, cordless phones, and other Wi-Fi networks. Li-Fi: bounded by area; light cannot go through walls, so there is no such interference in the network.
15. Security – Wi-Fi: whenever the connection is on, because it covers a wide range, other people can hack the Wi-Fi password and easily use the internet connection; less secure. Li-Fi: since there is no interference, other devices cannot easily get access to the connection unless they are in the same covered area; more secure.
16. Efficiency of the technology – Wi-Fi: low. Li-Fi: high.
17. Location – Wi-Fi: wireless LANs based on the IEEE 802.11 standards, plus requirements such as access points, a modem, and a router, where available. Li-Fi: can be used anywhere LED lights are available, at home, in offices, schools, etc.
18. Availability – Wi-Fi: needs a lot of infrastructure, so it is available only where the components are in place. Li-Fi: LED light is a common thing, so it can be available everywhere.
19. Effect on human beings – Wi-Fi: uses radio waves for data transmission, so it affects our body more. Li-Fi: uses light as the signal source, so it is less harmful for human beings.
20. Pollution – Wi-Fi: nearly everyone now has a smartphone capable of a Wi-Fi connection, and all Wi-Fi connections use the radio spectrum for transmission, so some pollution is caused by Wi-Fi; the effect is greater. Li-Fi: a constant LED light source is required for data transmission, which causes some light pollution in the environment, but the effect is smaller.
4 Advantages of Li-Fi
(1) The main advantage of Li-Fi is its high data transmission rate. LED lights are able to transfer data at 224 Gbps, roughly 100 times faster than the speeds achieved by Wi-Fi. The buffering problems we face while downloading data over Wi-Fi will be resolved.
(2) Li-Fi requires a constant source of light, but light cannot penetrate walls, so it can be used within a bounded area and the data stays safe. For example, when we turn on a personal Wi-Fi connection, many people try to connect their devices to it, and if they steal the password they can use it; that is a security problem. Li-Fi overcomes it: if a person is not within reach of the LED light source, he cannot get the connection, so there are no hacking or security issues with this new technology.
(3) In places where radio waves are banned, Li-Fi can easily be used. Light is not as harmful as radio waves, so on medical grounds, in hospitals and around medical devices, we can use Li-Fi instead of Wi-Fi. Facilities such as oil platforms and petrochemical plants can also use Li-Fi.
(4) It can also be used in airlines; it will not interfere with other signals used for data transmission.
(5) LED light sources are commonly available. The main component of this technology is cheap, and the source is available everywhere [29].
(6) Wi-Fi bandwidth sometimes fluctuates because of network congestion or poor connectivity. Li-Fi solves this: since it is used within a bounded region, not many people will try to get the connection at the same time, and with a limited number of users in a bounded area the bandwidth of the internet connection stays high.
(7) As light is a common source, like Bluetooth and Wi-Fi, this will be available in all locations [30].
(8) A Wi-Fi setup requires optic fiber, a modem, a router, and so on. The setup is not easy; it requires good networking as well as hardware knowledge. Compared to that, Li-Fi is easy to install.
(9) Li-Fi consumes low power; low power consumption makes it economically acceptable to users. It is used in IoT applications too [31].
So, using Li-Fi we get faster data transmission, high bandwidth, low cost, and less harm to our health. Li-Fi combines all of these good qualities.
5 Disadvantages of Li-Fi Though Li-Fi has many advantages over Wi-Fi and visible light communication (VLC), it still has some disadvantages:
(1) The smartphones, tablets, laptops, and desktops we use now are built to connect to Wi-Fi, so the hardware of all those electronic devices must change to be compatible with Li-Fi connectivity. Upgrading every device’s hardware will take time.
(2) The cost of LED lights is low, so the manufacturing cost [32] of Li-Fi is not very high, but its installation is expensive [33].
(3) Communication may be interrupted by other light sources. Sunlight outdoors is the biggest challenge for Li-Fi technology and may affect the internet connection [34].
(4) Light cannot go through walls, and Li-Fi uses visible light, so it is difficult to cover a wide network area. The signal range is limited by physical barriers [34].
(5) The light has to be on the whole time we use the internet connection. Indoors, sunlight also enters and may affect the connection, and there is a chance of indoor light pollution, which may affect our health [33].
(6) There must be a constant light source for this internet connection. Li-Fi technology works with LED light and infrared light, so a power cut will interrupt the connection; if Li-Fi is used for communication, a strong, reliable power supply is needed.
(7) Li-Fi does not work in the dark. If the LED light is weak, the internet connection may be interrupted, so a strong LED light is needed for data communication.
(8) Installation of Li-Fi is not easy, so if any LED is damaged it is a big problem to rebuild the setup; the process is not simple.
6 Application of Li-Fi The number of applications is huge [28]; some are described below.
– Live streaming: The internet is used for every purpose, from office work to online business. Watching live streams is a new trend, and people prefer watching live to watching recorded videos. Those who run online businesses need a strong internet connection, and so do those who watch. Li-Fi can win everyone over here, as it has high bandwidth and high-speed data transmission, so live streaming can easily be watched using Li-Fi.
– Medical field: Li-Fi can be used in hospitals [35] in different ways. It uses LED lights for data transmission, so it is less harmful to the human body and can be used in the waiting room, the patient’s cabin, the corridor, or any place in the hospital. We struggle to get Wi-Fi connections in hospitals and other restricted areas, but LED light is readily available, so Li-Fi can easily be used there. Li-Fi can also be used for real-time monitoring of a patient’s movement or vital signs, and in medical devices.
– Pharmaceutical industry: Li-Fi could be used by pharmacists for receiving and screening electronically approved prescriptions directly in the unit. Because Li-Fi has high bandwidth and fast speeds, it can support automated health monitoring: digital jewelry such as Li-Fi-enabled wristwatches, earrings, and bracelets could capture a person’s health readings and stay connected to the hospital, so that anything abnormal is reported directly.
– Workplace or organization: Li-Fi uses a wireless communication system. We cannot imagine a single day in our workplaces or offices without an internet connection, whether we are sending or receiving emails with important documents attached or running a recruitment process: in the first selection round a large number of candidates appear for the placement interviews, so the recruiters or the organizing venue need a strong, reliable, and fast internet connection. Li-Fi is highly recommendable for this case.
– Education: In schools, colleges, and research institutions we need less harmful, high-speed, and secure technology for study. If classrooms are Li-Fi-enabled, students can easily do research projects and become more technically advanced. Colleges and research facilities need high-speed connections for the many projects, assignments, and study-related events, and to follow what other researchers are doing, how they think, and which new technologies are invented and updated every day. All of this needs fast and safe internet connections.
– Business: Li-Fi can be used in business, whether online or offline. Shoppers going to stores can easily check which products are available and which carry what kind of discount, and collect their digital coupons. Sellers, in turn, can run digital promotions using Li-Fi, check stock availability, and more.
– Airplanes: Passengers prefer a flight that has an internet connection. Li-Fi can be used in flights so that passengers get fast and secure in-flight connectivity. Li-Fi
does not interfere with communication and does not use radio waves for data transmission, so it is safe to use in airplanes.
– Disaster management: Li-Fi can be used for communication when a disaster such as an earthquake or cyclone happens. People can receive notifications about the coming disaster, and rescue teams can communicate with the people in distress.
– Industry 4.0: Industry 4.0, also known as the “Fourth Industrial Revolution,” is an area where Li-Fi may have a huge impact. Automation is the aim everywhere: industry is now strongly focused on artificial intelligence (AI) and applications of the Internet of things (IoT), and wants to connect traditional equipment with AI and IoT so that everything becomes automated. The features where Li-Fi outperforms Wi-Fi can serve this purpose well.
– Augmented reality (AR): AR is a new technological trend. Using AR-enabled devices, we can get a realistic view of places where we are not physically present. AR has many applications in education and industry. Li-Fi can be used in such settings because no connectors or cables are needed, only LED lights; within a bounded region, Li-Fi provides a high-speed data rate, and students can grasp complex concepts more easily.
– Oil plant: In petrochemical facilities where RF is banned, Li-Fi can be used. Li-Fi is a safe and secure technology: no radio-wave spectrum is used, so it will not cause any unwanted interaction in the petrochemical industry. Oil plants are usually situated far from where people live, and installing a Wi-Fi connection there is difficult: first, radio waves are not safe in such places; second, the signal range is limited. If LED light alone is used for data transmission, it will not affect the chemical plant, and people will benefit from this new technology.
– Military: Li-Fi can be used in the military field. Soldiers are deployed in places where they cannot get an internet connection through radio waves because signals are hard to receive there, but with Li-Fi, military personnel will benefit. Li-Fi helps in safe and secure data transmission; no one can easily hack the network.
– Navy: Sailors in the Navy face the same problem of connectivity. Sailors also rely on signals for communication, and a Wi-Fi connection at sea may be hampered by interference. Li-Fi, however, will not be
involved in any interference, so by using LED lights on the ship, sailors can easily access the internet connection. Li-Fi does not require any complex infrastructure, so even in a small space they get a strong network source.
– Underwater applications: Light cannot penetrate walls, but it can travel through water. Remotely operated underwater vehicles currently use wired connections, which require cables. If Li-Fi is used underwater, there is no problem with cables: light can pass through the water, and Li-Fi can cover a wide area for the network connection.
– Cryptocurrency: Cryptocurrency is a medium of exchange that is safe and secure for transactions. If the currency system is Li-Fi-enabled, it will help in the secure transmission of data.
7 Li-Fi in IoT The IoT is a network of physical objects connected to the internet that helps transfer data from one device to another, or from sender to receiver. IoT is now a familiar term. It is connected to the cloud, so the data remains safe and the user need not worry about running out of storage; the cloud takes care of the data. It involves sensors, chips, and hardware devices, and simulation is used to check whether a circuit gives the correct output. If IoT can use Li-Fi as its networking medium, data transmission will become faster and more secure. In [36, 37], the authors discuss how beneficial IoT-enabled Li-Fi technology can be. The benefits of using Li-Fi in IoT are as follows:
1. As light sources (especially LED bulbs) are available everywhere [38], they support constant data transmission.
2. Light has high bandwidth, so there is better capacity, and the equipment is already available.
3. It can be implemented in all locations, except in dark places.
Researchers are focusing on smart, real-time, reliable data transmission; in that regard, Li-Fi is a viable solution in the fields of IoT and big data. Figure 4 shows the concept of integrating Li-Fi and IoT. The components used in this concept are:
a. Cloud station: The cloud acts as the data storage in this system. It stores all the information transmitted from the sender to the receiver, and all updates to that information are saved in the cloud, so there is no need to update a file multiple times.
Figure 4: The concept of integrating Li-Fi and IoT.
b. Remote users: Users can communicate directly through the internet or receive information about the operation.
c. Internet: Users are connected through Li-Fi technology to transmit data to other users or other devices.
d. Li-Fi technology: In the block diagram, Wi-Fi is replaced by Li-Fi. The LED lights receive the data from the users or senders and pass it on to the photodiodes, where the data is decoded to recover the original message sent by the sender [39].
In this way, IoT and Li-Fi can work together and become the solution to the problems we face when using Wi-Fi.
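As a rough sketch of the remote-user and cloud side (assuming the Li-Fi link simply carries ordinary IP traffic and is therefore transparent to the application; the endpoint URL and payload fields are hypothetical placeholders), a Li-Fi-connected sensor node could push a reading to the cloud station like this:

```python
import json
import urllib.request

# The Li-Fi link is assumed to act only as the physical layer, so the
# application just posts data over IP. The URL below is a placeholder.
ENDPOINT = "https://cloud.example.com/lifi/room1/sensor"

reading = {"temperature_c": 22.5, "lux": 480}
request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(reading).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request, timeout=5) as response:
    print("cloud responded with HTTP", response.status)   # data stored in the cloud
```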
8 Li-Fi and AR/VR AR and virtual reality (VR) are emerging areas in technology, entertainment, education, business, and many other fields. AR is essentially a digital layer superimposed on the physical world so that the virtual content feels like part of the real world. VR creates a world where people want to be: without physically being in that place, people can feel it, explore it, and enjoy it. Both fields can be used in marketing, the educational sector, and the entertainment industry.
Li-Fi likewise has an impact on technology and data communication, and it is used in many of the same application areas as AR/VR. Li-Fi can serve those AR/VR settings where only LEDs are needed to transmit the data. Some of the applications of AR/VR are mentioned below:
– Healthcare industry
– Military training
– Education
– Retail and E-commerce
– Travel and tourism
– Manufacturing industry
– Gaming and entertainment industry
Several of these application areas overlap with the applications of Li-Fi. So, wherever data can be passed through LED light bulbs and Li-Fi technology can be used, AR/VR concepts can surely be used as well. AR/VR and Li-Fi combined can give the world a safe, secure, and pleasant environment for sending messages as text, images, audio, or video files.
9 Conclusions and future scope Li-Fi together with AR/VR is the future of communication systems. In the near future, when all electronic devices are Li-Fi enabled, data transmission will be safe and secure. In this chapter, we have discussed the advantages Li-Fi holds over Wi-Fi, how VLC and Li-Fi work, and how they are related. We also discussed the applications of Li-Fi: it can be used in many places, from indoors to outdoors, from underground to airplanes, from restricted areas to street lights; it can be installed anywhere [40]. Although it will take time to implement the software and architectural changes needed for our laptops, smartphones, and tablets to use Li-Fi, once it overcomes these barriers there will be a huge revolution in industry as well as in education. If Wi-Fi and Li-Fi are used together for data transmission, the communication system will reach the next level [41]. Li-Fi has many more strengths than Wi-Fi, so it will become one of our daily needs. Li-Fi can also serve as an application field for AR/VR; these technological fields will give the world fast, safe data communication everywhere. Researchers working in wireless technology are trying to invent new technologies that are helpful and less harmful to people, and Li-Fi is one of them. The disadvantages mentioned in this chapter will be overcome soon, and then we will see Li-Fi everywhere as our communication medium. Li-Fi has unique characteristics that give the user a fast, high-speed,
and reliable medium of data transmission. This chapter has also noted that AR/VR can be used as an application field of Li-Fi technology, but not much work has been done there yet. This gives researchers scope to investigate whether the mixture of the real world and the virtual world can be delivered through Li-Fi technology or not.
References
[1] "GeeksforGeeks." Kartik Kejariwal, accessed October 25, 2022. https://www.geeksforgeeks.org/li-fi/.
[2] Ramadhani, E., and G. P. Mahardika. 2017. The Technology of LiFi: A Brief Introduction. Yogyakarta, Indonesia: IOP Publishing.
[3] Hadi, M. A. 2016. "Wireless Communication Tends to Smart Technology Li-Fi and its Comparison with Wi-Fi." American Journal of Engineering Research (AJER) 5(5): 40–47. e-ISSN: 2320-0847, p-ISSN: 2320-0936.
[4] Mundy, Jon. 03 Dec. 2015. "What is Li-Fi and How Could it Make Your Internet 100 Times Faster." Trusted Reviews.
[5] Xianbo, Li, Hussain Babar, Li Wang, Jiang Junmin, and Yue C. Patrick. 2018. "Design of a 2.2-mW 24-Mb/s CMOS VLC Receiver SoC with Ambient Light Rejection and Post-Equalization for Li-Fi Applications." Journal of Lightwave Technology 36(12): 2366–2375.
[6] Siddique, Imran, Awan Muhammad Zubair, Khan Muhammad Yousaf, and Mazhar Azhar. Jan. 2019. "Li-Fi the Next Generation of Wireless Communication through Visible Light Communication (VLC) Technology." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 5(1). ISSN: 2456-3307, UGC Journal No: 64718.
[7] Cevik, Taner, and Yilmaz Serdar. November 2015. "An Overview of Visible Light Communication Systems." International Journal of Computer Networks & Communications (IJCNC) 7(6).
[8] Tavakkolnia, Iman, Chen Cheng, Bian Rui, and Haas Harald. 2018. "Energy-Efficient Adaptive MIMO-VLC Technique for Indoor LiFi Applications." In 25th International Conference on Telecommunication. IEEE, accessed 30 October 2022. doi: 10.1109/ict.2018.8464933.
[9] Pawar, A., Anande A., Badhiye A., and Khatua I. October 2015. "Li-Fi: Data Transmission through Illumination." International Journal of Scientific & Engineering Research 6(10).
[10] Riurean, Simona, Antipova Tatiana, Rocha Álvaro, Leba Monica, and Ionica Andreea. 2019. "VLC, OCC, IR and LiFi Reliable Optical Wireless Technologies to be Embedded in Medical Facilities and Medical Devices." Journal of Medical Systems 43: 308, accessed 30 October 2022. doi: 10.1007/s10916-019-1434-y.
[11] Ekta, and Kaur Ranjeet. April 2014. "Light Fidelity (LI-FI) – A Comprehensive Study." International Journal of Computer Science and Mobile Computing 3(4).
[12] Sowbhagya, M. P., Krishna P. Vikas, Darshan S., and Nikhil A. R. May 2016. "Evolution of Gi-Fi and Li-Fi in Wireless Networks." International Journal of Computer Sciences and Engineering 4(Special Issue 3). ISSN: 2347-2693.
[13] Shetty, Ashmita. Sept. 2016. "A Comparative Study and Analysis on Li-Fi and Wi-Fi." International Journal of Computer Applications (0975-8887) 150(6).
[14] Haas, H., L. Yin, Y. Wang, and C. Chen. 2016. "What is LiFi?" Journal of Lightwave Technology 34(6): 1533–1544.
[15] Leba, Monica, Riurean Simona, and Andreea Ionica. 2017. "LiFi – The Path to a New Way of Communication." In 12th Iberian Conference on Information Systems and Technologies (CISTI). doi: 10.23919/cisti.2017.7975997.
[16] Haas, Harald. 2018. "LiFi is a Paradigm-Shifting 5G Technology." Reviews in Physics 3: 26–31. Elsevier, accessed 31 October 2022. https://doi.org/10.1016/j.revip.2017.10.001.
[17] Kumar, S., and P. Singh. 2019. "A Comprehensive Survey of Visible Light Communication: Potential and Challenges." Wireless Personal Communications 109: 1357–1375. Springer, accessed 20 October 2022. https://doi.org/10.1007/s11277-019-06616-3.
[18] Latif, Ullah K. May 2017. "Visible Light Communication: Applications, Architecture, Standardization and Research Challenges." Digital Communications and Networks 3(2): 78–88. https://doi.org/10.1016/j.dcan.2016.07.004.
[19] Albraheem, Lamya I., H. Alhudaithy Lamia, A. Aljaser Afnan, R. Aldhafian Muneerah, and M. Bahliwah Ghada. 19 July 2018. "Toward Designing a Li-Fi-Based Hierarchical IoT Architecture." IEEE Access 6: 40811–40825. doi: 10.1109/ACCESS.2018.2857627.
[20] Mundy, Jon. 03 Dec. 2015. "What is Li-Fi and How Could it Make Your Internet 100 Times Faster." Trusted Reviews. http://www.trustedreviews.com/opinions/what-is-lifi.
[21] Panda, S., M. Soyaib, and A. Jeyasekar. Sept. 2015. "Li-Fi Technology – Next Gen Data Transmission through Visible Light Communication." In National Conference on Emerging Trends in Computing, Communication & Control Engineering.
[22] Rani, J., P. Chauhan, and R. Tripathi. 2012. "Li-Fi (Light Fidelity) – The Future Technology in Wireless Communication." International Journal of Applied Engineering Research 7(11). ISSN 0973-4562.
[23] Saini, S., and Y. Sharma. Feb. 2016. "Li-Fi the Most Recent Innovation in Wireless Communication." International Journal of Advanced Research in Computer Science and Software Engineering 6(2): 347–351.
[24] "Vic Hayes." Engineering and Technology History Wiki, 1 Mar. 2016.
[25] Wikipedia. https://en.wikipedia.org/wiki/Wi-Fi, last edited on 4 April 2023, at 15:01 (UTC).
[26] Wikipedia. https://en.wikipedia.org/wiki/Li-Fi, last edited on 6 February 2023, at 21:51 (UTC).
[27] Mitchell, Bradley. Lifewire Tech for Humans. https://www.lifewire.com/range-of-typical-wifi-network-816564, November 5, 2020.
[28] Livinus, Chukwuemeka. 2019. Li-Fi Tech News. https://www.lifitn.com/blog/2019/6/6/top-li-fiapplications-updated-list.
[29] Tutorialspoint. https://www.tutorialspoint.com/what-is-lifi.
[30] Bushra, Nousheen, K. Jajee Mayuri, and Nandyal Suvarna. Aug. 2020. "A Novel Navigation System using Light Fidelity (LiFi) Technology." International Journal for Research in Applied Science & Engineering Technology (IJRASET) 8(VIII). ISSN: 2321-9653. https://doi.org/10.22214/ijraset.2020.30900.
[31] Dimitrov, S., and H. Haas. 03–06 Sep. 2012. "Optimum Signal Shaping in OFDM-based Optical Wireless Communication Systems." In IEEE Vehicular Technology Conference (VTC Fall), 2012. IEEE, Quebec City, QC, Canada. doi: 10.1109/VTCFall.2012.6399084.
[32] Chang, M. H., J. Y. Wu, W. C. Hsieh, S. Y. Lin, Y. W. Liang, and H. Wei. 2010. "High Efficiency Power Management System for Solar Energy Harvesting Applications." In IEEE Asia Pacific Conference on Circuits and Systems, Kuala Lumpur, Malaysia, pp. 879–882. doi: 10.1109/APCCAS.2010.5774960.
[33] Subha, T. D., T. D. Subash, Elezabeth Rani N., and P. Janani. 2020. "Li-Fi: A Revolution in Wireless Networking." ScienceDirect 24(Part 4): 2403–2413. Elsevier, International Conference on Advances in Material Science & Nanotechnology, ICMN-2K19. https://doi.org/10.1016/j.matpr.2020.03.770.
[34] Techopedia. https://www.techopedia.com/7/31772/technology-trends/what-are-the-advantages-anddisadvantages-of-li-fi-technology.
[35] Tsonev, Dobroslav, Chun Hyunchae, Rajbhandari Sujan, Jonathan J. D. McKendry, Videv Stefan, Gu Erdan, Haji Mohsin, Watson Scott, E. Kelly Anthony, Faulkner Grahame, D. Dawson Martin, Haas Harald, and O'Brien Dominic. Apr. 2014. "A 3-Gb/s Single-LED OFDM-based Wireless VLC Link Using a Gallium Nitride μLED." IEEE Photonics Technology Letters 26(7): 637–640. doi: 10.1109/LPT.2013.2297621.
[36] Pottoo, Sahil Nazir, Wani Tahir Mohammad, Dar Muneer Ahmad, and Mir Sameer Ahmad. 2018. "IoT Enabled by Li-Fi Technology." In National Conference on Recent Advances in Computer Science and IT (NCRACIT), International Journal of Scientific Research in Computer Science, Engineering and Information Technology 4(1). ISSN: 2456-3307.
[37] Chatterjee, Shubham. June 2015. "Scope and Challenges in Light Fidelity (Li-Fi) Technology in Wireless Data Communication." International Journal of Innovative Research in Advanced Engineering 2(6).
[38] Sathiyanarayanan, Mithileysh, and Jahagirdar Nandakishor. 2017. "Challenges and Opportunities of Integrating Internet of Things (IoT) and Light Fidelity (LiFi)." In 3rd International Conference on Applied and Theoretical Computing and Communication Technology (iCATccT). doi: 10.1109/icatcct.2017.8389121.
[39] Mutasim, Ibrahim. Dec. 2016. "Internet of Things & Li-Fi: Smart Things Under the Light." International Journal of Science and Research 5(12).
[40] Jain, Roma, Kale Pallavi, Kandekar Vidya, and Kadam Pratiksha. 2016. "Wireless Data Communication Using Li-Fi Technology." IJESC. doi: 10.4010/2016.846. ISSN 2321 3361.
[41] Arunmozhi, Selvi S., T. Ananth Kumar, R. S. Rajesh, M. Ajisha, and Angelina Thanga. 2019. "An Efficient Communication Scheme for Wi-Li-Fi Network Framework." In 2019 Third International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Palladam, India, 12–14 December. doi: 10.1109/I-SMAC47947.2019.9032650.
Ajay Sudhir Bale, Salna Joy, Baby Chithra R., Rithish Revan S., Vinay N.
4 Augmented reality in cross-domain applications

Abstract: A comparatively recent but rapidly growing field at the convergence of pattern recognition and visual effects, augmented reality (AR) has a wide range of uses, from gameplay and entertainment to healthcare and educational settings. Although it has existed for almost half a century, the scientific community has only recently shown substantial interest in it. In this chapter, we focus on the cross-domain applications of AR and put forward efforts to bring out their key features. The combined capability of AR and digital twins (DT) has started to emerge, sparking an increase in ongoing research from both academia and industry. Another area of research receiving high focus is the advanced driver assistance system (ADAS). ADAS is used at every level, from forecasting the weather to security, regardless of whether the machine is operating the vehicle. Such cutting-edge ADASs employ a variety of aids for motorists to ensure the safety of the journey; these complex aids send out early indications of a variety of situations, such as changes in the road's state, potential traffic congestion, and severe weather. We also propose a method of AR-assisted ADAS by which road accidents can be reduced.

Keywords: Augmented reality, artificial intelligence, advanced driver assistance system, digital twin
Ajay Sudhir Bale, Department of ECE, New Horizon College of Engineering, Bengaluru, India, e-mail: [email protected]
Salna Joy, Baby Chithra R., Rithish Revan S., Department of ECE, New Horizon College of Engineering, Bengaluru, India
Vinay N., Vinoba Nagar, KNS Post, Kolar 563101, Karnataka, India
https://doi.org/10.1515/9783110981445-004

1 Introduction

In clear terms, augmented reality (AR) refers to enhancing the realism of real-time items that we perceive with our sight via devices like phones. You might wonder, "How is this subject gaining so much attention?" The response is that it can provide a life-changing experience in studying, in evaluating three-dimensional objects, or in assessing health data in disaster situations involving the most severe difficulties [1]. The AR market will exhibit an upward trajectory, with the market expected to reach 72.7 billion dollars by 2024, according to the most recent analysis by MarketsandMarkets. This is feasible because businesses and educational institutions are keen to invest in programs that support concepts of AR that are currently unexplored. AR makes science fiction a reality. As in the Star Wars and Marvel movies, holograms nowadays are pervasive in the physical world and provide an immersive interface which extends far beyond plain enjoyment. AR is a useful tool for businesses today. To solve a multitude of company problems, AR is used in a wide range of industries, such as retail, commerce, entertainment, medicine, and the army. It is essential to monitor these innovations if you want to know where the business is headed [2]. The fact that AR is being used in combination with other virtual world technologies surely does not surprise you. The metaverse has inundated the media since Facebook renamed itself "Meta," but it is not just meaningless marketing. One of the goals of metaverse research is to eliminate the barriers between the virtual and physical worlds. People and companies can profit from AR since it allows us to see digital environments that are incorporated into the real world. The next sections focus on recent trends of AR in various applications.
2 Comparative study

This section contains a comparative study of recent trends of AR across various cross-domain applications.
2.1 On-skin AR devices

More human-like and human-interactive technology is required in order to reduce the gap between natural and artificial perception and to provide an immersive virtual experience. Due to their thickness, poor body adhesion, and weight, typical AR devices cause an unpleasant wearing experience. Skin electronics provide an alternative to rigid, thick, and bulky AR devices [3, 4]. Figure 1 shows typical AR devices and wearable ultra-thin on-skin devices noninvasively fixed onto the human body. On-skin electronics opens doors to the next-generation platform for AR devices. On-skin AR devices are made of thin, light, stretchable, and flexible materials and are installed noninvasively and uniformly on human skin. They offer less-limited skin sensing, less-limited physiology, and lower wearing discomfort compared to conventional AR devices. On-skin input devices, output devices, power generation, and storage devices are integrated into the on-skin AR device system. Input devices accept stimuli from the outside world, identify human body conditions, and report them to the AR system. Output devices act as stimulators to deliver various sensations for AR applications, including audio, touch, and visuals [5–7]. On-skin AR devices thus provide an alternative to bulky and heavy conventional AR devices: skin-conforming devices that are thin, lightweight, and stretchy are attached directly to the skin.
Figure 1: Conventional AR devices versus on-skin AR devices [5] (reprint with copyright permission).
Energy system limitations and the synchronization of multiple inputs and outputs are the main challenges in on-skin AR electronic device systems [5–7]. With advancements in holographic optical elements (HOEs), future display platforms are emerging for higher levels of human-digital interfacing. Alternatives to conventional optics are provided by ultra-light optical components such as photopolymer HOEs and liquid-crystal HOEs. Tiny micro-displays with high resolution and high stability are made possible by micro-LEDs [8]. Table 1 summarizes the input and output on-skin sensors used to acquire parameters from outside as well as inside the human body. The information is inspired by [5].

Table 1: Skin electronics input-output AR devices.
Skin sensor | Functionality | Thickness (scale)
Photodetector | Detecting visual stimuli; used in oximeters to get blood oxygen levels | nm
Image sensor | Biometric identification | µm
Auditory sensors | Receive surrounding sounds; used for heart signal acquisition | mm
Pressure sensors | Work as a tactile touch sensor; measure blood pressure | µm
Thermal sensors | Work as a tactile touch sensor; measure skin temperature | µm
Strain sensors | Motion recognition | nm
Electrophysiological sensor | Helps in artificial stimuli experience; measures neural activity | µm
On-skin displays | Provide visual output | µm
Loudspeakers | Provide audio output; used for heart signal acquisition | nm
2.2 Augmented reality in mobile applications

Users can run AR apps on mobile platforms at reduced cost and with increased mobility. A mobile AR system (MARS) consists of input sensors, processing functions or devices, and output devices, as shown in Figure 2; the idea for this figure is inspired by [9]. Cameras, gyroscopes, microphones, and wearable devices with touch inputs are used as input devices [10, 11]. Output devices project the virtual elements into the real context. AR displays include handheld displays, see-through head-mounted displays (HMDs), and AR projectors [12]. A cloud, MEC, or fog server works as the processing unit. Tracking aids the initial alignment of virtual elements in the real environment. Rendering is the process of deciding when the virtual content has to be triggered. Registration and calibration enable the precise alignment of virtual content in the real world. The user can manipulate the AR content with interaction techniques [13].
Figure 2: Mobile augmented reality system.
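As a rough illustration of the processing loop just described, the following Python sketch strings the stages together. The class and function names are hypothetical stand-ins for whatever tracking, registration, and rendering components a real MARS would use; they are not taken from [9].

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """6-DoF pose of the device relative to the world (position + orientation)."""
    position: tuple
    orientation: tuple

def track(camera_frame, imu_sample) -> Pose:
    # Placeholder: fuse camera and gyroscope data to estimate the device pose.
    return Pose(position=(0.0, 0.0, 0.0), orientation=(0.0, 0.0, 0.0, 1.0))

def register(pose: Pose, virtual_objects: list) -> list:
    # Placeholder: align each virtual object with the real scene using the pose
    # and any calibration parameters obtained beforehand.
    return [(obj, pose) for obj in virtual_objects]

def render(aligned_objects: list, display) -> None:
    # Placeholder: decide which content to trigger and draw it on the output device.
    for obj, pose in aligned_objects:
        display.draw(obj, pose)

def mars_frame_step(camera_frame, imu_sample, virtual_objects, display):
    """One iteration of the MARS loop: track -> register/calibrate -> render."""
    pose = track(camera_frame, imu_sample)
    aligned = register(pose, virtual_objects)
    render(aligned, display)
```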
Future mobile AR applications based on the forthcoming 5G communication technology might provide extraordinarily rapid data transmission and ultra-low delays. Mobile AR systems may be cloud-based or edge-based, or have a localized or hybrid architecture.
Mobile AR applications need fast response cycles, which could drive the 5G network architecture design. All MARS devices are anticipated to have 5G connectivity due to the specific requirements of extremely low latency and bandwidth. By improving the quality of user experience, a 5G and AR integration will add value to a wide range of application fields [14, 15]. The different MARS architectures can be summarized as follows (a configuration sketch follows the list):
A. Cloud-based MARS. Cloud-based MARS can be used anywhere and supports high mobility. It powers popular applications such as AR-assisted Google Maps, language translation applications, and home interior applications, and can be used in smart city contexts. Cloud-based MARS is suitable for the intense computational tasks of digital image processing [15]. It offers extended battery life and lightweight AR devices, but the system suffers from increased latency.
B. Edge-based MARS. Edge-based systems are localized systems that work without cloud support and meet very high mobility requirements. They can be employed in the healthcare sector for AR-assisted surgeries and indoor navigation [14, 15], in wearable AR devices, and in industry as monitoring systems. In contrast to cloud-based systems, edge architecture offers very low latency and lightweight wearable AR devices [16], but a single point of failure can disrupt the entire system.
C. Localized mobile augmented reality system. These are standalone systems with high-security protocols. The low latency makes them suitable for military applications. Localized mobile AR systems are used in race cars and at museums [15]. Handheld AR devices are used in education, sports, marketing/advertising, games and entertainment, cultural heritage, and the healthcare sector [17]. Localized systems are less prone to security attacks, but they suffer from poor battery life.
D. Hybrid mobile augmented reality system. The hybrid architecture combines cloud and edge architectural features. It has the advantage of lower latency than cloud-based systems and more security than edge-based systems. Hybrid systems can be employed in industry for factory maintenance [18], transparent front-vehicle views, and remote-assisted surgery. They are more complicated than other architectures and need high administration costs.
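To make these trade-offs concrete, here is a small, illustrative Python sketch that encodes the four architectures and picks one from simple requirements. The fields and decision rules are assumptions chosen only to mirror the qualitative comparison above, not thresholds given in the chapter.

```python
from dataclasses import dataclass
from enum import Enum

class MarsArchitecture(Enum):
    CLOUD = "cloud"
    EDGE = "edge"
    LOCALIZED = "localized"
    HYBRID = "hybrid"

@dataclass
class Requirements:
    needs_low_latency: bool      # e.g., surgery assistance, driving HMIs
    needs_high_security: bool    # e.g., military or standalone deployments
    needs_heavy_compute: bool    # e.g., intensive image processing
    offline_operation: bool      # must work without any network backend

def choose_architecture(req: Requirements) -> MarsArchitecture:
    """Toy decision rule mirroring the qualitative trade-offs in the text."""
    if req.offline_operation and req.needs_high_security:
        return MarsArchitecture.LOCALIZED
    if req.needs_low_latency and req.needs_heavy_compute:
        return MarsArchitecture.HYBRID      # low latency plus cloud-scale compute
    if req.needs_low_latency:
        return MarsArchitecture.EDGE
    return MarsArchitecture.CLOUD           # default: battery-friendly, high mobility

# Example: an AR-assisted surgery application.
print(choose_architecture(Requirements(True, False, True, False)))  # HYBRID
```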
2.3 AR landscape fusion model

A person's mental and physical health is greatly influenced by the design of natural landscapes, particularly the design of greenery [19]. With the help of AR, people may experience a distinct representation of the outdoor scene indoors as a realistic model. An AR fusion multiview landscape model can provide perspectives of the landscape from different directions [20].
The captured image is carefully processed for training, considering perspective, distance, illumination, and occlusion. Feature extraction is done using Gaussian algorithms, which helps retain detailed features of the landscape. The final perception of reality is defined by the superimposition of a 3D model on the real scene, which is performed by the AR system. A 3-dimensional registration method based on texture, color, contour, and edge gradient adds more realistic expression to the image [21]. By merging 3-dimensional models and synthesis methods, the AR fusion of natural landscapes is made possible. Through the use of virtual objects in an actual natural context, the technology merges the virtual world and the real world. The AR fusion process helps in creative and cultural visual displays [22]. The first step in the design process is to identify and extract the key cultural elements from the landscape prototype. The cultural and landscape elements are sorted and classified into creative products. Applying graphic design methodologies, the link between visuals and cultural elements is established. The form, color extraction, and composition of each visual are thoughtfully designed, and a map of identification is formed [23]. Depending on the prototype of the landscape, the cultural component is extracted, developed, and composed into graphic images. Design elements like structure, color, and form are inherited during composition. During the development stage, the picture is set in motion using transformation, breakup, replacement composition, and reconstruction. Thus, cultural information is inherited and transmitted with a broader communication space [24]. Depending on the extracted cultural components from the landscape prototype, the final output of the 3-dimensional AR model is decided using AR software. Enhanced media platforms are used for an effective AR visual experience.
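The feature-extraction and superimposition steps mentioned above can be sketched with standard tools. The snippet below uses OpenCV's SIFT (the scale-invariant, Gaussian-pyramid-based detector of [21]) and a homography to warp an overlay onto the captured scene. It is an illustrative simplification: the chapter's method registers full 3D models, whereas this sketch only registers and blends a 2D virtual layer.

```python
import cv2
import numpy as np

def overlay_on_scene(scene_bgr, reference_bgr, overlay_bgr):
    """Find the reference view inside the live scene and warp an overlay onto it."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(reference_bgr, None)
    kp_scn, des_scn = sift.detectAndCompute(scene_bgr, None)
    if des_ref is None or des_scn is None:
        return scene_bgr  # nothing to match against

    # Match descriptors and keep the good matches (Lowe's ratio test).
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des_ref, des_scn, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    if len(good) < 4:
        return scene_bgr  # not enough correspondences to register the overlay

    src = np.float32([kp_ref[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_scn[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the virtual layer into the scene and blend it in.
    h, w = scene_bgr.shape[:2]
    warped = cv2.warpPerspective(overlay_bgr, H, (w, h))
    mask = warped.sum(axis=2) > 0
    out = scene_bgr.copy()
    out[mask] = warped[mask]
    return out
```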
2.4 Augmented reality in online retail

The pandemic pushed the traditional offline retail sector toward web- or mobile-based online business models. Augmented reality helps customers enrich the shopping experience and get a better understanding of the product, and it helps personalize typical products. Online websites and brands that offer virtual trials have an increased selling rate compared to their competitors. With 4G and 5G, mobile and web-based AR applications have emerged that provide an immersive and novel shopping experience [25, 26]. Garment customization can be done using mobile AR applications. The human body is modeled in software, and clothing deformation algorithms are used to provide realistic clothing characteristics. Image acquisition algorithms obtain the real image, and image transfer algorithms superimpose virtual clothes onto the real context. The scaling ratio of the virtual garment is calculated, and a 3-dimensional fitting of the virtual clothing is done for a customized clothing experience [27].
Different parameters such as skin tone, body pattern, hair color, and backdrop are adjustable in the AR applications to deliver a high-level shopping experience. A clothing deformation algorithm obtains the bone and skin posture, refers to the clothing database to identify similar postures in the real image, makes changes to the required regions separately, and combines all the changes in the final image. In the proposed garment transfer algorithm, the eigenvalues are taken to be the bone joints, which gives better accuracy and reduces complexity [28]. Cosmetic try-ons are another emerging trend on online platforms. Deep learning algorithms deliver reliable outcomes for virtual try-on of expensive cosmetics. The try-on algorithm classifies the general facial traits into a structured coding. The parameters obtained from the facial traits of the input image are fed into a deep neural network, which, based on the cosmetics database, produces a before-and-after image [29]. The online shopping system proposed in [30] uses a VR/AR headset for a fully immersive shopping experience that gives a 360° view of the store. The system software consists of a virtual shop, a product database, and an artificial neural network (ANN) for customization. The virtual shop is common to all users and serves as a typical internet store, thereby reducing the traffic and bandwidth requirements. The ANN trains the system according to different user traits. The system gives product recommendations depending on similar user traits and the user's history. These recommendations, along with reasonable discount deals, encourage the user to make a purchase. In the long run, more purchase data gets added to the system, and the ANN can train the system better to improve the purchase rate and the level of customization. The virtual shop can be navigated like an in-game store. Once the application starts running, the customer's AR avatar enters the virtual shopping center and tries different products. An audio description of the product and an AR salesman guide the customer. Research comparing regular online purchases with AR-assisted purchases shows that buying intention is considerably higher in AR shopping systems than in regular e-commerce shopping systems [31]. The impact of the Big 5 personality characteristics on buying behavior and buying habits is validated using mediation analysis. Trait anxiety, openness to experience, flexibility, life satisfaction, and social competence are the Big 5 personality qualities considered. Traditional online shopping and AR-assisted shopping are the subjects of the mediation analysis, with buying intention serving as the predicted variable and buying impulsiveness, user experience, perception of online shopping, system usability, and the Big 5 personality traits used as antecedents [31]. A novel try-on system for eyeglasses based on convolutional neural network (CNN) face detection is proposed in [32]. The system workflow results in 3-dimensional virtual scenes created with the reconstructed user's face wearing the chosen eyeglasses (Figure 3). After obtaining the user's reconstructed face, the face size and fitting variables for the eyeglasses are computed automatically without using markers. A 3-dimensional rendering network lets users try the glasses on from different angles. Figure 3 shows the elements of the try-on system.
Figure 3: Eyeglasses virtual try-on system.
The back-end of the framework is developed in the cloud, and the user interface is a web-based application. The working module is divided into a face identification module and a 3-dimensional reconstruction module. Tracking is not required because the face is reconstructed. The assisting subsystems are the face size computation and key-point identification modules and the fitting variable calculation module, which merge the eyeglasses and the 3D face in exact proportions and sizes for a realistic appearance.
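As a loose illustration of how fitting variables could be derived from detected key points, the following sketch scales a glasses model from the eye landmarks. The landmark names, the assumed interpupillary distance, and the frame width are hypothetical values chosen for illustration and are not taken from [32].

```python
import numpy as np

def eyeglass_fitting_variables(landmarks, frame_model_width_mm=135.0,
                               assumed_ipd_mm=63.0):
    """Estimate scale and placement for a glasses model from 2D face landmarks.

    landmarks: dict with 'left_eye' and 'right_eye' as (x, y) pixel coordinates,
    e.g., centers of the detected eye key points.
    """
    left = np.asarray(landmarks["left_eye"], dtype=float)
    right = np.asarray(landmarks["right_eye"], dtype=float)

    ipd_px = np.linalg.norm(right - left)          # interpupillary distance in pixels
    px_per_mm = ipd_px / assumed_ipd_mm            # rough image scale
    frame_width_px = frame_model_width_mm * px_per_mm

    center = (left + right) / 2.0                  # roughly the bridge of the nose
    roll = np.degrees(np.arctan2(right[1] - left[1], right[0] - left[0]))

    return {"scale_px": frame_width_px, "anchor_xy": center.tolist(), "roll_deg": roll}

# Example with hypothetical landmark coordinates:
print(eyeglass_fitting_variables({"left_eye": (220, 310), "right_eye": (300, 314)}))
```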
2.5 AR-assisted ADAS

The rise in the number of vehicles on the roads underscores the necessity of addressing the challenge of road safety. To lower the number of deaths caused by collisions, researchers and the automobile industry continually push regulatory standards further with each new car generation. Self-driving vehicles are considered essential for future smart cities, promising improved lifestyles and reduced carbon emissions. Yet, for spatial awareness to improve and to allow quick, automatic judgment, considerable interaction between all stakeholders is required. In scenarios with unanticipated circumstances or inadequate data, incremental shifts that still require human judgment may be preferable. VR apps and 360° live broadcasts are two recently developed innovations that can improve spatial awareness, provide unique environments, and speed up decision-making [33]. By combining cutting-edge technologies such as AI and VR/AR models, it is now possible to build smart automobiles that can utilize information collected from a variety of drivers and provide tailored assistance to young drivers. By strengthening road danger identification and speeding up responsiveness, AR alerts have shown their effectiveness in improving safe driving. ADAS can adapt to human input and explain decision-making behavior, providing adequate feedback to improve road safety.
For connected cars that use AR as the human-machine interface (HMI), an ADAS is suggested in [34]. The technology projects guiding information onto the vehicle's view via the windscreen. The research centers on the specific case of junctions with pedestrian crossings, where connected cars can collaborate with one another to cross without pausing, improving the energy and time efficiency of the automobiles. A feedforward/feedback controller and a space (slot) allocation scheduling method are developed to aid the AR HMI of the ADAS. The proposed system is modeled using the Unity game engine, and its effectiveness is validated using human-in-the-loop simulation.
Figure 4: Augmented reality-based ADAS for reference and connected vehicle (modules: AR HMI, processor, localization, planning/control module, perception, CAN bus).
Figure 4 shows the AR-based ADAS for connected vehicles, inspired by [34]. In this proposed AR-based ADAS, a specific sequence is allotted to each approaching vehicle by an AI algorithm, and the automated controllers or AR HMI-guided drivers decide the vehicle movements. The AR HMI provides guidance through unsignalized road intersections. It visualizes the vehicles approaching from different directions of the intersection. Slots are reserved for different vehicles by a slot allocation algorithm as they approach the intersection. The reserved slot information is shared with the other connected vehicles by the AR HMI as a red, unavailable slot; conflicting vehicles then consider the green, available slots at the intersection. As the states of the connected vehicles change, the slots are dynamically reallocated by the AI algorithm. With the help of a projector, the unit displays the AR HMI on the windshield, while the road surroundings are identified by a front-end camera. The reserved slot overlays exactly onto the road from the driver's field of view. A spatial transformation algorithm transforms the slot coordinates calculated by the control module into the AR HMI. Travel time and fuel consumption can be reduced significantly by the proposed AR-based advanced driver assistance system (ADAS).
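A toy version of such a slot reservation scheme is sketched below in Python. The slot length and the first-come-first-served ordering are assumptions chosen only to illustrate the idea of reserving non-overlapping crossing slots at an unsignalized intersection; this is not the controller or scheduler of [34].

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: str
    eta_s: float  # estimated arrival time at the intersection, in seconds

def allocate_slots(vehicles, slot_length_s=2.0):
    """Reserve non-overlapping crossing slots, earliest arrival first.

    Returns {vehicle id: (slot start, slot end)}; each vehicle either keeps its
    ETA or is pushed back until the previous reservation has cleared.
    """
    reservations = {}
    next_free = 0.0
    for v in sorted(vehicles, key=lambda v: v.eta_s):
        start = max(v.eta_s, next_free)
        reservations[v.vid] = (start, start + slot_length_s)
        next_free = start + slot_length_s
    return reservations

# Three connected vehicles approaching the same unsignalized intersection.
fleet = [Vehicle("A", 4.0), Vehicle("B", 4.5), Vehicle("C", 9.0)]
for vid, (start, end) in allocate_slots(fleet).items():
    print(f"{vid}: crossing slot {start:.1f}-{end:.1f} s")
```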
In [35], a novel approach to designing and evaluating individualized and adaptive driver assistance using VR technology is proposed. Figure 5 shows the architecture of the vision-supported truck docking assistant system, inspired by [35]. The real-time positioning of the trailer at the docking yard is attained through a camera-connected localization system. The system uses this localization to determine the best path for a truck to dock at the unloading station. This information is relayed to the driver through audiovisual instructions displayed as a colored light array above the windshield. To teach operators and provide individualized, automated driving compensation mechanisms, a system that uses VR simulators to evaluate driving skills has been developed. The simulator has two major goals: to gather data that computational methods can use to create a driving model, and to test the HMI and input elements (such as lighting or visual cues) that guide the motorist. This approach enables contextualized, user-specific input and can be employed to assess individual aggressive driving or to identify deviations from professional driving skills. The platform focuses on a driving scenario in which a truck combination is docked at a loading bay in a distribution center.
Figure 5: Vision-supported truck docking assistant system (components: behavioural analysis, eye and hand tracker, VR simulator and driving assistance, controller and kinematics, steering wheel and pedals, MATLAB and Simulink).
A user can wear virtual reality (VR) glasses that show an ecologically valid, first-person simulation of the view from the truck cabin while working in a docking yard, helping them park a vehicle combination at a loading bay effectively. The environmental model includes the docking yard, its surroundings, the integrated driver assistance HMI, and the vehicle combination. The dynamic model, which powers the environmental model and HMI, runs in Simulink. The physical steering wheel provides input for both the controller and the dynamic model, with the user responding to visual cues by adjusting the steering angle and pedal positions.
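The dynamic model driving such a simulator can, in its simplest form, be a kinematic single-track (bicycle) model integrated at each simulation step. The sketch below is a generic textbook formulation offered only to illustrate the "controller and kinematics" block; it is not the Simulink model used in [35], and the wheelbase and step size are assumed values.

```python
import math
from dataclasses import dataclass

@dataclass
class TruckState:
    x: float        # position [m]
    y: float        # position [m]
    heading: float  # yaw angle [rad]
    speed: float    # longitudinal speed [m/s]

def kinematic_step(state, steer_rad, accel, dt=0.02, wheelbase_m=6.0):
    """Advance a kinematic bicycle model by one simulation step of length dt."""
    x = state.x + state.speed * math.cos(state.heading) * dt
    y = state.y + state.speed * math.sin(state.heading) * dt
    heading = state.heading + (state.speed / wheelbase_m) * math.tan(steer_rad) * dt
    speed = state.speed + accel * dt
    return TruckState(x, y, heading, speed)

# Example: creeping forward toward the dock with a small steering correction,
# simulated for one second at 50 Hz.
s = TruckState(x=0.0, y=0.0, heading=0.0, speed=1.0)
for _ in range(50):
    s = kinematic_step(s, steer_rad=0.05, accel=0.0)
print(round(s.x, 2), round(s.y, 2), round(s.heading, 4))
```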
Figure 6: ADAS employing VR simulator (driver identification, prediction of driver behaviour, and risk assessment from driver and vehicle signals, with feedback generated via AR display, tablet, or sonification).
Research on ADAS is a good fit for the VR-simulator platform. One of the primary goals of ADAS is to deliver tailored, preventative advice that reduces crash risk while providing key information specific to the circumstances. To do this, the system needs the ability to adapt to the driver's past behavior using methods such as machine learning. The programmed algorithm can issue a warning or encouraging comments if unusual driver behavior is discovered in a known environment, and it may offer context-sensitive input on the HMI if a low-expertise situation is identified. An adaptable and customized ADAS is shown in Figure 6; the idea for this is taken from [35]. The work in [36] involved a study that combines both user interfaces: the GPS navigation user interface and AR are both used in the standard UI. While portable ADAS solutions are less common and might not be as readily adopted by users, GPS devices are used frequently. The findings suggest integrating pre-collision capabilities into a navigation system to boost acceptability. The suggested mobile ADAS offers turn-by-turn directions while keeping an eye on the operator and the surrounding traffic [37]. A low-annoyance mobile driver assistance system that only gives warnings when the driver is distracted has been developed by analyzing the video feed from a phone's camera module. The study's findings show that employing such an aid system influences driving behavior in a beneficial way, increasing time headway and reducing mean speed. Time headway is a crucial indicator of the intensity of specific road conditions.
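A very simple way to flag the "unusual driver behavior" described above is to compare current driving features against the driver's own history. The z-score rule and the two features below (mean speed and time headway) are illustrative assumptions, not the detection method of [35] or [36].

```python
import statistics

def unusual_behavior(history, current, z_threshold=2.5):
    """Flag features whose current value deviates strongly from this driver's history.

    history: dict mapping feature name -> list of past values for this driver
    current: dict mapping feature name -> latest observed value
    """
    flags = {}
    for feature, past in history.items():
        mean = statistics.fmean(past)
        stdev = statistics.pstdev(past) or 1e-6   # avoid division by zero
        z = (current[feature] - mean) / stdev
        flags[feature] = abs(z) > z_threshold
    return flags

driver_history = {
    "mean_speed_kmh": [48, 52, 50, 49, 51, 50],
    "time_headway_s": [2.1, 2.3, 2.0, 2.2, 2.4, 2.1],
}
print(unusual_behavior(driver_history, {"mean_speed_kmh": 78, "time_headway_s": 0.8}))
```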
2.6 Augmented reality in digital twin (DT)

The administration and processing of all the data generated in an industrial system become more difficult as the number of network-connected devices grows. The virtual components of industrial devices (sensors, machines, and CLPs) are explored in [38] using the idea of a digital twin, and an architecture based on web services is proposed for accessing their data. The authors demonstrate a case study in which an AR system uses web services to obtain data from the twin model and shows the user information in real time. The ideas involved, which have a solid computational foundation, are also reviewed in terms of how they relate to industrial uses and how they might broaden the scope of services and business models [38]. The DT data represent the tangible counterpart in real time. With the help of the DT, real-time and historical data are combined and integrated to provide the program with more key information; the twin ultimately possesses the capacity for self-improvement. Five essential elements are necessary for the display of DT data using AR, based on the features of digital twin data: parts that are real, virtual, calibrated, enhanced, and controlled [23]. The structure for using AR to visualize DT data is shown in Figure 7; the idea is derived from [23]. A physical part is a representation of an actual real entity, such as a device, an item, a portion, or even an entire factory. The physical dimension also contains all of the instruments and data-gathering devices. The physical component serves as the foundation, while the other components evaluate, use, and modify the information derived from it. A program was developed in [23] to show the benefits and possibilities of the suggested notion. Its goal is to use AR to visualize DT data while a part is being machined. The HoloLens is the AR tool in use. Microsoft introduced the HoloLens in 2016, making it the first holographic HMD on the market. It was created with AR and mixed reality (MR) in mind, and it offers specific benefits in the production setting compared to other HMDs and HHDs. First of all, the HoloLens frees up both hands for the user, and users can effortlessly operate the device using audio signals and gestures. Since AR systems are typically portable and accessible during delivery and operations, AR-assisted DT improves process interaction, particularly for on-site management of DT data and status updates. In the delivery process, AR systems can replace manual processes like barcode reading by identifying and updating the stock DT data. AR-aided DT incorporates all the processes, including defect prediction and alarms, on-site examination, service advice, annotation and information updating, and distant cooperation [39]. In other words, all such tasks may be carried out easily with the devices at hand, without requiring users to read technical instructions or fill in forms that would disrupt the process, which increases productivity. Big data is experiencing tremendous growth as a consequence of the advent of 5G and the IoT. In addition, recent advancements in computing capacity and seamless connectivity have accelerated the emergence and evolution of AI.
Figure 7: Structure for the use of AR to visualize DT data.
In this respect, the DT has developed into a cutting-edge innovation that links the two worlds together and analyzes different sensor information through the use of AI algorithms. In this context, a variety of sensors is highly desirable for gathering environmental data. Yet, although current sensor technologies, such as cameras, radios, and magnetic measurement units, are often utilized as sensing components for a wide range of uses, excessive energy consumption and battery issues remain. As self-powered sensors, triboelectric nanogenerators offer a workable framework for developing self-sustaining and low-power devices [40].
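The web-service idea from [38] — an AR client pulling live values from a digital twin over HTTP — can be sketched as follows. The endpoint path, port, and payload fields are hypothetical and chosen only for illustration; Flask is used here simply as a convenient HTTP framework, not because it is named in the cited work.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for the digital twin's current state; in practice this would be
# synchronized with the physical device's sensors (temperature, spindle speed, etc.).
TWIN_STATE = {
    "machine_id": "mill-01",
    "spindle_rpm": 8200,
    "temperature_c": 41.7,
    "status": "machining",
}

@app.route("/twin/<machine_id>", methods=["GET"])
def get_twin(machine_id):
    """Return the twin's latest data so an AR headset can overlay it on the machine."""
    if machine_id != TWIN_STATE["machine_id"]:
        return jsonify({"error": "unknown machine"}), 404
    return jsonify(TWIN_STATE)

if __name__ == "__main__":
    # An AR client (e.g., a HoloLens app) could poll http://<host>:5000/twin/mill-01.
    app.run(port=5000)
```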
3 Challenges in predicting the best routes for safe travel

On a daily basis, traffic jams cost individuals time, fuel, and frustration. At the same time, prolonged delays affect authorities, which must keep traffic flowing for the transportation of goods, reduce pollution in certain places, and ensure the safety of people on the roadways [41, 42]. Every aspect of society is impacted by the worldwide issue of congestion.
Anyone who has faced a travel delay during their commute is familiar with the typical reasons for delay. Traffic issues like collisions and construction can result in unforeseen delays. Bad weather also slows the flow of traffic, and the capacity of small interior routes is constrained by inefficient intersection signal scheduling. Above all, there are simply too many cars for roadways with limited capacity, and this is what is causing the biggest growth in traffic jams globally. Even if one cannot entirely escape congestion, realistic traffic forecasts make it possible to take better decisions that save money and time and improve overall road safety. Using highly accurate traffic statistics, authorities can promote the creation of innovative and environmentally friendly transportation solutions to lower general congestion rates. To anticipate current traffic volume, density, and pace, various machine learning (and particularly deep learning) approaches that can analyze enormous volumes of both historical and real-time data are utilized (a minimal prediction sketch follows the list of challenges below). But first, let us examine what information is required for a traffic forecast and where to find it. In order to produce precise projections, all the variables that affect traffic need to be taken into account, so there are several primary categories of information one must collect. A thorough map including transportation networks and their associated characteristics is necessary first; it is a good idea to connect to worldwide map data collections like Google Maps, TomTom, HERE, or OSM to get comprehensive and recent data. The next step is to gather past and present traffic-related data, such as the number of cars that pass, their speed, and their type (trucks, small cars, etc.). The tools used to get this information include radar systems, recording devices, motion sensors, loop sensors, and other sensor technologies [43]. ADAS faces numerous challenges in forecasting safe but efficient routes. Key challenges are:
– Quickly changing traffic patterns: Traffic patterns can change rapidly and unexpectedly, making it difficult to provide exact real-time route recommendations. Social events, gatherings, and ongoing construction along the route may give rise to unpredictable traffic congestion. Machine learning models often fail to incorporate these unforeseen occurrences; integrating real-time scenarios with predictions from trained data is a major concern for assisted navigation.
– Network security: Self-driving systems are vulnerable to cyber-attacks, which can compromise their safety and security. Hackers can gain unauthorized access to the system and manipulate its functions, which can lead to accidents or robbery.
– Weather conditions: Automated driving systems rely heavily on sensors such as cameras, proximity sensors, and lidars to perceive the traffic situation. However, these sensors can be limited by adverse weather conditions such as rain, fog, or dust storms, making it difficult to accurately detect obstacles in real time.
– Updated database: To provide safety recommendations and to train the model, a large, up-to-date database of the number of crimes and accidents occurring in a region, and their intensity, needs to be collected and maintained.
The severity of these mishaps is also a concern, and analyzing and modeling such qualitative data is a challenge for the safety recommendation system.
– Trade-off between safety and efficiency: The optimal route is often a compromise between safety and efficiency. While the safest route may be the longest, it may not be the most efficient or practical option for many travelers.
– Limited infrastructure: In some regions, there may be limited infrastructure, such as sensors and cameras, to support real-time route prediction models. As a result, the accuracy of these models may be limited.
– Less-traveled routes: Enough information to train the model might not be available for less-traveled routes.
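As referenced before the list, a minimal, illustrative sketch of learning traffic volume from historical data is given below. The features (hour of day, day of week, a bad-weather flag), the synthetic data, and the gradient-boosting model are assumptions chosen for illustration only; they do not reproduce the specific deep learning systems discussed in [43].

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic history: hour of day, day of week, bad-weather flag -> vehicles/hour.
hours = rng.integers(0, 24, 2000)
days = rng.integers(0, 7, 2000)
bad_weather = rng.integers(0, 2, 2000)
volume = (
    300
    + 400 * np.exp(-((hours - 8) ** 2) / 8)      # morning peak
    + 450 * np.exp(-((hours - 18) ** 2) / 8)     # evening peak
    - 120 * (days >= 5)                          # lighter weekend traffic
    - 80 * bad_weather                           # fewer cars in bad weather
    + rng.normal(0, 30, 2000)
)

X = np.column_stack([hours, days, bad_weather])
model = GradientBoostingRegressor().fit(X, volume)

# Predict Monday 08:00 in clear weather vs. Saturday 18:00 in rain.
print(model.predict([[8, 0, 0], [18, 5, 1]]))
```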
4 Proposed method

At present, cars have cameras on all sides. We can capture the movement data of all the cars and record images from different angles. These captured images can be stored in a central database and converted into AR/3D images, which are then used to train artificial intelligence algorithms. The trained AI algorithms are used in ADAS technology to predict the best possible routes, and ADAS technologies in turn help reduce road accidents. When a person wants to travel from one place to another using ADAS technology, they input their starting point and destination. Using the images present in the central database, the autonomous car system can be pretrained to help in the accurate prediction of the route; it reduces accident rates by providing safe and comfortable routes. This technology can also be useful for the police in case of car theft, where they can retrieve the images of the theft from the database. Adding IoT will further improve the automation of the vehicle, as the IoT devices can access the data from the database. This is depicted in Figure 8.
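A skeleton of this proposed pipeline could look like the following. Every function here is a named placeholder (the capture, conversion, training, and route-prediction steps are not specified in detail in the chapter), so the sketch only fixes the order of the stages.

```python
def capture_images(vehicle_cameras):
    # Placeholder: collect frames from the cameras mounted on all sides of the car.
    return [cam.read_frame() for cam in vehicle_cameras]

def store_in_central_db(db, images):
    # Placeholder: persist raw frames so they can later be converted and reused.
    db.insert_many(images)

def convert_to_ar_3d(images):
    # Placeholder: build AR/3D representations from the stored 2D frames.
    return [{"frame": img, "reconstruction": None} for img in images]

def train_route_model(ar_images, model):
    # Placeholder: fit the AI model on the AR/3D data held in the central database.
    model.fit(ar_images)
    return model

def predict_best_route(model, start, destination):
    # Placeholder: use the trained model inside the ADAS to rank candidate routes.
    return model.rank_routes(start, destination)
```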
5 Conclusion

The modern lifestyle and technology together demand convenience, comfort, and quick solutions to our day-to-day challenges. Incorporating DT into modern business requirements and solutions will reduce human error and offers an apt solution for real-time tasks. AR-aided and AR-assisted DT can play a vital role and yield greater benefits in various business, education, gaming, and health sectors. We suggest a technique for reducing traffic fatalities using an AR-assisted ADAS. The self-driving system can be pretrained using the photos in the central repository, assisting in correct route forecasting, which lowers crash risk by offering comfortable and secure paths. In the event of a car theft, the authorities may find this system helpful in retrieving the snapshot of the theft from the application.
Figure 8: Proposed methodology (captured images are stored in the central database, fed to the AI algorithm, and used by the ADAS to compute the best possible route between the starting and destination points).
An ADAS for a fully automated vehicle that amalgamates AI-driven AR with IoT and DT can further improve the automation.
References
[1] "Top 7 Modern-Day Applications of Augmented Reality (AR)." 2021. GeeksforGeeks. July 10, 2021. https://www.geeksforgeeks.org/top-7-modern-day-applications-of-augmented-reality-ar/.
[2] Makarov, Andrew. 2022. "12 Augmented Reality Trends of 2023: New Milestones in Immersive Technology." MobiDev. August 2, 2022. https://mobidev.biz/blog/augmented-reality-trends-futurear-technologies.
[3] Craig, A.B. 2013. Understanding Augmented Reality: Concepts and Applications. Waltham, MA: Morgan Kaufmann Publishers Inc.
[4] Lee, Jinwoo, Heayoun Sul, Wonha Lee, Kyung Rok Pyun, Inho Ha, Dongkwan Kim, Hyojoon Park, et al. 2020. "Stretchable Skin-like Cooling/Heating Device for Reconstruction of Artificial Thermal Sensation in Virtual Reality." Advanced Functional Materials 30(29): 1909171. https://doi.org/10.1002/adfm.201909171.
[5] Kim, Jae Joon, Yan Wang, Haoyang Wang, Sunghoon Lee, Tomoyuki Yokota, and Takao Someya. 2021. "Skin Electronics: Next-generation Device Platform for Virtual and Augmented Reality." Advanced Functional Materials 31(39): 2009602. https://doi.org/10.1002/adfm.202009602.
[6] Redmon, S., R. Divvala, and A. Girshick. 2016. Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. Las Vegas.
[7] Schmalstieg, Dieter, and Tobias Hollerer. 2017. "Augmented Reality: Principles and Practice." In 2017 IEEE Virtual Reality (VR). IEEE.
[8] Xiong, Jianghao, En-Lin Hsiang, Ziqian He, Tao Zhan, and Shin-Tson Wu. 2021. "Augmented Reality and Virtual Reality Displays: Emerging Technologies and Future Perspectives." Light: Science and Applications 10(1): 216. https://doi.org/10.1038/s41377-021-00658-8.
[9] Siriwardhana, Yushan, Pawani Porambage, Madhusanka Liyanage, and Mika Ylianttila. 2021. "A Survey on Mobile Augmented Reality with 5G Mobile Edge Computing: Architectures, Applications, and Technical Aspects." IEEE Communications Surveys and Tutorials 23(2): 1160–1192. https://doi.org/10.1109/comst.2021.3061981.
[10] Chatzopoulos, Dimitris, Carlos Bermejo, Zhanpeng Huang, and Pan Hui. 2017. "Mobile Augmented Reality Survey: From Where We Are to Where We Go." IEEE Access: Practical Innovations, Open Solutions 5: 6917–6950. https://doi.org/10.1109/access.2017.2698164.
[11] Xiao, Robert, Julia Schwarz, Nick Throm, Andrew D. Wilson, and Hrvoje Benko. 2018. "MRTouch: Adding Touch Input to Head-Mounted Mixed Reality." IEEE Transactions on Visualization and Computer Graphics 24(4): 1653–1660. https://doi.org/10.1109/TVCG.2018.2794222.
[12] Mistry, P., P. Maes, and L. Chang. 2009. "WUW-Wear Ur World: A Wearable Gestural Interface." Proc. ACM Extended Abstracts Human Factors Comput. Syst. (CHI): 4111–4116.
[13] Zhou, Feng, Henry Been-Lirn Duh, and Mark Billinghurst. 2008. "Trends in Augmented Reality Tracking, Interaction and Display: A Review of Ten Years of ISMAR." In 2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality. IEEE.
[14] Huang, Z., W. Li, P. Hui, and C. Peylo. 2014. "CloudRidAR: A Cloud-Based Architecture for Mobile Augmented Reality." In Proc. ACM Workshop Mobile Augment. Real. Robot. Technol. Syst., 29–34.
[15] Xu, Sicheng, and Gedong Zhang. 2022. "Integrated Application of AR Technology Development and Drama Stage Design." Mobile Information Systems 2022: 1–8. https://doi.org/10.1155/2022/5179451.
[16] Schneider, Michael, Jason Rambach, and Didier Stricker. 2017. "Augmented Reality Based on Edge Computing Using the Example of Remote Live Support." In 2017 IEEE International Conference on Industrial Technology (ICIT). IEEE.
[17] Zhou, P., W. Zhang, T. Braud, P. Hui, and J. Kangasharju. 2018. "Arve: Augmented Reality Applications in Vehicle to Edge Networks." In Proc. Workshop Mobile Edge Commun., 25–30.
[18] Ren, Jinke, Yinghui He, Guan Huang, Guanding Yu, Yunlong Cai, and Zhaoyang Zhang. 2019. "An Edge-Computing Based Architecture for Mobile Augmented Reality." IEEE Network 33(4): 162–169. https://doi.org/10.1109/mnet.2018.1800132.
[19] He, Mei, Yiyang Wang, William J. Wang, and Zhong Xie. 2022. "Therapeutic Plant Landscape Design of Urban Forest Parks Based on the Five Senses Theory: A Case Study of Stanley Park in Canada." International Journal of Geoheritage and Parks 10(1): 97–112. https://doi.org/10.1016/j.ijgeop.2022.02.004.
[20] Song, Genlong, Yi Li, and Lu-Ming Zhang. 2022. "Landscape Fusion Method Based on Augmented Reality and Multiview Reconstruction." Applied Bionics and Biomechanics 2022: 5894236. https://doi.org/10.1155/2022/5894236.
[21] Low, D.G. 2004. "Distinctive Image Features from Scale-Invariant Keypoints." International Journal of Computer Vision 60(2): 91–110.
[22] Cranmer, Eleanor E., M. Claudia tom Dieck, and Paraskevi Fountoulaki. 2020. "Exploring the Value of Augmented Reality for Tourism." Tourism Management Perspectives 35(100672): 100672. https://doi.org/10.1016/j.tmp.2020.100672.
[23] Qiu, Chan, Shien Zhou, Zhenyu Liu, Qi Gao, and Jianrong Tan. 2019. "Digital Assembly Technology Based on Augmented Reality and Digital Twins: A Review." Virtual Reality and Intelligent Hardware 1(6): 597–610. https://doi.org/10.1016/j.vrih.2019.10.002.
[24] Chen, Yu-Tso, Chi-Hua Chen, Szu Wu, and Chi-Chun Lo. 2018. "A Two-Step Approach for Classifying Music Genre on the Strength of AHP Weighted Musical Features." Mathematics 7(1): 19. https://doi.org/10.3390/math7010019.
[25] Cipresso, Pietro, Irene Alice Chicchi Giglioli, Mariano Alcañiz Raya, and Giuseppe Riva. 2018. "The Past, Present, and Future of Virtual and Augmented Reality Research: A Network and Cluster Analysis of the Literature." Frontiers in Psychology 9: 2086. https://doi.org/10.3389/fpsyg.2018.02086.
[26] Brannon Barhorst, Jennifer, Graeme McLean, Esta Shah, and Rhonda Mack. 2021. "Blending the Real World and the Virtual World: Exploring the Role of Flow in Augmented Reality Experiences." Journal of Business Research 122: 423–436. https://doi.org/10.1016/j.jbusres.2020.08.041.
[27] Minaee, Shervin, Xiaodan Liang, and Shuicheng Yan. 2022. "Modern Augmented Reality: Applications, Trends, and Future Directions." ArXiv [Cs.CV]. http://arxiv.org/abs/2202.09450.
[28] Tao, Wenjuan. 2022. "Application of Garment Customization System Based on AR Somatosensory Interactive Recognition Imaging Technology." Advances in Multimedia 2022: 1–9. https://doi.org/10.1155/2022/7174889.
[29] Alashkar, Taleb, Songyao Jiang, Shuyang Wang, and Yun Fu. 2017. "Examples-Rules Guided Deep Neural Network for Makeup Recommendation." Proceedings of the AAAI Conference on Artificial Intelligence 31(1). https://doi.org/10.1609/aaai.v31i1.10626.
[30] Billewar, Satish Rupraoji, Karuna Jadhav, V. P. Sriram, A. Arun, Sikandar Mohd Abdul, Kamal Gulati, and Narinder Kumar Bhasin. 2022. "The Rise of 3D E-Commerce: The Online Shopping Gets Real with Virtual Reality and Augmented Reality during COVID-19." World Journal of Engineering 19(2): 244–253. https://doi.org/10.1108/wje-06-2021-0338.
[31] Lixăndroiu, Radu, Ana-Maria Cazan, and Cătălin Ioan Maican. 2021. "An Analysis of the Impact of Personality Traits towards Augmented Reality in Online Shopping." Symmetry 13(3): 416. https://doi.org/10.3390/sym13030416.
[32] Marelli, Davide, Simone Bianco, and Gianluigi Ciocca. 2022. "Designing an AI-Based Virtual Try-on Web Application." Sensors (Basel, Switzerland) 22(10): 3832. https://doi.org/10.3390/s22103832.
[33] Marai, Oussama El, Tarik Taleb, and Jaeseung Song. 2022. "AR-Based Remote Command and Control Service: Self-Driving Vehicles Use Case." IEEE Network: 1–8. https://doi.org/10.1109/mnet.119.2200058.
[34] Wang, Ziran, Kyungtae Han, and Prashant Tiwari. 2020. "Augmented Reality-Based Advanced Driver-Assistance System for Connected Vehicles." ArXiv [Cs.HC]. http://arxiv.org/abs/2008.13381.
[35] Ribeiro, Pedro, André Frank Krause, Phillipp Meesters, Karel Kural, Jason van Kolfschoten, Marc-André Büchner, Jens Ohlmann, Christian Ressel, Jan Benders, and Kai Essig. 2021. "A VR Truck Docking Simulator Platform for Developing Personalized Driver Assistance." Applied Sciences (Basel, Switzerland) 11(19): 8911. https://doi.org/10.3390/app11198911.
[36] Voinea, Gheorghe-Daniel, Cristian Cezar Postelnicu, Mihai Duguleana, Gheorghe-Leonte Mogan, and Radu Socianu. 2020. "Driving Performance and Technology Acceptance Evaluation in Real Traffic of a Smartphone-Based Driver Assistance System." International Journal of Environmental Research and Public Health 17(19): 7098. https://doi.org/10.3390/ijerph17197098.
[37] Rahim, Mussadiq Abdul, Sultan Daud Khan, Salabat Khan, Muhammad Rashid, Rafi Ullah, Hanan Tariq, and Stanislaw Czapp. 2023. "A Novel Spatio-Temporal Deep Learning Vehicle Turns Detection Scheme Using GPS-Only Data." IEEE Access: Practical Innovations, Open Solutions 11: 8727–8733. https://doi.org/10.1109/access.2023.3239315.
[38] Schroeder, Greyce, Charles Steinmetz, Carlos Eduardo Pereira, Ivan Muller, Natanael Garcia, Danubia Espindola, and Ricardo Rodrigues. 2016. "Visualising the Digital Twin Using Web Services and Augmented Reality." In 2016 IEEE 14th International Conference on Industrial Informatics (INDIN). IEEE.
[39] Yin, Yue, Pai Zheng, Chengxi Li, and Lihui Wang. 2023. "A State-of-the-Art Survey on Augmented Reality-Assisted Digital Twin for Futuristic Human-Centric Industry Transformation." Robotics and Computer-Integrated Manufacturing 81(102515): 102515. https://doi.org/10.1016/j.rcim.2022.102515.
[40] Zhang, Zixuan, Feng Wen, Zhongda Sun, Xinge Guo, Tianyiyi He, and Chengkuo Lee. 2022. "Artificial Intelligence-enabled Sensing Technologies in the 5G/Internet of Things Era: From Virtual Reality/Augmented Reality to the Digital Twin." Advanced Intelligent Systems (Weinheim an der Bergstrasse, Germany) 4(7): 2100228. https://doi.org/10.1002/aisy.202100228.
[41] "Visualisation of the Digital Twin Data in Manufacturing by Using Augmented Reality." Procedia CIRP 81: 898–903. https://doi.org/10.1016/j.procir.2019.03.223.
[42] "What Is Traffic Prediction and How Does It Work?" n.d. TomTom. Accessed April 13, 2023. https://www.tomtom.com/newsroom/behind-the-map/road-traffic-prediction/.
[43] "Traffic Prediction: How Machine Learning Helps Forecast Congestions and Plan Optimal Routes." 2022. AltexSoft (blog). January 27, 2022. https://www.altexsoft.com/blog/traffic-prediction/.
P. K. Paul✶
5 Advanced ICT and intelligent systems in sophisticated Healthcare 5.0 practice in modern social and healthcare transformation: an overview

Abstract: Digital technologies are important in advancing modern societies into technology- and information-enriched societies, and they are required in the development of sophisticated healthcare systems. Digital technologies and computing matter to all people and stakeholders for solid societal and healthcare development. Organizations and institutions today are closely associated with different sub-technologies aimed at a sophisticated healthcare and medical sector. Various components of IT and computing are highly applicable in the advancement of healthcare and medical systems, bringing intelligent systems and smarter healthcare services. Digital health is important in promoting healthcare systems for the wellbeing of everyone and in reaching target audiences and patients at the national, state, or regional level. Digital healthcare requires proper strategies so that various emerging technologies can be applied to solve issues related to finance, organization, people, and technology. Healthcare services to society are now entering their fifth stage, and Health 5.0 gives flexibility to patients and consumers with the support of emerging information technologies. This chapter gives a comprehensive overview of advanced healthcare technologies, emphasizing Health 5.0, its emergence, and the transition toward advanced healthcare practice.

Keywords: Digital healthcare, health information system, medical systems, medical informatics, digitalization, ICT policies
1 Introduction

As far as the healthcare industry is concerned, digital healthcare provides modern and intelligent healthcare services, and emerging technologies are gaining newer recognition and acceptability worldwide [1]. India is moving toward a cashless economy supported by electronic governance and mobile governance initiatives, and digitalization is proceeding rapidly so that citizens may receive the required services and benefits [2, 4, 36]. To reach this goal, digital literacy skills, accessibility of digital resources, and the building of digital technologies are important and essential.
✶ Corresponding author: P. K. Paul, Department of CIS, Raiganj University, West Bengal, India, e-mail: [email protected]
https://doi.org/10.1515/9783110981445-005
Health informatics (HI) is the application of information technology and computing in healthcare and medical systems for the collection, selection, organization, processing, management, and dissemination of information. The latest technologies, such as cloud computing, big data, the internet of things (IoT), blockchain, cyber-physical systems, and data analytics, are important in enhancing healthcare systems and their development. Like developed countries, many developing countries are moving toward the implementation of healthcare services and medical facilities using ICT. It is important to note that in a developing country like India, traditional healthcare services face issues of accessibility to proper technologies, land, investable resources, and poorly qualified medical practitioners; for all these developments, it is the need of the hour to introduce healthcare information technologies and to reach Health 5.0 features and facilities [20, 25]. As industry is currently adopting Industry 4.0 using the latest and emerging technologies, the corresponding period of advancement in healthcare can be noted as Healthcare 5.0 rather than Healthcare 4.0. The different parameters and features involved in Industry 4.0 and the previous phases are shown in Figure 1.
Figure 1: Industry 4.0 and other previous periods and phases with their characteristics.
Health 5.0 offers flexibility, timeliness, and support to consumers in real-life healthcare practice. Healthcare analytics is important in enhancing modern medical systems. Information and communication technology (ICT) supports healthcare communication, including telemedicine and intelligent medical practice [9, 10, 21, 32], and here other technologies such as databases, web systems, and multimedia play a leading role.
2 Objective of the work

This work aims to address the following (but is not limited to):
– To learn the basics of healthcare informatics, including its features, functions, and need.
– To know the basic technologies related to HI practice, especially IT components.
– To find out which emerging technologies are required for better and more sophisticated healthcare practice, especially intelligent systems, artificial intelligence (AI), natural language processing, and IoT.
– To learn about digital healthcare and the founding stakeholders, systems, and technologies for advanced healthcare systems.
– To know about the transition to Healthcare 5.0 for healthy digital healthcare practice, including its features and functions.
– To gather, analyze, and report some of the issues related to Health 5.0 practice in the context of a developing country.
3 Methods

This chapter focuses on the basic applications of information and communication technology in healthcare systems. The work draws on various secondary sources, namely journals and books, for data collection; material from different sources was analyzed, and this report was subsequently prepared. For the analysis of the contemporary scenario of HI practice, and especially to learn about Health 5.0, the websites of different healthcare organizations and hospitals were mapped and are reported here. As this is a conceptual work, no data were collected from primary sources, and a review of the literature therefore played the leading role in its compilation.
4 Latest trends in healthcare informatics and intelligent healthcare

Healthcare systems have changed significantly in all their operations during the last decade, following the advent of various emerging information technologies. Numerous innovations have been driven into the healthcare sector for better and healthier medical systems and for enhancing patient services and consumer benefits [7, 8, 29]. Tremendous AI applications have changed healthcare services and workflows, and this is a worldwide trend. Accessible, high-quality medical service is the need of the hour, and it is supported by several basic technologies:
Software technologies
Various kinds of software and applications are important for better and enhanced medical services, and advanced software technologies are required for designing, developing, analyzing, and managing health-related software and apps. Software systems are needed for planning and implementing newer software and applications for enhanced medical systems [3, 18].

Database technologies
The database is an important component of computer and information science, required for database design, development, and management related to healthcare and medical systems. Health-related databases are important for storing health-related data and content, and medical databases serve researchers, scientists, and physicians in teaching-learning activities. Every day a large amount of medical data is generated, and databases are essential for storing such data and content [18, 40] (see the brief sketch at the end of this overview).

Web technologies
Websites are important in healthcare and medical systems for finding services, booking, and teleconferencing-related activities; it is practically impossible to imagine modern medical services without them. Web technologies are required for planning, designing, developing, and managing websites, and here the concerns of the health sector are crucial.

Multimedia technologies
Multimedia is important in the medical sector and allied areas, including pharmacy, nursing, and dentistry. Multimedia applications are required in many places, namely the teaching-learning of medical subjects, the design and writing of books and journals, medical and patient record management, the display of information in enhanced formats, the use of augmented and virtual reality in medical treatment, and telemedicine.

Apart from the technologies mentioned above, other sub-technologies such as security technologies and intelligent technologies are also relevant. Among the super-specialty areas of IT and computing, the following are considered worthy:
– Virtualization and cloud computing
– Internet of things (IoT) and other emerging forms
– Big data and analytics
– Data science and visualization
– AI and machine learning
– Robotics
– Cyber-physical systems
– Blockchain, etc.

However, not all of these technologies are feasible or worthwhile to integrate into healthcare settings in the contemporary scenario, owing to many issues, challenges, and
concerns [11, 21, 37]. Improving performance, efficiency, and security is also important for bringing about modern, IT-enabled healthcare.
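To make the role of database technologies mentioned above more concrete, the following minimal sketch stores and queries a few toy patient records using Python's built-in sqlite3 module. The table layout and field names are illustrative assumptions for this chapter, not a reference schema from any real hospital information system.

```python
# Minimal, illustrative sketch of a health-record database (hypothetical schema).
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory database for demonstration
cur = conn.cursor()

cur.execute("""
    CREATE TABLE patient_records (
        patient_id   INTEGER PRIMARY KEY,
        name         TEXT NOT NULL,
        diagnosis    TEXT,
        admitted_on  TEXT
    )
""")

cur.executemany(
    "INSERT INTO patient_records (name, diagnosis, admitted_on) VALUES (?, ?, ?)",
    [
        ("A. Sharma", "Type 2 diabetes", "2023-01-12"),
        ("R. Das", "Pneumonia", "2023-02-03"),
    ],
)
conn.commit()

# A simple query that a clinical dashboard might issue
for row in cur.execute(
    "SELECT name, diagnosis FROM patient_records WHERE diagnosis = ?", ("Pneumonia",)
):
    print(row)

conn.close()
```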
4.1 Intelligent healthcare and potentialities

There are many areas in which information systems and intelligent medical systems become possible in healthcare; the most important among these are the following (see also Figure 2).
4.1.1 In diagnosis

As far as diagnosis is concerned, intelligent healthcare becomes possible and worthwhile with support from AI and robotics. According to recent research, AI and machine learning have tremendous potential in healthcare, particularly in information processing and decision-making [19, 22]. In the diagnostic process, AI and allied technologies are important for finding potential diagnoses and improving the efficiency of results and search (i.e., decision support systems). AI is also considered important for analyzing CT scans to detect disease, for example, the detection of pneumonia during the recent COVID-19 pandemic. Many companies have invested in, developed, and implemented various tools; among these, the most notable is Project InnerEye, an outstanding Microsoft initiative for radiotherapy. It is an AI-supported system for speeding up radiotherapy planning with 3D support for the patient, and it reduces service time significantly: a task of about an hour may be completed in only a few minutes. It is worth mentioning that the project is also available on GitHub as an open-source platform [12, 13, 33]. Microsoft has other projects too; an important one is dedicated to physician-teachers and researchers who need to catalog research papers and related biomedical works from the medical database PubMed.
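As a rough illustration of the decision-support idea described above, and not of the chapter's (or Microsoft's) actual systems, the sketch below trains a simple classifier on synthetic data standing in for imaging-derived features and flags high-risk cases for clinician review.

```python
# Illustrative decision-support sketch on synthetic data (scikit-learn).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for features extracted from CT scans, with binary labels
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model only ranks cases by risk; the final diagnosis stays with the clinician.
risk_scores = model.predict_proba(X_test)[:, 1]
flagged = (risk_scores > 0.8).sum()
print(f"{flagged} of {len(X_test)} cases flagged for priority review")
```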
Figure 2: Basic and advanced applications of intelligent systems in healthcare: (1) early detection of diseases, (2) improved decision-making, (3) aid in treatment, (4) end-of-life care, (5) connected care, and a better overall experience.
4.1.2 For brain and mental health, including psychological behavior

Intelligent systems are also valuable for improving mental health and psychological wellbeing in many ways, and day by day, in addition to physical health, AI is improving mental healthcare systems. Intelligent systems are effective in finding symptoms of disease and identifying illness. AI can help identify brain illnesses caused by chemical changes, for example dementia [16, 17, 38]. There are several forms of dementia; among the most important is Alzheimer's disease, which is characterized by communication, reasoning, and memory problems. Different kinds of mental problems and symptoms lead to many other issues, so early detection is the best way to prevent such diseases, and in this context the application of AI is important. Furthermore, researchers from MIT and Harvard University have also used AI and ML in studying physical health, in work conducted during COVID-19. Using AI-supported intelligent systems, analyzing patient data becomes easier. Scientists have found AI useful in audio processing, human speech analysis, and related tasks; speech processing is empowered by decision support systems, and AI-based modeling helps greatly here.
4.1.3 Regarding cancer treatment

As far as cancer treatment is concerned, AI and decision support systems help greatly, both directly and indirectly, specifically in the identification and diagnosis of cancer-related diseases. There are different strategies for identifying cancer, but biopsy is considered the most important. Tissue extraction allows the disease to be identified through the analysis of digital scans of histopathology results. The whole slide image (WSI) is valuable for examining the status of a cancer, but analyzing whole scan images is difficult, and here AI and decision support systems are crucial [14, 30, 31]. A pathologist can use an AI-supported system to scan the entire WSI, which reduces the challenges accordingly. Normally, pathologists and medical experts need to zoom into the scan and go through all areas of the WSI, but an AI-based system can help them inspect and move between different parts easily. Medical professionals all over the world are therefore moving toward AI-supported mechanisms built on advanced systems rather than conventional neural networks. Apart from disease identification, AI-based systems also save time. It is worth mentioning that WSI analysis using AI requires proper training and orientation for medical industry professionals for its proper use and enhancement [13, 26, 27]. In radiotherapy and chemotherapy, too, the potential of AI and decision support systems is considered valuable.
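A minimal sketch of the patch-based workflow described above: a whole slide image is far too large to analyze at once, so it is cut into fixed-size tiles that a model can score one at a time. The array here is a random stand-in for a real WSI, and the scoring function is a placeholder for a trained model's output, not an actual tumour detector.

```python
# Patch-based WSI analysis sketch (numpy only; synthetic data).
import numpy as np

rng = np.random.default_rng(0)
wsi = rng.random((2048, 2048))         # stand-in for a scanned slide
patch = 256                            # tile size in pixels

def score_patch(tile: np.ndarray) -> float:
    # Placeholder for an AI model's tumour-probability output.
    return float(tile.mean())

scores = []
for y in range(0, wsi.shape[0], patch):
    for x in range(0, wsi.shape[1], patch):
        tile = wsi[y:y + patch, x:x + patch]
        scores.append(((y, x), score_patch(tile)))

# Tiles with the highest scores are the regions a pathologist would zoom into first.
top = sorted(scores, key=lambda s: s[1], reverse=True)[:5]
print(top)
```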
4.2 Natural language processing and data extraction

Natural language processing is an important area of computing and AI, and it enhances telehealth and telemedicine in many ways. Chatbots, which rely on AI and machine learning, are important in medical systems. Chatbots also depend on natural language processing in many ways, which is very useful for self-diagnosis. UCLA in the USA developed a chatbot-integrated virtual interventional radiologist that helps in radiological data collection and extraction.
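The tiny sketch below conveys only the idea behind a self-diagnosis chatbot: simple keyword matching stands in for the NLP pipeline a real system (such as the UCLA tool mentioned above) would use. The symptom-to-advice rules are invented for illustration and are not medical guidance.

```python
# Toy rule-based triage chatbot (keyword matching only; illustrative rules).
RULES = {
    "fever": "Possible infection - monitor temperature and consider a consultation.",
    "chest pain": "Chest pain can be serious - please seek urgent medical care.",
    "cough": "A persistent cough may warrant a check-up or teleconsultation.",
}

def triage(message: str) -> str:
    text = message.lower()
    for keyword, advice in RULES.items():
        if keyword in text:
            return advice
    return "No rule matched - please describe your symptoms to a clinician."

print(triage("I have had a dry cough and mild fever since Monday"))
```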
4.3 Data, AI, and IoT: healthcare context

As far as the healthcare segment and AI are concerned, data and similar objects play a vital role, and here machine learning and deep learning are considered crucial. Both the amount and the quality of data are vital for real and solid improvement of model results. Good AI experts and data scientists are important for working with real-world data and improving healthcare systems. IoT is promoting intelligent systems as well, and this trend is growing all over the world; a new branch of IoT has therefore emerged, called the internet of medical things (IoMT). Wireless technologies are increasing day by day, and different sectors are rising with them. Telehealth technologies are expected to grow further in the coming years owing to the involvement of many organizations and industries. According to experts and research organizations, general and emerging IoT applications and their integration into medical systems are likely to grow from USD 26.5 billion in 2021 to USD 94.2 billion by 2026. It is worth noting that, in all cases, AI and allied technologies are expected to support the integration of IoT into healthcare, even in developing countries [6, 28, 33]. There are, however, many obstacles to integrating IoT into medical and healthcare settings. The health industry and its institutions are adopting advanced technologies for the development of sophisticated healthcare, and the obstacles to adopting the IoMT deserve attention. Every manufacturer uses its own proprietary protocol, which makes interoperation with other companies' devices difficult. Connectivity is another issue in implementing the IoMT, as are proper skill sets and awareness [18, 34]. Furthermore, environmental issues and concerns should also be considered crucial points of emphasis.
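A back-of-the-envelope check of the market figures quoted above: growth from USD 26.5 billion (2021) to USD 94.2 billion (2026) implies a compound annual growth rate of roughly 29%.

```python
# Implied CAGR from the figures cited in the text.
start, end, years = 26.5, 94.2, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~28.9%
```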
4.4 Wearable and wireless systems and healthcare in the context of AI

IT organizations and institutions are moving toward greater use of wearable devices, and healthcare technologies are significantly linked with wearable systems. Smartwatch applications increase day by day, and this also helps in
the remote monitoring of healthcare systems. Advanced wearable technology is most important and valuable for the development of the healthcare sector [27, 35, 41]. A smartwatch can monitor a person's heart rate and, depending on the model, many other measures such as blood oxygen saturation. Low blood oxygen saturation and other details are easily detected using sensors, thanks to the integration of such features into the smartwatch mechanism. PPG, or photoplethysmography, measures variations in blood volume and composition and is backed by optical technology. Health organizations and professionals therefore use different healthcare-related technologies for solid enhancement. Bio-patches are on the rise in the healthcare industry and contribute to sophisticated and intelligent healthcare development. Smart hearing aids are important for developing modern medical systems, and AI is important in enhancing hearing aid development.
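A minimal sketch of how a smartwatch-style PPG signal can yield a heart-rate estimate: generate a synthetic pulse waveform, detect its peaks, and count beats per minute. Real devices add filtering, motion-artifact rejection, and calibrated optics; only the core idea is shown, and the sampling rate and heart rate are assumed values.

```python
# Heart-rate estimation from a synthetic PPG signal via peak detection.
import numpy as np
from scipy.signal import find_peaks

fs = 100                      # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)  # 30 seconds of signal
true_bpm = 72
ppg = np.sin(2 * np.pi * (true_bpm / 60) * t) + 0.1 * np.random.randn(t.size)

# Peaks must be at least ~0.4 s apart (i.e., below 150 bpm) to count as beats.
peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), height=0.5)
estimated_bpm = len(peaks) / 30 * 60
print(f"Estimated heart rate: {estimated_bpm:.0f} bpm")
```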
4.5 Smart pills and healthcare informatics

The application of IoT is very worthwhile and crucial for complete and overall development, and in this regard smart pills are important. Smart pills are ingestible electronic products that are valuable to pharmaceutical bodies and organizations, and they can deliver useful information to patients. The first smart pill was approved by the FDA in 2017.
4.6 Miscellaneous impacts of intelligent systems in healthcare

Several further points and factors are considered worthy in the development of medical and health informatics in the context of intelligent systems and AI, especially robotics, decision support systems, machine learning, and deep learning [23, 24]:
– Alarm systems and biological feedback are valuable in medical systems and bioscience development thanks to their available features. Many companies have developed products that can raise an alarm or give feedback after collecting data from different wearable devices.
– In remote surgery or augmented reality-based surgery there are potential uses of AI and machine learning. Real-time operations become possible with AI-supported augmented reality, and many companies have developed operation tools and systems for this purpose.
– Virtual physician assistants, virtual visits, and online assistance tools in medical and healthcare systems are valuable for patient management and healthcare development. Online schedule-management tools are another example of intelligent medical systems.
– Deep learning and machine learning tools are also helpful in assisting with and finding diseases; a patient's smartphone is one example in relation to collecting actual patient data and managing records [21, 39].
– In mental healthcare development and mental cure, AI and deep learning systems are valuable and are increasing day by day.
In this way, data, and real-time data in particular, are considered most vital and useful to healthcare organizations, physicians, planners, decision-makers, medical professionals, and allied associates. Healthcare organizations are also doing their best to provide better healthcare services using intelligent systems and their promotion.
5 Healthcare 5.0: the foundation

Technologies are changing rapidly throughout the world, and these changes are most prominent in the areas of information science and technology. The advent of communication technology has significantly changed communication systems, societal development, information transformation, person-to-person communication, and the connectivity of different stakeholders, and it helps in reaching a smart and digital society. The latest developments in communication led to IoT, which is developing society at large, and the medical system is no exception [16, 36, 42]. IoT and allied technologies bring us automatic, intelligent, and integrated healthcare systems that are more convenient, flexible, and personalized. In a study published in the Harvard Business Review, Kalis et al. [15] reported potential application areas of intelligent systems and their probable annual values by 2026. The term internet of things was coined by Kevin Ashton in 1999 for an intelligent, advanced, and robust communication system, and with IoT support the world becomes a global village [19, 38, 43]. The invention of 5G has changed IoT services significantly. It is worth noting that 5G is advancing not only the smartphone world but also the smart society, drastically and intellectually. The impact of 5G is significant for advancing society at large, and it will lead to a new type of society and advancement with different communication opportunities. As far as the communication world is concerned, 2G gave us only mobile communication, and with the advent of 3G, sharing text, audio, video, and other multimedia became easy and effective. 4G technology is widely deployed as of now; it is dedicated to sharing different multimedia content, and this advancement has also been applied to different sectors, including the health and medical sciences, offering both speed and capacity for the prompt development of communication. 5G is at an advanced stage, and different countries have adopted it in different ways; this development brings prompt delivery, virtual reality, intelligent communication, and proper security in real time, and healthcare is an outstanding example in this regard. Wearable objects are emerging day by day, and this trend has
grown into wearable computation, or wearable technology, for advanced and scientific communication. Electronic gadgets have changed their operation and services significantly, with a good human touch and man-machine interaction [5, 36]. Wearable technologies are rising globally and are changing the entire system of society, organizations, and institutions, and this change also affects the healthcare and medical system with sustainability. The development of IoT, 5G, and advanced communication systems has led to Health 5.0, which has empowered physicians, healthcare, and medical organizations. Health 5.0 directly offers personalized healthcare systems with intelligent support and governance. It is worth noting that in the future, smart glasses will also communicate for AR, as will electrodes for the proper analysis and recording of brain signals. Health 5.0 also offers the opportunity for better connectivity in man-machine interaction. The advancement of Healthcare 4.0 is also important to note; it is supported by intelligent systems and is the stage mostly practiced as of now, as depicted in Figure 3.
Figure 3: Different phases of healthcare systems supported by technologies (up to Healthcare 4.0), characterized by reduced paperwork with automation and efficiency, data sharing and linking with other organizations, big data use within a country, and real-time monitoring involving AI and data analysis.
6 Health 5.0: past, present, and future

Data is the oxygen of the present context; it is important to all types of organizations and institutions, and healthcare is a striking example. Sophisticated data delivery and management lead to better and enhanced healthcare services. The paradigm shift from traditional to advanced segments is accelerating in almost all countries, not only developed ones. Like Europe and the United States, Asia too is going digital in healthcare, shifting from Healthcare 4.0 to a new stage.
Europe is set to become the second-largest marketplace by 2023, and Asia too is approaching that stage, as reported by McKinsey & Company. Quality healthcare and medical services require modern and advanced technology to offer better and healthier medical services with greater optimization. As traditional systems produce various kinds of human error and failure, the medical community is moving toward advanced and intelligent medical systems and services. The ideology of 5.0 rests on personalization, and this has become possible only with proper communication systems. Here, proper networking, high-speed internet, robotic systems, intelligent devices, and automated systems enhance medical systems with proper development. The patient population is rising gradually everywhere, and this scenario will intensify in the near future. Healthcare 5.0 is a revolution in intelligent healthcare and medical delivery and, ultimately, in the digital transformation of community and society. Actual care, remote care, and mechanical medical services improve both traditional and emergency medical services and facilities [7, 38]. Faster responses therefore become possible with intelligent medical services, and medical professionals and healthcare organizations are adopting such facilities and services for the better. The transformation of healthcare-related data, information, and knowledge becomes possible with the implementation of advanced technologies in the medical system, particularly in the Health 5.0 space. Health 5.0 brings a citizen-centric and consumer-centric approach for the betterment of healthcare organizations, enabling easier identification of diseases and their proper diagnosis. This ultimately places healthcare organizations in a better and more equipped position, with new potential and features. Electronic and digital healthcare systems are rising everywhere, and Health 5.0 offers benefits such as flexibility and accessibility: services can be availed of at any age and by any gender or ethnic group. Organizations and institutions are therefore moving toward implementing Health 5.0, which ultimately delivers personalization, customer- or patient-focused services, lifelong strategic alliances with all kinds of medical stakeholders, and proper trust using advanced ICT. Figure 4 depicts the different generations and their focus.
6.1 Health 1.0: the concern of production

The first phase of healthcare can be associated with the birth or developing stage of the industry; in this period, individuals and organizations basically recognized medicine as a service industry and assembled various services and products for the consumer or patient. The modern health industry evolved in the nineteenth century, and during this period concerns about public health and nutrition also emerged, as did evidence-based treatment with the support of tools. This stage improved the survivability of patients, and healthcare emerged as an industry in which competitors could slowly be seen [8, 37].
Figure 4: Different ages and phases of the healthcare segment in the context of ICT, with Health 5.0.
6.2 Health 2.0: industrializing

Healthcare phase 2 focuses on industrialization and on better collaboration among organizations and strategic partnerships. In this period, an ecosystem was created for better technical support and collaboration among healthcare organizations as well. This scenario is noticeable in in-patient clinics and hospitals; based on a review of the existing scenario, the first trend of industrialization and some other features can be found at Johns Hopkins Hospital. Value-chain integration was an additional feature of this age, which concentrated on collaboration. End-to-end integration is considered worthy, and at this stage it was supported by different computer tools and technologies, namely software, databases, and basic networking.
6.3 Health 3.0: the age of automation

Health 3.0 focuses on information and communication technology, particularly automation; in this third stage, gains in productivity, efficiency improvement, cost-cutting, and waste management in healthcare can be clearly noticed. Enterprise resource planning, automated patient check-in services, records management, and so on play a leading role. This stage also focuses on operating models for better healthcare organizational development and promotion, and on cost-effective services using different technologies for the overall development of the health industry. Health 3.0 empowers better accessibility of health services. The automation at this stage leads to the following:
– Elasticity
– Speed
– Cost-effectiveness
– Integration
– Collaboration, etc.
Furthermore, Health 3.0 integrated the opportunity for the automation of tools, systems, and services.
6.4 Health 4.0: the age of digitalization

Health 4.0 brought the opportunity of digitalization; in this fourth stage the industry became recognized for its digitalization process. Digitalization results in new health-related business models, and various basic IT technologies are considered worthy for the digitalization process, namely:
– Software technology
– Web technology
– Networking technology
– Multimedia technology, etc.

Healthcare 4.0 therefore brings opportunities for better funding models, digital therapeutics, and more effective healthcare systems and organizations. Mass personalization and the proactive healthcare system are the new opportunities of this stage.
6.5 Health 5.0: personalization

The fifth stage of healthcare services is called Health 5.0; it is customer-centric and uses industrial models for better customer processes. In this stage the patient is considered the most vital stakeholder, and customer relationship management plays a leading role. Intelligent technologies and emerging IT, namely cloud computing, big data, AI, robotics, IoT, and blockchain, are important for bringing about these systems and services, and customer wellbeing is empowered by such technologies. This age is supported by customer models for the overall development of healthcare setups. Health 5.0 also empowers lifelong partnerships with all stakeholders and patients, and different technologies are used for that purpose [21, 31]. Digital wellness is the need of the hour, and Health 5.0 offers such benefits and opportunities to patients, healthcare professionals, teachers and trainers, medical laboratories, and so on.
7 Suggestions with concluding remarks

Digital healthcare depends entirely on modern healthcare services supported by technologies, especially IT and computing, and it is gaining immense popularity all over the world, including in India. Traditional healthcare centers are changing rapidly with the requirements of ICT and modern intelligent systems. However, certain issues are also increasing day by day in the areas of healthcare and digital technology implementation, such as limited accessibility, lack of resources, and poorly trained or non-skilled medical practitioners. It is worth mentioning that digital healthcare remains an indispensable option for healthcare providers in both the government and private sectors in India. Health information technology, including the integration of the latest technologies such as cloud computing, IoT, big data, analytics, and machine learning, is the need of the hour for the real implementation of Healthcare 4.0 and the move toward Healthcare 5.0 in the national and global healthcare sector, in order to offer quality healthcare with proper health information exchange and interoperability. Today a good number of healthcare organizations and institutions, such as hospitals, medical practitioners, independent laboratories, radiology centers, and pharmacies, are emerging widely by deploying HI practice. As HI is an extension of bioinformatics with the latest addition of tools and technologies in digital healthcare, it offers the fusion of healthcare and health management, information technology, and administration. Healthcare 5.0 can no doubt bring a sophisticated patient health experience with clinical care, nursing services, and more advanced and intelligent pharmacy services, including public health services; it can be aptly determined and subtly delivered at the patient's doorstep just in time. As in developed countries, in developing countries too many research and practice projects are dedicated to building advanced healthcare systems. With proper initiative and steps, the development of a sophisticated, intelligent healthcare society is possible, in which healthcare is considered of prime importance and curable conditions are addressed.
References

[1] Agrawal, Raag, and Sudhakaran Prabakaran. 2020. "Big Data in Digital Healthcare: Lessons Learnt and Recommendations for General Practice." Heredity 124(4): 525–534.
[2] Aasheim, Cheryl L., Susan Williams, Paige Rutner, and Adrian Gardiner. 2015. "Data Analytics vs. Data Science: A Study of Similarities and Differences in Undergraduate Programs based on Course Descriptions." Journal of Information Systems Education 26(2): 103.
[3] Alugubelli, Raghunandan. 2016. "Exploratory Study of Artificial Intelligence in Healthcare." International Journal of Innovations in Engineering Research and Technology 3(1): 1–10.
[4] Balsari, Satchit, Alexander Fortenko, Joaquín A. Blaya, Adrian Gropper, Malavika Jayaram, Rahul Matthan, Ram Sahasranam, et al. 2018. "Reimagining Health Data Exchange: An Application Programming Interface-Enabled Roadmap for India." Journal of Medical Internet Research 20(7): e10725.
[5] Behkami, Nima A., and Tugrul U. Daim. 2012. "Research Forecasting for Health Information Technology (HIT), using Technology Intelligence." Technological Forecasting and Social Change 79(3): 498–508.
[6] Bhambere, Sailee, B. Abhishek, and H. Sumit. 2021. "Rapid Digitization of Healthcare – A Review of COVID-19 Impact on our Health Systems." International Journal of All Research Education and Scientific Methods 9(2): 1457–1459.
[7] Bhattacharya, Indrajit, and Anandhi Ramachandran. 2015. "A Path Analysis Study of Retention of Healthcare Professionals in Urban India using Health Information Technology." Human Resources for Health 13(1): 1–14.
[8] Dasgupta, Aparajita, and Soumya Deb. 2008. "Telemedicine: A New Horizon in Public Health in India." Indian Journal of Community Medicine 33(1): 3.
[9] Dash, Satya Prakash. 2020. "The Impact of IoT in Healthcare: Global Technological Change & the Roadmap to a Networked Architecture in India." Journal of the Indian Institute of Science 100(4): 773–785.
[10] Davenport, Thomas, and Ravi Kalakota. 2019. "The Potential for Artificial Intelligence in Healthcare." Future Healthcare Journal 6(2): 94.
[11] Habes, Mohammed, Mahmoud Alghizzawi, Sana Ali, Ahmad SalihAlnaser, and Said A. Salloum. 2020. "The Relation among Marketing ads, via Digital Media and Mitigate (COVID-19) Pandemic in Jordan." International Journal of Advanced Science and Technology 29(7): 12326–12348.
[12] Jiang, Fei, Yong Jiang, Hui Zhi, Yi Dong, Hao Li, Sufeng Ma, Yilong Wang, Qiang Dong, Haipeng Shen, and Yongjun Wang. 2017. "Artificial Intelligence in Healthcare: Past, Present and Future." Stroke and Vascular Neurology 2(4): 230–243.
[13] Itumalla, Ramaiah. 2012. "Information Technology and Service Quality in HealthCare: An Empirical Study of Private Hospital in India." International Journal of Innovation, Management and Technology 3(4): 433.
[14] Jain, Esha. 2020. "Digital Employability Skills and Training Needs for the Indian Healthcare Industry." In Kamaljeet Sandhu (Ed.), Opportunities and Challenges in Digital Healthcare Innovation, pp. 113–130. IGI Global.
[15] Kalis, Brian, Matt Collier, and Richard Fu. 2018. "10 Promising AI Applications in Health Care." Harvard Business Review, pp. 2–5.
[16] Kapadia-Kundu, Nandita, Tara M. Sullivan, Basil Safi, Geetali Trivedi, and Sanjanthi Velu. 2012. "Understanding Health Information Needs and Gaps in the Health Care System in Uttar Pradesh, India." Journal of Health Communication 17(sup2): 30–45.
[17] Kar, Sujita Kumar, Shailendra K. Saxena, and Russell Kabir. 2020. "The Relevance of Digital Mental Healthcare during COVID-19: Need for Innovations." Nepal Journal of Epidemiology 10(4): 928.
[18] Karthikeyan, N., and R. Sukanesh. 2012. "Cloud based Emergency Health Care Information Service in India." Journal of Medical Systems 36: 4031–4036.
[19] King, John Leslie, Vijay Gurbaxani, Kenneth L. Kraemer, F. Warren McFarlan, K. S. Raman, and CheeSing Yap. 1994. "Institutional Factors in Information Technology Innovation." Information Systems Research 5(2): 139–169.
[20] Kumar, Adarsh, Puneet Mahajan, Dinesh Mohan, and Mathew Varghese. 2001. "IT – Information Technology and the Human Interface: Tractor Vibration Severity and Driver Health: A Study from Rural India." Journal of Agricultural Engineering Research 80(4): 313–328.
[21] Kumari, Trisha. 2019. "A Study on Knowledge and Attitude Towards Digital Health of Rural Population of India – Innovations in Practice to Improve Healthcare in the Rural Population." International Journal of Emerging Multidisciplinary Research 3(3): 13–21.
[22] Madon, Shirin, Sundeep Sahay, and Randeep Sudan. 2007. "E-Government Policy and Health Information Systems Implementation in Andhra Pradesh, India: Need for Articulation of Linkages Between the Macro and the Micro." The Information Society 23(5): 327–344.
[23] Malhotra, Savita, Subho Chakrabarti, and Ruchita Shah. 2019. "A Model for Digital Mental Healthcare: Its Usefulness and Potential for Service Delivery in Low- and Middle-Income Countries." Indian Journal of Psychiatry 61(1): 27.
[24] Manne, Ravi, and Sneha C. Kantheti. 2021. "Application of Artificial Intelligence in Healthcare: Chances and Challenges." Current Journal of Applied Science and Technology 40(6): 78–89.
[25] Mishra, Saroj Kanta, Lily Kapoor, and Indra Pratap Singh. 2009. "Telemedicine in India: Current Scenario and the Future." Telemedicine and e-Health 15(6): 568–575.
[26] Mishra, Saroj Kanta, Indra Pratap Singh, and Repu Daman Chand. 2012. "Current Status of Telemedicine Network in India and Future Perspective." Proceedings of the Asia-Pacific Advanced Network 32(1): 151–163.
[27] Mony, Prem Kumar, and C. Nagaraj. 2007. "Health Information Management: An Introduction to Disease Classification and Coding." National Medical Journal of India 20(6): 307.
[28] Nega, Adane, and Alemu Kumlachew. 2017. "Data Mining based Hybrid Intelligent System for Medical Application." International Journal of Information Engineering and Electronic Business 9(4): 38.
[29] Orlikowski, Wanda J., and Daniel Robey. 1991. "Information Technology and the Structuring of Organizations." Information Systems Research 2(2): 143–169.
[30] Pai, Rajesh R., and Sreejith Alathur. 2019. "Assessing Awareness and Use of Mobile Phone Technology for Health and Wellness: Insights from India." Health Policy and Technology 8(3): 221–227.
[31] Pandey, Prateek, and Ratnesh Litoriya. 2020. "Implementing Healthcare Services on a Large Scale: Challenges and Remedies based on Blockchain Technology." Health Policy and Technology 9(1): 69–78.
[32] Paul, Prantosh Kumar, Dipak Chatterjee, and Minakshi Ghosh. 2012. "Medical Information Science: Emerging Domain of Information Science and Technology (IST) for Sophisticated Health & Medical Infrastructure Building – An Overview." International Scientific Journal of Sport Sciences 1(2): 97.
[33] Paul, Prantosh Kumar, Dipak Chatterjee, and Minakshi Ghosh. 2012. "Neural Networks: Emphasizing its Application in the World of Health and Medical Sciences." Journal of Advances in Medicine 1(2): 93–99.
[34] Paul, Prantosh Kumar, Rajesh Kumar Sinha, Jhuma Ganguly, and Minakshi Ghosh. 2015. "Health and Medical Information Science and its Potentiality in Indian Education Sector." Journal of Advances in Medicine 4(1–2): 21–37.
[35] Paul, Prantosh, A. Bhimali, and P. S. Aithal. 2017. "Allied Medical and Health Science and Advanced Telecommunications: Emerging Utilizations and its Need in Indian Healthcare System." Current Trends in Biotechnology and Chemical Research 7(1–2): 27–30.
[36] Paul, Prantosh, P. S. Aithal, and A. Bhimali. 2018. "Health Information Science and its Growing Popularities in Indian Self Financed Universities: Emphasizing Private Universities – A Study." International Journal of Scientific Research in Biological Sciences 5(1): 1–11.
[37] Paul, P. K. 2022. "Aspects of Biosensors with Refers to Emerging Implications of Artificial Intelligence, Big Data and Analytics: The Changing Healthcare – A General Review." Next Generation Smart Nano-Bio-Devices 332: 1–18.
[38] Pingle, Shyam. 2012. "Occupational Safety and Health in India: Now and the Future." Industrial Health 50(3): 167–171.
[39] Rana, Ajay. 2017. "The Immense Potential of M-Care in India: Catering Better to Patients Needs in the Context of a Fragmented Healthcare System." International Journal of Reliable and Quality E-Healthcare (IJRQEH) 6(4): 1–3.
[40] Richardson, Jordan P., Cambray Smith, Susan Curtis, Sara Watson, Xuan Zhu, Barbara Barry, and Richard R. Sharp. 2021. "Patient Apprehensions about the Use of Artificial Intelligence in Healthcare." NPJ Digital Medicine 4(1): 140.
[41] Sahay, Sundeep, Eric Monteiro, and Margunn Aanestad. 2009. "Toward a Political Perspective of Integration in Information Systems Research: The Case of Health Information Systems in India." Information Technology for Development 15(2): 83–94.
[42] Srivastava, Sunil Kumar. 2016. "Adoption of Electronic Health Records: A Roadmap for India." Healthcare Informatics Research 22(4): 261–269.
[43] Yu, Kun-Hsing, Andrew L. Beam, and Isaac S. Kohane. 2018. "Artificial Intelligence in Healthcare." Nature Biomedical Engineering 2(10): 719–731.
Jyoti Singh Kirar✶, Purvi Gupta, Aashish Khilnani
6 A study on enterprise collaboration in metaverse

Abstract: The metaverse aims to build a fully immersive, self-sustaining virtual world where humans can play, work, and socialize. The recent advancement of emerging technologies such as XR, AR, AI, and blockchain is enabling the manifestation of the metaverse. The dynamics of work postpandemic are also changing, with working from home being the new normal. This prompts us to question the future of working in a virtual setting enabled by the metaverse. It also brings to light concerns about safety, security, and privacy, challenges of scalability, and the constraints of technology. In this chapter, we conduct a systematic literature review of current research in the field of the metaverse and examine its key features, enabling technologies, and applications in socializing, working, and gaming. We investigate how the metaverse supports digital collaboration among enterprises in the virtual reality-enabled metaverse. Finally, we draw out further research directions for building metaverse systems for enterprises.

Keywords: Metaverse, digital collaboration, virtual reality
1 Introduction

The word metaverse comprises two parts: meta, meaning transcendence, and verse, which expands to universe. It is an all-encompassing virtual environment scaffolded on technologies like virtual reality (VR), augmented reality (AR), and blockchain, with attributes of social media that sustain the creator, consumer, and gamer economy via cryptocurrencies and digital assets. The word was coined by Neal Stephenson almost 30 years ago in his sci-fi novel Snow Crash, in which the protagonist assumes a new digital identity, which we now know as an avatar, and interacts with different people in a virtual environment. More recently, however, the metaverse has become a convergence of real-time, globally interconnected virtual worlds where people can work, play, collaborate, shop, or simply hang out together in entirely new ways. Users can also purchase, create, trade, or sell digital assets or take trips to virtual spaces inside the metaverse [1].
✶ Corresponding author: Jyoti Singh Kirar, Banaras Hindu University, Varanasi, e-mail: [email protected]
Purvi Gupta, Banaras Hindu University, Varanasi, e-mail: [email protected]
Aashish Khilnani, Banaras Hindu University, Varanasi, e-mail: [email protected]
https://doi.org/10.1515/9783110981445-006
With increasing demands from users for a better experience in virtual reality and the growing accessibility and feasibility of new technology for enterprises, many tech giants such as Facebook and Tencent have announced their ventures into the metaverse and proposed changed business models for their advertisers as well [4]. The metaverse is regarded as Web 3.0, an evolving paradigm of the next-generation internet after the web and mobile eras, where users can live a digital life in an alternate virtual reality.
2 Development of metaverse

The construction of the metaverse involves three successive phases: digital twins, digital natives, and surreality. A digital twin reflects the connection between a physical asset and its digital counterpart. The first stage is a mirror image of the physical world; that is, the digital presence in the virtual world imitates the physical world, and the two exist in parallel. Digital twins currently offer better visual quality than the metaverse, and they live on data. Anyone can access digital twins via a phone or tablet, since, unlike the metaverse, they do not require immersion in the virtual world.

In the second stage, the physical world and digital world combine to add value to the digital world. This phase mainly involves content creation through a digital presence intersecting the physical presence. The value added by avatars and online presence in this stage is pivotal; it is here that the intersection of both realities takes place.

The seamless integration of the digital and virtual worlds takes place in the third stage, where the metaverse achieves its surreality. Here, the scope of the virtual world is greater than that of the physical world, bending creativity and production to suit our needs. In the metaverse, assets need not be tied to a physically existing asset as they are in digital twins [1].
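The sketch below is an illustrative toy only, intended to convey the digital-twin idea from the first phase: a virtual object that mirrors the reported state of a physical asset. The asset name, fields, and telemetry here are invented; a real twin would ingest live sensor data rather than simulated readings.

```python
# Toy digital twin that mirrors simulated telemetry from a physical asset.
import random
from dataclasses import dataclass, field

@dataclass
class MachineTwin:
    asset_id: str
    temperature_c: float = 0.0
    history: list = field(default_factory=list)

    def sync(self, reading: float) -> None:
        """Update the twin from a new physical-world measurement."""
        self.temperature_c = reading
        self.history.append(reading)

    def overheating(self, limit: float = 80.0) -> bool:
        return self.temperature_c > limit

twin = MachineTwin("pump-01")
for _ in range(5):                       # pretend telemetry arriving over time
    twin.sync(70 + random.random() * 15)
print(twin.temperature_c, twin.overheating())
```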
2.1 Key features

The metaverse is an immersive, virtual, open space. Its defining quality will be the feeling of shared presence enabled by embodied presence, in which a person embodies an avatar and feels present in an environment. There are two information sources for the metaverse: input from the physical world (information from the real world projected into virtual space) and output from the virtual world (content created by avatars, digital interaction). Four major pillars needed for forming and sustaining these interconnected 3D worlds have been explored previously [5]: ubiquity of access and identity, immersive realism, scalability, and interoperability. Sustainability and hyper-spatiotemporality, however, also constitute defining features of the metaverse. We now discuss in detail the key attributes or features that make up a metaverse system.
1) Ubiquity of access and identity: The metaverse should be readily available and accessible to all and should allow users to move seamlessly from one virtual world to another within it.
2) Immersive realism: Metaverse systems are fully immersive in the sense that the real-time rendering of visuals in the digital space is realistic enough to offer psychological and emotional immersion in the environment. As human beings interact with their environment through their senses and bodies, this degree of immersive realism can be achieved and moderated through sensory perception formed by three components: visuals, sound, and touch. Visuals and sound affect the degree of immersion we feel, and touch, although its interpretation depends on the person, is also an important part of the interaction. Gestures, motion, and expressions also support the interactive immersion felt by a person within these virtual worlds [7].
3) Scalability: The server architecture should enable a massive influx of humans into the metaverse without compromising the experience, taking into consideration the number of users in a scene and the type, scope, and range of interactions between them.
4) Interoperability: Interoperability means that identities are portable and can move across systems and spaces; users can move freely between different worlds (sub-metaverses) within a metaverse without losing immersive experience or flexibility. Besides, the digital assets used in rendering scenes are interchangeable for different scenes, that is, they are reusable for reconstruction across distinct platforms. The metaverse is also heterogeneous, in terms of worlds with different implementations, data structures, and modes of communication, as well as the diversity of human existence and psychology. Interoperability could be supported by new sensors offering richer and more efficient biometric authentication techniques such as hand motions, gestures, and facial expressions that are unique to a user and their associated avatar [10].
5) Hyper-spatiotemporality: The metaverse breaks the constraints of time and space; it is a time-space continuum parallel to the real world, which is limited by the finiteness of space and the irreversibility of time. Thus, users in the metaverse can move freely between different worlds and experience alternate scenes.
6) Sustainability: The metaverse should support a consistent, closed economic loop and a value system based on a decentralized architecture, in order to remain persistent and offer a high level of independence, avoiding the concentration of control in the hands of a few. It should also offer scope for innovation and enthusiasm for digital asset and content creation that is self-sustaining in the virtual world. A functional digital economy, with an independent economic system in sync with the real world, usually functions with cryptocurrencies; its three main components are NFTs, cryptocurrency, and blockchain technology.
2.2 Enabling technologies

The metaverse integrates a plethora of technologies. In particular, AR/VR enables the creation of an immersive 3D digital world; 5G offers ultra-reliable, ultra-low-latency connections for metaverse devices and other hardware such as wearable sensors and brain-computer interfaces; AI enables large-scale metaverse creation and rendering; and blockchain and NFTs ensure that a self-sustaining economy with authentic ownership rights is in place. From the data-ownership point of view, blockchain comes into play. There is a slight difference between the metaverse and Web 3.0: the former focuses on the experience and the latter on the ownership and control of data. The construction of Web 3.0 comes after Web 2.0 and Web 1.0. In Web 1.0, users were content consumers; in Web 2.0, users were both consumers and creators; with the emergence of Web 3.0, users gain full control over content creation via their digital avatars and other virtual assets, opening up a new plethora of services and experiences. With the popularity of smart devices and the growth of technology, the metaverse is stepping out of its infancy and giving rise to demand for new applications and the birth of a new information ecology [2].

Abbreviations used in this chapter:
XR – Extended reality; AR – Augmented reality
VR – Virtual reality; MR – Mixed reality
HMD – Head-mounted display; HCI – Human-computer interaction
AI – Artificial intelligence; NPC – Nonplaying character
NFT – Nonfungible token; 3D – Three-dimensional
Extended reality, or cross-reality, is an umbrella term for AR, VR, and mixed reality. It refers to a set of immersive technologies consisting of digital and electronic interfaces and environments in which humans observe, interact with, and participate in a fully or partially digital environment constructed by technology [14].
2.2.1 Virtual reality

Virtual worlds are persistent, online, computer-generated artificial environments where multiple users in remote physical locations can interact in real time for work or play. Computer-generated simulations of 3D objects from the real world are projected digitally, making users feel completely immersed in the environment with the aid of specialized multisensory equipment such as HMDs and handheld VR controllers. This experience is enhanced by moderating vision, touch, sound, light, movement, and interaction with 3D digital objects in VR.
2.2.2 Augmented reality

AR embeds digital information or assets into physical spaces, thus spatially merging the physical with the virtual world. The result is a spatially projected layer of digital input in the physical environment that can be accessed via mobile phones, tablets, watches, and so on. AR can also be displayed on VR headsets with semi-immersion. It is an enhanced version of the real world in which objects or digital assets are placed in the physical world with some context; this can be achieved through visual, sensory, or sound elements.
2.2.3 Blockchain

Blockchain technology enables the creation of NFTs: unique pieces of data associated with photos, art, audio, video, and other forms of digital assets. At present, ownership in games and content-creating communities is centralized, that is, corporations can dismiss ownership whenever they wish; the Web 3.0 and NFT culture, however, aims at decentralizing this form of ownership and passing it on to users. An NFT, or nonfungible token, is a nonreplaceable digital token or certificate that represents a user's unique ownership of a digital asset. One can create or trade NFTs within the metaverse, which is enabled by blockchain technology. As the real meaning of these platforms is created through interaction between users, it is only right that the experience is also controlled by them. Public blockchains create digital property rights that allow for open digital markets where anyone can own and trade digital assets. A blockchain is a decentralized ownership system in which the record of transactions in a cryptocurrency is maintained across several computers linked in a peer-to-peer network. Tampering with the ownership recorded on a blockchain would require validation from peers (proof of work), which makes exploiting this technology almost impossible. As a result of greater creative freedom and control in these decentralized virtual spaces, the user finds himself in a positive feedback loop wherein creativity is incentivized, leading to better contributions on these platforms and lasting virtual lives [3].
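A toy sketch of the tamper-evidence property described above: each block stores the hash of the previous one, so altering any earlier record breaks every hash that follows. Real blockchains add consensus mechanisms (e.g., proof of work) and a peer-to-peer network; none of that is modelled here, and the "NFT" records are placeholder strings.

```python
# Minimal hash-linked chain illustrating tamper evidence (not a real blockchain).
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev = "0" * 64
for record in ["mint NFT #1 to alice", "transfer NFT #1 to bob"]:
    block = {"prev_hash": prev, "data": record}
    chain.append(block)
    prev = block_hash(block)

def is_valid(chain: list) -> bool:
    expected = "0" * 64
    for block in chain:
        if block["prev_hash"] != expected:
            return False
        expected = block_hash(block)
    return True

print(is_valid(chain))                              # True
chain[0]["data"] = "transfer NFT #1 to mallory"     # tamper with history
print(is_valid(chain))                              # False
```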
2.2.4 Artificial intelligence

The use of artificial intelligence in the metaverse spans the following objectives:
(1) AI systems can scan 2D or 3D user photos to plot facial features such as expression, hair, texture, mood, and other human traits, in order to create more realistic and accurate avatars.
(2) Linguistic functionality: AI can help break down and translate English into other languages and vice versa, thus helping people from all backgrounds join a global digital space.
(3) Data learning is an important factor for ensuring scalability with fewer disruptions on the part of humans.
(4) Instinctive interfaces: the use of AI-enabled headsets is improving the user experience by more accurately recreating the feeling of touch through sensors.
(5) Humanoids in the metaverse are digitally created NPCs acting as chatbots and assistants that guide the user inside the virtual world. They are fully created by AI, and their actions and speech are dictated by an automated script [17].
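A hedged sketch of point (1): locating a face in a photo is the usual first step before mapping features onto an avatar. OpenCV's bundled Haar cascade is used here purely for illustration; it is not a method named in this chapter, and "photo.jpg" is a hypothetical input path, with a blank placeholder frame used as a fallback so the snippet still runs without it.

```python
# Face localization as a first step toward avatar feature mapping (illustrative).
import os
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

if os.path.exists("photo.jpg"):
    img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)
else:
    img = np.zeros((480, 640), dtype=np.uint8)   # placeholder frame, no face

faces = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face region(s)")
for (x, y, w, h) in faces:
    print("Face bounding box:", x, y, w, h)
```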
2.2.5 Edge computing

Edge computing, as the name suggests, aims to move computing power closer to the edge, that is, to keep processing close to where data is generated, thus reducing the need for data to travel between servers, clouds, or devices. It is a robust model in which data, applications, and processing are concentrated in devices, hardware, or gadgets in the network instead of residing almost entirely in the cloud. Edge computing is winning over cloud computing for several reasons: low latency and reduced costs resulting from shorter data travel; data sovereignty, since the data does not leave local servers, which reduces cyber-attacks; wider reach, as it does not need high-speed internet and therefore helps users in previously inaccessible locations; and model accuracy, ensured by increased bandwidth when data feedback loops are deployed at the edge [21].
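A back-of-the-envelope sketch of why edge placement matters for immersive applications. All numbers below are assumed, illustrative values rather than measurements: a commonly cited motion-to-photon budget for comfortable VR is on the order of 20 ms, and the rendering and network figures are hypothetical.

```python
# Illustrative latency budget comparison: distant cloud vs nearby edge node.
BUDGET_MS = 20.0                 # assumed motion-to-photon target for comfort
render_ms = 11.0                 # assumed on-device tracking + rendering cost

cloud_round_trip_ms = 60.0       # assumed WAN round trip to a distant region
edge_round_trip_ms = 8.0         # assumed round trip to a nearby edge node

for name, rtt in [("cloud", cloud_round_trip_ms), ("edge", edge_round_trip_ms)]:
    total = render_ms + rtt
    verdict = "within" if total <= BUDGET_MS else "over"
    print(f"{name}: {total:.0f} ms total -> {verdict} the {BUDGET_MS:.0f} ms budget")
```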
2.3 Tools and platforms for the construction of metaverse

In this section, we discuss key tools and platforms currently being used in the development of the metaverse ecosystem. At present there are three important metaverse companies, namely Unity, Epic Games, and Roblox, which work on multiple layers of the metaverse and serve as the largest platforms for game design and development. Gaming engines provide the necessary resources to simulate everything virtually by creating digital twins of objects in the physical world, enhancing the experience and affordance of users. They help thousands of designers and developers build games that are fundamentally virtual from the very beginning, and they are the only major tools for real-time 3D graphics. Besides, these companies also make and launch games themselves [11]. Since gamers have been simulating the real world for a long time, they are pivotal in leading others into the metaverse. Without the gaming engine, the metaverse cannot exist; it is fundamental to everything being built for the metaverse. For a
long time, the gaming engine was the only tool used for real-time 3D graphics. Moving ahead, game development companies hope to collaborate with other industries on asset creation so that they can focus on more important things; this will soon enable asset exchange in the context of building the metaverse, so that designers will not have to spend time rebuilding something already done by someone else. Companies aim to flesh out the metaverse by combining AI, games, and user-generated content using these engines [8].
2.3.1 Unity

Unity is a 3D game design and development environment with a fully integrated 3D engine and design studio. VR, AR, and other mixed reality experiences in and out of the metaverse can be created using the Unity engine. The Unity asset store is laced with a variety of plugins and assets, which is why Unity is one of the most sought-after metaverse development tools. The store also provides several decentralization options for the metaverse, including features such as edge computing, blockchain, AI agents, and other microservices [20]. The platform is also introducing other products to ease the modeling, testing, and training of AI systems. Unity supports the creator economy by serving as a content creation tool for games and avatars and for generating other metaverse assets.
2.3.2 Unreal Engine

Unreal Engine is Epic Games' 3D game design and development tool. It includes MetaHuman Creator, a design studio for rendering photorealistic avatars, and an asset marketplace. MetaHuman Creator speeds up the creation of these digitized humans by reducing production and rendering time from months to hours, with remarkable quality, fidelity, and realism. Unreal Engine 5 was recently launched with two main features, Lumen and Nanite: Lumen is a fully dynamic global illumination and reflection system, whereas Nanite is a new and improved virtualized geometry system with new level-of-detail rendering methods. After game development, this engine is most widely adopted for Hollywood film production and editing.
2.3.3 Roblox Roblox is widely known as a game, but it is in fact a platform that provides a multitude of development tools for games, such as avatar creation and scene design. As a game design and development tool it is still at a nascent stage and is yet to be
adopted widely as a metaverse-building tool. Roblox depends heavily on creators to build and fuel the games on its platform, while it serves as an advertising platform for them. It provides a 3D engine and the necessary design and development tools for creating VR games and sustaining them with marketplace tools and assets, where creators can share code and trade assets [24].
3 Applications We will look at education, social networking, and gaming, which will set the precedent for further studies into enterprise collaboration in the digital medium.
3.1 Education Although online learning is cost-efficient, flexible, and provides better access to more resources, the daily extended usage of synchronous online platforms leads to a phenomenon called zoom fatigue [15]. Asynchronous learning does not offer enough immersive interaction, collaboration, and motivation. E-learning courses face high dropout rates due to emotional isolation. All these limitations of 2D learning can be combated with 3D spatial immersive learning. Online learning in the metaverse will allow rich informal and hybrid learning experiences by breaking the final frontier of social connection and distance education. Telepresence and avatar body language will make virtual participation very effective by enabling blended learning pedagogies that foster deeper and lasting knowledge [16]. More importantly, it will democratize education, allowing worldwide participation, unbound by geographical locations. The fields of STEM education, surgeries in laboratory simulations, and safety training for physical and manufacturing work are pioneering applications of 3D spatial learning where students are co-creating their liquid, personalized curricula.
3.2 Social networking The recent push to accelerate the construction of the metaverse is also a result of the new ways of socializing prompted by the pandemic. Socializing in virtual reality is very appealing: besides saving travel time, it improves connectivity to a wider audience. Users can assume anonymity and still interact freely in these virtual spaces, which improves privacy and free speech. People can interact with a higher degree of immersion through their avatars and have greater control over their activities. They can feel strongly connected to others in the space [25]. Changes in consumer behavior regarding online shopping via social media platforms using AR/VR technologies are underway, which
confirms that social mixed reality is here to stay [17]. The marketing industry will also see innovation in its strategies. Brands may use the metaverse to improve their marketing inventiveness by reaching and interacting with their target audience more accurately.
3.3 Gaming The gaming industry adopted the metaverse far earlier than any other industry. The gaming side of the metaverse aims to captivate users and immerse them in experiences beyond physical reach. Users can play, have fancy avatars and assets, and socialize while playing, filling a big gap in user needs. They have increased possibilities inside a 3D world. With the advent of a decentralized economy, blockchain, NFTs, and cryptocurrency, it is possible to own and capitalize on gaming assets as well, and gamers can mint NFTs and trade them for money. Decentralized gaming has empowered users to make better and more powerful decisions, inviting innovation and creativity, and boosting the creator economy with asset creation as well. However, tokens are hard to value in comparison to the real economy, and they are volatile [3]. The most prominent gaming metaverse platforms are Axie Infinity, Decentraland, Cryptovoxels, Roblox, and Somnium Space, of which Decentraland is the leading metaverse space in gaming.
4 Constraints in metaverse
– Cybersecurity is a rising concern in the metaverse. If the metaverse falls short on security early on, this could become a barrier to adoption, because with a boost in assets, infrastructure, devices, data, and applications, the question of cybersecurity becomes ever more important. The solution is to craft security into metaverse systems from the start rather than leaving it as an afterthought [9].
– Unreliable monopolized platforms: Some people are wary of the vendors that provide the VR technologies; accessing Meta Workrooms requires users to make an account on Meta, and given the ill reputation of Facebook for data breaches, many are unwilling to venture into VR workrooms. Besides, there are only four major metaverse platforms at the moment, with a combined total of 268,645 parcels of land: The Sandbox, Decentraland, Cryptovoxels, and Somnium Space. This will lead to the monopolization of platforms, with Microsoft and Meta coming forward with their acquisition strategies to step up in the competition.
– Singular identity: Individuals should be given the autonomy and power to disclose whatever credentials they want to, both online and offline. One can have two avatars for different purposes, and they might not reveal the same attributes; however, they must be tied to the same identity. As in the real world, people can assume a certain level of anonymity and still wander freely as and when they want. Allowing for security authentication at the point of entering the metaverse helps achieve anonymity to the extent supported by the respective platforms as well as remove "bad actors," "bots," or "trolls" [10].
– Cybersickness: Motion sickness and nausea are another barrier to entry for people with health concerns. A study in [22] found that cybersickness is affected by the content type and the choice of platform on which the content is experienced. Action games and videos resulted in more nausea and dizziness, with a 360-degree TV being the least nauseating and VR setups the most. Besides, user preferences affect the degree of motion sickness experienced by each person: a person inclined toward action games or wanting an adrenaline rush is less affected by motion sickness in VR than a person preferring static content over dynamic content.
– Sexual harassment: This is of alarming concern in these virtual spaces. In October 2021, a woman was reportedly groped in Meta's Horizon Worlds (a social version of Workrooms), leading us to further investigate the legal and ethical repercussions of virtual reality. The degree of immersion and telepresence makes such incidents seem more real. Building safety into these platforms and giving proximity controls to the user can be a way of combating this grave issue.
– Brain and body interfacing: The technologies of the future may explicitly interface with the brain and body by using body- and brain-sensing machines. These advanced features will improve the degree of immersion felt by a person but also open gaps for experimenting with opposing and conflicting experiences that may influence a person's thoughts, memories, and emotions. Further research in VR and neuroscience is needed to understand the repercussions while also looking at them from a safety and ethical point of view. Work needs to be done to understand, assess, and mitigate risks, if any [19].
5 Enterprise collaboration in metaverse Metaverse is the natural evolution of the internet which is immersive and open to allow everyone to work, play, collaborate, and socialize. As a result of the coronavirus pandemic, the usage of digital devices increased tenfold, and a study on gamer behavior found that people spent more hours on their phones and playing games than they ever did before [8]. The way enterprises collaborate has also changed with more and more organizations opting for a hybrid work model. The pandemic has thus acted as a catalyst resulting in the adoption of online working policies to enable its workforce to collaborate, train, and work in digital settings. Big corporations like Spotify, Twitter, and Microsoft
have allowed their employees to work from home indefinitely [6]. More companies want to explore the possibilities of using virtual offices full of avatars and holograms to work inclusively with employees engaged in remote work, for greater productivity and learning. This reduces the 2D load that video conferencing services like Zoom and Microsoft Teams brought. One can don a VR headset and engage with others using personalized avatars that offer more realism, efficiency, and a personal touch. It is expected that by 2030, a third of enterprises in the US will fully switch to VR training programs.
5.1 Current solutions So far, we have only seen money being poured into the gaming industry while the corporate world is still at a nascent stage; however, with the onset of the pandemic we have seen the power technology holds to transform the workplace. To avoid economic doom, employees who were forced to stay out of their offices found a way to collaborate effectively online, providing a safe way for them to work. This provided the foundation for the metaverse being used for tasks like onboarding employees and providing workers with a safe way to train remotely. The analysts at Emergen anticipate that the metaverse market will grow to approximately USD 830 billion by 2028 [27]. A start-up called Interplay is providing online and VR training for a variety of skill-based trades ranging from electrical and motor work to facilities maintenance. The VR courses on the platform simulate the physical world and offer learning in a 3D environment. The platform then challenges its learners with a randomly generated real-life problem and expects workers to solve it based on their earlier learning. Microsoft has recently launched Mesh for Teams, where users join a standard Microsoft Teams meeting as a personalized avatar of themselves. Organizations can build "metaverses," spaces within Teams for users to mesh and mingle. It allows users to work effectively in VR by drawing in digital information across enterprise platforms, emails, and knowledge management systems. Mesh is effective because it saves time and cost for users via its holoportation, hologram sharing, and visualization, which accelerate decision-making and problem-solving. One can use HoloLens 2 to improve the metaverse experience. Meta, on the other hand, has done a full dive into the metaverse, starting with renaming itself from Facebook. It has come forward with Horizon Workrooms, a platform where users can create virtual conference rooms. The acquisition of Oculus in 2014 supported Meta's vision of venturing into the metaverse. Users can don an Oculus Quest headset and use an avatar to work, communicate, train, and interview coworkers [26]. Users can bring their laptops, desks, and keyboards into the virtual room with them, while also accessing their computers inside the mixed-reality rooms to share files or documents with other users. Users have access to an infinite whiteboard inside the room, which can be used to work and can be saved and exported as an image.
The room doesn’t dissolve into oblivion, and it stays as long as the user would want it to. Users can customize each room for various purposes, such as brainstorming, presentations, and conversations.
5.2 Findings In this section, we will explore how the metaverse functions in different facets of the work industry and how we can improve working quality.
5.2.1 Upskilling, educating, and training A study by PricewaterhouseCoopers assessing the impact of VR training showed that VR can help business leaders upskill their employees faster and more efficiently, saving on training budgets and dodging the difficulties brought in by the inability of the workforce to work offline. The VR learners were 4× faster in learning and more dedicated than their e-learning peers, and 300% more efficient and confident in applying the newly learnt skills [12]. The metaverse aims to make applied learning more accessible to everyone. Confidence is a key driver when learning and implementing soft skills, and VR provides realistic immersion into the learning and working environments, thereby boosting efficiency. As emotional connection increases in these simulated environments, collaboration and learning help one reach the "Aha" moment sooner. Less distraction, impatience, and feeling overwhelmed in VR worlds also contribute to time-saving learning. The VR learning mode is cost-effective at scale, with the headsets being a one-time investment and the modules varying. This kind of learning can be particularly effective where high-precision skills are required for the job. It can revolutionize former methods of theoretical upskilling into more environment-based learning that makes use of both sensory and body responses alongside cognition [18].
5.2.2 Brainstorming and collaborating Sketching, whiteboarding, and presenting are vital parts of collaborating with business teams. However, prior work in VR for collaboration has rarely explored the design space of multiuser layouts. CollaboVR does exactly that to solve the problem [28]. It creates 3D models of the 2D designs that users have sketched and allows annotation using a cloud architecture in which applications like Chalktalk (a software system that converts raw sketches into digital animations) are hosted on the server. It also allows real-time switching between different user arrangements and screen changes based on input for interactive collaboration.
As organizations follow a process or a framework to better deliver results, we can also integrate project management tools such as Trello to work faster and introduce frameworks such as Agile development into the environment design itself rather than confining them to files and documents.
5.3 Constraints in working in metaverse Meta's Horizon Workrooms currently supports only 16 people in VR, which is far fewer than the team sizes one usually works with. Besides, the experience is isolated to a single room because of a lack of interoperability. Workplace collaboration is still far from being called a metaverse element because the recent construction is centered around rooms or spaces and not persistently connected spaces, hence users cannot yet transition seamlessly among multiple experiences offered by various providers. Rooms could also have more personalized layout options with different camera perspectives.
– Safety of workforce: Privacy, safety, and security are rising concerns in the metaverse. Employees admitted feeling anxiety as a result of their bosses monitoring them 24 × 7 in VR. Malicious activities such as invisible eavesdropping and manipulating people into actual physical self-sabotage are also possible. Monetization would soon be introduced by advertisers on these virtual platforms, leading to targeted advertising that tracks and affects user behavior via discriminatory targeted ads. This would invade employee privacy. Employers should rather use these surveillance privileges to better match employees to the jobs matching their skills and expertise, thus optimizing workforce efficiency. They can tailor the virtual working environment to improve collaboration amongst people by developing personalized learning modules and working experiences.
– Price: Cost is another issue; not every enterprise can afford a radical shift to the metaverse. The cost of enabling accessibility for users is delaying adoption of the metaverse by the regular public, and unless costs are cut down the metaverse will not go mainstream. As of now, 98% of websites on the internet are inaccessible to the disabled from a legal perspective; the metaverse, however, provides the opportunity for the differently abled to experience life with a higher degree of freedom.
– Accessibility: An important concern in the widespread adoption of AR/VR kits as working tools is their accessibility. Current forms of VR devices such as stereoscopic HMDs are not suitable for those with vision problems such as glasses, and people with mobility issues may find it difficult or impossible to engage with AR applications and/or VR devices (controllers). Issues like these may lead to social anxiety or embarrassment about using AR/VR or similar new technology in public until it becomes mainstream and more widely adopted.
– Scaling: Building enough infrastructure to scale metaverse systems is a very arduous task. There are many things to look at, such as latency in the network, handling the large amounts of data generated, and poor design and rendering of virtual rooms and avatars leading to bad user experience and loss of focus. With a greater number of users in a single room or scene, the quality of appearance and visual rendering is affected [5]. This may affect the way employees document and share their work in scenes where many people are present. It is interesting to ask how big virtual conferences can be organized within the metaverse. The scope of user interaction also needs to be considered, as users may not necessarily interact only in the region they are present in, and interactions like currency transactions may cross region boundaries and end up incurring data latency [5]. Besides, asset generation is done via AI, which requires a lot of computing energy to train. How, then, can this trade-off of training AI systems effectively in the metaverse be made [29]? It is also important to note that algorithmic optimization can reduce the compute energy needed to run these metaverse environments, but it is unclear whether organizations would prefer efficiency over scalability.
– Cognitive load: AR technologies force employees to put in more cognitive effort as they process information from different channels and in different forms. It affects all six dimensions of workload – mental demand, physical demand, temporal demand, performance, effort, and frustration [23]. However, when dealing with VR, the mental load is significantly lower as the user is immersed in one medium entirely. Both technologies impact the work efficiency of employees, forcing them to pay more attention to their surroundings than usual and take more time processing things mentally.
5.4 Future research directions The greatest challenge is to understand how the metaverse can not only recreate or simulate existing physical spaces but also enhance users' agency and ability by enabling activities and access to facilities one cannot reach in the real world. We must ensure that technology enhances cooperative work in digital enterprises rather than hindering it; hence we will look at further research directions from an accessibility point of view.
– Platform scalability is when one can switch between AR/VR-enabled phones, tablets, desktops, and other forms of interfaces. Different platforms provide options to more people, ensuring multiple means of engagement and expression. Understanding how interactions, presence, and embodiment are affected when a person switches from one platform to another will be helpful.
– Social scalability is when digital applications allow more users, both remote and colocated, to come together and collaborate. It looks at how inclusivity is tackled and how people are allowed to express and/or represent themselves. This includes language barriers as well. How can anonymity and accountability be maintained while socializing with other avatars to avoid irresponsible social behaviors [13]?
– Learning foundations: To avoid social embarrassment when exposed to new and unfamiliar technology, we need to look into the learning behaviors of users by setting up learning guides for easy adoption and better user experience, that is, formulating correct and comfortable onboarding and hardware setup procedures to minimize user hesitation and resistance.
One can assess how accessibility provides a rich avenue for further exploration.
6 Conclusion This paper proposes an extensive study of the metaverse architecture, its applications, and its constraints. The overview suggests that the metaverse is headed for quick adoption because it helps one work, play, and collaborate remotely and effectively. In this paper, we also looked at how enterprises are collaborating digitally in the metaverse and why it is so effective for upskilling, training, and collaborating. Companies are looking to capitalize on this decentralized, persistent, immersive ecosystem because it enhances the way employees work remotely and improves their experience while also saving on costs: saving the commute to offices conserves both electricity and fuel, and digitizing documentation saves paper. Cutting back on paper production saves a lot of manufacturing energy and thus reduces the carbon footprint. This mode of working leapfrogs online video conferencing calls, which are restricting, taxing, and lacking in emotional connection. Businesses can ship products effectively by testing out prototypes in mixed reality first, allowing them to get quick access to data insights. The metaverse also ensures inclusivity and accessibility for wider masses by allowing for opportunities otherwise not available in the physical world, increasing the options available to a person with disabilities.
References
[1] Wang, Yuntao, Zhou Su, Ning Zhang, Rui Xing, Dongxiao Liu, Tom H. Luan, and Xuemin Shen. 2022. "A Survey on Metaverse: Fundamentals, Security, and Privacy." IEEE Communications Surveys & Tutorials.
[2] Bhattacharya, Kakali. 2014. "A Second Life in Qualitative Research: Creating Transformative Experiences." In Douglas J. Loveless, et al. (Eds.), Academic Knowledge Construction and Multimodal Curriculum Development, pp. 301–326. Hershey, PA: IGI Global.
[3] Petit, Nicolas, Thibault Schrepel, David Teece, and Bowman Heiden. 2022. "Metaverse Competition Agency: White Paper." Available at SSRN.
[4] Kraus, Sascha, Dominik K. Kanbach, Peter M. Krysta, Maurice M. Steinhoff, and Nino Tomini. 2022. "Facebook and the Creation of the Metaverse: Radical Business Model Innovation or Incremental Transformation?" International Journal of Entrepreneurial Behavior & Research 28(9): 52–77.
[5] Dionisio, John David N., William G. Burns III, and Richard Gilbert. 2013. "3D Virtual Worlds and the Metaverse: Current Status and Future Possibilities." ACM Computing Surveys (CSUR) 45(3): 1–38.
[6] Wiggers, Kyle. 2022. "How the Metaverse Could Transform Upskilling in the Enterprise." VentureBeat. https://venturebeat-com.cdn.ampproject.org/c/s/venturebeat.com/2022/01/26/how-the-metaverse-could-transform-upskilling-in-the-enterprise/amp/.
[7] Traill, D. M., J. D. Bowshill, and P. J. Lawrence. 1997. "Interactive Collaborative Media Environments." BT Technology Journal 15(4): 130–140.
[8] "VB Special Issue – January 2022." VentureBeat. venturebeat.com/vb-special-issue-metaverse/.
[9] Traill, D. M., J. D. Bowshill, and P. J. Lawrence. 1997. "Interactive Collaborative Media Environments." BT Technology Journal 15(4): 130–140.
[10] Sawers, Paul. 2022. "Identity and Authentication in the Metaverse." VentureBeat.
[11] Wiggers, Kyle. January 26, 2022. "How the Metaverse Will Let You Simulate Everything." VentureBeat. https://venturebeat.com/2022/01/26/omniverse-ability-to-simulate-anything-selfdriving-cars-energy-power-consumption/.
[12] PricewaterhouseCoopers. "What Does Virtual Reality and the Metaverse Mean for Training?" PwC. https://www.pwc.com/us/en/tech-effect/emerging-tech/virtual-reality-study.html.
[13] Roesner, Franziska, and Tadayoshi Kohno. 2021. "Security and Privacy for Augmented Reality: Our 10-year Retrospective." In VR4Sec: 1st International Workshop on Security for XR and XR for Security.
[14] Mystakidis, Stylianos. 2022. "Metaverse." Encyclopedia 2(1): 486–497.
[15] Bailenson, Jeremy N. 2021. "Nonverbal Overload: A Theoretical Argument for the Causes of Zoom Fatigue."
[16] Mystakidis, Stylianos. 2021. "Deep Meaningful Learning." Encyclopedia 1(3): 988–997.
[17] Sivasankar, G. A. 2022. "Study of Blockchain Technology, AI and Digital Networking in Metaverse." IRE Journals 5(8): 110–115.
[18] Scavarelli, Anthony, Ali Arya, and Robert J. Teather. 2021. "Virtual Reality and Augmented Reality in Social Learning Spaces: A Literature Review." Virtual Reality 25: 257–277.
[19] Guna, Jože, Gregor Geršak, Iztok Humar, Jeungeun Song, Janko Drnovšek, and Matevž Pogačnik. 2019. "Influence of Video Content Type on Users' Virtual Reality Sickness Perception and Physiological Response." Future Generation Computer Systems 91: 263–276.
[20] Xu, Minrui, Wei Chong Ng, Wei Yang Bryan Lim, Jiawen Kang, Zehui Xiong, Dusit Niyato, Qiang Yang, Xuemin Sherman Shen, and Chunyan Miao. 2022. "A Full Dive into Realizing the Edge-enabled Metaverse: Visions, Enabling Technologies, and Challenges." IEEE Communications Surveys & Tutorials.
[21] Taaffe, Ouida. 2022. "Why the Metaverse Will Depend on Advances in Edge Computing." Raconteur, February 21, 2022. https://www.raconteur.net/technology/edge-computing-customer-experience/.
[22] Humar, I., J. Guna, G. Geršak, J. Song, J. Drnovšek, and M. Pogačnik. 2019. "Influence of Video Content Type on Users' Virtual Reality Sickness Perception and Physiological Response." Future Generation Computer Systems 91: 263–276.
[23] Xi, Nannan, Juan Chen, Filipe Gama, Marc Riar, and Juho Hamari. 2023. "The Challenges of Entering the Metaverse: An Experiment on the Effect of Extended Reality on Workload." Information Systems Frontiers 25(2): 659–680.
[24] "Metaverse Development Tools, Metaverse in Unity, Metaverse in Roblox." Eagle Visionpro, 19 Dec. 2021. eaglevisionpro.com/what-are-the-metaverse-development-tools/.
[25] "Socializing in the Metaverse." Science, Translated. sciencetranslated.org/socializing-in-the-metaverse/.
[26] "Socializing in the Metaverse." Science, Translated. sciencetranslated.org/socializing-in-the-metaverse/.
[27] "VB Special Issue – January 2022." VentureBeat. venturebeat.com/vb-special-issue-metaverse/.
[28] He, Zhenyi, Ruofei Du, and Ken Perlin. 2020. "CollaboVR: A Reconfigurable Framework for Creative Collaboration in Virtual Reality." In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 542–554. IEEE.
[29] "VB Special Issue – January 2022." VentureBeat. venturebeat.com/vb-special-issue-metaverse/.
Pawan Whig, Shama Kouser, Ankit Sharma, Ashima Bhatnagar Bhatia, Rahul Reddy Nadikattu
7 Exploring the synergy between augmented and virtual reality in healthcare and social learning
Abstract: The market for augmented and virtual reality (AR/VR) in healthcare is growing at a rapid pace, according to market reports. The global augmented and virtual reality in healthcare market is expected to reach USD 18.7 billion by 2028, up from USD 1.5 billion in 2021, at a compound annual growth rate (CAGR) of 30.2%, owing to the increased penetration of connected devices in the healthcare sector. The stakes in the healthcare business don't get any higher than this: it is human life on the line. The utilization of the most modern, cutting-edge solutions is critical to the healthcare system's efficacy. Healthcare (more so than other industries) is more open to new technology due to public interest and engagement as well as funding. Even on the surface, VR and AR in healthcare appear to be a logical fit. These technologies provide practical answers to the healthcare system's many issues as well as countless diversified chances for their deployment in a variety of fields, such as general diagnostics and medical education. This chapter discusses AR and VR in the healthcare sector in detail, and the case study at the end of the chapter makes it very useful for researchers working in the same field.
Keywords: Healthcare, AR, VR, CAGR, technology
1 Introduction The market for augmented and virtual reality (AR/VR) in healthcare was estimated at USD 2.3 billion in 2021 and is anticipated to increase at a compound annual growth rate (CAGR) of 26.88% from 2022 to 2030 to reach USD 19.6 billion [1]. According to a recent estimate, the healthcare AR and VR business is expected to grow to around USD 9.7 billion in value over the next 5 years [2]. By 2027, this particular market, which is presently valued at close to USD 2.7 billion, will have increased by a factor of almost 3.5, as shown in Figure 1.
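As a quick sanity check of the projection arithmetic above, the standard compounding relation end_value = start_value × (1 + CAGR)^years can be evaluated directly. The short Python sketch below simply applies that formula to the figures quoted in this paragraph; the function and variable names are illustrative and not taken from the cited reports.

# Hedged sanity check of the USD 19.6 billion projection quoted above.
def project_value(start_value_bn: float, cagr: float, years: int) -> float:
    """Compound a starting market value forward at a fixed annual growth rate."""
    return start_value_bn * (1.0 + cagr) ** years

base_2021_bn = 2.3        # USD billion, 2021 estimate from the text
cagr = 0.2688             # 26.88% CAGR quoted for 2022-2030
years = 2030 - 2021       # nine compounding periods

projected_2030_bn = project_value(base_2021_bn, cagr, years)
print(f"Projected 2030 market size: USD {projected_2030_bn:.1f} billion")  # prints ~19.6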
✶ Corresponding author: Pawan Whig, Vivekananda Institute of Professional Studies-TC, New Delhi, India, e-mail: [email protected] Shama Kouser, Department of Computer Science, Jazan University, Saudi Arabia Ankit Sharma, Ashima Bhatnagar Bhatia, Vivekananda Institute of Professional Studies-TC, New Delhi, India Rahul Reddy Nadikattu, University of Cumberland, USA
https://doi.org/10.1515/9783110981445-007
Figure 1: AR and VR market size by 2028 (augmented reality and virtual reality in healthcare market size by region, in USD, 2016–2028).
The reasons for this growth are many. Most importantly, AR and VR have the potential to enable a range of new healthcare modalities, from enhancing the training of doctors and other medical professionals to expanding their capacity to practice medicine through telehealth and telemedicine [3]. Technology has opened up a completely new way to look at medical practice. One can understand the difference between VR, AR, and MR (mixed reality) as shown in Figure 2.
Figure 2: Difference between AR, VR, and MR. VR is a fully artificial environment with full immersion in the virtual environment; AR overlays virtual objects on the real-world environment, enhancing the real world with digital objects; MR combines a virtual environment with the real world, letting users interact with both the real and the virtual world.
There are countless uses for this technology as it is refined. This technology will make it possible to provide high-quality education without being constrained by location or available resources [15]. Additionally, greater collaboration will be possible in the virtual world. Because of the realism that AR and VR make possible, it won't be long until experts can engage with one another and with patients in real time, potentially even performing surgeries or physical tests as necessary. One of the most powerful AR/VR platforms in the world, Microsoft's HoloLens platform, is already being used for some of this collaborative work. What drawbacks exist? One is that greater dependence on technology always leaves one open to risk. In light of the growing use of the cloud and digital transmission of patient data, how will developers assure security and privacy? Furthermore, a healthcare career values people. Is the healthcare industry harming itself by eradicating the human element of the patient-physician connection by replacing physical reality with virtual and augmented reality technology? The hardware segment, which accounted for about 67.8% of the market's revenue in the past and is predicted to grow well in the years ahead, is expected to dominate the market in terms of components [4, 5]. The rising demand for various devices, including 3D sensors, smart glasses, and head-mounted displays, will propel market expansion in the years ahead. The healthcare industry uses each of these tools extensively. These gadgets are used for training and simulation. These tools are also widely utilized for carrying out different types of procedures [6]. The application of these tools for diagnostics will also be important. The global VR device shipment share is shown in Figure 3.
Figure 3: Global VR device shipment share by vendor (Sony, Oculus (Facebook), HTC, Microsoft, and others).
The surgery application category is anticipated to command the biggest market share over the next several years, as these tools are employed in the performance of minimally invasive operations. The technology will eventually be used in more operations due to ongoing
improvements in the field. To bring innovative items to the market, many businesses are forming partnerships and collaborating [7]. Based on the technology being employed, the AR category is anticipated to outpace the VR segment in terms of market share. In terms of revenue, AR previously held a market share of roughly 59% and will increase over the next several years [8]. The various areas in which AR/VR are popular beyond gaming, along with their shares, are shown in Figure 4.
Figure 4: Various areas in which AR/VR are used apart from gaming (healthcare and medical devices, education, workforce development and training, manufacturing, automotive, marketing and advertising, logistics/transportation, retail/e-commerce, military and defense, commercial real estate, residential real estate, tourism, and others).
2 Latest market update In recent years, there has been an upsurge in demand for wearable technology. The rising usage of VR and AR has also improved how customers experience exercise. The adoption of this technology has produced more affordable and easily accessible healthcare facilities [9, 10]. In the upcoming years, it is anticipated that the adoption of various wearable devices, including rings, fit bands, and goggles, will drive market expansion. The usage of these devices to treat patients' mental health will also drive market expansion. The industry is anticipated to expand well in the coming years due to how well these technologies treat depression and make patients' surroundings safer. The usage of these devices also improves doctor-patient communication and
aids in better understanding of the course of therapy, both of which will spur market expansion in the years to come. In the upcoming years, the adoption of AR and VR by surgeons and psychiatrists is anticipated to propel market expansion [11, 12].
2.1 Restraints Several technological constraints will limit the use of VR and AR technologies in this market. VR may be ineffective for treating some medical disorders, which will hold back market expansion in the upcoming years. Other factors affecting market expansion include the specifications and resolution of the computer being used.
2.2 Opportunities In-patient psychiatrists are heavily utilizing these technologies to treat anxiety and depression, as the use of AR and VR in the diagnosis of various diseases has increased and it also aids in planning a proper path for treatment. As a result, the demand for these platforms and technologies is expected to grow in the coming years. Because these technologies have proven very helpful in executing difficult procedures, demand for these platforms is anticipated to increase significantly in the healthcare sector over the forecast period.
2.3 Challenges The high cost of augmented and VR systems is one of the main obstacles preventing the sector from expanding. The platforms for AR that are employed in the healthcare industry are quite costly and intricate. Because of the numerous security and privacy issues surrounding the usage of AR, industry development will be constrained in the years to come.
3 Impact of Covid-19 on the healthcare market for augmented reality and virtual reality The worldwide AR and VR in healthcare market was worth USD 2.0 billion in 2020, and it is predicted to increase at a CAGR of 27.2% between 2021 and 2028. Among the key determinants expected to drive the growth and adoption of AR and VR innovations in the healthcare sector are technological advancements and digitization in
health coverage, favorable regulatory proposals, soaring healthcare spending, increasing usage in surgeries, and clinical experience [13]. The pre- and post-pandemic impact on AR and VR is shown in Table 1. These technologies have a wide range of uses in medicine, such as surgery, diagnosis, rehab, education, and learning [14]. The impact analysis of Covid-19 is concisely presented in Figure 5.
Table 1: Pre- and post-Covid impact on AR/VR.
Pandemic impact: According to prior predictions, the market for augmented reality and virtual reality in healthcare was predicted to reach more than USD million in , but instead it increased by % from to . | Post-Covid outlook: Due to the shift in trends toward AR and VR applications and acceptance in the healthcare sector, the market will grow by % from to .
Pandemic impact: The primary drivers of market expansion are the rising demand for services including telemonitoring, medical training and education, and patient management. | Post-Covid outlook: The advancement of technology in software and hardware systems is lowering costs and improving the user and developer experience. This, in turn, will accelerate the use of augmented and virtual reality.
Pandemic impact: Furthermore, during a pandemic emergency, there is a quick spike in the need for rapid digitization, training of healthcare specialists, and an increase in usage by the healthcare sector. | Post-Covid outlook: Key healthcare firms are increasing their investment in this industry to improve patient outcomes, education, and communication.
Figure 5: Impact analysis of Covid-19 on the global augmented reality and virtual reality market, 2020–24 (pandemic impact on market growth, expected incremental growth, and the expected time by which the impact on the market normalizes).
4 Healthcare industry and fresh opportunities The following section discusses some of the key ways that VR is transforming the healthcare industry and opening up fresh opportunities to enhance various medical treatment modalities.
4.1 Treatment of pain When a patient is engaged in VR, the somatosensory cortex and the insula, which are responsible for pain, are less active. By relieving pain in this way, VR pain management can help patients bear uncomfortable medical procedures.
4.2 Therapy Physicians and medical experts are using VR to treat patients who have phobias. Patients confront their fears in VR-controlled surroundings, which trains them to overcome those fears. Virtual simulation is also being used in clinics and hospitals in India to help people cope with terrible past occurrences. These simulations recreate genuine situations within immersive surroundings to help patients cope with their worries. The VR-enabled solution is causing quite a stir in the Indian healthcare industry, where it is effectively assisting those suffering from stress and depression in their recovery.
4.3 Cognitive retraining Doctors frequently utilize VR in chronic pain therapy to watch patients execute complicated real-world tasks after they have suffered a chronic stroke or brain injury. Certain tasks are reproduced within the virtual world via VR, allowing patients to recover more quickly. Cancer rehabilitation suites were developed in collaboration with AIIMS, one of India's premier healthcare facilities, to provide an all-in-one rehabilitation solution. The AR/VR healthcare ecosystem is shown in Figure 6.
Figure 6: AR/VR healthcare ecosystem (investment/financing, raw materials, regions/countries, components, products, parts and devices, applications, and services and solutions).
5 Physical rehabilitation VR is benefiting patients suffering from phantom limb discomfort tremendously in the Indian healthcare industry. VR has made caretakers' jobs simpler by allowing patients to accomplish tasks with their missing limbs, relieving pain and stress.
5.1 Improved surgical techniques With the inclusion of sophisticated technology solutions, medical education and AR-aided surgery have improved as India advances with better communication and education facilities. Surgeons employ VR-created 3D models to plan procedures ahead of time, improve precision, and reduce the risks involved in difficult surgery.
5.2 Autism and Alzheimer's treatment VR is being used to treat conditions like autism and Alzheimer's. The first immersive, all-in-one autism suite in the world, co-created by Lady Harding Hospital, includes VR games for autism along with cognitive exercises, excursions, and training.
5.3 Better medical treatment techniques By enhancing the actual world in a three-dimensional paradigm, VR is not only helping doctors identify issues more accurately but is also giving medical students the chance to gain first-hand experience in an immersive setting. Healthcare practitioners can safely practice, develop, and evaluate their medical abilities through artificial representations of virtual environments that mirror real-world scenarios. Due to its ability to lower human error rates and enable learners to produce more correct output, it has become crucial for our society. To better manage patients with serious illnesses including cancer, speech disorders, spinal injuries, and multiple sclerosis, Indian hospitals have largely adopted this strategy.
5.4 VR in the medical sector The healthcare industry has taken note of VR since it offers so many advantages. Over the past 10 years, it has seen exponential development and is now steadily rising to redefine its potential and effect in the healthcare business. The advancement of VR will enhance the healthcare sector and provide a brighter future for the field.
5.5 Future of AR/VR in the healthcare sector VR has made strides and is predicted to have an extremely promising future. The Indian healthcare sector is incorporating it to improve and become a more practical answer for both healthcare professionals and patients. Key technologies used as a human interface in industry 4.0 are shown in Figure 7.
Figure 7: Key technologies used as a human interface in Industry 4.0 (AR/VR, IoT, cloud computing, big data, robotics, simulations, and cybersecurity).
6 Virtual reality versus augmented reality Despite being a technique that emerged decades ago, the AR paradigm is still new to many individuals. Confusing the term VR with AR is still very common. The biggest distinction between the two is that, using a dedicated headset, VR creates the world in which we immerse ourselves. It is interactive, and everything we see is part of a digitally created world of pictures and sounds. In AR, on the other hand, our universe becomes the context onto which objects, pictures, and the like are placed. Everything we see is in the real world, and wearing a headset might not be specifically required. Pokémon Go is the clearest and most mainstream example of this notion. There is, though, a mixture of these realities, called blended or mixed reality (MR), as well. For example, this hybrid technology makes it possible to see virtual objects in the real world and to create an environment where the tangible and the digital are essentially indistinguishable.
6.1 History of VR The lineage is a bit fuzzier, as true VR takes hold of our brains as an all-encompassing simulacrum. As with other scientific breakthroughs, the vision undoubtedly originated with science fiction, primarily the short story "Pygmalion's Spectacles" by Stanley G. Weinbaum in 1935, in which a physicist creates a pair of glasses that make the wearer feel as if they are in the story.
6.1.1 Early precursors, 1838–1939 1838: The stereoscope is invented by Charles Wheatstone, giving spectators their first glimpse of stereoscopic depth. This gadget paved the way for the tools we use today in cinematography, lighting, and beyond. 1849: David Brewster upgrades the stereoscope and creates a "lenticular stereoscope." Using the results of his experiments in physical optics, he builds the first portable 3D viewer. 1901: L. Frank Baum publishes a novel and, for the first time in history, references AR-like technology. The Master Key: An Electrical Fairy Tale features a young boy who is passionate about electricity and electronics. When the Demon of Electricity is summoned, he gives him a gift – the "character marker," a pair of glasses that exposes the latent character defects of humans. 1929: Ed Link produces the "Link Trainer" flight simulator. Thanks to the use of pumps, valves, and other equipment, this simulator helped pilots obtain an accurate depiction of how it feels to fly an aircraft. This was a successful early effort at a virtual-reality prototype. 1935: In his short story Pygmalion's Spectacles, Stanley G. Weinbaum describes a pair of glasses that allow the user to explore simulated worlds with the assistance of holographic pictures, smell, touch, and taste. Science fiction anticipated, as it so often does, what we now have. 1939: The View-Master is created by William Gruber – a simple stereoscopic viewer that presents two offset images to produce a single 3D image. The View-Master was affordable for the consumer market and could be seen in the bedroom of almost any boy.
6.1.2 Virtual reality from the 1950s to the 1980s 1952: Sensorama, the first VR-like device of immersive multimodal technology, which included a stereoscopic color monitor, odor emitters, a sound system, and fans, was invented by Morton Heilig. Thanks to the capacity of Sensorama to immerse users in a wide-angle stereoscopic image, Heilig succeeded in capturing the full attention of the audience.
1960: Morton Heilig invents the first head-mounted display. His proprietary Telesphere Mask gave the wearer a stereoscopic 3D image and stereo sound. Can you see the resemblance to today's VR gear? 1961: The Headsight is produced by Comeau and Bryan. This wearable system monitored the movement of the head and projected an image for each eye on a screen. It had magnetic tracking and a remote camera that matched the movement of the head. No computer simulation was involved, but the device was partially similar to current VR helmets. 1968: The Sword of Damocles, another VR head-mounted display, was invented by Ivan Sutherland and his pupil, Bob Sproull. Ivan was already known for his successes in the development of computer graphics, and his expertise helped him create this gadget. It showed computer-generated wireframe spaces, and the picture perspective relied on head-tracking data. 1969: Myron Krueger creates a series of computer-generated environments that responded to the individuals within them, allowing people to interact directly with one another. He called this experience "artificial reality." 1974: Myron Krueger builds the Videoplace artificial reality lab. The concept grew out of his experimentation with simulated surroundings. Myron developed the lab to enable people to interact in "artificial reality" without gloves or other equipment, monitoring their gestures instead. The Videoplace contained all the hardware components required to place the user in the virtual environment. 1982: Dan Reitan brings AR to TV. AR gained mass acceptance thanks to this work, and the technology is still being used today. To add graphics to a weather broadcast, Dan combined space and radar cameras. This digital weather chart was the first time AR was used publicly. 1987: Thanks to Jaron Lanier, the name "virtual reality" is officially born. He had previously founded VPL Research, the first company to market VR products. It also distributed specialized software that allowed VR applications to be created. The devices developed at VPL Research were fairly primitive: the EyePhone head-mounted display, the DataGlove for data entry, and early 3D image renderers and stereo sound simulators. The DataGlove was later licensed by Mattel to create the Power Glove for Nintendo, but it was not a success.
6.1.3 Virtual reality in the 1990s and 2000s 1990: The word “augmented reality” is introduced by Tom Caudell. He worked at Boeing and came up with an alternative to the diagrams used to direct staff in the area. He recommended equipping staff with head-mounted wearables that would
project the schematics of the aircraft on reusable screens. With the assistance of a computer machine, the displayed photographs may conveniently be edited. 1991: Virtuality Party produces VR arcade machines at video game stores that can be located. The computers had a fast reaction time and equipped players with multiplayer games with stereoscopic vision cameras, game controls, and the ability to collaborate. A bunch of hardware devices were already powered by VR gaming in the 1990s, such as VR headsets, graphics rendering subsystems, 3D trackers, and exoskeleton-like wearable parts. This pattern is already underway. 1992: Lawnmower Man, the premiere of the show. The plot was based on a scientist’s fictitious story that used VR on a mentally ill patient. This was another case of how VR became a natural part of the film business through the mainstream. 1993: A VR headset is being developed by SEGA. The version was intended to supplement the consoles and arcades of the Sega Genesis and Saturn, but only one arcade was released. 1994: Julie Martin delivers the first-ever “Dancing in Cyberspace” theatrical AR show. Digital artifacts and environments were manipulated by the performers, providing an immersive picture. 1995: Nintendo is introducing the patented VR-32 unit, which will later be known as the Virtual Boy. Nintendo reported at the Consumer Electronics Show that their new device would provide players with a beautiful experience of engaging with VR. Virtual Boy was the first home VR system, and they released the first home VR product ever, which was a huge risk for the company. 1996: The first-ever AR device appears with CyberCode 2D markers. It was focused on the 2D barcodes that even low-cost cameras mounted on mobile devices could identify. The device was able to evaluate the tagged object’s 2D location and has been a base for a variety of AR applications. 1998: For the NFL game localization, Sportvision applies AR. With the assistance of an AR overlay, the perspective of the spectator was improved. Smoothly drawn on the ground was a yellow first-down marker. It gave a guide to the state of play to the audience. 1999: The first AR wearable equipment for BARS soldiers has been published. The Battlefield Augmented Reality System aimed to help soldiers enhance battlefield vision, communications, location-identification of enemies, and overall situational awareness. AR is being used by NASA to guide the X-38 spacecraft. The car was fitted with an AR-powered navigation dashboard. 2000: Released by ARToolKit. This was a groundbreaking library tracking computer that allowed AR applications to be developed. It is an open-source and hosted on GitHub right now. A wearable EyeTap system has been developed. EyeTap perceives the eye of the user as both a display and a camera and, with computer-generated data, enhances the environment the user sees.
6.1.4 Virtual reality today The world of VR has seen significant strides over the past 10 years, driven largely by a battle among tech giants – Amazon, Apple, Facebook, Google, Microsoft, Sony, and Samsung have all created VR and AR divisions. However, as it appears to come with a hefty price tag attached, buyers are still on the fence about VR tech. 2010: A concept for what would become the Oculus Rift VR headset is developed by Palmer Luckey. 2013: Valve Corporation discovers a way for Oculus and other manufacturers to display lag-free VR content, and shares it freely. Along with the HTC Vive headset and controllers, Valve and HTC confirmed their collaboration in 2015 and launched the first version in 2016. 2014: Shortly after the first shipment of kits went out through the Kickstarter campaign, Facebook acquired Oculus VR for around USD 3 billion. A complaint was later brought against Facebook and Oculus for stealing business secrets. In the same year, Sony unveils Project Morpheus, later PlayStation VR, for the PlayStation 4 video game console. 2015: Google launches Cardboard, a stereoscopic do-it-yourself viewer in which a person holds their phone to their head inside a literal piece of cardboard. It resolved the question of the price tag, but is it really a VR headset? This is debatable. 2016: Hundreds of enterprises are producing augmented and virtual reality goods. Most of the headsets had dynamic binaural audio, but there was a shortage of haptic interfaces. 2018: Oculus unveiled the Half Dome at the Facebook F8 Developer Conference, a headset with a 140° field of view.
7 Case study The assessment of the brain-computer interface model is a crucial step in creating engineering systems to guarantee the effectiveness and quality of the system. The suggested concept seeks to replace traditional mechanical laboratories filled with heavy machinery with virtual technologies that would aid students' comprehension of and engagement with machines. A key difficulty was how to assess the system so that student learning could be demonstrated. After extensive investigation, it was concluded that a brain-computer interface (BCI), which is frequently used in applications like the one our team was working on, is the best method to accomplish that. BCI is a technology that establishes a direct connection between the brain and an outside device, such as a computer or robotic arm, using electrodes. A BCI system can be established in one of two ways: invasively or noninvasively. The electrodes must be surgically implanted under the patient's skin for the invasive method. On the other hand,
the noninvasive method does not require surgery; it is carried out by attaching electrodes to the patient's head. The most well-known method for developing a noninvasive BCI system that records brain activity is electroencephalography (EEG). The primary factors driving EEG's popularity, and the reasons the team chose to adopt it, are its ease of use, portability, affordability, and high temporal resolution. Other options exist, but they are less practical. Applications for BCI include stress detection, motor imagery exercises, and spinal cord rehabilitation. In a conventional BCI, information is taken from the brain and sent to an external device to carry out a specific job. The main purpose of the proposed system's BCI with EEG is to extract the signals and categorize them to obtain particular information, as shown in Figure 8, with the signal distribution shown in Figure 9.

# Importing MNE for EEG handling
import mne
# Importing numpy
import numpy as np
# Importing SciPy
import scipy as sp
# Importing the pandas library
import pandas as pd
# Importing glob to scrape file paths
from glob import glob
# Importing display() for better visualization of DataFrames and arrays
from IPython.display import display
# Importing matplotlib for plotting
import matplotlib.pyplot as plt
import math
from skimage.restoration import denoise_wavelet
from scipy.signal import savgol_filter
from scipy.signal import medfilt
import seaborn as sns

# Reading the raw EEG recording into a data frame and dropping the timestamp column
d_frame = pd.read_csv('../input/eeg-dataset-collected-from-students-usingvr/EEG_Dataset/Subject00_0.csv')
d_frame.drop(columns='TimeDate', inplace=True)
# Visualizing the raw FP1 channel values
plt.plot(d_frame['RAW'][0:])
plt.xlabel('Time')
plt.title('The first three seconds of subject_01 FP1 Channel')
d_frame.head()
Figure 8: Raw EEG signal extracted by the BCI.
d_frame['RAW'].hist()
plt.xlabel('Time')
plt.title('The Distribution of the EEG FP1')
Figure 9: Distribution of the EEG.
8 Processing of signals The signal processing algorithms employed in the proposed system are the same as those used by the authors of this study. To eliminate artifacts, the filters are defined using the SciPy library. To reduce background noise, the raw signals were first filtered with a third-order median filter. The filtered signals were then filtered again with low-pass and high-pass filters. These filters were designed as Butterworth filters of order 5 with cutoff frequencies of 0.5 and 50 Hz, so that the retained band covers 0.5–50 Hz. The Butterworth filter is a signal processing filter designed to have a maximally flat frequency response within its passband. Windowing: after filtering, the clean signals were split using a sliding window. The procedure involved several steps. First, a 4 s wide window was used to loop over the time-domain signal, separating it into 4 s pieces; the window width was chosen based on prior research. Both overlapping and nonoverlapping sliding windows were tried, and the overlapping approach proved more efficient and produced greater accuracy. The experiments began with a 50% overlap and then tried different settings until the best accuracy was found with a 4 s sliding window and a 3 s overlap. Following that, the segments were transformed from the time domain to the frequency domain using the fast Fourier transform.
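To make the pipeline above concrete, the short sketch below strings the same steps together using SciPy and NumPy. It is an illustrative sketch only, not the authors' implementation: the 256 Hz sampling rate is an assumption (the chapter does not state one), and a single order-5 Butterworth band-pass stands in for the separate low-pass and high-pass stages.
import numpy as np
from scipy.signal import medfilt, butter, filtfilt

FS = 256                  # assumed sampling rate in Hz (not given in the chapter)
WIN_S, OVERLAP_S = 4, 3   # 4 s sliding window with 3 s overlap, as reported above

def preprocess(raw):
    # third-order median filter to suppress background noise
    cleaned = medfilt(raw, kernel_size=3)
    # order-5 Butterworth band-pass keeping the 0.5-50 Hz band
    b, a = butter(5, [0.5, 50.0], btype="bandpass", fs=FS)
    return filtfilt(b, a, cleaned)

def fft_windows(signal, fs=FS, win_s=WIN_S, overlap_s=OVERLAP_S):
    # sliding window: 4 s segments advancing by 1 s (window width minus overlap)
    size, step = win_s * fs, (win_s - overlap_s) * fs
    for start in range(0, len(signal) - size + 1, step):
        segment = signal[start:start + size]
        # move each segment from the time domain to the frequency domain
        yield np.abs(np.fft.rfft(segment))

filtered = preprocess(np.random.randn(60 * FS))   # stand-in for one EEG channel
features = np.array(list(fft_windows(filtered)))
print(features.shape)                             # (number of windows, frequency bins)
Each row of the resulting array would then be categorized, as described in the case study, to estimate the student's state.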
9 Conclusion Everything in the world will be digitalized as a result of the power of computers and current technological advancements. Thousands of articles are produced each year that employ technology in many aspects of life, including education. Because the coronavirus pandemic halted normal daily activities, the team came up with the notion of employing technology to provide an effective way of continuing the educational process as it was before the epidemic. The answer is illustrated by the construction of a heavy-machine lab in VR to assist engineering students in maintaining their practical expertise through remote learning. The suggested system proceeded through three stages: creating the engine model, bringing it into the VR world using the Unity engine, and testing the system by designing a BCI system. The BCI measured students' interest by utilizing their EEG signals while they used the VR engine. As a result, it is possible to determine whether the suggested system is as successful as the actual lab on campus. In conclusion, the findings of our initial model were notable, since students loved using it and showed a strong interest in it. Furthermore, the BCI system can anticipate a student's interest with an accuracy of up to 99% using his or her EEG output.
Nafees Akhter Farooqui✶, Madhu Pandey, Rupali Mirza, Saquib Ali, Ahmad Neyaz Khan
8 Exploratory study of the parental perception of social learning among school-aged children based on augmented and virtual reality
Abstract: Social learning encompasses acquiring new habits by observing and imitating others. Social encounters spark the curiosity to know, which facilitates both positive and negative learning. This phenomenon generally occurs in the classroom, but it can also happen in commonplace social interactions, and it can take place in many formal, informal, and nonformal settings, both consciously and unconsciously. According to the assumption of social learning, new behavior can be picked up by emulating and imitating other people present in the environment. On this view, learning is a cognitive function that happens within a social setting and may occur solely through explicit teaching or observation, even in the absence of bodily replication or explicit reward. Vicarious reinforcement is the term for the mechanism through which knowledge is acquired by observing the incentives attached to a behavior in addition to the behavior itself. Frequently rewarding a given behavior increases the likelihood that it will continue; conversely, sporadically punishing a certain action often increases the likelihood that it will stop. The idea goes beyond conventional behavioral suppositions, which consider reinforcement as the only factor in behavior; instead, it addresses the prominent and central position that many internal dynamics play in the maturing individual. Strong connections have been observed between augmented reality (AR) and virtual reality (VR) and the cognitive processes called observational or social learning. The present society has witnessed significant advancements in technology, and AR is one such result of this development. Both AR and VR use simulations of real-world environments to potentially improve or supersede them. Utilizing the cameras on cell phones, AR typically enhances one's environment by overlaying technological features on a live stream. VR is an entirely immersive experience that substitutes a virtual experience for the actual world. In AR, a virtual world is created to dwell in the existing realm and provide users with more information about genuine reality without much effort. For instance, when a smartphone is pointed at a piece of malfunctioning equipment, commercial AR apps might instantly provide insights. VR is a comprehensive, realistic model that completely substitutes the customer's actual life with a digital one. These imaginary worlds are entirely artificial, so they are frequently created to be grandiose and iconic. For instance, a VR user might fight in a virtual boxing ring beside a fictional caricature of Mike Tyson. Since both AR and VR are intended to provide the customer with a recreated world, each technology is distinct and has a variety of applications. Owing to its capacity to provide digital configurations that add practical, real-life contexts, AR is progressively being adopted by companies in addition to infotainment settings.
Keywords: acquisition, augmented reality, conventional behaviour, social learning, virtual reality
✶ Corresponding author: Nafees Akhter Farooqui, School of Computer Applications, BBD University, Lucknow, India, e-mail: [email protected] Madhu Pandey, School of Liberal Arts, Era University, Lucknow, India, e-mail: [email protected] Rupali Mirza, School of Liberal Arts, Era University, Lucknow, India, e-mail: [email protected] Saquib Ali, School of Basic Science, BBD University, Lucknow, India, e-mail: [email protected] Ahmad Neyaz Khan, Integral University, India, e-mail: [email protected]
https://doi.org/10.1515/9783110981445-008
1 Introduction Virtual reality (VR) is increasingly described as a stimulus that creates sensory perception and brings users closer to reality, allowing them to experience their environment with all of their senses. Research has focused on VR in mental health, showing its efficacy in assessing and treating psychological disorders such as anxiety, schizophrenia, depression, and eating disorders. Hence, VR is used extensively in fields such as education, the military, architecture, social skills training, and surgery, among many more emerging areas, and it can be further utilized to develop industries like entertainment, traveling, and media [1]. VR successfully replaces the real stimulus with the artificial one it provides. The artificial experiences created by VR can be even more effective than the real ones; such artificial stimuli can be used for psychological treatment, for instance, in treating fears, phobias, and depression. VR exposure therapy has shown good results in treating patients in stressful situations in which psychological treatment can be administered by their therapists [2]. With technological advancement, augmented reality (AR) systems have been utilized in the same fields, such as entertainment, media, and education, as well as in medicine, and AR is also being used by therapists to treat psychological dysfunctions. AR systems use geospatial data to provide virtual elements to the user, allowing an amalgamation of images with animation and graphics [3]. Psychologists likewise administer AR exposure therapy to address multiple psychological issues. One observes that AR and VR both support observational or social learning. AR developed more recently as a result of technological advancement, and it has captured the entertainment, media, and animation industries more than VR because it provides more of a real-life experience.
In recent times, technology has been growing by leaps and bounds and will only keep rising in the near future. There is no part of human life that is not being taken to the next level by technology. The area of animation is an integral part of this growing technological advancement. Children who watch animated series and movies like Doraemon, Superman, and Cinderella have a quest for higher realism and real-life experience through visuals. The use of next-level technology to experience lifelike images through computers makes children captivated by the animation brought to them by AR, one of the recent advancements that have completely transformed the world of animation. Until a few years ago, animation was only a viewing experience; now, children can feel animation and live in the environment it creates. They use equipment like the Sony PlayStation and Google Cardboard to experience VR, which generally requires headgear or similar equipment. AR animation, however, gives children a headset-free experience on smartphones or any other screen. There are many mobile games, such as Pokemon Go, that use AR technology.
Figure 1: Augmented reality technologies (examples include Snapchat, Google Street View, Google Glass, photography and editing, AR maintenance, Google ARCore, Pokemon Go, and interior decoration apps).
Thus, we observe that the AR technologies shown in Figure 1 not only affect children through a next-level animation experience but have also enriched the lives of adults by providing multiple interfaces for entertainment and a better professional work experience. In present-day society, the exposure of both adults and children to such technology is on the rise.
2 Background of the augmented reality in the education and social learning system AR is an emerging technology that merges the physical world with virtual images, graphics, and sound, permitting digital images to be displayed in the physical world [4]. The New Media Consortium reported that "the powerful significance of the concept of blending information and therefore the real world in an increasingly experiential environment has pushed AR to the forefront in the realms of business, technology, entertainment, branding, and education" [5]. AR shows promise for building the training capacity and work processes of individuals, groups, and organizations [6]. For instance, augmented applications create opportunities for college students to utilize simulated practice examinations and allow physicians to view anatomical structures prior to surgeries [7]. AR has the potential to become the next mass medium, intertwining the brands people love with their everyday life [8]. In fact, "many of the newest and most potentially transformative developments in virtuality – social media, augmented reality devices, geolocative services – don't have anything to do with creating alternate worlds and everything to do with adding another layer of (virtual) reality to every day (real) life" [9]. Cutting-edge AR applications are being utilized in the marketing campaigns of several well-known businesses in the retail industry, such as Ray-Ban, which offers a virtual mirror application allowing users to try on its latest styles of glasses, and AR embedded on food packaging from companies like Starbucks. AR technology supports businesses by enhancing the five essential phases of commerce: "design, discovery, details, desire, and delivery." AR also has the potential to revolutionize education. The potential of AR for learning is its capacity "to enable students to work out the world around them in new ways and engage with real issues in a context with which the students are already connected" [10]. There are numerous examples of how schools are implementing AR in classrooms; for instance, the Georgia Institute of Technology and the Massachusetts Institute of Technology are working to enhance student learning through AR gaming simulations [11]. Further, AR has been utilized to make complex concepts in mechanical engineering more easily understood by students. AR is predicted to play a more significant role in teaching and learning over the next few years. The present teaching models may be successful, and new visualization technologies are poised to enhance the learning experience and increase student understanding [12]. As teaching and learning is a crucial process, many computer-based technologies have been proposed to provide new experiences in these activities. Over the past 20 years, many researchers and educators have worked on ways to bring AR and VR into the educational system. For instance, students and researchers may use virtual heritage as a medium in their studies of historical events. Studies have shown that AR
can enrich teaching and learning practices in the educational sector [13]. For instance, one study assessed the learning experience of a group of children who learned astronomy through a PC-based application against that of a group who learned the same subject using a projector-based mixed reality (MR) application [14]. The analysis shows a significant difference in how the two groups understood the subject's concepts: while the PC group focused more on the surface details of the planets, the MR group was concerned more with the direction or movements of the Earth. This provides insight into the cognitive differences of the students, given different learning experiences. Similarly, augmented learning is an on-demand learning technique where the training environment adapts to the needs and inputs of learners [15]. Another work implemented AR to teach natural sciences to preschoolers, where the observation results show a very positive impact on the group of students who used the AR materials over traditional materials [16]. By using AR, the pupils learn more, and they achieve more learning goals than with the non-AR method. Furthermore, it was also reported that the teachers who participated in the study could easily implement the AR technology despite having no prior experience with it. In another study, researchers examined VR and AR within social cognition contexts, including schools and cultural events, as well as community engagement ideas and real-life, socially stimulating networking approaches. They employed constructivist approaches, social cognitive theory, collaborative learning, and role theory under an effective learning and teaching frame to develop a conceptual basis for the pedagogical systems of potential augmented world realities. Several examples of learning from augmented real-world situations were examined, and some possible future study topics were suggested, such as usability, the interaction between real and digital settings, and the basics of a restructured learning model [17]. Despite the current increase in academic interest in AR, various scholars have given different interpretations. They claimed that it would be highly beneficial for educationalists, investigators, and innovators to approach AR as a mechanism rather than a specific technology. Researchers subsequently made an effort to determine the distinct characteristics and capabilities of AR systems and related implementations. An AR system's educational strategy, and the synchronization of the technological interface, pedagogical strategy, and experiential learning, may be more significant. As a result, they describe three types of educational strategies that highlight "roles," "tasks," and "locations" and suggest that various types of AR strategies may aid in teaching. While AR presents great learning possibilities, it also presents additional difficulties for teachers. The extensive quantity of additional data that students receive in AR settings, the variety of technical tools they incorporate, and the challenging activities they perform may cause cognitive overload in the students [18]. The participation of learners in the academic pursuits of science, technology, engineering, and mathematics may be increased through the use of collaborative learning exercises. However, throughout their time spent studying, pupils do not generally have the opportunity to engage in socialization with one another. AR activities have
the potential to facilitate societal connections among children. Additionally, these games give players the opportunity to engage with virtual information while still participating in spontaneous dialogue in the actual environment. However, very little is currently known about how to create social AR games for educational purposes. In this segment of the study, an AR social learning game that allows primary school kids to practice arithmetic together was investigated. The analysts created and refined the game ideas on the basis of prior research and co-design sessions. They also carried out a usability survey to investigate how participants would respond and communicate with one another while playing the game. The insights expand our knowledge of the interpersonal tendencies that students exhibit in both collaborative and competitive contexts, including AR. Based on the results, the researchers came up with a number of design ideas that could be used in the future to make AR social learning games [19]. Researchers conducted a study in which they divided cohorts of participants based on scholastic focus and other educational parameters, with due consideration of prior findings from the literature. They drew the conclusion that AR games for educational purposes often have a beneficial impact. The most commonly observed impacts of AR educational games were an improvement in learning efficiency and better learning encounters in terms of contentment, enthusiasm, and entertainment. It was also discovered that AR learning games enhanced social connections, especially cooperation among learners. Formative assessments and the establishment of milestones were the game components most frequently used in the themes and designs of AR games. The most common applications of AR technologies were additional educational resources and 3D models, including face-to-face encounters [20]. Individuals with special educational needs may be able to grasp practical mathematical concepts and skills, yet students with special educational needs are often unable to get the most out of the mathematical teaching and pedagogical practices offered in regular classes. Owing to technological advancements, the teaching of mathematical logic to kids with special educational needs can be individualized and tailored. In the area of special education, researchers and instructors have been attempting to integrate technology into the mathematical curriculum for these pupils in order to enhance the learners' academic performance. The research presents information indicating various possible explanations for why the efficiency of tabletop educational approaches is much better than that of conventional techniques. The tabletop is presented as a unique technological device that provides new methods of connecting with and employing information systems as an "allusion." In addition, it boosts students' mental strength and encourages them to work together more effectively. The results show that the tabletop is a useful method that could work well in situations with special pedagogical needs [21].
There is a substantial repository of studies that pertain to the applications of AR for training in elementary and intermediate school systems all around the world. However, relatively little research has been conducted on the topic of AR in conjunction with game-based learning. How implementing games related to education might influence participants' enthusiasm, accomplishments, or accolades has yet to be intensively explored; yet game-based learning has the potential to enable new types of instruction and to revolutionize the learning experience. There have been studies that provide the findings of systematic evaluations of the literature on AR-based learning techniques in compulsory education. One such study took the benefits, drawbacks, instructional affordances, and/or efficacy of learning games across a variety of primary and secondary school courses as parameters for analysis. A total of 21 studies were assessed, 14 of them concentrating on primary education and 7 on secondary education; the reviewed research was published between 2012 and 2017 in 11 indexed journals. The primary results from this study provide an overview of the most recent scholarship on VR-based learning games in the context of compulsory schooling. As those educational games can affect students' attendance, information transfer, skill development, hands-on digital experience, and positive attitude toward their own education, trends and a vision for the future were also the main highlights. This review aimed to lay the groundwork for educators, technology developers, and other stakeholders involved in the development of literacy programs for young children by offering new insights, with effective advice and suggestions on how to increase student motivation and improve both the learning outcomes and the learning experience by incorporating virtual learning-based games into their teaching [22].
3 Social learning theory There has been a rise in the recognition of social learning theory (SLT) as a crucial factor in the facilitation of positive behavioral change. The essential premise of this theory is that we acquire knowledge through our social encounters with other people. Individuals learn to act in a certain way by mimicking the actions of others around them, which they observe. When individuals have favorable experiences or are rewarded for their imitation of others’ behaviors, they are more likely to adopt and embrace such behaviors themselves. Bandura states that in order to successfully imitate, one must replicate the motor actions of others. In recent years, SLT has emerged as one of the most important theories of learning and growth. It is founded on many of the cornerstone principles of established theories of education as it addresses all three stages, namely attention, memory, and motivation. This theory has been dubbed a link between behaviorist and cognitive approaches to learning [23].
Moreover, Bandura holds that explicit reinforcement cannot account for all forms of learning. Because of this, he included a social component in his theory, stating that individuals might acquire various skills and habits just by observing others around them. The parts of the theory suggest three general rules for how people can learn from each other.
3.1 General ideas about SLT The rules of social learning are assumed to work in the same manner throughout life: anyone, at any age, can learn from what they see. As long as individuals can encounter prominent, admirable models who have access to resources, a person can always learn something new through modeling [24]. Social learning theory (SLT) says that people learn from each other in three ways, as shown in Figure 2.
Figure 2: Conceptual model of SLT (implementation ways: observing, modeling, and copying).
Following these guidelines, learning may take place without a corresponding behavioral shift; learners' knowledge may therefore not be reflected in their performance, contrary to the behaviorist argument that learning must be expressed by a long-term change in behavior. A modification in behavior is simply not guaranteed after learning. Bandura provided evidence that cognitive processes are involved in learning, and SLT has grown more cognitive in its understanding of human learning over the last three decades; these arguments are supported by Newman and Newman [24].
4 Behaviors acquired through imitation Models are the individuals who are observed, and the act of gaining knowledge through observation is referred to as modeling. This point is corroborated by Newman et al. [24].
If a person sees favorable and desirable results during the initial phase of social learning, then the person will move on to Bandura's postulated second and third phases of social learning, which are imitation and behavior modeling. If an educator, for example, watches a program in-world, finds the session entertaining and informative, and appreciates the way learners behave, then it is more likely that they will wish to present a program there too. They are then able to make use of the conduct they saw in order to replicate and emulate the instructional strategies of other educators in the real world [25]. Early literature supported the idea that much behavior may be acquired in essence through imitation. Instances include seeing parents reading to their children, watching arithmetic exercises, or witnessing someone behave boldly in a scary circumstance [26]. In light of this, models may also be used to teach violence. Numerous studies show that when kids see violent or antagonistic role models, they grow more violent themselves. According to this perspective, modeling and observation have an impact on one's moral reasoning and conduct; consequences come from learning how to make good ethical decisions and how to avoid making bad ones [27]. According to the available research, there are three different conceptions of SLT. To begin, one kind of learning known as observational learning allows individuals to learn by the act of observing others. Second, one of the most essential factors in learning is one's state of mind, which is often referred to as intrinsic reinforcement. Lastly, learning does not always result in an alteration in behavior, because of the modeling process [27].
5 Observational learning Bandura's renowned research study, known as the Bobo doll experiment, was done in 1961 to show that trends of behavior can be explained, at least in part, by SLT and that comparable behaviors are learned by people modeling their own behavior on the acts of others. Bandura's findings from the Bobo doll experiment altered the direction of contemporary psychology, and he is largely recognized as helping to move the emphasis of psychological science away from pure behaviorism and toward cognitive psychology. The experiment is widely regarded as one of the most famous and widely appreciated of all psychological tests [24]. The research was notable because it contradicted behaviorism's assertion that all behavior is motivated by reinforcement or incentives. The youngsters were neither encouraged nor rewarded for beating up the doll; they were just repeating the conduct they had seen. Bandura referred to this phenomenon as observational learning and identified the components of efficient observational learning as attention, retention, reproduction, and motivation. He established that toddlers learn and mimic actions that they see in others.
During this procedure, he identified three fundamental types of models in observational learning:
– A live model: a real person demonstrating or acting out a behavior
– A verbal instructional model: descriptions and explanations of a behavior
– A symbolic model: real or fictional characters displaying behaviors in novels, movies, TV series, or internet content [27]
6 Impact of advanced technologies on the education system 6.1 Importance of smartphones and their impact on the education system Digital technologies are continually transforming the field of education. Smartphones are becoming more and more a part of our daily lives and have entered the educational field, and mobile learning has increasingly been used to improve teaching methods [28]. The term "mobile," according to Shuib et al. [30], denotes the prospect of activities taking place in many places, at distinct periods, and accessing items through a variety of devices, such as smartphones or tablets. Mobile learning is reported to be more entertaining and helpful when applied in preschool education [29], and it is also said to be more effective at amusing and entertaining small kids. Previous studies recommend research on the possible use of mobile devices in schooling; current research has lagged behind the rate of their use, which is surprising considering the fast expansion and improvement of smartphones [29, 30]. Smartphone learning may occur in any setting, be it a traditional classroom, a home, a public transportation vehicle, or even in front of an educational display. The ability of the student to communicate, interact, contribute, and develop utilizing resources that are widely available is more significant than the flexibility of the learning tools. The M-learning concept will change the education system and bring in new-generation techniques that improve learners' performance in less time while building their technological competence. M-learning is a hybrid of information and communications technologies that enables training to be received at any time and in any location, and it facilitates efficient learning. Students participating in M-learning can utilize their smartphones both within and outside of the school to acquire learning materials, communicate with others, or develop a product. In addition, M-learning can support the management of school institutions and help improve communication between organizations.
6.2 Gamification and importance in education Gamification is defined as the use of gaming components in a nongame setting. The classroom is one possible setting for this to occur, and many teachers think that incorporating gaming into the classroom will be helpful [28]. When asked to compare game-based learning to more classic means of education, pupils overwhelmingly chose the former [28]. In addition, the widespread use of smartphones has made gamifying almost any activity or procedure a lot simpler [31]. On the other hand, there is no assurance that gamification will make the learning objectives more transparent or simpler to learn: some studies found evidence to support the beneficial effects of gamification [33], whereas other research could not demonstrate such benefits [32]. The gamification of learning can be very beneficial for preschoolers, whose social activity may improve with the help of virtual characters. The modern education system has adopted this approach for playgroup and nursery students, which helps develop the personality of students in this growing generation. Therefore, the AR concept has been introduced into the learning process of modern school and preschool students.
7 Augmented reality Understanding AR requires understanding the reality-virtuality continuum [35], where one end is the physical world and the other is computer-generated imagery; the blend between them forms MR. AR is a composite reality in which the physical environment dominates [36]. AR shows users the real world with virtual objects projected onto it [34]. Researchers have identified three properties of AR systems [34]:
1. Combines the real and virtual world
2. Interactive in real time
3. Displayed in 3D
Carmigniani et al. [37] provided a detailed technical explanation of AR, identifying three primary platforms used for AR: head-mounted displays, portable displays, and spatial displays. Only in recent years, though, have handheld displays become commonplace; today's smartphones and tablets feature robust central processing units and camera hardware, making them an exciting platform for AR applications [28, 37]. Despite the benefits that could be gained from using this technology, teachers still face obstacles when trying to implement this new method of instruction (NMC Horizon Report, 2014). Several studies have shown that AR can improve classroom instruction. According to Radesky et al. [28], the use of AR has greatly aided student learning by allowing them to perceive abstract concepts more easily in 3D. As can be seen from Huang et al. [38], all the surveyed pupils had a positive experience with AR. Further, Yilmaz [39] found that both educators and learners have favorable impressions of the benefits of this technology. Some examples of augmented reality are shown in Figure 3.
Figure 3: Augmented reality: (a) displayed in 3D; (b) instrument for the augmentation.
8 Virtual reality VR includes advanced interface technologies, immersing the user in environments that can be actively interacted with and explored. The user can also accomplish navigation and interaction in a three-dimensional (3D) synthetic environment generated by the computer, using multisensorial channels. In this case, diverse kinds of stimuli can be transmitted by specific devices and perceived by one or more of the user’s senses. There are three fundamental ideas involved in VR: immersion, interaction,
and presence. Immersion can be achieved using a head-mounted display (HMD), trackers, and electronic data gloves that support user navigation and interaction, aiding the exploration of the environment and the easy manipulation of objects. Interaction means communication between the user and the virtual world. Presence is a very subjective sense, but fundamental to all VR applications, in which the user feels physically inside the virtual environment, participating in it. The virtual revolution has brought VR simulation technology to clinical and medical purposes since 1995. Although VR has existed since the 1950s, when Morton Heilig invented the Sensorama, which let users watch films in an immersive, multisensory 3D way, today's technological advances in processing power, image processing and acquisition, computer vision, graphics, speed, interface technology, HMD devices, software, body tracking, AI, and the internet of things have made it possible to build cost-effective and functional VR applications capable of operating on mobile devices and personal computers. In this context, one definition of VR states: "It typically refers to the use of interactive simulations created by computer software and hardware to engage users with an opportunity in an environment that generates feelings like real-world events and objects." In another definition, VR systems are deployed in concert to produce a sensory fantasy or illusion that constructs a believable simulation of reality. Comprehensively, VR can be defined as replicating real-life situations using an immersive learning environment, high visualization, and 3D characteristics, involving physical or other interfaces like motion sensors, haptic devices, and HMDs in addition to the computer mouse, keyboard, and voice and speech recognition. In general, the user interacts and feels that it is the real world, but the focus of interaction is the digital environment. Hence, VR systems have been widely applied to phobias, neuroscience, rehabilitation, disorders, and different forms of therapeutic issues, as well as to student learning and healthcare, to uplift society in productive ways by incorporating serious games and other techniques. VR has been used in different modes of the education system, from preschoolers to the school education system, to enhance modern education and improve the learning process. But the system also has drawbacks that affect the health and activity of preschoolers. Some children become so absorbed that they behave like the cartoon characters in their lives, which causes trouble for parents in managing them. According to most studies, however, parents use several methods to control them and improve their daily activities. They also want to make their children aware of the modern education system and the technologies that enhance their abilities. Several theories advocate the use of interactive media like VR in rehabilitation and education, since it can give fully regulated and more beneficial learning and practice experiences than the real world can. For example, comprehensive review theory (CRT) is an integrative strategy that emphasizes tailored training
of real-world activities via a variety of strategies for the recovery of daily functioning [40]. CRT is grounded in neuropsychiatric and cognitive psychology models [39]. Beyond the world of video games, virtual worlds have the potential to be used as dynamic classrooms. Virtual learning environments (VLEs) [40] and educational virtual environments (EVEs) [41] are gaining popularity in classrooms; for example, Figure 4 shows the virtual environment of a library. A MediaWise report that synthesizes the findings of several research studies concludes that "video games are natural teachers," despite some controversy surrounding the use of computer models in educational contexts, particularly among teachers and developmental psychologists who have questioned the appropriateness of "virtual" experiences for children [42]. Because of their interactive nature, youngsters are not only interested in them but also actively participate in them; they offer ample opportunities for training and reward proficient play. Given these realities, it is possible that computer games may have a large influence, some of which will be deliberate on the part of the game's creators while some will be unintentional [43].
Figure 4: Virtual environment of the library, showing a student accessing a virtual library [45].
8.1 Formats and design elements of VR technology This section reflects a general understanding of the available formats of VR, and which one is best accepted for healthcare. In addition, Table 1 summarizes the design elements for implementing VR [44].
Table 1: Design elements in virtual reality.
Element – Description
Situated learning – Familiar circumstances that can be recognized by a user
Debriefing – Opportunity for a participant to interact and focus on healthcare analysis and reflections
Navigation – Components that can guide and sequence the directions
Identical elements – Accurate visual representation of healthcare artifacts
Stimulus variability – Range of relevance indicating objects found in healthcare
Feedback – Prompts to facilitate progression through an activity
Social context – Collaborative environment to synchronize the contribution of participants
9 Augmented reality and virtual reality in education system Since the 1990s, scholars have investigated the feasibility of using interactive technologies in the classroom. Owing to their interactive virtual nature, their ability to exchange information in novel ways, and the possibility of offering immersive experiences that expand opportunities for learning otherwise limited by cost or physical distance, AR/VR technologies are a promising addition to the burgeoning field of education technology. However, it is only relatively recently that AR/VR devices and applications have become economical and user-friendly enough for these solutions to be practically adopted in classrooms. AR/VR can display information more interactively than two-dimensional media. Advanced VR systems can actively involve users in a virtual world where they can connect with virtual items and other people in real time. Hands-on learning that resembles real-world events or simplifies difficult knowledge is achievable with this type of encounter. For instance, learners can observe microscopic objects in 3D or stand inside a physics simulation. VR also lets viewers examine prerecorded 360-degree visual experiences, either still images or video. This less immersive (but typically cheaper) technique can be useful when the appearance or sense of being present is the most significant part of an experience, such as touring a historical place. AR and MR allow people to interact with virtual items in their physical surroundings. This is most useful in situations where users must engage with virtual items while remaining aware of their physical surroundings. For instance, students could use digital overlays to learn how and where to fix a complex machine or perform a medical procedure. Like VR, AR lets users observe static virtual objects or information in physical space.
This is most beneficial when the object itself has the maximum instructional value, such as placing a virtual model of a sculpture or historic artifact in a classroom or overlaying text or photos on a historical site. AR/VR tools can transform how students of all ages and disciplines study. According to multiple studies, AR/VR tools can improve learning results in K-12 and higher education. Virtual worlds can provide learning environments such as traveling to a remote location or conducting experimental studies. This experience can also allow students to visit another planet or another period in history, or to manipulate enlarged microscopic items. AR/VR experiences can also engage students in hands-on, gamified learning in many topics, which supports cognitive growth and classroom engagement. Primary school teachers and administrators are adopting virtual technologies to advance their abilities. AR/VR can supplement, complement, or completely replace traditional classrooms. Classroom enhancement is the most prevalent primary and secondary school use: teachers can take kids on virtual field trips or let them engage with 3D models using AR. Schools are also using AR/VR for mixed and remote learning. Online and hybrid learning during the COVID-19 epidemic showed the benefits of location-independent teaching methods. Virtual worlds let learners actively participate in distance learning and interact with instructors and classmates in real time. Mobile AR allows learners to see artwork on their wall or an object in their living room, while AR/VR-based virtual labs let them conduct hands-on investigations anywhere. Remote learning with fully immersive VR reduces disturbances, enabling pupils to focus. AR/VR technologies can engage children with autism spectrum disorder, attention-deficit hyperactivity disorder, dyslexia, or other cognitive or learning difficulties, both in the classroom and virtually. Virtual environments could assist children with autism in managing fears, and they can improve eBooks and flashcards for pupils with learning disabilities.
10 Methodology The research is a survey-based study that focuses on the social cognitive learning of children between 6 and 13 years of age. The parents of these children were surveyed with seven questions based on their child's social behavior and learning. A total of 41 respondents participated in the survey. Participants had the option to give more than one answer to the same question. Some questions were descriptive in nature, to which respondents had to respond in an elaborative and expressive form. After the data collection, the results were graphically represented for every question, and the result was calculated accordingly.
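As an illustration of how such multi-select survey responses can be tabulated and charted, the following small pandas sketch counts how often each option was chosen and expresses it as a share of all respondents. The file name survey.csv and the column name answers are hypothetical placeholders, not part of the study's materials.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("survey.csv")                    # one row per respondent (assumed layout)
choices = df["answers"].str.split(";").explode()  # one row per selected option
counts = choices.value_counts()
percent = (counts / len(df) * 100).round(1)       # share of all respondents; multi-select, so totals can exceed 100%

summary = pd.DataFrame({"count": counts, "percent": percent})
print(summary)

summary["count"].plot(kind="barh")                # horizontal bar chart, as in Figures 5 to 11
plt.xlabel("Number of respondents")
plt.tight_layout()
plt.show()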
11 Results and analysis A. What types of cartoons/television content does the child watch? Select the option(s). According to the graphical representation of the survey on the relevance of the cartoons and multimedia content, 27 of the 41 parent respondents chose the content as meaningful and useful, as it adds to their child's learning. Nineteen of the 41 parents characterized the content as meaningless, hypothetical, and unrelated to reality. Six parents graded the content as aggressive, and two parents flagged the content as adult. One parent marked the child's content as uncivilized. There was also the option of giving a descriptive response to this question, for which the recorded answers indicated that the children watched mostly mythological content, fictional content, fairy tales, and children's blogs. Given the available data, most parents believe that their children watch meaningful and reality-based content on AR, which aids their children's social and individual learning. Figure 5 shows the graphical representation in percentages.
Figure 5: Relevance of the cartoon/television contents to the real world.
B. After/while watching cartoon/television content, what does the child get? Based on the survey's schematic portrayal, digital content and cartoons make children feel active and cheerful. Among the sample participants, 21 chose the option of their kids being active and 20 selected the option of being cheerful or in good spirits after or while watching the multimedia content. Three parents out of 41 marked the option of impulsiveness. Five guardians responded that their children become aggressive because of the content. Twelve respondents reported skipped meals, 5 parents indicated that the content hampers their child's sleep, and 11 parents believe that their child fails to concentrate on other tasks. One parent answered that their child loves to eat while watching digital content.
Therefore, it can be said that parents mainly believe that AR contributes to their children's good mental health and keeps them cheerful and in good spirits. Figure 6 shows the graphical representation in percentages.
Figure 6: Impact on children after watching cartoon contents.
C. Rate the child's involvement As indicated by the pictorial depiction of the research on the relevance of online media content, parents believe that their children are strongly involved and enthralled while watching television. Thirteen parents out of 41 believed that their children are strongly involved, while 12 parents chose the option of "somewhat" for their child's captivation with digital content. Eight parents believe that their children have less involvement, whereas six participants believed that their kids were very strongly engaged. Only two respondents believe that their children were very little involved. Hence, children's engrossment in television content influences their cognitive patterns and styles, as shown in Figure 7. D. Does the child imitate the cartoon character/content which he/she watches? Owing to the report's visual representations, audiovisual material and cartoons strongly form an impression on children's psychological state. A total of 61% of parents responded "yes" to the question related to imitation of the characters that the kids watch on television. Twenty-nine percent of parents marked the option of "no," and the remaining parents selected the other options of not applicable or sometimes. Therefore, AR impacts the social learning of children. Figure 8 shows the behavioral changes in percentages.
Figure 7: Involvement of the children after watching cartoon contents.
Figure 8: Behavioral changes after watching the cartoon/television contents.
E. Rate from 1 to 5 the usefulness of that character/content in real life According to Figure 9, which charts the survey responses on the real-world application of the television shows and characters, 21 of the 41 participants found the characters "somewhat" relevant in the real world. Ten of the 41 parents believed that the usefulness of some of the television content was low, 7 parents were of the opinion that the utility of the characters in the actual world was very low, and 6 respondents thought that the characters were strongly useful.
Figure 9: Rating of the cartoon/television character that influences most of the children.
F. What has the child learned from virtual reality? According to the survey, parents are of the opinion that their children have learned a variety of things from digital content. Out of 41 respondents, 24 believed that their child had learned creative ideas from television shows. Twenty participants also chose the option of a broad vision for different things. Nine respondents among the 41 participants had the opinion that their children have learned an appropriate manner of talking, whereas 17 believed that the digital content has enhanced their kids' vocabulary. Figure 10 represents the learning content from VR technologies in percentages.
Figure 10: Learning contents from virtual reality technologies.
Parents also reported that children can learn how to pronounce words, as well as customs and culture, from these shows. Twelve participants selected the option of learning about and becoming aware of mythology from the online content. Seven respondents had the opinion that their children are gaining awareness of local traditions, whereas eight thought that their kids have become sensitive to history. A few respondents believed that their child had learned nothing useful, developed general awareness of the world, or developed self-awareness. G. Do you think the child has developed a sedentary lifestyle as an outcome of watching content online? The poll reveals that the negative effect of watching online media content mainly shows up as changed dietary patterns, as 12 of the 41 parents marked this option. Ten parents also reported the children's vision problems as well as manifestations of irritable behavior such as shouting. Nine respondents selected the option of nutritional deficiencies, among other options. Three parents were of the opinion that watching online content gave their children a sedentary lifestyle, which is making them obese. Some of the parents did not find many issues, saying that no such problem exists and that their child is active and interested in other activities as well. Figure 11 shows the effects on the lifestyle of children.
Figure 11: Effects on the sedentary lifestyle of children.
12 Conclusion Implementing augmented or virtual technology can help children develop better interpersonal and conversational competencies. From the obtained poll, it can be concluded that parents believe their children have learned a range of things through digital entertainment. One of the key tenets of social learning is that individuals may pick up novel behavior through observation, analysis, reflection, and imitation of those already present in their social context. VR makes use of cutting-edge interface technology to place the viewer in a believable and interactive world with a wide and diverse perspective.
The user can move around and interact with a computer-generated, synthetic, 3D world by means of many senses. Children's positive mental health, physical health, and social-emotional development are all enhanced by exposure to digital information, because the kids pick up on and internalize a wide range of innovative or original ideas, local traditions, and cultures. While AR has many great applications for learning and growing as a person, it also has numerous undesirable results. Some of the negative and damaging effects include youngsters being more likely to use foul language, dramatize, feel agitated or aggressive, have trouble sleeping, change their eating habits, and even become slackers. Despite certain drawbacks, AR-based learning offers numerous positive outcomes; hence, it should be incorporated with improved materials. Given the novelty of AR, additional research into its utility as a pedagogical tool for kids with individual needs is required.
J. P. Patra✶, Manoj Kumar Singh, Yogesh Kumar Rathore, Deepak Khadatkar
9 An innovative application using augmented reality to enhance the teaching-learning process in school education

Abstract: This chapter presents an application built with augmented reality (AR) for school education, intended for both typically developing students and students with intellectual disabilities. Schools in rural areas are often unable to use information and communications technology tools or computer graphics models for teaching and learning; when a teacher explains an object pictured in a book, the details of the object are not clear to the students because 3D models are unavailable. This application can be used in rural schools where physical, subject-specific models are scarce or missing. Because such models are unavailable, it is difficult for students to grasp detailed, subject-related information; with this application, the relevant model can be shown to the students so that the subject can be explained very well. The application will also help students whose cognitive development is delayed and who are educated in special schools, where they learn sign language as well as how to identify objects. In this type of education it is difficult to teach objects used in daily life that the students cannot touch, see, or have in front of them; here an application created in AR will prove to be a boon, because students can understand what they previously found difficult, which will help them in daily life. This technology is therefore helpful in the education of students and offers them a better option for learning. It can be used by anyone, such as a teacher, a parent, or the students themselves, and it is easy to use because it runs on widely available Android devices such as mobile phones or tablets. This effort and research will enhance the teaching-learning process in school education and will prove very helpful for both typically developing students and students with intellectual disabilities.

Keywords: augmented reality (AR), 3D models, school education, practical-based learning, Unity software
✶Corresponding author: J. P. Patra, Shri Shankaracharya Institute of Professional Management and Technology, India, e-mail: [email protected]
Manoj Kumar Singh, Yogesh Kumar Rathore, Deepak Khadatkar, Shri Shankaracharya Institute of Professional Management and Technology, India
https://doi.org/10.1515/9783110981445-009
1 Introduction

Augmented reality (AR) is a technology based on the principles and algorithms of computer vision that enhances real objects with sound, video, and graphics. AR was first demonstrated in 1968 by Ivan Sutherland, the father of computer graphics, at Harvard, where he created a head-mounted display system that universities, companies, and national agencies soon wanted to use for their future work. With this technique, a sample of a real-world object is modeled on the computer and can be used for interaction or teaching; explanation and understanding improve because an object that is not available at that instant can still be shown [1]. In effect, AR displays on the screen imaginary content that is strongly coupled to a given field and its objects. An example is Google Glass, a wearable computer with a head-mounted display.

With the growing availability of technology, the ways of receiving and providing education have changed considerably and are moving in a new direction. Teachers now have many different media available for teaching, so they need to be aware of these new media [2]. The task has become complicated because the abundance of mobile phones, tablets, laptops, and computers, and the information they carry, can easily confuse students. For the same reason, the devices already available offer a chance to improve education in a new way, in which AR can play its role very well and save the cost that would otherwise be needed to develop dedicated infrastructure [3]. In the field of education, teachers are always looking for resources and tools that excite students about the subject being taught and spark their interest in learning; AR now stands before them as exactly such a tool [4]. Because both teachers and students commonly own mobile or tablet devices, these media can be used for teaching and learning, with a sample application prepared according to the subject [5]. It also becomes easier for students to discuss the topics taught with the teacher and with classmates, since each student has a device on which samples of the related subject are available; in this sense AR is a gift [6].

In Figure 1, it can be seen clearly that by pointing the camera at the book, the 2D image becomes available in 3D form. In the experience of teachers, students' curiosity and engagement increase when the content presented through AR is displayed in front of them, which makes it easier to understand the subject.
Figure 1: Augmented reality (textbook diagram showing the normal, incident ray, and reflected ray at a mirror).
Because of this, when students receive detailed information on the standard form of the related subject, they enjoy reading very much and take great interest in it, because the doubts in their minds get resolved [7]. With the use of AR, students become more interested in their lessons and other learning activities and are readier to sharpen their critical thinking and to think deeply about problems and their solutions. Given its immense potential, AR can be used for learners from school through graduate college and other educational institutions. The technique can support any kind of discovery: learning a topic about the continents, reviewing a subject, or exploring a mathematical model or a physics model in detail [8]. Not every person or child can physically visit every major museum or city in the world, but with the help of AR and virtual reality (VR), children and adults can see any part of the world on a mobile phone or tablet. AR is therefore a boon for all those who cannot travel to every part of the world. With its help they can observe uncommon sea animals through models and grasp information about them that is not feasible to see in person or without a microscope; this is the kind of important advantage AR provides. The Government of India has decided to develop Atal Tinkering Labs under an important scheme to provide the best education to school-level students [9]. With the help of Atal Tinkering Labs, school students learn and experiment with various scientific ideas under the modern education system and develop their thinking alongside. Through this scheme, the government aims to lead school students toward practical learning so that they can develop skills and use them to increase their knowledge; hence, AR can be used in this area too.
This will make it easier for students to understand the related subject [10]. Similarly, the Government of India has framed the National Education Policy 2020; under the new policy, the objective is that every school should be provided with facilities that allow teaching to keep pace with the present time. Schools are being connected to the internet, and smartphones and tablets are being made available so that teachers and students understand new technology and its usefulness in teaching. This effort can therefore also use AR to evaluate students or to explain a subject in detail. The government plans to develop schools into smart classrooms in a phase-wise manner to promote digital learning for the overall development of schools, supported by online resources, and to make practical learning the basis for promoting skill development. It is therefore quite possible that AR can make an important contribution to improving school education.
1.1 Current challenges in school education

With the integration of new technology into school education, schools face several problems. Some of them are summarized in the following sections.
1.1.1 Lack of basic facilities

Teachers in many schools still use old methods of teaching, whereas various new techniques are now available and should be used. The traditional method is useful for achieving social goals but is less useful for the child-centered approach expected in the present times. It has also been found that basic facilities such as a library, a laboratory, and other equipment are not available, as per present requirements, in the training institutes that impart vocational education; as a result, teachers do not receive the training they should and do not know how to use technology in teaching. It is also seen that most schools are run in rented buildings. All these reasons affect the quality of teachers and contribute to their weaknesses, and consequently the use of new teaching techniques declines, even though continuous development and use of new tools are essential for teaching. To adapt teaching methods over time, researchers' suggestions should be taken up, and professional development activities should be distributed optimally across teaching and among groups of teachers. Teaching should
include periods of practice, coaching, and other tools available in the field of study, so that the use of all these tools is demonstrated to students and teachers and knowledge is communicated for the development of society. To develop education as a whole, teaching material and methodology, as well as technical and practical teaching, need to be developed. There is an urgent need to develop individualized and curricular education using the currently available technology to improve education and teachers as a whole.
1.1.2 Lack of training in technology-based teaching

The present era is one of information and communication technology (ICT), which has greatly changed the medium of teaching and has become an important part of our education system. Computers and mobile devices play a very important role: PowerPoint slides and other readily available applications are used abundantly for reading and learning, which has proved to be a boon for the education world. With mobile devices, AR is also becoming very easy to use in teaching and in helping students understand a subject. ICT improves the quality of education, encourages learners to learn quickly and accurately, and helps them understand action and response in detail; it is developing into a new, skill-oriented educational system [11]. The problem, however, is that teachers in many schools have not yet learned computer-based or mobile-based teaching under the newly available facilities, or have not been trained in that way. As a result, they are unable to provide technology-based education, even though technology develops interest in education and learning through the use of hardware, software, the internet, and projectors. The biggest obstacles to change are teachers' resistance to new changes and their low interest in using ICT in education. Another reason is that the required resources are not available in every school and college, and teacher-training institutes lack facilities for training based on new technology [12]. Consequently, teachers currently do not know how to promote technology-based teaching or what tools to use for education; therefore, they must be provided with proper technical knowledge and training.
1.1.3 Challenges to implementing education policy

A constraint in formulating any education policy is that each child belongs to a different region, and in a country as diverse as India a child may have a local language different from the school language. A student studying in a school outside his or her home state may also find it difficult to learn in the language of another state. It is therefore important to design the education policy so that everyone gets an equal opportunity to learn. Policymakers should ensure that the education system takes into account Indian philosophy and the country's various cultures, integrates them, emphasizes skill development and ethics-based education, and links them with the modern teaching systems now available. The system should be formed in the interest of the country so that overall development is possible, because the diversity of the country is also a hindrance to its basic education.
1.1.4 Why do schools need augmented reality?

Schools teach a group of subjects such as science, physics, and mathematics. It is generally seen that when these subjects are taught in class, their seriousness and detailed content are not grasped by the children; the resulting ambiguity reduces their interest in the classroom. AR can play an important role here, giving students a clear picture in their minds of which object or model is correct and providing detailed information about it:
– Because the subjects are presented only through the information given in books, students memorize theory, properties, theorems, and so on by rote, but at examination time they forget this information or mix it up due to confusion. If modern technology such as AR is used, students gain much greater clarity, which helps them remember it [13].
– Even the students who are best in their class do not have complete clarity about the complexities of all the subjects; they too face ambiguity [14].
– From the above, one can conclude that the complexity of the subjects and the ambiguity in students' minds keep them from taking much interest in those subjects, and their attention remains confined to what seems closest and most immediately useful to them. The disadvantage is that even if a student is excellent in a subject such as science or mathematics, ambiguity reduces his or her interest in it [15].
1.1.5 Right understanding of tools for related subjects

With AR now available, information on subjects such as mathematics, science, or physics, including correct models, correct formulas, and other details, can be understood clearly. It has also been seen that when the relevance of a subject is demonstrated to students in a tangible and professional way, they clearly understand its use and necessity. AR is therefore developing as the right kind of tool in the field of education.
2 Applications of augmented reality in the real world

2.1 To support practical-based learning

Two important tasks are needed to increase the use of AR for practical-based learning: storing the data of the related subject in the application, and training the teacher of the concerned subject in detail on how to use AR in teaching. With the help of AR, students can discuss subject-related information, and any point that is not clear to them can be resolved; any kind of training can also be given so that students gain clarity about the subject. AR can be considered a teaching tool with which anybody can produce good material at very low cost; material that can be used to train students, whose advantage is that students trained with it receive clear, subject-related information.
2.2 Augmented reality in science centers

A city science center, where science-related exhibits and creative models are available, is very entertaining and informative for students. If AR is used in this area too, it will become even more interesting and informative; with AR, both the communication of knowledge and entertainment are possible in this form.
2.3 Automotive training and design

AR can be used to understand any type of model clearly, so that students training in motor and vehicle manufacturing have clear information about the subject, because different types of physical models need not be available
in every training institute. Training can instead be provided in the AR environment, which also helps improve innovative designs such as model making in AutoCAD [16].
2.4 Augmented reality for gaming and fun

In game development, developers are experimenting with more immersive and experiential features. One of the most popular examples is Pokémon Go, released in 2016, which lets us experience AR by importing game characters into our physical environment. Sometimes AR is created just for fun or to engage with customers [17].
2.5 Augmented reality in entertainment and medicine

AR is also making a significant contribution to the field of medicine, where digital characters and models that are otherwise difficult to understand are used for training. Through AR, their detailed information can be understood easily and related to other models. AR provides a cheap alternative for learners and also supplies models that are often not otherwise available. In this field too, augmented reality is making a significant contribution and can be used to make training even better [18].
3 Literature survey

According to research, most of us are visual learners, which means that concepts are learned and understood best when presented with visuals. Using AR, skills and knowledge can be acquired, and it also builds strong learning skills [19]. AR provides a better representation of anything that a human learns better by seeing than by reading. These characteristics of AR have motivated researchers to work in the fields of biology, chemistry, physics, and many more [20]. Many other training and learning methods are available, but the potential of AR puts it far beyond them all [21]. A systematic review of a decade of AR use is summarized below. In the field of education, AR has created many milestones and is still advancing; its most reported benefits are "increasing motivation (24%) and facilitating interaction (18%)." Handheld devices are used to portray knowledge that previously sat only in books [22]. The main advantages of handheld devices are their portable nature and the ubiquity of camera phones. The two major disadvantages are that the user always has to hold the device in front of himself or herself, and that the viewing quality of the natural eye is far better than the camera's point of view [23]. Smartphones, PDAs, and tablets with
cameras, digital compasses, GPS units with their six degree-of-freedom tracking sensors, and fiducial marker systems are used as handheld displays in AR (Figure 2) [24].
Figure 2: A handheld AR system displaying a three-dimensional model of a dinosaur.
4 Problem identification

Studying is part of a student's life, yet not everyone finds it enjoyable. Students become disinterested due to lengthy sentences and boring material. To tackle this issue, the suggestion is to switch from the traditional approach to learning to a new one that uses AR as a learning tool. Some published works have already included AR technology to help pupils acquire new ideas, since combining the real and digital environments can make studying more enjoyable. A study indexed in ScienceDirect assessed the efficacy and outcomes of utilizing AR in teaching; from that paper it can be inferred that employing AR as a learning tool aids students in learning certain subjects more successfully, primarily by presenting abstract ideas as 3D objects. As a result, AR can be utilized to help students understand the fundamentals of abstract ideas. Students also find AR intriguing, which inspires them to study more, and the more engaging experience helps pupils readily recall and remember information. Because students can view replicas of real objects while learning and practicing, AR is generally a more effective learning medium.
5 Solution statement

Education today has reached new heights, and the traditional way of teaching is no longer enough for today's students. Many ICT tools are now available to deliver the material in a better way, and many advanced technologies are also available in classrooms.
Using new technologies, teaching can be made very interesting and students can learn things very efficiently. Students in today's world have high imagination and the creativity to turn ideas into reality. One of the major challenges in connecting modern technologies to education is the lack of suitable models and techniques. To deal with this problem, it helps to prepare a step-by-step guide in a predevelopment phase and start work based on it:
– First, define the objectives and make a roadmap for how AR can be added to the curriculum.
– Identify the educational advantages (enthusiasm, better engagement, increased simulation, improved knowledge retention, and learning-time optimization).
– Understand how AR fits the content of the syllabus and how it can aid the teaching methodology.
– Create meaningful educational scenarios by thinking about the pedagogy and a convenient modern didactic schema to meet individual learning needs.
– Set the goals first, and then define the success criteria and measurement criteria.
6 Methodology

6.1 Experimental setup

Like many other applications, security systems require database storage; however, vendors must choose their database partner carefully because of the unique requirements of the security sector. In this product, recovery is done by restoring a copy of a previous state of the database from a backup archived on a typical magnetic tape. After the data are backed up and restored, the current state is reconstructed by reapplying (redoing) the transactions that had already been committed; the details of the committed transactions are obtained from the log file, where all the metadata are available. The backup process is continued until a safe backup is ready (a minimal sketch of this restore-and-redo procedure follows the list below). The AR application for the education system provides the following:
– 3D model: a 3D model that is overlaid on the 2D image and provides an interactive vector model.
– Video player: a video player that can be accessed by pointing the camera in the direction of the movie; it has Play, Pause, and Reset buttons.
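As a rough illustration of the restore-and-redo recovery procedure described above, here is a minimal Python sketch. It assumes a hypothetical in-memory key-value snapshot and a simple list of committed transactions standing in for the log file; it is not the product's actual database code.

def recover(backup_snapshot, committed_log):
    """Restore the archived copy, then redo every committed transaction in log order."""
    state = dict(backup_snapshot)            # 1) restore the previous state from the backup
    for txn in committed_log:                # 2) redo the transactions already committed
        for key, value in txn["writes"]:     #    write details come from the log metadata
            state[key] = value
    return state                             # reconstructed current state

# Example with made-up data:
backup = {"marker_count": 10}
log = [{"txn_id": 1, "writes": [("marker_count", 12), ("last_model", "rutherford")]}]
print(recover(backup, log))                  # {'marker_count': 12, 'last_model': 'rutherford'}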
6.2 Teaching materials

Images from the NCERT (National Council of Educational Research and Training) class 12 physics textbook were gathered, and a video corresponding to each model was obtained in order to create course materials using AR. Learners can obtain in-depth knowledge of a model by using AR together with the zoom and rotate functions, which helps offer interactive learning to users.
6.3 Planning and developing the system

This study basically has two stages: in the first, the AR system and proper teaching content are developed; in the second, the teaching material is efficiently deployed in the AR system. Several software tools exist for the deployment task; one of them is Blender.
6.4 Developing the 3D models

During the modeling phase, only the required number of faces, edges, and vertices are added to each object to accelerate computation and execution. The models are made using Blender and vector assets, and most of them are animated to aid learning and encourage interaction.
6.5 Unity software

Unity is a game-creation engine that makes multiplatform deployment simple and effective. It includes a powerful rendering engine, support for high-quality games and interactive content, and a graphical integrated development environment, making it possible to create games that can be easily deployed to consoles such as the PS4 and Xbox One, to PC, Mac, and Linux, and to the Web, iOS, and Android. To help users and cut down the time needed for game design, as well as the complexity and expense of the effort, Unity offers a wealth of documentation, projects, and tutorials.
6.6 Constructing the system using Unity

System interaction is created with the necessary functions to meet the learners' requirements for learning and observation. The Vuforia SDK for Unity provides many resources, including a camera, an image converter,
a video background renderer, an object tracker, and a device database. The platform for the AR experience is established by the target management system provided by the Vuforia Engine. First, the input image is uploaded; after uploading, the resource can be targeted by the device. The target manager downloads the target in the Unity editor format for the target development option. The target's Unity package is then imported into the Unity project, the targets are arranged in a scene, and virtual buttons and game objects are placed on the targets. The Unity Inspector panel is used to modify object settings, public variable values, and component characteristics, and to establish relationships between objects.
6.7 Algorithm used

The most important and basic feature is being able to point the camera at an image and see information floating over it. Overview of the functional requirements (modules):
– Start camera: by opening the application, the user first gets access to the camera.
– Detect object: the object is identified by pointing the camera at it.
– Gather information: after the object is detected, all the related information is loaded.
– Collect sensor data: while the object is being detected, the mobile device stores its sensor data.
– Create AR objects: based on the object detected, an AR object is created and shown on the screen.
– Place AR objects: according to the collected sensor data, the object is placed over the image after the corresponding checks.
– Video player option: the user has a "show video player" option on the application's left side, which shows the video in front of the camera; selecting it gives access to the play, pause, and reset buttons on the right side of the application's screen.
– Toggle option: the user can switch between the AR camera and the 3D model viewing screen; the separate 3D view lets users zoom and rotate the model. A sketch of this module flow is given below.
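To make the module flow above concrete, the following is a minimal, framework-agnostic sketch in Python. The actual application described in this chapter is built with Unity and the Vuforia SDK, so the function names, the MODEL_DB dictionary, and the stubbed detection logic here are hypothetical and serve only to mirror the listed modules.

from dataclasses import dataclass

@dataclass
class ARObject:
    name: str      # e.g., "rutherford"
    info: str      # description loaded after detection
    pose: tuple    # placement derived from sensor data

MODEL_DB = {"rutherford": "Rutherford's nuclear model of the atom"}  # assumed content store

def start_camera():
    print("camera started")

def detect_object(frame):
    # Stub: a real system would run image-target matching here (e.g., via Vuforia).
    return "rutherford" if "rutherford" in frame else None

def gather_information(target):
    return MODEL_DB.get(target, "no information available")

def collect_sensor_data():
    return (0.0, 0.0, 0.5)  # placeholder device pose

def create_and_place_ar_object(target, info, pose):
    obj = ARObject(name=target, info=info, pose=pose)
    print(f"rendering {obj.name} at {obj.pose}: {obj.info}")
    return obj

def main(frame="page with rutherford diagram"):
    start_camera()
    target = detect_object(frame)                    # detect object
    if target is None:
        print("no image target detected")
        return
    info = gather_information(target)                # gather information
    pose = collect_sensor_data()                     # collect sensor data
    create_and_place_ar_object(target, info, pose)   # create and place AR object

if __name__ == "__main__":
    main()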
Figure 3: Sequence diagram showing the interaction between the user, the camera/image target, the 3D model viewer, and the video player (searching for and displaying the 3D model with rotate and zoom options, and searching for and displaying the video).
Users of the system should be able to retrieve 3D models by pointing the camera at the image (Figure 3). The 3D models include animation and labels to make the visuals more interactive and informative. A video player is also provided, with the usual play, pause, and reset features. The system supports one type of user privilege: the learner. Learners have access to these features and should be able to do the following:
– Get 3D model
– Rotate
– Zoom in
– Zoom out
– Get video player
– Show player
– Play
– Pause
– Reset
– Toggle between 3D model viewer and AR camera
7 Results

Figure 4 shows a 3D representation of the Rutherford model over an image.
Figure 4: Three-dimensional representation of the Rutherford model over the image.
In Figure 4, it can be seen clearly that when the AR camera is placed over the book, the picture of the Rutherford model is shown in a 3D view. Here the camera fetches the picture from the book and then converts it into the corresponding AR format (Figures 5 and 6).
Figure 5: AR conversion of a standing wave on a circular orbit.
Figure 6: AR conversion of a picture of Coulomb's model.
8 Conclusion and future scope

AR allows teachers to assist students in grasping complex topics. By utilizing the engagement and experimentation that AR technology provides, teachers can enrich classroom experiences, teach new skills, stimulate students' minds, and get students enthused about pursuing new academic interests. Because AR allows lecturers to display three-dimensional representations of topics and incorporate interactive components that make textbook materials more interesting, the institution will become more remarkable and engaging.
Through the use of the developed application, students will get a better and more interactive way of learning and memorizing information. AR might alter how people use computers, and a lot of unrealized potential exists for AR in education. AR interfaces enable seamless interaction between the real and virtual worlds, and with AR technologies students engage in natural interactions with 3D information, objects, and events. According to data from a national survey, 90% of instructors agree that VR and AR technologies are extremely effective at giving pupils unique and individualized learning experiences. One of the biggest challenges teachers face is getting and keeping students' attention; AR and VR technology will not only help teachers do this but also help them teach in a more interesting and effective way that makes the students' learning experience easier and more enjoyable. There is rising interest in classes that have included VR and AR in their curriculum, and studies show that the majority of students, 97%, say they would attend a class or course that uses AR. Many regard AI, AR, and VR as the future of education, especially in light of the COVID-19 situation, when students were required to learn from home, and the unavoidable need to overhaul the educational system.
References
[1] Sharma, A., K. Gandhar, and S. Seema. 2011. "Role of ICT in the Process of Teaching and Learning." Journal of Education and Practice 2(5): 1–6.
[2] Edwards-Stewart, A., T. Hoyt, and G. Reger. 2016. "Classifying Different Types of Augmented Reality Technology." Annual Review of CyberTherapy and Telemedicine 14: 199–202.
[3] Saxena, N. 2017. "The Role and Impact of ICT in Improving the Quality of Education: An Overview." International Journal of Engineering Sciences and Research Technology 6(3): 501–503.
[4] Bacca, J., S. Baldiris, R. Fabregat, and S. Graf. 2015. "Mobile Augmented Reality in Vocational Education and Training." Procedia Computer Science 75: 49–58.
[5] Bottani, E., and G. Vignali. 2019. "Augmented Reality Technology in the Manufacturing Industry: A Review of the Last Decade." IISE Transactions 51(3): 284–310.
[6] Geroimenko, V. 2012. "Augmented Reality Technology and Art: The Analysis and Visualization of Evolving Conceptual Models." In 2012 16th International Conference on Information Visualisation, 445–453. IEEE Computer Society.
[7] Carmigniani, J., B. Furht, M. Anisetti, P. Ceravolo, E. Damiani, and M. Ivkovic. 2011. "Augmented Reality Technologies, Systems and Applications." Multimedia Tools and Applications 51(1): 341–377.
[8] Del Cerro Velázquez, F., and G. Morales Méndez. 2021. "Application in Augmented Reality for Learning Mathematical Functions: A Study for the Development of Spatial Intelligence in Secondary Education Students." Mathematics 9(4): 369.
[9] https://tinker.ly/what-is-atal-tinkering-lab-why-every-school-should-have-a-tinkering-lab/.
[10] Sharma, S., and P. Sharma. 2015. "Indian Higher Education System: Challenges and Suggestions." Electronic Journal for Inclusive Education 3(4): 1–4.
[11] Bhattacharjee, B., and K. Deb. 2016. "Role of ICT in 21st Century's Teacher Education." International Journal of Education and Information Studies 6(1): 1–6.
[12] Kumar, P., and S. Azad. 2016. "Teacher Education in India: Some Policy Issues and Challenges." International Journal of Advance Research and Innovative Ideas in Research 2(6): 1217–1224.
[13] Iatsyshyn, A. V., V. O. Kovach, Y. O. Romanenko, I. I. Deinega, A. V. Iatsyshyn, O. O. Popov, and S. H. Lytvynova. 2020. "Application of Augmented Reality Technologies for Preparation of Specialists of New Technological Era." 2547: 181–200.
[14] Boboc, R. G., F. Gîrbacia, and E. V. Butilă. 2020. "The Application of Augmented Reality in the Automotive Industry: A Systematic Literature Review." Applied Sciences 10(12): 4259.
[15] Lee, K. 2012. "Augmented Reality in Education and Training." TechTrends 56(2): 13–21.
[16] Majeed, Z. H., and H. A. Ali. 2020. "A Review of Augmented Reality in Educational Applications." International Journal of Advanced Technology and Engineering Exploration 7(62): 20–27.
[17] Liono, R. A., N. Amanda, A. Pratiwi, and A. A. Gunawan. 2021. "A Systematic Literature Review: Learning with Visual by the Help of Augmented Reality Helps Students Learn Better." Procedia Computer Science 179: 144–152.
[18] Chen, P., X. Liu, W. Cheng, and R. Huang. 2017. "A Review of Using Augmented Reality in Education from 2011 to 2016." Innovations in Smart Learning 1: 13–18.
[19] Saidin, N. F., N. D. A. Halim, and N. Yahaya. 2015. "A Review of Research on Augmented Reality in Education: Advantages and Applications." International Education Studies 8(13): 1–8.
[20] Soltani, P., and A. H. Morice. 2020. "Augmented Reality Tools for Sports Education and Training." Computers and Education 155: 103923.
[21] Wu, H. K., S. W. Y. Lee, H. Y. Chang, and J. C. Liang. 2013. "Current Status, Opportunities and Challenges of Augmented Reality in Education." Computers and Education 62: 41–49.
[22] Elmqaddem, N. 2019. "Augmented Reality and Virtual Reality in Education. Myth or Reality?" International Journal of Emerging Technologies in Learning 14(3): 234–242.
[23] Herron, J. 2016. "Augmented Reality in Medical Education and Training." Journal of Electronic Resources in Medical Libraries 13(2): 51–55.
[24] Kesim, M., and Y. Ozarslan. 2012. "Augmented Reality in Education: Current Technologies and the Potential for Education." Procedia-Social and Behavioral Sciences 47: 297–302.
Vikas Gupta
10 How do augmented and virtual reality influence visitor experiences: a case of heritage tourism sites in Rajasthan

Abstract: Although technology has evolved quickly, it is still incredibly underutilized in many sectors. The new status quo has, nevertheless, increased awareness of technology across many industries, particularly travel and hospitality. As a result, technology is being used in more inventive and profitable ways that help both businesses and consumers. The travel and leisure business is being affected by technologies such as augmented and virtual reality (AR/VR) in a wide range of ways. Virtual tours offer an alternative to pandemic melancholy by enabling users to see locations they have always desired to visit while remaining in their own place. VR and AR solutions are increasingly being used in various tourism sectors, particularly at galleries, historical monuments, amusement parks, and theaters, in Asian cities including Tokyo, Beijing, Kuala Lumpur, Shanghai, Delhi, and Jaipur [6, 44]. The virtual world produced by these innovations is used as a cutting-edge advertising tool to draw visitors to these tourist spots and to develop joint marketing of destinations [6]. The availability of a wide range of AR and VR tools at these locations also helps industry players come up with fresh ideas for revitalizing a location to enthrall more visitors and improve their overall quality of service. AR and VR techniques are therefore now being utilized more frequently to enhance tourists' overall experiences at a location, making a more interactive and diversified experience simpler and allowing tourists to engage with visitor attractions in unique ways [17]. However, until recently, relatively few studies have attempted to examine the combined significance of AR and VR approaches from the tourists' value co-creation perspective. Consequently, this chapter looks into the potential for incorporating AR and VR into tourism experiences at the well-known historical monuments in Rajasthan, with the goal of recommending a framework based on value co-creation. It will also argue that active implementation of AR and VR at historical monuments can help co-create value for tourists' experiences before, during, and after their visit. The chapter also presents a case study of prominent tourist destinations in Rajasthan where AR and VR technologies are being used to improve visitors' overall destination experience.

Keywords: tourist, visitor, augmented reality, virtual reality, Rajasthan, value co-creation
Vikas Gupta, The University of the South Pacific, Laucala Campus, Suva, Fiji Islands, e-mail: [email protected]; [email protected] https://doi.org/10.1515/9783110981445-010
1 Introduction

The application of virtual reality (VR) and augmented reality (AR) in the tourism sector is a significant subject of study for both researchers and industry professionals. Through interactive experiences, VR and AR enable engagement with virtual and augmented settings that stimulate the user's imagination [15, 23]. Travel is not exempt from the effects of these innovations, which have already had a considerable impact on several industries, including healthcare [21, 26] and defense [40, 32]. The planning of destinations and interactions, the removal of barriers that make remote areas more accessible and usable, the admonition and guidance of tourists, the protection of fragile locations [39], improved recreation [18], as well as the capacity to boost travelers' interplay across the globe, are just a few possible impacts of AR and VR on the tourism sector [17]. Each of these effects is connected to how tourists connect with a place and what they do there, yet there do not seem to be many investigations on how AR and VR could be used to co-create value and enhance tourists' experience and satisfaction. Technology advancements, across all their varied forms, usually have immediate and long-lasting effects on the tourism industry. Information and communications technology (ICT) advancements have changed the tourism industry in various ways, affecting everything from customers' needs to facility management [4]. For instance, many travelers use websites to search for information about destinations [13], and many tourism-related companies and institutions have a digital presence [33]. The increased penetration of technologies has affected the way tourist destinations advertise and commercialize their historical sites [4]. Additionally, as a result of advancements in social networking sites, the relevance of value co-creation has increased along with growing consumer engagement in developing services and products. VR and AR have also improved tourists' overall destination experiences [48]. Because of this, a number of studies on historic sites have appeared over the past several years, with a significant rise in research that takes the need for more customized itineraries and tourist experiences into consideration [14, 24]. Numerous scholars have examined how information affects people's travel choices [2, 14]. Because tourism is an intangible industry, advertisers mainly rely on visuals in marketplaces [29]. As technology improves, marketing professionals are coming up with creative ways to use pictorial depiction to generate a favorable image of the destination and boost tourism in a more demanding and competitive international market [1, 11]. Due to the immersive experience they provide and their capacity to convey the feel of a distant location or encounter, AR and VR have significant potential for tourism advertising. As marketing devices, AR and VR have the potential to significantly reduce the risk perceptions associated with intangibility, assisting travelers in making more knowledgeable selections and having more reasonable expectations [26]. According to Miranda et al. [10] and Guttentag [15], VR is a simulated environment that captures a traveler's immersive experience in the digital domain by perceptually simulating the real-world environment. It also enables the traveler to
maneuver the digital environment, which includes all the pertinent information about a historical destination (such as antiquity, the importance of the location, and relevant sites). Numerous VR tools have been designed for this objective, including head-mounted VR displays and three-dimensional (3D) virtual expeditions inside the destination setting, to enhance the entire experience of tourists. Conversely, AR is a method that enables artificial visuals to be combined with genuine ones, enhancing the details of the environment around the tourist and making interaction more engaging and authentic. In contrast to the immersive 3D experience offered by VR, AR emphasizes depicting virtual information that is superimposed on the actual world without hiding it [17]. VR and AR innovations are progressively being applied in various industries of tourism and hospitality, particularly at exhibitions, historical monuments, amusement parks, and theaters, in Asian cities like Incheon, Shanghai, Kuala Lumpur, Kyoto, Jodhpur, and New Delhi [14, 16, 45]. These innovations are employed to construct a simulated reality that is utilized as a cutting-edge promotional tool to draw visitors to such tourist hotspots [6, 18]. The availability of a diverse range of AR and VR tools at the venues also assists the players in coming up with fresh ideas for revitalizing the location to enthrall more visitors and improve their overall user experiences [6]. As a result, AR and VR techniques are increasingly being employed to improve the general perceptions of tourists at a place, providing a more interactive and varied experience and allowing them to engage with visitor attractions in unique ways [43]. One excellent case is the use of 3D printing mobile applications [8, 12], which can be used to generate a clone of an individual, producing a replica that can be printed and handed to the tourist at the destination [17]. Since previous studies [7, 14] showed that data on traveler engagement for co-creating value for services and products remain extremely limited and appear to lack any consistent theoretical content, more focus has recently been placed on the value co-creation concept [14, 36, 53]. Additionally, considering the users' perceived value is crucial for adopting AR and VR techniques and achieving maximum acceptance rates and utilization intentions at tourist destinations [41]. According to earlier research, AR and VR have been effectively deployed at places of cultural significance in Asian cities to enhance the whole visitor experience. Additionally, methods like 3D printing and head-mounted VR displays have been extensively implemented at heritage places, galleries, theaters, and art exhibitions to replicate artifacts [23], provide a 360° view of the site [26], and conserve and enlighten [32]. However, remarkably few endeavors have been made to examine the combined significance of AR and VR approaches from the tourists' value co-creation perspective. Consequently, this chapter will look into the potential for incorporating AR and VR technology into the visitors' experience at well-known historical monuments in Rajasthan to recommend a model that relies on value co-creation.
Additionally, with the help of a case study, it will propose that the robust implementation of VR and AR in historical monuments and heritage sites can help co-create value for the tourist’s experience: before, during, and after their visit.
2 Literature review 2.1 Use of AR and VR in tourism for influencing visitor perceptions Since VR is a technique and a phrase that is frequently used for tourism learning and research, a variety of technological developments over the past decade have resulted in substantial advancements in technology employing this innovation [23, 34]. A plethora of researchers have already discussed the commonalities and contrasts between AR and VR, which implies that VR is frequently referred to as a notion linked to AR [20, 30]. Other authors [14, 32] have defined VR as a projection approach where the physical environment entirely vanishes, and the user is deeply involved in a virtual environment. Mixed reality (VR and AR) has also been addressed by several authors [38, 46]. This concept applies to integrating a physical world with digital content using technological resources, and it could differ from the real-world situation without overall digital engagement. With VR, individuals may explore and communicate with an entirely computer-generated 3D environment employing one or all of their five human senses. Conversely, AR is an enhanced kind of VR that distinguishes itself by seamlessly integrating computer-generated images with the real world. This technology frequently includes 3D objects and graphics [9]. Mashable [32] explains AR as an indirect or direct live stream of an actual, physical situation. Computer-generated sensory perceptions, such as video, audio, GPS tracking, or visuals, are used to enhance some aspects of these experiences. So, while enjoying the product, technology increases a user’s impression of reality. The accessibility of AR and VR applications at tourist places has been rising as they acknowledge the significance of these advancements in visitor experiences [54]. Attractions use AR and VR to display digital signage like visual arrows to aid visitors in navigating the area [45]. To give visitors detailed and underpinning historical knowledge about the places, historical monuments have adopted AR and VR [3]. Although AR and VR are relatively new concepts, they have already been expanding and are anticipated to do so moving ahead [44]. The tourism sector is expected to take the lead in adopting AR and VR, according to Seal [44]. Because of the potential of the pertinent technology to meet passengers’ requirements, the adoption of various innovations in tourist destinations is growing vital. The millennials will not be content with their travel and will negatively perceive the tourist attraction if the site cannot meet their evolving needs. Recognizing travelers’ essential technical motivations and their sense of attachment to technology is critical for destination marketing. In addition, various travelers adopt technologies at varying periods due to differences in their levels of innovation. Even though the accessibility and deployment of AR and VR in tourist sites have received a lot of interest, the co-creation of AR and VR applications in the tourism industry has been overlooked. Previous research [24, 49] essentially discusses the significance of AR and VR in the tourism business and looks at
important AR and VR aspects that influence tourists’ satisfaction with their trip encounters. This chapter seeks to understand why visitors choose to use AR programs at tourist locations and will present a model framework for how AR/VR platforms could be used to co-create value at Rajasthan’s cultural heritage sites.
2.2 Value co-creation for tourists at historical sites Previous studies [14, 15] have shown that allowing tourists to participate in value co-creation at historic monuments enhances their overall visitor experiences. Additionally, Janda et al. [21] reaffirmed that such participation is anticipated to yield long-term economic advantages when tourists actively take part in the value co-creation processes at a destination. It was discovered that the degree of satisfaction brought on by the various experiences at a tourist attraction influences the desire to return [14, 31]. Additionally, it was discovered that tourists joyfully co-create their encounters through participation, personalization, and co-production [28, 34], and historical monuments improve these interactions by creating conducive environments for participation. Additionally, Loureiro [28] advocated for tourists to become emotionally invested in their experiences through the value co-creation process. Due to ICT, today’s consumers are well-informed, knowledgeable about offerings, and connected. They are also much more demanding and want their desires and requirements to be met in the manner they choose, not just in the way institutions provide. They seek to collaborate with organizations to jointly produce value for both sides. Consequently, staying current with the times and making warranted maneuvers at the sites of cultural significance is important. To do this, it is crucial to consider the various concepts (i.e., what value will be co-created, with whom it is going to be co-created, what assets will be used, and what the methodology is) within the value co-creation framework as suggested by Saarijärvi et al. [41]. VR and AR technologies can help with tourism management and planning because they have exceptional calibration capabilities [28]. The development of management strategies, including moving visitor pressure from zones with extensive utilization to those with low use, involves evaluating tourists’ routines of dimension, space, and location. The best tools for this are 3D simulations [35]. Additionally, as VR and AR technologies advance, the events industry will discover ways to leverage these developments to promote enjoyable tourism destinations [27]. It has been found that hedonic experiences and emotional engagement play significant roles in influencing prospective visitors’ intentions and behavior to travel to a particular location and inspiring visitors to make the trip [51]. In a related vein, it has been found that navigating a simulated environment produces pleasurable feelings, a sense of flow, and emotional attachment. These feelings positively impact behavioral intention and broaden the interactive and engaging experience, which benefits the requirements of visitors. The perception of investors and customers continues to improve due to the development of VR and AR innovations, and VR and AR are now
being conceptualized and put into practice primarily to satisfy the demands of visitors in the future. This chapter examines the key developments in the travel and tourism industry and highlights the promising importance of AR and VR technology in meeting the requirements of future travelers. Strategic planning and appropriate management can be used to recognize recent developments in the tourism sector. Its significance is further expanded because the almost realistic, simple, and in-depth navigations produced by VR are easily accessible to travelers to assist in their tourism interaction processes. The development of various simulations that allow for the VR experience, where potential guests can see a place in advance, like in the scenario of specific destination marketing organizations, is another way to identify the pattern [37, 50]. Since it allows important information on crucial elements that play an essential role in the exploration phase of the purchasing phase, VR and AR solutions are primarily found during the initial stages of the customer purchase process in the tourism domain [14]. Additionally, the ideal digital setting makes it possible to create virtual destinations at a reasonable cost that are recognized in simulations and commercial tourist sites. One is the “Sensorama Simulation,” which provides fun, simulated motorbike rides across New York using 3D visuals, scents, noises, wind, programmed vibrations, and other elements [52]. Additionally, heritage sites in Rajasthan, for example, Jantar Mantar and Jaisalmer Fort, are utilizing VR technology to provide a 360-degree view of the heritage monuments for the enhanced experiences of the tourists [45]. It is crucial to integrate a suitable mix of AR and VR technologies at sites of cultural significance to co-create values and give tourists an improved and more enriching travel experience. The conceptual framework for this co-creation structure is shown in Figure 1, where several co-creation techniques are integrated with various AR and VR technological platforms that are accessible. Players, including AR/VR developers, managers of sites of cultural significance, local governments, and pertinent tourist businesses, can contribute to generating the desired output by employing the appropriate approach with AR/VR platforms. The resultant output appears in the form of high tourist intentions to return, increased revenue, new target market development, revitalization of current markets, enduring visit emotions, and improved overall tourist satisfaction. The tourist’s pre-, during, and post-visit experiences can also be divided into three categories. In order to educate tourists about the key locations and improve their visit intentions, local government at the national heritage sites should provide VR/AR technologies during their pre-visit stage. Through the utilization of anonymized data, AR/VR platforms may be used to enhance their experiences while they were on-site. This can further improve the tourist’s engagement and hedonic perception [24]. Additionally, platforms like AR and VR can be used to explicate hard-to-understand information. The study by Jung et al. [24], which indicated that the tourists appreciated their experiences through the implementation of VR, has validated this. Additionally, incorporating different technologies (such as a combination of AR and VR techniques) during the trip may result in enjoyable and memorable encounters for the tourists [42].
Figure 1: Conceptual model for AR/VR technologies at Rajasthan’s cultural heritage sites to co-create value. Framed around the study objectives, the model links four blocks:
– Co-creating value: getting feedback from visitors about the most important categories; delivering unique and improved visitor satisfaction; developing AR/VR games along with others; developing integrated AR/VR interactions; using 3D printing to jointly create consumer experiences; using social VR to enable sharing on social media; producing artifacts with 3D printing; using VR/AR to co-create memorable experiences; enabling tourist participation using AR/VR; enabling a participatory, hedonistic experience; enabling affective and psychological engagement through realistic AR/VR interactions; permitting tourists to bring home personalized souvenirs.
– Utilizing AR/VR: head-mounted VR screens, AR/VR gamification, VR projections, 3D printing, 3D souvenirs, audio guides, pocket PCs, smart glasses, and smartphone applications.
– Stakeholders in the tourism/hospitality sector: management of heritage sites, tourism players, marketers of AR/VR platforms, state government, and central government.
– Output: enhanced re-visit intentions of tourists, enhanced revenue, development of new target segments, renewal of existing sectors, enhanced visitor experiences, focus on moments of truth, and memorable interactions.
According to research [14], using AR/VR technology at sites of cultural significance increases tourist income, offers visitor perspectives, attracts new customer segments, and encourages tourists to return more frequently. Creating custom souvenirs can enhance positive word of mouth and attract new market segments. Customized AR/VR interactions could therefore be seen as a critical stage in the value co-creation process. According to Gupta et al. [14], value co-creation using AR/VR technologies may help promote social engagement by enabling online content and increasing network awareness of the historic cultural site. Therefore, for their upcoming activities, places of cultural heritage must employ AR/VR innovations like wearable instrumentation, 3D printers, and voice guides. There is a potential that the consumers would not be as critical of how a service or product is made or used. In that instance, a specialist who can direct and inform the client of how to use the item or service may transmit the necessary expertise. Consequently, the cultural and historical sites must offer a seamless and enjoyable procedure to ensure tourist satisfaction and increase their intentions to return since visitors play an essential role in the co-creation experience through information sharing and word-of-mouth promotional strategies. Integrating completely immersive and superimposed digital products that provide a distinctive and remarkable impression of the location’s exhibits and entities is also recommended for sites of cultural significance. This may further enable visitors to customize the information and take 3D images as mementos for enduring interaction. The creative remodeling and strengthening of culturally historic sites may also benefit from customization, increasing their revenues and broadening their potential customers [33]. To transform the tourists’ theoretical emotions into concrete ones through the usage of AR/VR technologies, it is crucial to co-create the entire experience of the historical monument as stated in the conceptual framework. Visitors to these sites may save and use the knowledge while they are at residence, establishing an essential learning dimension. In summation, it is advised that sites of cultural significance develop user-friendly VR/AR applications that integrate the upgrading of artifacts and monuments through digital data [19, 50]. It is also advised to allow tourists to choose their favorite items, customize them, and 3D print them as mementos after the encounter. This is anticipated to assist cultural heritage sites by generating more revenue, better intentions for repeat visits, and new target consumers. Implementations are also used to start providing big data to places of cultural significance which can then be analyzed for tourist knowledge and insight. According to Saxena [44], “personalization leads to opportunities of cultural reconfiguration and strengthening through the distinctiveness of the approach,” which, in return, produces enhanced encounters and key success factors; thus, personalization is crucial to the co-creation system.
2.3 Case study of Rajasthan: using AR/VR technologies for value co-creation at cultural heritage sites India is among the wealthiest countries when it comes to heritage places. Twenty-seven historical monuments and structures in the country have been designated as UNESCO World Heritage Sites. But several of the nation’s historic structures and landmarks are deteriorating. Modern technology is significantly boosting tourism and aiding in safeguarding this legacy for future generations. Utilizing cutting-edge design technologies from the US-based 3D design technology firm Autodesk, the Rajasthan government took the lead in protecting monuments. The local government has launched the very first effort to digitize and document the historic structures in the territory, with assistance from the Department of Information Technology and Communications. For this project, Jantar Mantar, Albert Hall Museum, Udaipur City Palace, Hawa Mahal, Albert Palace, Jaipur City Palace, and seven gateways of the walled city of Jaipur have all received full-size 3D digital models. Amer Fort and Kumbhalgarh Fort are now being examined as well. The state administration’s goal to survey all current structures and buildings and to produce a 3D digital representation of the area is reflected in this AR/VR project granted to Autodesk. A genuine experience of the cityscape will be provided once 3D architectural models and 3D walk-through recordings of the landmarks and monuments are digitally combined with AR/VR technologies. For instance, new facades that are being built around historic sites may be modeled after them to maintain their vintage elegance. Due to the importance of maintaining cultural identity, conserving heritage properties frequently presents a unique set of difficulties. Around the world, numerous initiatives have been launched to preserve history and culture. For example, using Autodesk technology, simulation models of the Bamiyan Buddha statues (which the Taliban demolished) were created from crowd-sourced photographs. The main motive of Autodesk in this initiative was to explore the potential of community-sourced digitization. Correspondingly, the Smithsonian, a group of US government-run galleries and research centers, has employed Autodesk’s 3D scanners and graphics modeling technologies to electronically record and analyze several museums and galleries, experimental samples, and scientific locations. The whole portfolio is accessible via the internet around the globe on the Smithsonian Explorer, which Autodesk constructed for them using Project Play software. Employing Autodesk tools like ReCap and ReMake, the Apollo 11 display was also fully converted to digital form. This is a groundbreaking effort to reconstruct heritage monuments in Rajasthan, home to magnificent monuments and palaces. The collection of all relevant information was the initial phase that contributed to the basic modeling for documentation. The project was crucial in realizing the aim of building a 3D cityscape. There are three stages: making a 3D digital representation of archaeological and historic places is the initial stage. The development team scans the extant significant historical monuments using a blend of laser terrain scanning and drone aerial photography. The Autodesk
ReCap 360 software is then used to transform the remote sensing and laser scan data into photogrammetric outputs and 3D models. Using Autodesk technologies, these models are then linked with GIS images to provide a comprehensive 3D digital representation of the structure. In phase 2, the developer enters the point cloud data into Autodesk Revit to create a thorough building information model that can be utilized for ongoing maintenance and any future renovation work. In the last stage, high-resolution video images are used to capture historical artworks, sculptures, and architectural elements, which are then converted into 3D graphics and point cloud data using Autodesk ReCap 360 and Autodesk ReMake. The artifacts’ 3D models will be utilized in virtual displays and for study and restoration, and they can even be 3D printed to create duplicates if necessary. The scanned structures have been turned into 3D walk-through films, built with computer animation tools (3ds Max and Maya), that visitors can view on the Rajasthan website. The team intends to produce further interactive media experiences for the web, smartphones, and VR. With only a flick, travelers will be able to get a deeper look at Rajasthan’s prominent heritage monuments through the 3D walk-through recordings. The project also offered tourists the option of historical walks, allowing them to explore AR-enhanced navigational walks of a site without actually traveling to the landmark. Because 3D models of the actual monument constructions were made, visitors can see even hidden areas of the monumental campus using the Autodesk application, which can be utilized for both pre-visit and on-site interactions and experiences. Additionally, it allowed visitors to print 3D mementos that could be customized and reprinted alongside unique name tags depending on the tourist’s interests and background. The number of tourists and the profit generated from tourism have grown dramatically as a result of the use of AR and VR technology at the key sites of cultural heritage in Rajasthan [5], indicating the potential of this strategy for other Indian sites of cultural significance as well.
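The scan-to-model step described above is tied to the proprietary Autodesk toolchain, but the underlying idea of merging laser and photogrammetric scans into one cleaned model can be illustrated with open-source tooling. The sketch below assumes the open-source Open3D library and two hypothetical scan files rather than the actual ReCap 360/Revit workflow; it shows the typical sequence of downsampling, aligning the second scan to the first with ICP registration, merging, and reconstructing a mesh that could feed a walk-through or a 3D printer.

```python
# Minimal sketch of a scan-merging step, assuming the open-source Open3D
# library as a stand-in for the Autodesk ReCap 360 pipeline described above.
# File names are hypothetical placeholders.
import numpy as np
import open3d as o3d

VOXEL = 0.05  # downsampling resolution in metres (assumed scan units)

def load_and_prepare(path: str) -> o3d.geometry.PointCloud:
    """Read one scan, thin it out, and estimate normals for registration/meshing."""
    pcd = o3d.io.read_point_cloud(path)
    pcd = pcd.voxel_down_sample(voxel_size=VOXEL)
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=VOXEL * 2, max_nn=30))
    return pcd

laser_scan = load_and_prepare("monument_laser.ply")   # terrestrial laser scan
drone_scan = load_and_prepare("monument_drone.ply")   # drone photogrammetry cloud

# Rigidly align the drone-derived cloud to the laser cloud with point-to-plane ICP.
icp = o3d.pipelines.registration.registration_icp(
    drone_scan, laser_scan,
    max_correspondence_distance=VOXEL * 3,
    init=np.identity(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
drone_scan.transform(icp.transformation)

# Merge the two clouds and reconstruct a watertight mesh (Poisson surface
# reconstruction), which could feed a BIM tool, a walk-through, or a 3D printer.
merged = laser_scan + drone_scan
mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    merged, depth=9)
o3d.io.write_triangle_mesh("monument_model.ply", mesh)
print("Merged", len(merged.points), "points; ICP fitness:", icp.fitness)
```

This is only a sketch of the registration-and-meshing idea under the stated assumptions; a production heritage-digitization pipeline adds georeferencing, texture mapping, and quality control on top of it.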
3 Conclusion and implications This chapter sought to establish a framework for value co-creation through the implementation of VR and AR applications in heritage sites. Practically, this study presents a value co-creation paradigm for the setting of cultural heritage utilizing a case study technique and a multitechnology technique using both AR and VR. The conceptual approach, in particular, demonstrates how the efficient application of several technologies in the setting of historical monuments helps to jointly create value for culture and heritage organizations and for the pre-visit, on-site, and post-visit experiences of tourists. Furthermore, it is shown how such technologies have the potential to help tourists and cultural heritage organizations jointly develop value and share value from the viewpoint of location management. The proposed model represents the start
of a brand-new field of research and inquiry. Its goal is to guide the execution of these envisioned scenarios within culturally significant sites. As a preliminary step, AR is anticipated to give visitors their initial impression of the exhibit and persuade them to actually visit. Second, it was determined that AR is the leading technology for delivering enhanced information. This will add to the visitor’s experience, enable travelers to discuss their experiences, and encourage them to make purchases. In addition, tourists should be able to connect their experience to the creation of a custom souvenir, possibly 3D printed, through AR applications. This is an essential step in the value co-creation approach since it gives individuals a feeling of participation in the overall experience creation [33]. Organizations need to consider strategies to draw tourists straight into cultural heritage sites, especially considering the rise of simulated tourism. However, Seal [44] highlighted the challenges in knowledge and skills related to VR and AR. To establish a co-creative setting as an element of the experience, this study conceptualized the premise of utilizing AR and VR in cultural and historical contexts. This is anticipated to promote tourist connectivity, a crucial component of contemporary social experiences, and the viability of sites of cultural significance. The notion of value co-creation using both AR and VR in the context of historical assets requires further study. It is recommended to hold focus group interviews with a variety of stakeholders in order to examine the concept’s viability thoroughly. Ultimately, Gupta et al. [14] assert that “an organization can dramatically enhance the value co-creation by creating marketing strategies that have high levels of internal and situational factors providing a configurational suitability.” As a result, it is advised that future studies be conducted to investigate an appropriate business strategy for funding and integrating information platforms into sites of cultural significance. In order to boost urban tourism in Rajasthan and maintain a competitive edge in providing visitors with a memorable tourism experience, this chapter proposed that various AR and VR innovations can be considered for value co-creation at sites of cultural significance. Further, it suggested a framework, built through a case study method, wherein the application of various AR and VR systems was incorporated with the value co-creation components to create output in terms of higher earnings, improved visit/revisit motives, revitalization of the established city/urban tourism industry, development of new market segments, augmented tourist experience, and unforgettable memories for the stakeholders (e.g., AR/VR designers) at the sites of cultural significance in Rajasthan. Additionally, it is recommended that the key players participate in value co-creation by increasing the tourists’ pre-visit, during-visit, and post-visit interactions through a balanced mix of AR and VR solutions at the cultural and historical locations. Furthermore, the scope of these systems can be applied to the mutually advantageous development of tourists and national heritage places, which spreads value among tourists from the perspective of destination marketing. It is projected that creating a virtual environment utilizing AR and VR at sites of cultural significance will promote competition and improve social connectivity among visitors.
Urban tourism marketplaces become much more competitive and draw more visitors as social connectedness rises. Therefore, it becomes essential to conduct brainstorming among the stakeholders to uncover the full capabilities of this idea for the development of urban tourism. Moreover, designing novel business arrangements with high configurational fitness will significantly improve the value co-creation approach. As a result, more research is advised to find a suitable modeling approach for the use of various technologies at sites of cultural heritage. Stakeholders might consider drawing tourists to places of cultural significance and ensuring the implementation of innovative experiences at these sites as the utilization of AR and VR technologies rises.
4 Limitations We encountered a few limitations that call for additional investigation and analysis. This chapter attempted to explain the influence of AR and VR on value co-creation at the sites of cultural significance in Rajasthan using a conceptual modeling approach and a case study. First, the administration of places of cultural significance must make substantial financial efforts to acquire and integrate AR and VR technologies to give tourists significantly better encounters, an aspect that was not addressed in this study. Additional research may cover this aspect alongside the value co-creation concept of using AR/VR platforms at historical sites. Second, the assumption that all AR and VR components will function within a single digital application could be a constraint on offering tourists the best possible interactions, especially in the case of senior tourists; this assumption has not been examined in this study. Additional research might cover cutting-edge hardware and software that could be applied across various mobile platforms. Lastly, this study of integrating AR and VR platforms at Rajasthan’s cultural heritage sites can only be viewed as a foundation for further technological advancements for improving tourist experiences, rather than a comprehensive method of value co-creation via AR and VR. The conceptual model framework may be complemented and expanded upon by new platforms that are continuously being developed.
References
[1] Aziz, Azlizam, and Nurul Amirah Zainol. 2011. “Destination Image: An Overview and Summary of Selected Research (1974–2008).” International Journal of Leisure and Tourism Marketing 2(1): 39–55.
[2] Baker, Michael J., and Emma Cameron. 2008. “Critical Success Factors in Destination Marketing.” Tourism and Hospitality Research 8(2): 79–97.
[3] Bogomolov, V. 2019. “Top 5 Ideas How to Use AR in Tourism.” (accessed 12 August 2022).
[4] Buhalis, Dimitrios, and Rob Law. 2008. “Progress in Information Technology and Tourism Management: 20 Years on and 10 Years after the Internet – The State of eTourism Research.” Tourism Management 29(4): 609–623.
[5] Chowdhary, S. 2017. Rajasthan Turns to 3D Design Technologies to Preserve Heritage. Retrieved from: https://www.financialexpress.com/industry/rajasthan-turns-to-3d-design-technologiestopreserve-heritage/603688/ (accessed 20 August 2022).
[6] Chung, Namho, Hyunae Lee, Jin-Young Kim, and Chulmo Koo. 2018. “The Role of Augmented Reality for Experience-Influenced Environments: The Case of Cultural Heritage Tourism in Korea.” Journal of Travel Research 57(5): 627–643.
[7] Clark, Lillian, Levent Çallı, and Fatih Çallı. 2014. “3D Printing and Co-creation of Value.” In Proceedings of the 12th International Conference e-Society, pp. 251–254.
[8] Create Amazing. 2013. Create Amazing ‘Mini Me’ Versions of You and Your Family at Asda. Available at: http://your.asda.com/news-and-blogs/create-detailed-miniature-versionsof-you-and-yourfamilywith-3d-printing-at-asda (accessed May 2, 2019).
[9] Dadwal, Sumesh S., and Azizul Hassan. 2016. “The Augmented Reality Marketing: A Merger of Marketing and Technology in Tourism.” In Mobile Computing and Wireless Networks: Concepts, Methodologies, Tools, and Applications, pp. 63–80. Pennsylvania, USA: IGI Global.
[10] Miranda, Ana, Carla Colomer, Jessica Mercader, M. Inmaculada Fernández, and M. Jesús Presentación. 2015. “Performance-based Tests Versus Behavioral Ratings in the Assessment of Executive Functioning in Preschoolers: Associations with ADHD Symptoms and Reading Achievement.” Frontiers in Psychology 6: 545.
[11] Echtner, Charlotte M., and J. R. Brent Ritchie. 1993. “The Measurement of Destination Image: An Empirical Assessment.” Journal of Travel Research 31(4): 3–13. https://doi.org/10.1177/004728759303100402.
[12] Groenendyk, Michael, and Riel Gallant. 2013. “3D Printing and Scanning at the Dalhousie University Libraries: A Pilot Project.” Library Hi Tech. https://doi.org/10.1108/07378831311303912.
[13] Grønflaten, Øyvind. 2009. “Predicting Travelers’ Choice of Information Sources and Information Channels.” Journal of Travel Research 48(2): 230–244. https://doi.org/10.1177/0047287509332333.
[14] Gupta, Vikas, Manohar Sajnani, and Saurabh Kumar Dixit. 2020. “Impact of Augmented and Virtual Reality on the Value Co-creation of Visitor’s Tourism Experience: A Case of Heritage Sites in Delhi.” In Tourism in Asian Cities, pp. 263–277. Routledge. https://doi.org/10.4324/9780429264801.
[15] Guttentag, Daniel A. 2010. “Virtual Reality: Applications and Implications for Tourism.” Tourism Management 31(5): 637–651. https://doi.org/10.1016/j.tourman.2009.07.003.
[16] Han, Dai-In, Timothy Jung, and Alex Gibson. 2013. “Dublin AR: Implementing Augmented Reality in Tourism.” In Information and Communication Technologies in Tourism 2014: Proceedings of the International Conference in Dublin, Ireland, January 21–24, 2014, pp. 511–523. Springer International Publishing.
[17] Healy, Noel, Carena J. van Riper, and Stephen W. Boyd. 2016. “Low Versus High Intensity Approaches to Interpretive Tourism Planning: The Case of the Cliffs of Moher, Ireland.” Tourism Management 52: 574–583. https://doi.org/10.1016/j.tourman.2015.08.009.
[18] Huang, Yu-Chih, Sheila J. Backman, Kenneth F. Backman, and DeWayne Moore. 2013. “Exploring User Acceptance of 3D Virtual Worlds in Travel and Tourism Marketing.” Tourism Management 36: 490–501. https://doi.org/10.1016/j.tourman.2012.09.009.
[19] Ivanov, Stanislav, and Craig Webster (Eds.). 2019. Robots, Artificial Intelligence, and Service Automation in Travel, Tourism and Hospitality. https://doi.org/10.1108/978-1-78756-687-320191011.
[20] Kim, Myung Ja, Choong-Ki Lee, and Timothy Jung. 2020. “Exploring Consumer Behavior in Virtual Reality Tourism Using an Extended Stimulus-Organism-Response Model.” Journal of Travel Research 59(1): 69–89.
[21] Janda, M. Schittek, Nikos Mattheos, Anders Nattestad, Anders Wagner, Daniel Nebel, Catarina Färbom, D.-H. Lê, and Rolf Attström. 2004. “Simulation of Patient Encounters Using a Virtual Patient in Periodontology Instruction of Dental Students: Design, Usability, and Learning Effect in History-taking Skills.” European Journal of Dental Education 8(3): 111–119.
[22] Javornik, Ana. 2016. “‘It’s an Illusion, but It Looks Real!’ Consumer Affective, Cognitive and Behavioural Responses to Augmented Reality Applications.” Journal of Marketing Management 32(9–10): 987–1011.
[23] Jung, Kwanghee, Vinh T. Nguyen, Diana Piscarac, and Seung-Chul Yoo. 2020. “Meet the Virtual Jeju Dol Harubang – The Mixed VR/AR Application for Cultural Immersion in Korea’s Main Heritage.” ISPRS International Journal of Geo-Information 9(6): 367.
[24] Jung, Timothy, Namho Chung, and M. Claudia Leue. 2015. “The Determinants of Recommendations to Use Augmented Reality Technologies: The Case of a Korean Theme Park.” Tourism Management 49: 75–86.
[25] Kim, Hyeon-Cheol, and Martin Yongho Hyun. 2016. “Predicting the Use of Smartphone-based Augmented Reality (AR): Does Telepresence Really Help?” Computers in Human Behavior 59: 28–38.
[26] Klein, Lisa R. 2003. “Creating Virtual Product Experiences: The Role of Telepresence.” Journal of Interactive Marketing 17(1): 41–55.
[27] Kozinets, Robert V. 2023. “Immersive Netnography: A Novel Method for Service Experience Research in Virtual Reality, Augmented Reality and Metaverse Contexts.” Journal of Service Management 34(1): 100–125.
[28] Loureiro, Sandra Maria Correia. 2022. “Technology and Luxury in Tourism and Hospitality.” In Anupama S. Kotur and Saurabh Kumar Dixit (Eds.), The Emerald Handbook of Luxury Management for Hospitality and Tourism, pp. 273–284. Emerald Publishing Limited.
[29] MacKay, Kelly J., and Malcolm C. Smith. 2006. “Destination Advertising: Age and Format Effects on Memory.” Annals of Tourism Research 33(1): 7–24.
[30] Magnenat-Thalmann, Nadia, and George Papagiannakis. 2005. “Virtual Worlds and Augmented Reality in Cultural Heritage Applications.” Recording, Modeling and Visualization of Cultural Heritage, 419–430.
[31] Marasco, Alessandra, Piera Buonincontri, Mathilda Van Niekerk, Marissa Orlowski, and Fevzi Okumus. 2018. “Exploring the Role of Next-generation Virtual Technologies in Destination Marketing.” Journal of Destination Marketing and Management 9: 138–148.
[32] MashableUK. 2014. Augmented Reality. Available at: http://mashable.com/category/augmentedreality/ (accessed 11 August 2022).
[33] Mohanty, Priyakrushna, Azizul Hassan, and Erdogan Ekis. 2020. “Augmented Reality for Relaunching Tourism Post-COVID-19: Socially Distant, Virtually Connected.” Worldwide Hospitality and Tourism Themes 12(6): 753–760.
[34] Mozilla. 2019. A Web Framework for Building Virtual Reality Experiences. Available online: https://aframe.io (accessed 4 August 2022).
[35] Olya, Hossein, Timothy Hyungsoo Jung, Mandy Claudia Tom Dieck, and Kisang Ryu. 2020. “Engaging Visitors of Science Festivals Using Augmented Reality: Asymmetrical Modelling.” International Journal of Contemporary Hospitality Management 32(2): 769–796.
[36] Prahalad, Coimbatore K., and Venkat Ramaswamy. 2004. “Co-creating Unique Value with Customers.” Strategy & Leadership 32(3): 4–9.
[37] Privitera, Donatella. 2020. “Value of Technology Application at Cultural Heritage Sites: Insights from Italy.” In The Emerald Handbook of ICT in Tourism and Hospitality, pp. 345–356. London, UK: Emerald Publishing Limited.
[38] Ramos, Vicente, Maurici Ruiz-Pérez, and Bartomeu Alorda. 2021. “A Proposal for Assessing Digital Economy Spatial Readiness at Tourism Destinations.” Sustainability 13(19): 11002.
[39] Rebelo, Francisco, Paulo Noriega, Emília Duarte, and Marcelo Soares. 2012. “Using Virtual Reality to Assess User Experience.” Human Factors 54(6): 964–982.
[40] Rizzo, A., J. Cukor, M. Gerardi, S. Alley, C. Reist, M. Roy, and J. Difede. 2015. “Virtual Reality Exposure for PTSD Due to Military Combat and Terrorist Attacks.” Journal of Contemporary Psychotherapy 45(4): 255–264. [41] Saarijärvi, Hannu, P. K. Kannan, and Hannu Kuusela. 2013. “Value Co‐creation: Theoretical Approaches and Practical Implications.” European Business Review. https://doi.org/10.1108/09555341311287718. [42] Santoso, Halim Budi, Jyun-Cheng Wang, and Nila Armelia Windasari. 2022. “Impact of Multisensory Extended Reality on Tourism Experience Journey.” Journal of Hospitality and Tourism Technology.13(3): 356–385 [43] Scholz, Joachim, and Andrew N. Smith. 2016. “Augmented Reality: Designing Immersive Experiences that Maximize Consumer Engagement.” Bus Horizons 59(2): 149–161. [44] Seal, A. 2020. Top 7 augmented reality statistics for 2020, available at: www.vxchnge.com/blog/aug mented-reality-statistics. (accessed 29 July 2022). [45] Shah, Mrudul. 2019. “How Augmented Reality (AR) is Changing the Travel & Tourism Industry.” Towards Data Science. [46] Shilkrot, Roy, Nick Montfort, and Pattie Maes. 2014. “Narratives of augmented worlds.” In 2014 IEEE International Symposium on Mixed and Augmented Reality-Media, Art, Social Science, Humanities and Design (ISMAR-MASH’D), pp. 35–42. IEEE. [47] Sudharshan, Devanathan. 2020. “Virtual Reality (VR).” In Marketing in Customer Technology Environments. Bingley BD16 1WA, UK: Emerald Publishing Limited. [48] Jung, Timothy. 2016. “Value of Augmented Reality to Enhance the Visitor Experience: A Case Study of Manchester Jewish Museum.” E-Review of Tourism Research 7. [49] Tussyadiah, Iis P., Dan Wang, and Chenge Jia. 2017. “Virtual Reality and Attitudes Toward Tourism Destinations.” In Information and Communication Technologies in Tourism 2017: Proceedings of the International Conference in Rome, Italy, January 24–26, 2017, pp. 229–239. Springer International Publishing. [50] Tussyadiah, Iis P., Dan Wang, Timothy H. Jung, and M. Claudia Tom Dieck. 2018. “Virtual Reality, Presence, and Attitude Change: Empirical Evidence from Tourism.” Tourman 66: 140–154. [51] Wei, Wei. 2019. “Research Progress on Virtual Reality (VR) and Augmented Reality (AR) in Tourism and Hospitality: A Critical Review of Publications from 2000 to 2018.” Journal of Hospitality and Tourism Technology 10(4): 539–570. [52] Yang, Xuewei. 2021. “Augmented Reality in Experiential Marketing: The Effects on Consumer Utilitarian and Hedonic Perceptions and Behavioural Responses.” In Information Technology in Organisations and Societies: Multidisciplinary Perspectives from AI to Technostress, pp. 147–174. Bingley: Emerald Publishing Limited. [53] Yi, Youjae, and Taeshik Gong. 2013. “Customer Value Co-creation Behavior: Scale Development and Validation.” Journal of Business Research 66(9): 1279–1284. [54] Buhalis, Dimitros, and Zornitza Yovcheva. 2013. “Augmented Reality in Tourism: 10 Unique Applications Explained.” Digital Tourism Think Tank Reports and Best Practice 1: 1–12.
Gnanasankaran Natarajan, Subashini Bose, Sundaravadivazhagan Balasubramanian, Ayyallu Madangopal Hema
11 Scope of virtual reality and augmented reality in tourism and its innovative applications Abstract: The recent advent of virtual and augmented reality (VR/AR) has opened remarkable doors for several industries to enhance their customer experience. Like many prominent industries, the travel and tourism industry has also improved its trends and technology with the benefit of VR and AR. AR is playing an incredible part in transforming the tourism landscape and enriching the travel experience for vacationers. In today’s fast-paced world, many people travel around the globe and frequently look for better places to visit in search of different environmental experiences. During their travel, tourism lovers keep searching for places to visit and booking hotels and resorts on the go. So, if a travel agency wants to catch a traveler’s attention amid extensively changing travel applications, AR-based transit applications can help by offering a distinctive edge. Unlike VR, AR adds digital features on top of the real world to provide a rich and beautiful experience of the locations that already exist. There are countless advantages of AR in travel apps. It has a great deal of potential to support and provide creative innovations to the travel and tourism sector. There are many fantastic and growing uses for AR and VR that might benefit tourist commerce. Basically, the main objective of AR is to improve and modify how individuals see their immediate environment when it is viewed through a specific gadget. The creation of novel travel applications benefits impressively from AR, and it has become a treasured tool for businesses and vendors. The surroundings can be observed and enjoyed by customers in their own way. Travel apps built on AR may easily draw users with their interactive and immersive experiences. It is reasonable to say that AR may provide outstanding marketing experiences and can be used in the tourist business in a variety of ways. AR can be utilized and easily practiced on any smart electronic device, and it is also far cheaper than dedicated VR devices. Gnanasankaran Natarajan, Department of Computer Science, Thiagarajar College, Madurai, Tamil Nadu, India, e-mail: [email protected], Orcid: 0000-0001-9486-6515 Subashini Bose, Department of Computer Science, Thiagarajar College, Madurai, Tamil Nadu, India, e-mail: [email protected] Sundaravadivazhagan Balasubramanian, Department of Information Technology, University of Technology and Applied Sciences, Al Mussanah, Oman, e-mail: [email protected], Orcid: 0000-0002-5515-5769 Ayyallu Madangopal Hema, Department of Computer Science, Thiagarajar College, Madurai, Tamil Nadu, India, e-mail: [email protected] https://doi.org/10.1515/9783110981445-011
A wide range of crucial and sophisticated tasks are covered by the travel and tourism industry, including local transportation, accommodation, hospitality, and tourist sites. AR plays a vital role in the travel and tourism domain even though it is a relatively young field in the technical sciences. With so much potential to alter each of these areas, AR in tourism has a promising future. Drawing on the most characteristic uses of AR, this chapter presents concrete illustrations of how AR can be used in travel app development. It elucidates the prevailing and innovative AR travel applications and their use cases. Various location-based, marker-based, and simultaneous localization and mapping (SLAM) applications will be discussed, and their advantages as well as future scope will be highlighted. Keywords: Virtual reality, augmented reality, tourism, travel applications, location and mapping, augmented tourist destinations, immersive navigation, hotel elements
1 Introduction to AR and VR 1.1 Virtual reality Virtual reality (VR) is a computer-generated environment that can be similar to or completely different from reality. VR has countless outstanding applications, particularly in the fields of entertainment, education, and business. Augmented reality (AR) and mixed reality (MR), often grouped with VR under the umbrella term extended reality (XR), are closely related technologies. To provide realistic audio, video, and other lifelike experiences virtually, current VR systems utilize either VR headsets or multi-projected surroundings. A person can move within a virtual environment, touch and feel objects using virtual devices, and interact with the objects around them; this can also be achieved with specially built rooms containing several large displays, although VR is most frequently delivered by headsets that place a head-mounted display with a small screen in front of the eyes. While auditory and visual feedback are classically included in VR, haptic technology may also make further bodily and physical feedback possible. One efficient method to increase the realism of VR is through simulation. Driving simulators, for instance, give the driver the impression of actually operating a real car by anticipating vehicle movement in response to the user’s input and providing the driver with the appropriate visual, motion, and audio cues. Real-world video feeds and avatar-based incarnations are two ways that people can interact with the simulated world. A 3D distributed virtual environment allows for participation using either a traditional avatar or real video, and depending on the capabilities of the system, users can choose the sort of engagement they want to have.
Accurate representations of the real environment are essential for many VR applications, including machine steering, model construction, and flight simulation. Image-based VR systems are increasingly widespread in the computer graphics and computer vision industries. Accurately registering the 3D data that has been gathered is crucial for producing realistic models; often, a camera is employed to capture small objects at close range. A 3D virtual environment is shown on a conventional desktop screen using desktop-based VR, which does not require any special VR positional tracking gadget. As an example, many modern video games use a variety of triggers, responsive characters, and other interactive components to make the player feel present in a simulated situation. One common criticism leveled at this type of immersion is the lack of peripheral vision, which limits the user’s ability to perceive what is going on around them. A head-mounted display plunges the user more deeply into a simulated environment. A VR headset often includes binaural audio, positional and rotational real-time head tracking for six degrees of freedom, and two small, high-resolution OLED or LCD screens that deliver separate images to each eye. Options include an omnidirectional treadmill for improved freedom of bodily movement and the capacity to execute locomotive motion in any direction, as well as motion controllers with haptic feedback for physically participating inside the simulated setting in a natural style with little to no abstraction. AR is a category of VR technology that combines what a person sees in their physical environment with digital content created by computer software. The supplementary software-generated images typically augment the appearance of the real environment in some way. AR systems overlay virtual information on a live camera feed shown in a headset, smart glasses, or mobile device, allowing the user to engage with three-dimensional visuals. The combination of the physical and simulated worlds to create new landscapes and representations in which real and digital items coexist and interact in real time is known as MR (Matthew, 2017). Simulated reality is a hypothetical virtual world as engrossing as the real thing, allowing for a more lifelike experience or conceivably virtual eternity.
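The positional and rotational tracking just described boils down to turning a tracked six-degree-of-freedom head pose into the view transform a renderer applies each frame. The short sketch below is a minimal numpy/scipy illustration of that step, not tied to any particular headset SDK; the pose values are made-up placeholders.

```python
# Minimal sketch: turning a tracked 6-DoF head pose into a view matrix.
# Pure numpy/scipy illustration; not specific to any headset SDK.
import numpy as np
from scipy.spatial.transform import Rotation as R

def view_matrix(head_position, head_rotation):
    """Build a 4x4 view matrix from a head pose given in world coordinates.

    head_position: (x, y, z) of the head in metres.
    head_rotation: scipy Rotation giving head orientation in the world frame.
    The view matrix is the inverse of the head (camera) pose: world -> eye.
    """
    rot = head_rotation.as_matrix()                      # 3x3 head orientation
    view = np.eye(4)
    view[:3, :3] = rot.T                                 # inverse rotation
    view[:3, 3] = -rot.T @ np.asarray(head_position)     # inverse translation
    return view

# Hypothetical tracked pose: head at 1.7 m height, turned 30 degrees left (yaw)
# and tilted 10 degrees down (pitch), angles in degrees.
pose_position = (0.0, 1.7, 0.0)
pose_rotation = R.from_euler("yx", [30.0, -10.0], degrees=True)

V = view_matrix(pose_position, pose_rotation)
world_point = np.array([0.0, 1.5, -2.0, 1.0])            # a point 2 m ahead of the user
print("Point in eye coordinates:", V @ world_point)
```

In a real headset pipeline this view matrix would be recomputed per eye (offset by half the interpupillary distance) and per frame from the tracker's latest pose.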
1.2 Augmented reality AR is an interactive, real-world experience in which elements of the actual world are enhanced with computer-generated perceptual information, sometimes spanning multiple sensory modalities such as visual, auditory, haptic, somatosensory, and olfactory. AR can be described as a system that integrates the real and virtual worlds, enables real-time interaction, and accurately registers real and virtual objects in 3D. The overlaid sensory information can either add to the natural environment or mask parts of it. This experience is so interwoven with the physical world that it is perceived as an immersive component of
reality itself. In this manner, AR alters the user’s ongoing perception of a physical environment, whereas VR completely replaces the user’s real environment with a simulated one. MR and computer-mediated reality are two terms often used interchangeably with AR [2]. The primary advantage of AR is how the digital-world elements blend into a person’s perception of the real world, not as a simple display of data, but through the integration of immersive sensations perceived as natural components of the environment. The first operational AR systems that offered users immersive mixed-reality experiences were built in the early 1990s, beginning with the Virtual Fixtures system developed at the U.S. Air Force’s Armstrong Laboratory. Commercial AR applications originally appeared in the entertainment and gaming industries. Since then, AR applications have moved into other industries, including academia, communication systems, health sciences, and entertainment. In academic learning, content can be accessed by scanning or viewing an image with a phone or tablet, or by using markerless AR technologies. AR is used to enhance natural environments or situations and to offer perceptually enriched experiences. With the help of recent AR technologies (e.g., computer vision, AR cameras built into smartphone apps, and object recognition), the information about the user’s surrounding real world becomes interactive and digitally manipulable. The physical world is teeming with valuable data about the environment and its people. Any artificial perception that enhances or augments what is already there, such as visualizing otherwise imperceptible or measured information like electromagnetic radio waves superimposed in exact alignment with where they actually are in space, can be regarded as AR. Augmentation techniques are typically applied in real time, in semantic contexts, and with elements of the environment. Immersive perceptual information is usually combined with supplementary information, such as live scores overlaid on a video feed of a sporting event. Such combinations of AR benefit from heads-up display (HUD) technology [3].
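The accurate registration of virtual objects to the real world described above is, at its core, a camera pose estimation problem. The sketch below is a hedged illustration using OpenCV's ArUco fiducial markers (the OpenCV 4.7+ API is assumed) rather than any commercial AR SDK: it detects a printed marker in a camera frame and recovers the camera pose relative to it, which is the transform an AR renderer would use to anchor virtual content on the marker. The camera intrinsics and file name are placeholder assumptions.

```python
# Minimal marker-based AR registration sketch using OpenCV's ArUco module
# (OpenCV >= 4.7 API assumed). Camera intrinsics and the image path are
# illustrative placeholders, not calibrated values.
import cv2
import numpy as np

MARKER_SIDE = 0.10  # printed marker edge length in metres (assumed)

# Placeholder pinhole intrinsics for a 640x480 camera; a real app would calibrate.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

# 3D corner coordinates of the marker in its own frame (z = 0 plane),
# ordered to match ArUco's top-left, top-right, bottom-right, bottom-left.
half = MARKER_SIDE / 2.0
object_points = np.array([[-half,  half, 0.0],
                          [ half,  half, 0.0],
                          [ half, -half, 0.0],
                          [-half, -half, 0.0]], dtype=np.float32)

frame = cv2.imread("camera_frame.jpg")          # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
corners, ids, _rejected = detector.detectMarkers(gray)

if ids is not None:
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        # Recover the camera pose relative to the marker; an AR renderer would
        # use rvec/tvec to draw a virtual object on top of the physical marker.
        ok, rvec, tvec = cv2.solvePnP(object_points,
                                      marker_corners.reshape(4, 2),
                                      K, dist)
        if ok:
            print(f"marker {marker_id}: translation (m) = {tvec.ravel()}")
            cv2.drawFrameAxes(frame, K, dist, rvec, tvec, MARKER_SIDE / 2)
    cv2.imwrite("frame_with_axes.jpg", frame)
```

Markerless systems replace the printed fiducial with natural-feature tracking or SLAM, but the output they hand to the renderer is the same kind of pose.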
2 Recent advanced applications of AR and VR VR and AR have many applications, spanning from entertainment and gaming to healthcare, academia, and business. Archaeology, construction, commerce, and education are just a few examples of application fields. Among the earliest reported uses are AR materials for astronomy and welding, as well as AR content used to aid surgical treatment by presenting virtual overlays that guide medical practitioners.
2.1 Urban strategy and development AR techniques are being utilized as collaborative design and development tools in the built environment. AR maps, architecture, and data feeds, for example, can be projected onto tabletops for interactive inspection by built-environment professionals. Design options may be worked out on the spot and often appear more tangible than traditional desktop approaches such as 2D maps and 3D models [4].
2.2 Education AR has been used in educational settings to supplement traditional curricula. Text, images, video, and audio can be superimposed in real time on a student’s environment. Textbooks, flashcards, and other pedagogical reading materials may have “markers” or triggers that, when identified with an AR device, provide the learner with further knowledge in a multimedia format. Google Glass, for instance, may be envisioned as an AR gadget capable of replacing the traditional classroom. To begin, AR technologies empower learners to participate in genuine exploration of the real world, while virtual objects like texts, videos, and images serve as extra materials for learners to investigate their environments. Students will be able to interact with information more authentically as AR improves. Instead of being passive recipients, students may become active learners who participate in their learning environment. Students may explore and learn about individual physical components of a site by using computer-generated simulations of historical events. Construct3D, a Studierstube-based system, assists students in understanding mechanical engineering principles as well as algebra and geometry. In chemistry-oriented AR apps, students may inspect and interact with the spatial structure of a molecule by holding a marker object in their hand. HP Reveal, a free application, can be used to create AR notecards for understanding organic chemistry processes or virtual representations of laboratory equipment. Anatomy students can investigate a range of human organ systems in multiple dimensions. Using AR as a learning aid to study anatomical structures has been shown to increase learner understanding and provide further benefits such as greater engagement and learner involvement [5].
2.3 Industrial engineering AR is utilized to replace paper manuals with digital procedures overlaid in the field of view of an industrial operator, reducing the mental effort required to perform a task. AR improves machine maintenance efficiency by offering operators quick access to a machine’s maintenance history. Virtual handbooks help
producers keep up with continuously changing product designs, since digital directives can be revised and distributed more rapidly than traditional manuals. By eliminating the need for operators to look away from the work area to view a screen or manual, digital instructions improve operator safety; instead, the instructions are presented within the work area itself. When operating near heavy-duty industrial machinery, the use of AR can improve operators’ perception of safety by presenting extra information about the machine’s status and safety measures, as well as hazardous areas of the workplace [6].
2.4 Human-computer interaction Human-computer interaction (HCI) is the study and design of how computers interact with humans. HCI researchers come from various disciplines, including computer science, engineering, design, human factors, and social science, with the overarching goal of resolving issues in the design and use of technology in order to make it easier, more effective, efficient, safe, and pleasant to use [7].
2.5 Social interaction AR might be used to encourage individuals to interact with one another. In Talk2Me, an AR-based social networking system, individuals may broadcast information and view content shared by others. Talk2Me’s fast and flexible information sharing and noticing features enable people to start conversations and build networks with others in their immediate vicinity. However, if one person is not wearing an AR headset, the quality of a meeting between the two individuals may be diminished because the headgear becomes a distraction. In addition, AR allows users to practice various types of social interaction with other individuals in a risk-free setting. Many people can access a shared environment inhabited by virtual objects using collaborative AR. This approach is most successful for educational purposes when participants are in the same room and can engage naturally (through voice, gestures, etc.), but it may also be effectively combined with immersive VR or remote collaboration [8, 9].
2.6 Healthcare planning and training The earliest applications of AR were in healthcare, where it was used to help with surgical operation planning, practice, and training. When the earliest AR systems were constructed at US Air Force research sites, enhancing human performance during surgery was a clearly stated aim. Since 2005, a near-infrared vein finder has been used to detect
veins by filming subcutaneous veins, analyzing the images, and projecting them onto the skin. AR provides surgeons with patient-specific monitoring data in the style of a fighter pilot’s HUD, as well as access to patient imaging records, including functional videos, which can be overlaid on the scene. Examples include a virtual X-ray view based on prior tomography or real-time images from ultrasound and confocal microscopy probes, visualizing the position of a tumor in an endoscopic video, and radiation exposure risks from X-ray imaging technologies. AR can let doctors see a fetus within a mother’s womb more clearly. Several companies have developed laparoscopic liver surgery devices that examine subsurface lesions and vessels using AR. Patients wearing AR glasses can receive medication reminders. AR can be used to offer vital information to a doctor or surgeon without requiring them to take their gaze away from the patient. Microsoft revealed the HoloLens, its first effort at AR, on April 30, 2015. The HoloLens has progressed over time, and it can currently display holograms for image-guided surgery using near-infrared fluorescence. AR is widely used in healthcare to give guidance during diagnostic and therapeutic operations, such as surgical treatment. AR has shown significant value in clinical training, for example in simulating ultrasound-guided needle insertion. AR technology also improves university students’ research skills while supporting them in developing positive attitudes toward physics laboratory work. Recently, AR has begun to be used in neurosurgery, a discipline that requires extensive imaging prior to surgery [10–12, 28].
2.7 Broadcasting and live events
The first use of AR on television was for weather visualization. A common practice in weathercasting is to show full-motion video of images captured in real time from a number of cameras and other imaging devices. Paired with 3D graphic symbols and mapped to a common virtual geographic model, these animated images constituted the first authentic use of AR on television. AR is also a prevalent practice in sports broadcasting. To improve the viewing experience in sporting and entertainment venues, tracked camera feeds are used for see-through and overlay augmentation. One illustration is the yellow "first down" line that appears during American football broadcasts and marks the distance the offensive team must travel to gain a first down. To let viewers compare the current race against the best performance, swimming telecasts frequently feature a line across the lanes showing the position of the existing record holder as the race unfolds. Other examples include tracking the movement of hockey pucks and the trajectories of racing cars and snooker balls. AR has also been used to enhance theatrical and musical performances; by merging their performance with those of other bands or user groups, artists can let listeners enrich their listening experience [5, 13–15].
2.8 Tourism and exploration
A site's features, along with any comments or materials left by previous visitors, can be shown to travelers in real-time informational displays using AR. Advanced AR applications include simulations of historical events, places, and objects projected onto the environment. AR applications tied to physical locations can also use audio to communicate location data, announcing features of a place as they become visible to the user. One study assessed how well AR technology is received in Persepolis, Iran, one of the world's most popular historical tourist destinations; an extended and mixed technology acceptance model was used for this evaluation [16].
3 Opportunities of AR and VR in tourism
In the tourism industry, AR has become more and more popular recently. This is partly because the technology enables hotels and other businesses in this sector to enhance the actual places they are trying to entice customers to visit, such as local attractions and hotel rooms. Travel is researched thoroughly compared with other purchases, since customers need a lot of information before their trip, and the demand for information does not disappear once the consumer arrives. AR can make much of this information available all the time, rather than only when it is most needed. The travel and tourism industry employed 26.7 million people in India in 2018 and contributed 9.2% to the country's GDP. According to the Indian tourism and hospitality industries, COVID-19 was expected to cause the loss of up to 38 million jobs. Hotel occupancy levels decreased by more than 65% in April 2020 compared with the same month of the previous year. According to the Indian Association of Tour Operators, the hospitality sector, which comprises the hotel, airline, and travel segments, would suffer a loss of about USD 85 billion as a result of the travel restrictions put in place [17]. The World Travel and Tourism Council (WTTC) has encouraged nations to protect the tourism sector; among the many measures it suggested, the council prioritized obtaining funds to promote holiday destinations. When travel limitations are lifted and travelers' confidence in traveling is restored, this is precisely where VR will be useful. The tourism industry has used VR for years as a "try before you buy" option, with travel agencies, airlines, hotels, and tourist boards using the technology to promote their destinations to potential customers. After all, travelers generally want to experience things rather than read about them in descriptions.
The use of the technology in business, however, seems to be changing in response to the trend of viewing VR as a destination in and of itself. There are alternative perspectives on this assertion, though. First and foremost, the technology is not yet robust enough, and the traveler has limited control over the areas they will visit, being forced to rely on the travel agent's choice of location. Having said that, VR technology will not be able to completely replace conventional travel, but it may undoubtedly enhance it. Additionally, it may serve as a means of escape for those who are confined to their houses during a pandemic. VR will help the tourism industry in two ways: first, it will entice people to travel again by serving as an effective destination marketing tool, and second, it will give those who are not yet ready to travel a quick retreat. VR technology may also help foster greater social interaction at a time when it is challenging to do so in real life, in addition to offering brief diversion and pleasure. To improve overall strategic investment and management, researchers continue to explore how this technology may be applied by the tourism industry to provide real value and purpose. Virginia Messina, General Director of the WTTC, predicted that once the outbreak is contained it could take up to 10 months for the tourism industry to recover; as a result, in the coming months we can anticipate new technological advancements having an impact on the industry [18]. The VR experience requires physical involvement and emotional presence; to make the intangible experience of tourism more tangible, marketers must continuously innovate in the forms of visual imagery they use [19]. Virtual tourism and digital heritage are enabled by VR/AR/MR technologies and applications, and without the implementation and support of the right tools this promising area cannot develop and thrive. One study reviewed the technologies and applications related to VR, AR, and MR that play a vital role in virtual tourism and digital heritage, and surveyed the perceptions, experiences, and intentions of users of different age groups regarding virtual tourism and digital heritage [20].
4 Virtual reality in travel
VR refers to a scene that can be explored in all 360° through interactive images or films. A VR production covers a site in its entirety, as opposed to typical video shots, which are captured from a fixed angle. In the tourism industry, VR can be used to capture tourist destinations in a fresh and immersive way. Specialized equipment, installations, and software are used to achieve this, and the finished content can then be viewed on a VR headset, a desktop computer, or a mobile device. Many people mistakenly believe that a specialized VR headset is necessary in order to see VR content; however, this is not the case. Although this
method of viewing makes the experience more immersive, VR content can still be viewed on almost any device, including mobile phones [17].
5 VR in tourism promotions
VR marketing is the most common use of the technology in the tourism industry. It is a powerful marketing tool for capturing tourist attractions in an engaging and lasting way. The ability to recreate the experience of "being there" is one of VR's key advantages. Ordinary images and videos may be useful for showcasing a location's attractions, but they rarely evoke strong feelings. VR in tourism has the power to place the user in the scene, making it easier for them to see themselves there [18].
5.1 360 VR holiday business
Computer-generated graphics are generally used in the gaming industry. A form of VR termed 360 VR, sometimes known as 360VR video, instead concentrates on the actual surroundings. This makes it well suited to the travel industry, since visitors want to see a real location rather than a model or simulation. The same techniques used to collect standard image and video material are used to create 360 VR content: a 360 VR company, such as Immersion VR, visits the location with specialized gear to shoot the scene. Back in the studio, the footage is processed by specialized software to create the VR content.
6 VR technologies in tourism
VR has several uses in the tourism industry, and its adoption is growing as the technology develops quickly. The most popular VR technologies used in the tourism industry are VR video and VR photography.
6.1 VR tourism videos
A VR tourism video serves much the same purpose as a standard video. It can be watched on websites or social media but, unlike a conventional video, the viewer can look around the entire space while the video is playing.
Specialized omnidirectional cameras are used to film VR travelogues. These cameras record the scene from every angle simultaneously. Once the recording is complete, the footage is brought back to the studio and edited to produce a VR travelogue. Videos for VR tourism fall into two categories:
– Monoscopic
– Stereoscopic
Monoscopic VR films intended for tourism can be viewed on common devices such as smartphones and laptops. By clicking or dragging across the screen, the viewer can turn the field of vision, much like turning your head to look around a scene. Stereoscopic VR tourism films are made specifically for VR headsets and cannot be viewed properly on a general device. They generally cost more and take longer to make, but they deliver a more engaging travel experience. Head tracking in these films enables the viewer to move their head to explore their surroundings realistically [21].
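To make the idea of "turning the field of vision" concrete, the following minimal sketch (added here as an illustration, not part of the original production workflow) maps a viewing direction to the pixel it corresponds to in an equirectangular 360° frame; the frame size, angle conventions, and function name are assumptions for illustration.

```python
import math

def view_direction_to_pixel(yaw_deg, pitch_deg, frame_w, frame_h):
    """Map a viewing direction to the pixel it looks at in an
    equirectangular 360-degree frame (assumed conventions).

    yaw_deg:   0 = forward, positive = turning right (range -180..180)
    pitch_deg: 0 = horizon, positive = looking up     (range  -90..90)
    """
    # Longitude (yaw) spans the full frame width; latitude (pitch) the height.
    u = (yaw_deg + 180.0) / 360.0          # 0..1 across the width
    v = (90.0 - pitch_deg) / 180.0         # 0..1 from top (up) to bottom (down)
    x = min(int(u * frame_w), frame_w - 1)
    y = min(int(v * frame_h), frame_h - 1)
    return x, y

# Example: a viewer dragging 45 degrees to the right and 10 degrees up
# in a hypothetical 4096 x 2048 monoscopic frame.
print(view_direction_to_pixel(45.0, 10.0, 4096, 2048))
```

A stereoscopic player performs a similar lookup in two separate panoramas, one per eye, which is part of why such content is costlier to produce.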
6.2 VR tourism photography
VR tourism photography involves making 360° images of tourist destinations. Typically, these images are created with desktop PCs and other common devices such as smartphones in mind. In terms of how it works, VR tourism photography is identical to VR tourism video: to see the entire scene, the viewer can freely drag or swipe across the image. The pictures are taken with modern digital single-lens reflex (DSLR) cameras using specialized settings that allow for 360° coverage. Compared with VR tourism videos, the use of DSLR cameras makes higher-resolution images feasible. The 360° photos can then be shared on social media and websites and viewed similarly to regular photos. Although not as engaging as VR films, these images are cheaper and quicker to produce. Hotel tours made possible by 360° photography let the user explore an inn and its environs in an immersive and interactive way.
7 Applications of virtual reality in the vacation industry
VR can be used for a variety of tourism-related purposes, such as
– VR travel experiences;
– VR tourism content for social media and websites; and
– virtual hotel tours.
7.1 Virtual reality travel experiences
VR travel experiences are typically delivered as VR tourism films viewed through VR headsets. These virtual travel adventures aim to offer an experience that is as close as possible to actually being at the destination. Traveling in VR offers the user an experience that is truly one of a kind and unforgettable. The number of businesses and travel agencies utilizing this technology is gradually growing, which indicates a promising future for the industry [16, 18].
7.2 VR headsets in the travel industry
The most lifelike VR travel experiences are usually provided by VR headsets. Specialized software in a VR headset tracks the user's head movement, enabling the user to experience the destination as if they were physically present. Ownership of VR headsets is becoming more and more common; this rise in headset sales is probably due mostly to the active marketing of the technology in the gaming industry. Additionally, major internet platforms like Google, Facebook, and Amazon are spending heavily on VR equipment and content, suggesting that this industry has a bright future. Because of the need for stereoscopic content and spatial audio, creating VR for headsets is more expensive. The expenditure, however, might be worthwhile for travel agencies that want to be on the bleeding edge of technology in order to stand out from the competition and provide their clients with an unmatched experience [21].
7.3 VR for travel agencies
Travel agencies are among the most popular users of VR headsets in the tourism sector. They can provide potential consumers with virtual travel experiences that are vastly different from an in-person visit to a travel agency. Rather than displaying brochures and computer screens, travel agencies can build a virtual experience for travelers. This tactic can also be used successfully at trade shows and other public events to attract attention quickly. Utilizing VR not only gives the user an exceptional experience but also helps travel companies stand out from the competition. Many travel businesses have adopted VR technology and used it to increase sales and brand exposure.
7.4 Virtual tours of hotels
Users can now take much more detailed virtual hotel tours to discover more about a hotel and its environs. Much as they have changed the real estate market, virtual tours are changing the hotel industry as well. The interiors and exteriors of hotels can be captured in great detail using high-resolution cameras and specialized technology. The photographs are combined into a 360° interactive tour in which the user may select which room to visit. The bulk of VR hotel tours are monoscopic, making them accessible on both PCs and mobile devices. The tours can be published on websites and social media so that prospective clients can access them whenever they want. Depending on the situation and the available funds, they might also be stereoscopic, which can result in a more realistic and immersive experience. In contrast to standard hotel photos, these tours enable customers to visualize themselves in the room. This degree of immersion helps create distinctive brand interaction and leaves the user with a favorable impression [22].
7.5 VR travel tours
VR travel tours can be created for several venues beyond hotels, including tourist destinations and historical sites. VR tours of well-known tourist attractions can be developed to give users a sense of what it is like to be there.
8 Benefits of virtual reality in the vacation industry
VR has several advantages for tourism, including:
– enabling users to visualize themselves at a destination;
– the ability to display a destination in high-resolution 360°;
– permitting independent scene exploration by the user;
– providing the user with meaningful and distinctive experiences;
– fostering distinctive brand interaction;
– making it possible for travel agencies to stand out from the competition;
– giving individuals who are unable to travel the chance to experience travel; and
– lessening tourism's negative effects on areas at risk.
9 The future of VR travel
The future of VR travel is difficult to foresee, even for specialists such as Immersion VR. The use of VR for travel continues to rise, but it is hard to predict the direction in which this field will go or which VR travel technologies will emerge. Nevertheless, some clear patterns can be identified in the market.
10 VR travel trends
The following trends are typical in VR travel:
– VR travel experiences provided by travel agencies
– Virtual hotel tours provided by hotels and travel agencies
– Technology to increase the realism of VR travel
– Elderly-friendly VR travel adventures
– VR flight simulations
– Virtual excursions to famous locations
– Interfaces for virtual booking
10.1 Virtual experiences of landmark destinations
Well-known iconic locations are frequently affected by the environmental problems brought on by too many visitors. Creating VR experiences of these places allows the number of visitors to be managed, lessening the impact on the environment. As the number of VR headsets rises, these kinds of experiences are certain to become a standard feature in many homes around the world.
10.2 VR flight experiences
VR flight experiences are a recent addition to the field of VR travel. The first VR airline in the world was developed by First Airlines, a Tokyo-based business. Passengers board a mock aeroplane cabin, where they receive a preflight safety demonstration and gourmet meals; the intention is to give them a true sense of what it is like to fly somewhere. Upon "arrival", the travelers are given a virtual tour of the destination using VR goggles. Despite its infancy, this type of experience has the potential to give people access to travel options they would not otherwise have.
10.3 Virtual reservation interface
Another very new innovation in tourism VR is the virtual booking interface. Wearing a VR headset, users can make travel arrangements with the entire booking procedure carried out in VR: everything from picking a hotel to paying for the trip takes place while the user is in VR. Although there are not yet many implementations, travel organizations can expect businesses adopting this strategy to boost conversion rates. It appears to be the natural progression from one-off travel experiences to control of the whole booking process. Although virtual booking interfaces are still in their early stages, they are becoming more familiar in the travel industry.
10.4 Virtual reality travel for the elderly
For individuals who are unable to travel, especially the elderly, VR is a viable substitute for actual travel. Older adults are not typically the first group people think of when discussing VR. However, being able to give them travel opportunities that they would not otherwise have is often quite fulfilling [21–23].
11 Can virtual reality replace travel experiences?
While VR is fantastic for short bursts of immersion, it cannot yet match the continuous immersion that comes from being in a real place. In fact, in a recent poll conducted by the European travel agency Italy for Real, 81% of adults indicated that VR cannot replace travel, 92% of respondents said that visiting a place in VR was not the same as going there in person, and 77% said it was important to them to try the local cuisine. Smells and the general atmosphere that people and animals produce are further things VR cannot reproduce. Although VR technology is developing quickly, travel will not be entirely replaced by VR any time soon. Currently, VR in tourism is most effective when used to promote travel destinations.
12 Forms of AR travel applications
For those wondering how to include AR in travel apps, the following approaches are available:
12.1 Location-based applications
Location- or position-based applications are AR applications that track the user's location. Most rely on data from the GPS, accelerometer, or digital compass. By augmenting the actual environment with location-based AR, app developers have the chance to interact with consumers in a more engaging and personal way. The technology enables an experience relevant to where the user is, instead of the standard, identical display every user receives in a conventional app. The location-based approach to AR is ideal for creating AR-based city tours or navigation applications.
12.1.1 What is location-based augmented reality?
For app developers, smartphones with location services enabled are an ideal playground. Letting people determine their outdoor position may at first seem relevant only to apps like Google Maps, but it also means that notifications can be triggered depending on location, which is particularly useful for prompting people to take action when they are close to significant places. In addition, there are various ways the setting can be used to provide a distinctive experience, and the development of location-based AR apps for mobile devices is one of them. Digital data can be overlaid on the real, spatial context through the screen of a mobile device, for example as a picture function in an app that generates digital animations, images, or other content. By combining GPS, motion tracking, existing geomarkers, location-based sensors, and AR technologies, developers can offer geo-based AR, which permits the placement of virtual objects in the actual environment via a digital layer that sits on the screen of a mobile device. The potential uses of the technology are only now beginning to be investigated across a variety of industries and, done correctly, can result in a particularly engaging indoor or outdoor AR experience for the user.
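As a hedged sketch of the geo-based side of this idea, the snippet below computes the great-circle distance and initial compass bearing from the user's GPS fix to a geo-anchored point of interest; an AR layer can compare that bearing with the device's heading to decide where the annotation should appear. The coordinates, names, and constants are illustrative assumptions rather than part of any particular SDK.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, metres (assumed constant)

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (m) and initial bearing (degrees clockwise
    from north) from point 1 (the user) to point 2 (the point of interest)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)

    # Haversine formula for the distance.
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    # Initial bearing toward the point of interest.
    y = math.sin(dlmb) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlmb)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing

# Hypothetical example: a user standing near the Eiffel Tower.
d, b = distance_and_bearing(48.8570, 2.2950, 48.8584, 2.2945)
print(f"{d:.0f} m away, bearing {b:.0f} degrees")
```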
12.1.2 Challenges in location-based AR development
A study written by a number of academics at the University of Ulm highlighted the technological considerations that must be made for location-based AR and discussed some of the difficulties in creating AR applications that use the technology. Even though advancements in recent years have made the technology more precise and available, some issues still need to be resolved before developers can create AR apps. However, this should not derail your intentions to use location-based AR in an app; find an app development partner that is experienced in dealing with these challenges. The following are some of the most typical difficulties encountered when creating AR apps:
– Multiple sensors on the smartphone or mobile device must be accurately queried at the same time in order to increase the accuracy of location-based AR. This makes it possible for the device's location and motion tracking, as well as other important parameters, to be taken into account appropriately (a simple smoothing approach is sketched after this list).
– The desired "virtual" elements or areas of interest must be displayed accurately by the device on the screen, and the scene must be recognized correctly regardless of the angle the smartphone camera is facing.
– Location-based AR can be rendered in a variety of ways, but the method for determining the separation between two places must always be accurate and efficient. This often relies on GPS information and therefore occasionally has problems.
– Maintaining accuracy is, in general, the greatest obstacle to developing AR applications and effectively utilizing location-based AR technologies. Behavior may also differ between Android and iOS devices, which use different location services. The user experience will be diminished if a device is unable to perform these tasks consistently with your app [24].
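As noted in the first bullet, one simple way to make heading data more stable is to fuse the gyroscope (smooth but drifting) with the compass (absolute but noisy). The complementary filter below is a generic sketch of that idea under assumed blend and timing values; it is not taken from the University of Ulm study cited above.

```python
def fuse_heading(prev_heading, gyro_rate_dps, compass_heading, dt, alpha=0.98):
    """Complementary filter for device heading, in degrees (0..360).

    prev_heading    : last fused heading estimate
    gyro_rate_dps   : gyroscope yaw rate, degrees per second
    compass_heading : magnetometer heading (absolute but noisy)
    dt              : seconds since the previous update
    alpha           : weight given to the integrated gyro path (assumed value)
    """
    # Integrate the gyro for a smooth short-term prediction.
    predicted = (prev_heading + gyro_rate_dps * dt) % 360.0

    # Blend in the compass along the shortest angular arc so the
    # 359 -> 0 degree wrap-around does not produce a huge jump.
    error = (compass_heading - predicted + 540.0) % 360.0 - 180.0
    return (predicted + (1.0 - alpha) * error) % 360.0

# Hypothetical 20 ms sensor update: slightly drifting gyro, compass says 92 deg.
heading = fuse_heading(prev_heading=90.0, gyro_rate_dps=5.0,
                       compass_heading=92.0, dt=0.02)
```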
12.2 Marker-based applications
This type of AR, also known as recognition-based AR or image recognition, is based on the recognition of markers or user-defined pictures. A marker is necessary for marker-based AR in order to activate an augmentation. Markers, which can be printed on paper or be real-world objects, are distinct patterns that cameras can quickly identify and analyze, and they are visually independent of their environments. Marker-based AR works by scanning a marker, which results in an AR-based experience (an item, text, video, or animation) being displayed on the device. An app is typically needed to enable users to scan markers from their device using the camera stream.
There are several uses for marker-based AR technologies. After completing object recognition, marker-based (or recognition-based) AR provides detailed information about the recognized entity: it identifies the subject in front of the camera and shows information about it. Codes, real objects, or printed pictures can all serve as markers. The most widely used AR applications for travel are those that rely on marker-based technology. The travel agency Thomas Cook, for instance, provides printed documents in addition to digital content via AR software that uses markers.
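As a minimal sketch of the scanning step, the snippet below detects printed ArUco markers in a camera stream using OpenCV's aruco module; the dictionary choice and the "brochure page" action are illustrative assumptions, and this is just one possible toolkit rather than the one used by the apps mentioned above.

```python
import cv2

# The aruco module ships with opencv-contrib-python (and with the main
# package in recent releases). API shown is for OpenCV >= 4.7; older
# versions use cv2.aruco.detectMarkers(gray, dictionary, parameters=...).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)                       # device camera stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        # Outline each detected marker and attach hypothetical travel content.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        for marker_id in ids.flatten():
            print(f"Marker {marker_id}: show brochure page {marker_id}")
    cv2.imshow("marker-based AR (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```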
12.3 SLAM
Simultaneous localization and mapping (SLAM) is the computational task of creating or updating a map of an unknown environment while simultaneously tracking an entity's position inside it. At first glance this appears to be a circular problem, but there are various algorithms that can solve it approximately in tractable time for specific settings. The extended Kalman filter, the particle filter, covariance intersection, and GraphSLAM are examples of popular approximation techniques. Robot navigation, robotic mapping, and odometry for VR or AR all require SLAM techniques, which are based on computational geometry and computer vision concepts. The goal of SLAM algorithms is operational compliance rather than perfection, because they are designed to work with the resources that are currently available. Published approaches are used in self-driving automobiles, remotely piloted aerial vehicles, remotely operated underwater vehicles, lunar rovers, newer domestic robots, and even inside the human body. SLAM software quickly recognizes items in the immediate vicinity of the user; to recognize colors, patterns, and other physical object properties, SLAM technology employs sophisticated algorithms, and modern SLAM implementations run smoothly on current hardware [25].
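The extended Kalman filter mentioned above alternates a motion "predict" step with a measurement "update" step. The toy one-dimensional filter below sketches that cycle for a single position estimate with assumed noise values; a full EKF-SLAM system applies the same recursion to the complete robot pose plus every mapped landmark.

```python
def kalman_step(x, p, u, z, q=0.05, r=0.4):
    """One predict/update cycle of a 1-D Kalman filter.

    x : previous position estimate          p : its variance
    u : odometry (commanded displacement)   z : measured position
    q : motion noise variance (assumed)     r : sensor noise variance (assumed)
    """
    # Predict: move by the odometry reading and grow the uncertainty.
    x_pred = x + u
    p_pred = p + q

    # Update: blend in the measurement, weighted by the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# A robot drives forward in 1 m steps while a noisy sensor reports position.
x, p = 0.0, 1.0
for step, measurement in enumerate([1.2, 1.9, 3.1, 4.0], start=1):
    x, p = kalman_step(x, p, u=1.0, z=measurement)
    print(f"step {step}: position ~ {x:.2f} m (variance {p:.3f})")
```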
13 What AR apps can offer to travelers?
13.1 Hotels and lodgings
Thanks to AR services, hotels and lodging facilities have endless opportunities to communicate with visitors and travelers. AR enables companies to inform prospective customers in an immersive way, with 360° visual tours and 3D views. Booking demand may rise as customers feel more confident about a property after receiving interactive information. Businesses can also improve their on-site operations by helping visitors find their way around the premises and use the amenities. Nowadays, travelers can access their devices from anywhere.
By incorporating interactive hotel features into the physical environment, customers can also be tempted to visit nearby attractions. Some hotels, for example, present guests with an immersive wall map that enables them to quickly discover crucial information about surrounding tourist sites.
13.2 Augmented tourist destinations
The use of AR in tourism also includes interactive tours of popular attractions and landmarks. AR-based apps can let users take tours of historic sites and learn about them in depth. Simply pointing the app at a structure or landmark and seeing a visualization of its history and the events that transpired there provides an incredible experience for voyagers, as well as a real-time tool to learn and explore further. This capability of travel apps can be expanded beyond AR travel sites: a user may point their handset toward a restaurant to retrieve menus and reviews, for example. Tourists traveling to augmented destinations can gather information on important sites of interest, which elevates the experience.
13.3 Immersive navigation
Travelers who do not understand the local language can use AR travel applications to ensure they never get lost in an unfamiliar city or rural location. Visitors can use AR-based navigation apps to get where they need to go without having to ask for directions. Instructions are simple to follow thanks to features such as arrows and pointers. Additionally, these apps use Google Maps and the device's camera to enable immersive navigation and improve the user experience. Google Maps is a true innovator in providing real-time instructions: in April 2018 it released a new update to improve spatial orientation, combining real-time street view, navigational advice, and other useful information in its AR software.
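To hint at how such directional arrows stay pointed the right way, the small sketch below (an illustrative assumption, not Google's implementation) converts the bearing to the destination and the device's compass heading into the angle by which an on-screen arrow overlay should be rotated.

```python
def arrow_rotation(bearing_to_destination, device_heading):
    """Angle (degrees) by which to rotate an on-screen 'go this way' arrow.

    Both inputs are compass bearings, 0..360 clockwise from north.
    0 means the arrow points straight up (destination dead ahead);
    positive values rotate it clockwise (destination to the right).
    """
    return (bearing_to_destination - device_heading + 540.0) % 360.0 - 180.0

# The destination lies at bearing 75 deg while the phone faces 60 deg,
# so the arrow leans 15 deg to the right.
print(arrow_rotation(75.0, 60.0))    # 15.0
# Facing 350 deg with a destination at 10 deg wraps cleanly to +20.
print(arrow_rotation(10.0, 350.0))   # 20.0
```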
13.4 Local transportation Local transit is being greatly enhanced by AR. With AR-based travel applications, which offer live map views of roads, traffic, destinations, and the best areas to travel in the city, things become so much easier and safer, even in foreign locales. The same apps can be used by travelers on public transportation in addition to when they take cabs. In the future, tourist guides may not even be necessary, thanks to AR apps, since travelers will always have access to the necessary information on their smartphones.
In fact, AR technology may turn a metro map into a useful trip guide that is available in a variety of languages.
13.5 Beacon technology and push notifications The use of Bluetooth-enabled beacons as part of AR is another exciting application. This technology is incredibly beneficial to the travel and tourism industry since it enables companies and marketers to send push alerts or activate specific services when customers visit a specific location. The application of this technology by Starwood Hotels is among the best. Customers are able to unlock their hotel rooms using beacons when they are close to the door. Maps, reviews, menus, exclusive offers, and discount coupons can also be sent to users when necessary.
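A hedged sketch of the proximity logic behind such beacon triggers is shown below: the received signal strength indicator (RSSI) is converted into a rough distance using a log-distance path-loss model, and a notification fires once the guest is close enough. The calibration constants, threshold, and notification text are illustrative assumptions; commercial beacon SDKs expose this functionality in their own ways.

```python
def estimated_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Rough distance to a BLE beacon from a log-distance path-loss model.

    tx_power_dbm is the RSSI measured at 1 m from the beacon (an assumed
    calibration value); path_loss_exponent is ~2 in free space, higher indoors.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

def maybe_notify(rssi_dbm, threshold_m=2.0):
    """Return a (hypothetical) push-notification text when the guest is near."""
    d = estimated_distance_m(rssi_dbm)
    if d <= threshold_m:
        return f"Welcome back! Tap to unlock your room (about {d:.1f} m away)."
    return None

print(maybe_notify(-62))   # roughly 1.4 m -> returns the notification text
print(maybe_notify(-80))   # roughly 11 m  -> returns None
```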
13.6 Augmented reality gamification The gaming business has employed AR extensively, but now, and deservedly so, the travel sector is starting to do the same. The usage of an AR tourism app by a hotel or other hospitality provider can improve the experience of its customers by adding a playful element to their actual environment. As an illustration, Best Western employs AR to let kids see Disney characters within their hotel. Adult users of AR applications can renovate and personalize their rooms [19].
13.7 Augmented reality for overcoming language differences
While visiting a foreign country is appealing, doing so can be challenging without a translator to help. In a foreign country, it might be difficult to read menus, signs, and other written communication. Thanks to AR technology, smartphones can now translate a variety of foreign languages. By using AR to translate objects and text, these apps give travelers from all over the world an immersive, engaging experience.
14 Other groundbreaking uses of AR in tourism
14.1 AR for metropolitan tours
The ability to improve or change the perception of the environment is one of the main advantages of AR. With AR apps, tourists may explore existing architectural treasures
in detail and learn more about them. AR applications can bring historical events back to life and show how the neighborhood around a landmark looked in the past. They can also transport the user into the future to see recently planned structures. The Paris, Then and Now app, which takes users back to the Paris of the twentieth century and offers fascinating insights and a singular experience, is one of the best instances of AR in the travel and tourism industry.
14.2 AR for museums and galleries
Museums are no longer dull places to visit, thanks to the additional layers of content AR offers. With an AR app, visitors can quickly discover in a fun way how various technological devices functioned, what prehistoric species looked like, and more. For example, visitors can search for various plants and animals with such an app; along with being entertained, users gain knowledge about the habitats of many plants and animals.
15 Importance of augmented reality in the travel industry
Having examined the use of AR in travel and tourism applications, let us now consider how AR can help with the creation of travel apps. An app is the finest travel companion of today and, in addition to alleviating a number of travelers' annoyances, AR-based travel apps benefit businesses by expanding their services and providing more individualized experiences to their clients. The opportunities that AR presents are limitless.
15.1 Easy access to information
The tourism sector is booming. It enables individuals to become more open to novel encounters, travel to other places, and respect cultures other than their own. By utilizing the power of AR, users can choose their destinations and activities in a way that is much more fulfilling. AR can be accessed from mobile devices, which makes information portable and easy to reach. For instance, the user may identify Wi-Fi hotspots, find information about and reviews of nearby places, and even look at the current weather forecast.
15.2 Better advertising and promotion
AR-based advertising and marketing can influence travelers' decisions because it allows for a more interactive visualization of properties and services. With virtual tours that give app users nearly lifelike experiences, travel companies can now offer distinctive and inventive storytelling. AR-focused marketing tactics can also increase trust and turn new visitors into devoted patrons, in addition to having an impact on booking decisions. According to Expedia, interactive content from travel marketers has a 78% or higher influence on the decisions made by travelers.
15.3 Improved convenience and comfort
Mobile apps make timely access to on-the-go information possible, which can significantly improve travelers' experiences. With the aid of AR-based tourism apps, travelers can follow arrows and other navigation indicators through a foreign city. An AR app serves as a 24/7 guide and helps users from other countries overcome linguistic hurdles. From making hotel and travel arrangements to locating the top places to shop, eat, and drink, as well as touring well-known tourist locations, it improves customers' comfort and convenience.
15.4 Making tourist places more striking
AR technology enables designers of travel applications to include 3D models of historical sites and monuments, giving users a time-traveling immersive experience. The use of AR technology is significantly increasing the appeal of zoos and amusement parks as tourist attractions. By fusing art, science, culture, and architecture, AR can increase the allure of any tourist destination more than ever before. The four foundational components of the tourism sector can be strengthened through AR:
1) Transportation: guide tourists to the many transport routes available in an unfamiliar city.
2) Accommodation: provide information about reviews and other key hotel features.
3) Catering: provide tourists with information about the many food options offered by local eateries.
4) Places of interest: enhance the presentation and experience of tourist attractions [25, 26].
16 Conclusion
It is hard to overestimate the importance of AR in the travel industry. AR has so much promise that it may even help businesses find brand-new opportunities. The tourism industry already offers a wonderful experience, and AR not only solves problems but also enhances the holiday experience. With the help of AR, hotels, booking services, neighborhood restaurants, and all other business participants can increase their consumer bases. This chapter has shown why AR is important to the travel industry's future and how it is applied to the creation of tourism-related apps. Businesses are advised to adopt AR in their travel app solutions, ideally working with a development partner that combines in-depth knowledge of AR technology with extensive domain understanding of the tourism sector and the ability to create tailored mobile app solutions; such a partnership can help a company advance.
References
[1] Faisal, A. Aldo. 2017. Computer Science: Visionary of Virtual Reality. Nature 298–299. https://doi.org/10.1038/551298a.
[2] Schueffel, Patrick. 2016. "The Lengthy History of Augmented Reality." Huffington Post.
[3] Schueffel, Patrick. 2017. The Concise Fintech Compendium. Fribourg: School of Management Fribourg, Switzerland.
[4] Rosenberg, Louis B. 2019. "The Use of Virtual Fixtures as Perceptual Overlays to Enhance Operator Performance in Remote Environments."
[5] Wu, Hsin Kai, Silvia Lee, Hsin Yi Chang, and Jyh Chong Liang. 2013. Current Status, Opportunities and Challenges of Augmented Reality in Education. Computers & Education 62: 41–49. https://doi.org/10.1016/j.compedu.2012.10.024.
[6] Steuer. 2016. Defining Virtual Reality: Dimensions Determining Telepresence. Department of Communication, Stanford University.
[7] "Introducing Virtual Environments." 2016. Wayback Machine, National Center for Supercomputing Applications, University of Illinois.
[8] Rosenberg, Louis B. 1993. "Virtual Fixtures: Perceptual Tools for Telerobotic Manipulation." In IEEE Virtual Reality Conference. https://doi.org/10.1109/vrais.1993.380795.
[9] Groundbreaking Augmented Reality-Based Reading Curriculum. 2011. PRWeb.
[10] Moro, Christian, Zane Stromberga, Athanasios Raikos, and Allan Stirling. 2017. The Effectiveness of Virtual and Augmented Reality in Health Sciences and Medical Anatomy. Anatomical Sciences Education 10(6): 549–559. https://doi.org/10.1002/ase.1696.
[11] "Don't Be Blind on Wearable Cameras, Insists AR Genius." 2018. SlashGear.
[12] Stewart-Smith, Hanna. 2012. "Education with Augmented Reality." ZDNet Japan.
[13] Shumaker, Randall, and Stephanie Lackey. 2015. "Virtual, Augmented and Mixed Reality: 7th International Conference, VAMR 2015." Springer, held as part of HCI International 2015, Los Angeles, CA, USA.
[14] Mourtzis, Dimitris, Vasilios Zogopoulos, Ioannis Katagis, and Panagiotis Lagios. 2018. Augmented Reality Based Visualization of CAM Instructions towards Industry 4.0 Paradigm: A CNC Bending Machine Case Study. Procedia CIRP 70: 368–373. https://doi.org/10.1016/j.procir.2018.02.045.
[15] Michalos, George, Niki Kousi, Panagiotis Karagiannis, Christos Gkournelos, Konstantinos Dimoulas, Spyridon Koukas, Konstantinos Mparis, Apostolis Papavasileiou, and Sotiris Makris. 2018. Seamless Human Robot Collaborative Assembly – An Automotive Case Study. Mechatronics 55: 194–211. https://doi.org/10.1016/j.mechatronics.2018.08.006.
[16] Ronaghi, Mohammad Hossein, and Marzieh Ronaghi. 2022. "A Contextualized Study of the Usage of the Augmented Reality Technology in the Tourism Industry." Decision Analytics Journal 5.
[17] Research Human Computer Interaction (RHCI). 2021. Virtual and Augmented Reality, Wearable Technologies. www.cs.nycu.edu.tw.
[18] Shu, Jiayu, Sokol Kosta, Rui Zheng, and Pan Hui. 2018. "Talk2Me: A Framework for Device-to-Device Augmented Reality Social Network." In IEEE International Conference on Pervasive Computing and Communications. https://doi.org/10.1109/percom.2018.8444578.
[19] Pestek, Almir, and Maida Sarvan. 2020. "Virtual Reality and Modern Tourism." Journal of Tourism Futures 7(2): 245–250.
[20] Shoaib Siddiqui, Muhammad. 2022. "Virtual Tourism and Digital Heritage: An Analysis of VR/AR Technologies and Applications." International Journal of Advanced Computer Science and Applications 13(7).
[21] Saenz, Aaron. 2009. "Augmented Reality Does Time Travel Tourism." Singularity Hub.
[22] Sung, Dan. 2011. "Augmented Reality in Action – Travel and Tourism." Pocket-lint.
[23] Dawson, Jim. 2009. "Augmented Reality Reveals History to Tourists." Live Science.
[24] Bartie, Phil, and William Mackaness. 2006. Development of a Speech-Based Augmented Reality System to Support Exploration of Cityscape. Transactions in GIS 10(1): 63–86. https://doi.org/10.1111/j.1467-9671.2006.00244.x.
[25] Bederson, Benjamin B. 2002. "Audio Augmented Reality: A Prototype Automated Tour Guide." Bell Communications Research. In ACM Conference on Human Factors in Computing Systems, 210–211.
[26] Jain, Puneet, Justin Manweiler, and Romit Roy Choudhury. 2015. "OverLay: Practical Mobile Augmented Reality." In ACM MobiSys.
Aman Anand, Rajendra Kumar✶, Praveen Pachauri, Vishal Jain, Khar Thoe Ng
12 6G and IoT-supported augmented and virtual reality–enabled simulation environment
Abstract: Many recent innovations focus on the use of cutting-edge technologies like blockchain, the internet of things (IoT), and machine learning in the implementation of augmented and virtual reality (AR/VR) systems for social learning. As AR companies deploy their AR clouds, blockchain technology is expected to be integrated with AR in the near future for purposes such as gaming, healthcare, and simulation. Blockchain is a technology in the form of a digital ledger that stores transactional records (the blocks) in multiple databases (the chains) across a network connected through peer-to-peer nodes. AR is an enormously promising technology for unlocking the full potential of the IoT in all fields, whether healthcare, gaming, or social learning. AR applications use many kinds of data generated by IoT devices and components, helping developers be more effective and productive and making the experience more exciting and enjoyable for users. AR and artificial intelligence are closely related technologies that application developers combine to create different experiences for users. This chapter discusses the infrastructure requirements, in terms of blockchain and the IoT, for simulating various things in AR/VR environments. The chapter includes simulation examples in domains such as e-games, smart cities, and amusement and theme parks. The presented simulations are 6G supported, and the real/virtual objects are captured by cameras equipped with IoT sensors. Keywords: Simulation environment, blockchain, IoT, augmented and virtual reality, Unity
✶ Corresponding author: Rajendra Kumar, School of Engineering and Technology, Sharda University, Greater Noida, India, e-mail: [email protected] Aman Anand, Vishal Jain, School of Engineering and Technology, Sharda University, Greater Noida, India Praveen Pachauri, UP Institute of Design, Noida, India Khar Thoe Ng, Wawasan Open University, George Town, Malaysia
https://doi.org/10.1515/9783110981445-012
1 Introduction
Digital content and the internet of things (IoT) have created a better environment for the financial sector. With the large-scale growth of the IoT, technologies such as augmented and virtual reality (AR/VR) are also emerging as key drivers of a range of mobile and desktop applications [1]. Sixth-generation (6G) networks are prominent in supporting improved holographic projection using terahertz bandwidths, ultralow latency, and device connectivity at a large scale. Data may be exchanged among autonomous networks over an unsecured channel; therefore, to ensure data security and privacy for the different stakeholders, blockchain technology opens new dimensions for managing intelligent resources, controlling user access, providing auditability, and storing transactions in a well-defined way. Blockchain and 6G can converge in the near future in AR/VR applications, as many emerging investigations suggest. To date, many researchers have surveyed the integration of blockchain and 6G in AR/VR as isolated entities. Virtual representation of real-world things in the form of simulation is already used in many AR/VR applications [2]. 6G systems may rely substantially on virtual worlds and benefit from their rich contextual information to improve performance and reduce communication expenditure. A 6G network may interact with AR/VR applications and different sensors to share users' data across multiple nodes [1]. This open channel lets users provide data from different sources while exposing them to external threats in a decentralized environment. The collected data undergoes a classification process on the basis of information such as data localization, data transformation, and users' personal details, along with observable information like avatars; computed information like recommendations, marketing, and biometric details [3]; and associated data like login credentials, contact information, payment details, wallets, and the machine's IP address. Caution is therefore warranted regarding possible attacks on decentralized AR/VR applications. The attacks may include impersonation threats, internet protocol version 6 (IPv6) spoofing, session hijacking, and anonymous session links seeking unauthorized access to personal and sensitive information. Blockchain technology can ensure trust, transparency, and accountability and help avoid fraudulent activities in an AR/VR application. It works as an immutable ledger that records transactions in a chronological and timestamped fashion. Blockchain technology allows block details to be added through a consensus mechanism that is transparent to all network users [4]. The use of blockchain can eliminate the need for third-party tools to provide transparency and security for trusted data communication over an AR/VR channel. Major industrial AR/VR deployments require blockchain networks that are preferable to public blockchains owing to easier node verification and enhanced transaction throughput [5]. The integration of 6G and blockchain in the AR/VR domain can lead to a secure and accountable experience that satisfies a desirable quality of service (QoS) for users. The use of 6G services provides much-appreciated support to the
12 6G and IoT-supported augmented and virtual reality–enabled simulation environment
201
latest technological advancements. Therefore, 6G is envisioned to support a blockchain-based AR/VR system with guaranteed trust, security, privacy, and extreme reliability [6].
2 Literature review We examine some of the recent research on the intersection of IoT, AR/VR, and 6G in simulation environment. One of the key challenges in AR/VR applications is the need for low-latency and high-bandwidth networks. This is where 6G comes into an important role. 6G wireless networks are expected to provide ultra-high-speed data transfer rates, very low latency, and massive connectivity to support AR/VR applications. Ballo et al. [7] discussed 6G wireless networks to provide data transfer rates of up to 1 Tbps, which is 100 times faster than 5 G. IoT devices are also becoming increasingly important in AR/VR applications. The use of IoT devices such as sensors and cameras can enhance the user experience in AR/VR applications. For example, a smart home environment equipped with IoT devices can be integrated into an AR/VR system to provide a more immersive experience. IoT devices can be used to collect data about the environment and the user, which can then be used to optimize the AR/VR experience [8]. Several studies have explored the use of AR/VR in combination with IoT devices. A study [9] proposed an AR-based platform that utilizes IoT devices to monitor and control smart home environments. The system allows users to interact with IoT devices through an AR interface, providing a more intuitive and immersive user experience. Another study [10] proposed a VR system that uses IoT devices to create a more immersive and realistic virtual environment. The system uses IoT sensors and cameras to collect data about the environment, which is then used to create a 3D virtual environment that the user can interact with. One of the main benefits of using AR and VR in simulations is the ability to provide a safe and controlled environment for training. Simulation training can reduce errors and improve performance by allowing trainees to practice in a controlled environment without the risk of harm or damage. AR and VR provide an even more immersive training experience by allowing trainees to interact with digital objects and environments, which can enhance the realism and effectiveness of the training [11]. Another benefit of using AR and VR in simulations is the ability to provide personalized and adaptive training experiences. A study [12] proposed an AR-based simulation system for training surgeons, which uses haptic feedback to provide a realistic sense of touch during surgery simulations. The system also includes personalized feedback based on the user’s performance, which can help to identify areas for improvement and tailor the training to the individual user’s needs.
202
Aman Anand et al.
Several other studies have also explored the use of AR and VR in simulations for various applications. For example, [13] proposed a VR-based simulation system for firefighter training, which provides a realistic and immersive environment for trainees to practice firefighting techniques. The system includes realistic fire and smoke simulations, as well as interactive tools and equipment for trainees to use. Another study by [14] proposed an AR-based simulation system for training assembly line workers, which provides interactive guidance and feedback during the assembly process. The system uses AR to overlay digital instructions and feedback onto the physical workspace, allowing trainees to learn and practice in a more efficient and effective manner.
3 Potential blockchain and 6G assistance in the AR/VR space
Blockchain's technological support provides high transaction security and ensures users' trust in a transparent way. Figure 1 summarizes blockchain-based solutions to common AR/VR problems.
Figure 1: Blockchain-oriented solutions for AR/VR problems: decentralization (engaging nodes in transactions without relying on a central authority for record maintenance), immutability (data stored in a distributed ledger by a consensus mechanism cannot be altered or tampered with), anonymity (enabling trust among nodes even when they are unknown to each other), security (safeguarding data against possible threats using cryptographic algorithms), scalability (the ability to support an increasing load of transactions and number of nodes in the network), and tokenization (facilitating the digital representation of objects, services, and rights).
Blockchain has the potential to play a prominent role in securing AR/VR applications, not only in social learning but in almost all areas. In a decentralized blockchain system, a user can create a completely virtual environment
supported by a set of protocols without interfering with, or facing risk from, the users of other services. Blockchain permits copyright protection of content, and users may keep their records in a blockchain-supported system. Blockchain can also extend the uses of VR applications by combining them with cryptocurrency marketplaces to increase profits. Some of the advantages of combining 6G and blockchain in AR/VR include the following:
1. 6G provides support for virtualizing services such as 3D imaging, driverless vehicles, simplification of digital twins in IR 4.0, and the IoT.
2. It helps secure mass data sharing over 6G communication channels and maintain trust in AR/VR applications.
3. Blockchain with AR/VR ensures trusted decentralized activities at multiple nodes of the network.
4. AR/VR assets need fast imaging, video display, and processing, so data is normally stored and communicated using a local central server whose storage and computation power are limited. The use of blockchain enhances the sharing and securing of the communicated data.
5. AR/VR applications need high bandwidth, so a central server may be overloaded by mass communications between AR/VR systems. In this case, blockchain can provide data decentralization.
6. Blockchain also improves cybersecurity for sharing sensitive information and public security for army applications using AR/VR machines.
7. Blockchain can ease the commercial use of AR devices by creating a user-oriented marketplace to store and upload content in a decentralized way.
8. A blockchain-oriented decentralized system can help users download and upload content, creating a marketplace that activates commercial AR/VR devices.
9. Blockchain-based tokens enhance financial transactions in which AR systems perform peer-to-peer payments [7].
A minimal illustration of the hash chaining that underpins these guarantees is sketched below.
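The following is a minimal, self-contained sketch of a hash-chained ledger in which each block stores the hash of its predecessor, so altering an earlier record invalidates every later link. It is a toy illustration of the principle only, with hypothetical AR transactions and no consensus, mining, or networking, and it is not the design proposed in this chapter.

```python
import hashlib
import json
import time

def make_block(transactions, previous_hash):
    """Create a timestamped block whose hash covers its own content
    and the hash of the previous block."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def is_valid(chain):
    """A chain is valid only if every link still points at an unmodified block."""
    for prev, curr in zip(chain, chain[1:]):
        recomputed = hashlib.sha256(json.dumps(
            {k: prev[k] for k in ("timestamp", "transactions", "previous_hash")},
            sort_keys=True).encode()).hexdigest()
        if curr["previous_hash"] != recomputed or prev["hash"] != recomputed:
            return False
    return True

# Hypothetical AR asset transactions recorded block by block.
chain = [make_block(["genesis"], previous_hash="0" * 64)]
chain.append(make_block(["alice buys AR avatar #42"], chain[-1]["hash"]))
chain.append(make_block(["bob licenses a 3D venue model"], chain[-1]["hash"]))
print(is_valid(chain))                                # True
chain[1]["transactions"] = ["alice buys nothing"]     # tamper with history
print(is_valid(chain))                                # False
```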
4 Key takeaways in 6G, blockchain, and AR/VR
The key element of systems that use 6G and blockchain to support AR/VR is the combination of 6G services, which provide complex real-time connections at high bandwidth, with blockchain, which secures data communication between AR/VR applications. As 5G/6G services gain popularity, AR/VR applications can obtain high connectivity at very low latency. Many architectures have been proposed to support decentralized systems for responsive edge services, smart parking, vehicular networks, and IR 4.0 products. Tahir et al. [8] discussed a framework for blockchain solutions in a 5G environment. Dai et al. [9] presented a survey of issues in the IoT, covering the
key fundamentals of blockchain-based solutions for addressing trust and interoperation problems among stakeholders. Alrubei et al. [10] provided a brief review and analyzed latency issues by demonstrating the integration of blockchain and the IoT. AR can be seen as a middle layer between VR and the real-world environment and can be described on the reality–virtuality continuum (RVC). Figure 2 presents this continuum: it runs from real reality through augmented reality and augmented virtuality to virtual reality, with mixed reality spanning the intermediate region, and the degree of interaction experienced by AR/VR users changes across these levels.
Figure 2: Reality/virtuality continuum of AR/VR.
VR depends heavily on four major techniques: visual display, graphics, tracking methods and tools, and database creation and maintenance. Effective VR rendering plays an important role in managing movement and tracking speed, with devices accounting for processing latency. VR offers interesting use cases in support of applications in domains such as vehicle simulation, entertainment, vehicle design, architecture design, microscopy, and so on. VR is seen as a technology that improves user experience and real-time control; however, the expansion of VR into industrial settings has been held back by high communication latency, the heavy rendering load of simulation models, and similar constraints [11].
5 Sixth-generation vision in augmented and virtual reality
6G communications are envisioned to support complex network connectivity, massive coverage areas, low-powered terminals, and effective artificial intelligence-based services. 6G is expected to support around 100 parallel sensor connections per square kilometer. This also allows computational intelligence to improve AR/VR perception models with high reliability and wide coverage. It is envisioned as supporting smart-city vertical designs such as vehicular networking, the
internet of biosensor things, massive edge computation, and optical radio access with photonic communications for highly complex visible light communication [12]. 6G technology supports virtualized, software-defined networks and removes the need for manual processing on nodes. This highlights the limitations of the 4G long-term evolution (4G-LTE) network and the 5G radio-access network (5G-RAN), whose operations and task management are tied to physical infrastructure; in contrast, 6G networks are operated as virtualized components to meet the resource requirements of terminals. 6G-based services enable haptic man–machine interaction and are predicted to drive Industry Revolution 4.0 across the whole cyber-virtual space. The digital content produced may be transmitted via an intelligent physical system and the media access control layer protocol using the edge nodes. In conclusion, 6G technology is capable of providing many services to AR/VR applications:
– It can empower the entire AR/VR environment to replace the legacy of manual processes in industrial automation.
– 6G may broaden the IoT revolution in AR/VR, led by immersive extended reality, digital twin processes, and robotics in day-to-day activities.
– In AR/VR-based healthcare, 6G may revolutionize the telemedicine sector through interactive and haptic services.
– Digital robots in learning and training can communicate with remote instructors to illustrate things in an attractive way, supported by real-time, responsive 6G network services. Additionally, VR services may allow instructors and trainers to provide precise examination and visualization of 3D object models, controlled via the 6G network.
6 Cryptocurrency and blockchain-based AR/VR
Cryptocurrency has gained prominence in the market through Bitcoin ledgers and its use as digital money that can be shared and distributed across many peer-to-peer nodes. It has laid the foundation for blockchain to be applied in many application areas alongside other technologies, including third-party financial services [13]. Cryptocurrency-oriented transactions use immutable ledgers, in which virtual money is deducted from the payer's digital wallet and credited to the receiver's digital wallet. After the completion of a transaction, the record is appended to the current block, which can be mined once the required number of transactions has accumulated in the block. After the blocks are mined, the state of the transaction ledger is visible across all terminals in the peer-connected network. This visualization of the transactions can be presented in a better way using VR. Existing blockchain-based augmented reality ecosystems present the use case of IR 4.0 automation and focus on the automatic evaluation and examination of equipment, tools, and processes to automate on-site monitoring and tracking. Therefore, remote
monitoring using 6G will have a great role in augmented and real environments. On-site data collection is used to monitor the systems and analyze them with supporting artificial intelligence techniques, and the results are then passed to the different stakeholders.
Figure 3: Decentralized system architecture in augmented reality.
Figure 3 illustrates the massive bandwidth required of a 6G further-enhanced mobile broadband (6G-feMBB) service to support real-time automated operations and control. After the data is provided to the various stakeholders in a supply chain ecosystem, blockchain is used at different points of the supply chain to ensure consistent and timestamped activities among the network users. As shown in Figure 3, a decentralized architecture in an AR/VR environment is demonstrated. It utilizes AR in the Industrial Revolution 4.0 process cycle. AR plays a key role in the Industrial Revolution 4.0 process by increasing manufacturing productivity, robotics, and the automated examination of goods and equipment. In the modern era, industrial equipment uses sensors and is connected to networks to support various on-site and remote processes in real time with 5G support; the use of 6G will improve this process in multiple ways. This on-site architecture uses robots to monitor and inspect the various tasks physically. With robotic assistance, human involvement can be reduced to a great extent; this also reduces the causes of human accidents and is mainly useful in hazardous industries such as petroleum, mining, and other oil-related operations where very risky tasks would otherwise be performed at the cost of human life. Dedicated, trained robots may easily collect the
data from sensors, and the collected data may be segregated before being sent to the cloud server for analysis. Reinforcement learning is an appropriate choice for improving the robots' learning, as its reward and penalty mechanisms help robots learn and adapt to the external environment. The major functions of the sensors and robots can be handled through AR-based interactive control mechanisms, and robot gestures can be monitored from remote locations. AR systems use high bandwidth, so managing the network is a crucial concern. Many critical industrial applications require real-time automated processes, and a 6G-feMBB network can supply the real-time bandwidth needed to ensure accuracy and a stable network. For data analysis, AR applications collect data in the form of high-definition videos and pictures of daily processes; this fast collection of data is enabled by the 6G network. The data at the backend can then be visualized for any process that requires visuals. The use of blockchain ensures that decentralized, encrypted data is provided only to authorized users. To integrate the supply chain in an AR/VR environment, the supply-chain ecosystem uses blockchain services to maintain a chronological, timestamped record among all stakeholders in the blockchain network, which runs from the manufacturer to the supplier, logistics, physical warehouses, dealers, and retail users.
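As a minimal illustration of the chronological, timestamped record described above, the sketch below hash-chains supply-chain events into blocks. The event fields and stakeholder names are hypothetical, and mining and consensus are omitted, so this is a conceptual sketch rather than a production blockchain.

    import hashlib
    import json
    import time

    def block_hash(block):
        """Deterministic SHA-256 hash of a block's contents."""
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    class SupplyChainLedger:
        """Toy hash-chained ledger of timestamped supply-chain events."""

        def __init__(self):
            genesis = {"index": 0, "timestamp": time.time(),
                       "events": [], "prev_hash": "0" * 64}
            self.chain = [genesis]

        def append_block(self, events):
            prev = self.chain[-1]
            block = {"index": prev["index"] + 1,
                     "timestamp": time.time(),        # chronological ordering
                     "events": events,                # e.g. hand-offs between stakeholders
                     "prev_hash": block_hash(prev)}   # links block to its predecessor
            self.chain.append(block)
            return block

        def verify(self):
            """Recompute the links; tampering with an earlier block breaks the chain."""
            return all(self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
                       for i in range(1, len(self.chain)))

    ledger = SupplyChainLedger()
    ledger.append_block([{"item": "pump-42", "from": "manufacturer", "to": "supplier"}])
    ledger.append_block([{"item": "pump-42", "from": "supplier", "to": "warehouse"}])
    print(ledger.verify())   # True while the chain is intact

In a real deployment, consensus among nodes and propagation over the network would replace the single in-memory chain used here.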
7 6G wireless network elements in an AR/VR environment
Artificial intelligence and machine learning provide crucial components of the 6G system architecture in AR/VR environments; they play important roles in self-organization, self-healing, self-configuration, and fault detection. Growing spectrum traffic is also pushing 6G technology to adopt new spectrum bands for communications in AR/VR environments, making spectrum another active component of the 6G system facilitating AR/VR applications. As 6G can accommodate a wide range of sensors and communicating devices, it needs to be in line with all the parallel related technologies. The major elements of the 6G wireless network are presented in Figure 4: the air interface, new spectrum, artificial intelligence/machine learning, very large-scale antennas, and intelligent reflecting surfaces.
Figure 4: Major elements of 6G wireless network.
7.1 Air interface
6G technology concentrates on the terahertz frequency range with extremely wide bandwidths; operating efficiently in those frequency bands therefore raises new issues. Given adequate infrastructure, the emphasis of a secure communication infrastructure can shift from a spectrally optimized solution to an enhanced coverage solution.
7.2 New spectrum technology
Millimeter wave (mmWave), an innovative cellular technology, already exists as a candidate for 5G; however, it has not been used to its full potential because the beamforming methods are not yet as mature as they need to be. The network also requires many enhancements as satellite connectivity is merged into cellular communications, since the spectrum is divided among multiple uses such as DTH service, military services, digital communication, and mobile communications.
7.3 Artificial intelligence and machine learning
6G wireless communications over homogeneous and heterogeneous networks may be a core enabler of the digital transition of societies as a pervasive and secure method. A wide variety of emerging developments and technological advancements, such as
self-driving vehicles, natural language processing (voice assistants), and translation assistance, have been made possible by rapid research and development in machine learning algorithms. Machine learning is a subset of artificial intelligence.
7.4 Advanced beamforming using VLSA
Optimized beamforming may be achieved using a very large-scale antenna (VLSA). Beamforming provides a way to direct the beam toward specific directions or areas of interest. Since the radiated energy cannot usefully be spread across all directions at once, the coverage of data transmission can be improved by concentrating the beam in the desired direction only. Advanced beamforming builds on intelligent reflecting surfaces, orbital angular momentum, variable radio access technologies, and so on.
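To make the idea of concentrating energy in one direction concrete, the short sketch below computes the array factor of an idealized uniform linear array steered toward a chosen angle; the element count, spacing, and steering angle are illustrative assumptions only.

    import numpy as np

    # Array factor of an idealized uniform linear array with half-wavelength spacing.
    # N, the spacing, and the steering angle are assumed example values.
    N = 64                    # number of antenna elements (assumed)
    d = 0.5                   # element spacing in wavelengths
    steer_deg = 30.0          # desired beam direction (assumed)

    angles = np.radians(np.arange(-90, 91))   # observation angles, 1-degree steps
    n = np.arange(N)

    # Per-element phase weights that align the signals toward the steering angle.
    weights = np.exp(-1j * 2 * np.pi * d * n * np.sin(np.radians(steer_deg)))

    # Array response across all observation angles, normalized to a maximum of 1.
    response = np.exp(1j * 2 * np.pi * d * np.outer(np.sin(angles), n))
    array_factor = np.abs(response @ weights) / N

    print(f"Gain toward {steer_deg:.0f} degrees: {array_factor.max():.2f}")
    print(f"Gain at broadside (0 degrees): {array_factor[90]:.2f}")
    # Almost all of the energy is concentrated around the steered direction.

A larger element count narrows the main lobe further, which is why very large-scale antennas can improve coverage in the intended direction without raising transmit power.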
7.5 Intelligent reflecting surface
An intelligent reflecting surface (IRS) is regarded as a promising option for beamforming in the 6G space [14]. An IRS is composed of layers of thin electromagnetic material whose elements can be reconfigured through software tools to control how incoming electromagnetic radiation is reflected.
7.6 Orbital angular momentum aided multiple inputs and multiple outputs
Orbital angular momentum (OAM), a further dimension of electromagnetic waves, promises the transmission of multiple data streams over the same spatial channel. It is a physical-layer multiplexing method that combines signals carried on electromagnetic waves with the help of different orbital angular momentum modes.
7.7 Coexistence of variable radio access
6G technology may lead to ubiquitous network infrastructures in which users no longer have to select the best communicating network themselves. Every node in the network may be smart enough to sense channel conditions and configure itself for the required QoS. Figure 5 presents a simulation of the 6G wireless network in a smart city. 6G allows communication with sensors that use very low bandwidth, such as biosensors and IoT devices. It can also enable data transmission at a high rate, such as high-definition video
broadcasting in the smart city system. This communication is possible in fast-moving trains, airplanes, and so on.
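As a small conceptual sketch of the node-level selection described in Section 7.7, the snippet below scores a few coexisting radio access technologies against a QoS requirement; the technology names, link measurements, and weights are purely hypothetical.

    # Hypothetical channel-aware selection among coexisting radio access technologies.
    # All figures (rates, latencies, signal quality) are invented example values.

    candidates = {
        "6G terahertz cell": {"rate_mbps": 5000, "latency_ms": 0.5, "signal": 0.6},
        "5G mmWave cell":    {"rate_mbps": 1500, "latency_ms": 4.0, "signal": 0.8},
        "WiFi hotspot":      {"rate_mbps": 600,  "latency_ms": 12.0, "signal": 0.9},
    }

    qos = {"min_rate_mbps": 800, "max_latency_ms": 10.0}   # e.g. HD video broadcast

    def score(link):
        """Simple weighted score: prefer high rate and signal quality, low latency."""
        return (0.5 * link["rate_mbps"] / 5000
                + 0.3 * link["signal"]
                - 0.2 * link["latency_ms"] / 20)

    feasible = {name: link for name, link in candidates.items()
                if link["rate_mbps"] >= qos["min_rate_mbps"]
                and link["latency_ms"] <= qos["max_latency_ms"]}

    best = max(feasible, key=lambda name: score(feasible[name]))
    print("Selected access technology:", best)

A real node would refresh such measurements continuously and hand over between technologies as conditions change.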
Figure 5: The sixth-generation wireless network in a smart city.
An AR/VR simulation that identifies changes in the physical world may use a three-dimensional environment built with modeling software such as 3ds Max or Maya. The simulation itself can be developed with a game engine such as Unity. Sensors such as distance and angle measurement sensors may be integrated with the simulation for this purpose.
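A minimal sketch of that sensor-to-simulation data flow is given below in Python for readability (Unity scripts are normally written in C#); the sensor functions and their readings are hypothetical placeholders.

    import math
    import random

    def read_distance_sensor():
        """Hypothetical stand-in for a real distance sensor (meters)."""
        return 2.0 + random.uniform(-0.05, 0.05)

    def read_angle_sensor():
        """Hypothetical stand-in for a real angle sensor (degrees)."""
        return 45.0 + random.uniform(-1.0, 1.0)

    def update_virtual_object(distance, angle_deg):
        """Map the physical measurements to a position in the virtual scene."""
        rad = math.radians(angle_deg)
        return (distance * math.cos(rad), 0.0, distance * math.sin(rad))

    # Simplified simulation loop; in a game engine this would run once per frame.
    for frame in range(3):
        x, y, z = update_virtual_object(read_distance_sensor(), read_angle_sensor())
        print(f"frame {frame}: virtual object at x={x:.2f}, y={y:.2f}, z={z:.2f}")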
7.8 Gaming simulation
Figure 6 presents a gaming simulation in an AR/VR environment supported by 6G and the IoT. Here AR/VR combines the digital world with real-world objects, and real users perceive a virtual user much like a real one. In such cases, the focus of VR is to produce a simulation of a new reality. With the help of a VR screen, a user can perceive and react in the digital world as in the real one. This environment needs two lenses between the user's eyes and the VR screen; they follow the movements of the eyes and adapt the individual's movements within the virtual environment.
Figure 6: Gaming simulation in 6G and IoT environment.
7.9 IoT and 6G-assisted AR theme park
As part of a future vision of 6G [15], Walt Disney expects to open AR-based theme park experiences for its visitors. The company has been granted a patent for an AR/VR world simulator that requires no headset or glasses, and it has also received federal approval of a patent for constructing its Metaverse at its theme parks and hotels. The proposed simulator is supposed to deliver a realistic and immersive three-dimensional virtual experience, and it will be all the more impressive because it will employ many screens capable of producing a high number of frames per second for a realistic appearance. Walt Disney has announced that it will collaborate with Mark Zuckerberg to create its own self-branded Metaverse to connect the real and digital worlds using Disney's IP library. The Metaverse is a hypothetical iteration of 6G and IoT technologies that creates a universal and immersive virtual environment and supports VR and AR headsets for theme park visitors. It is proposed that every person will have the opportunity to come to Walt Disney theme parks for an amazingly realistic experience, and visitors to Disney parks may soon be able to ride the Metaverse experience as headset-less AR. It is a great challenge, and addressing the digital divide will be part of solving it [16]. Everything has its positive and negative sides: manufacturers and solution developers of augmented and virtual systems will have to design their systems according to standards of physical and cognitive ergonomics; otherwise, there may be negative consequences [17].
7.10 Conclusion
This chapter presented the basic framework of 6G technology, the IoT, and blockchain in AR/VR environments. The major simulation cases presented for AR/VR environments included gaming, theme parks, and smart cities. The combination of the IoT, AR/VR, and 6G wireless networks has great potential to revolutionize the way we interact with the digital world. The low latency and high bandwidth provided by 6G networks, along with the data collected by IoT devices, can be used to create more immersive and realistic AR/VR applications. As 6G wireless networks continue to develop, we can expect to see even more innovative applications of the IoT and AR/VR. AR/VR is being increasingly used in simulation training applications, providing realistic and immersive training experiences in various fields. In summary, the use of AR and VR in simulations provides a range of benefits, including increased safety, realism, personalization, and adaptivity. As the technology continues to develop, we can expect to see even more innovative applications of AR and VR in simulation training. In the near future, the technology will change the environment of gaming and entertainment. Several challenges will also lie on the path, and vendors will have to be ready for mass production with QoS guarantees. The majority of maintenance and troubleshooting tasks will be automated, and a large pool of skilled manpower will still be required to meet the remaining challenges.
References
[1] Bhattacharya, Pronaya, Deepti Saraswat, Amit Dave, Mohak Acharya, Sudeep Tanwar, Gulshan Sharma, and Innocent E. Davidson. 2021. "Coalition of 6G and Blockchain in AR/VR Space: Challenges and Future Directions." IEEE Access 9: 168455–168484. doi: 10.1109/ACCESS.2021.3136860.
[2] Kumar, Rajendra, Anil Kumar Kapil, Vikesh Kumar, and Chandra Shekhar Yadav. 2016. Modeling and Simulation Concepts. Laxmi Publications Private Limited.
[3] Kishor Gupta, Jugal, and Rajendra Kumar. 2010. "An Efficient ANN Based Approach for Latent Fingerprint Matching." International Journal of Computer Applications 7(10): 18–21.
[4] Bodkhe, U., D. Mehta, S. Tanwar, P. Bhattacharya, P. K. Singh, and W.-C. Hong. 2020. "A survey on decentralized consensus mechanisms for cyber physical systems." IEEE Access 8: 54371–54401.
[5] Wust, K., and A. Gervais. 2018. "Do you need a blockchain?" In Proc. Crypto Valley Conf. Blockchain Technol. (CVCBT), pp. 45–54. Zug, Switzerland.
[6] Nguyen, T., N. Tran, L. Loven, J. Partala, M.-T. Kechadi, and S. Pirttikangas. 2020. "Privacy-aware blockchain innovation for 6G: Challenges and opportunities." In Proc. 2nd 6G Wireless Summit (6G SUMMIT), pp. 1–5. Levi, Finland.
[7] Bello, S. A., A. I. Sulyman, and S. A. Almasri. 2019. "6G Wireless Networks: Vision, Requirements and Challenges." IEEE Access 7: 77821–77837.
[8] Li, J., J. Yang, X. Wang, and Y. Chen. 2021. "IoT for Augmented Reality: A Comprehensive Survey." IEEE Internet of Things Journal 8(3): 1493–1503.
[9] Lee, Y. H., H. Lee, and J. H. Kim. 2021. "An IoT-Based Augmented Reality Platform for Smart Homes." IEEE Access 9: 62595–62605.
[10] Zhang, X., Y. Jiang, Y. Liu, X. Liu, and L. Zhou. 2021. "A Virtual Reality System Based on IoT Devices for Enhancing Immersive User Experience." IEEE Access 9: 58253–58264.
[11] Wong, E. K. C. 2016. "Simulation in Healthcare: A Review of Simulation Models and Their Applications." Asia Pacific Journal of Medical Education 1(2): 41–51.
[12] Woo, H. J., S. H. Kim, and J. H. Kang. 2019. "Development of a Haptic Augmented Reality Simulator for Endoscopic Surgery." Journal of Medical Systems 43(3): 53–65.
[13] Karim, M. R., D. J. Kim, Y. J. Cho, J. M. Park, and H. Kim. 2020. "Virtual Reality-based Firefighter Training System." KSII Transactions on Internet and Information Systems 14(8): 3333–3347.
[14] Kim, T. H., S. H. Kim, and J. H. Kang. 2021. "AR-based Simulator for Training Assembly Line Workers." Journal of Mechanical Science and Technology 35(1): 89–96.
[15] Fernandez-Carames, T. M., and P. Fraga-Lamas. 2019. "A review on the application of blockchain to the next generation of cybersecure industry 4.0 smart factories." IEEE Access 7: 45201–45218.
[16] Tahir, M., M. H. Habaebi, M. Dabbagh, A. Mughees, A. Ahad, and K. I. Ahmed. 2020. "A review on application of blockchain in 5G and beyond networks: Taxonomy, field-trials, challenges and opportunities." IEEE Access 8: 115876–115904.
[17] Dai, H.-N., Z. Zheng, and Y. Zhang. 2019. "Blockchain for Internet of Things: A survey." IEEE Internet of Things Journal 6(5): 8076–8094.
[18] Alrubei, S. M., E. A. Ball, J. M. Rigelsford, and C. A. Willis. 2020. "Latency and performance analyses of real-world wireless IoT-blockchain application." IEEE Sensors Journal 20(13): 7372–7383.
[19] Brooks, F. P. 1999. "What's real about virtual reality?" IEEE Computer Graphics and Applications 19(6): 16–27.
[20] Bhattacharya, P., A. K. Tiwari, and A. Singh. 2019. "Dual-buffer-based optical datacenter switch design." Journal of Optical Communications. doi: 10.1515/joc2019-0023.
[21] Kabra, N., P. Bhattacharya, S. Tanwar, and S. Tyagi. 2020. "MudraChain: Blockchain-based framework for automated cheque clearance in financial institutions." Future Generation Computer Systems 102: 574–587.
[22] Yang, P. 2020. "Reconfigurable 3-D slot antenna design for 4G and sub-6G smartphones with metallic casing." Electronics 9(2): 216.
[23] Akhtar, M. W., S. A. Hassan, R. Ghaffar, et al. 2020. "The shift to 6G communications: vision and requirements." Human-centric Computing and Information Sciences 10(53). https://doi.org/10.1186/s13673-020-00258-2.
[24] Kumar, Rajendra. 2009. Information and Communication Technologies. Laxmi Publications.
[25] Kumar, Rajendra. 2011. Human Computer Interaction. Laxmi Publications.
Raj Gaurang Tiwari, Abeer A. Aljohani, Rajat Bhardwaj, Ambuj Kumar Agarwal
13 Virtual reality in tourism: assessing the authenticity, advantages, and disadvantages of VR tourism
Abstract: The travel and tourism sectors were a major economic driver before the spread of COVID-19. COVID-19 has had a profound impact on the tourist industry, notably in the areas of marketing, sustainability, and virtual worlds, even though VR is still a relatively young development in the sector. Virtual reality (VR) is often believed to provide visitors with the "perfect" vacation because it eliminates the drawbacks experienced by visitors to traditional tourist destinations. Despite VR's advantages and growing popularity, an important issue remains: will VR spell the death of traditional tourism, or is it only the beginning? This chapter examines the intersection between tourism and technology, with an emphasis on the part played by VR. The purpose of this chapter is to investigate whether VR tourism can replace traditional tourism or represents a distinct subset of the tourist industry. Tourism is placed and contextualized inside the VR domain, and the origins and evolution of VR are investigated in order to assess this astounding change in an ancient industry. The chapter also investigates the validity of the VR tourist experience and its pros and cons.
Keywords: Tourism, COVID-19, VR, tourist experience, ICT
1 Introduction Given the importance of information in the tourist business, information and communication technologies (ICTs) play a crucial role in the sector. There has been a long history of ICT use in the tourist industry. ICTs have had a worldwide impact on the tourist sector since the 1980s. An ICT is “any product that will store, retrieve, alter, send, or receive information electronically in a digital form,” as the definition puts it. As a result of the many advantages provided by ICTs, many businesses within the Raj Gaurang Tiwari, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India, e-mail: [email protected] Abeer A. Aljohani, Computer Science department, Applied College, Taibah University, Saudi Arabia, e-mail: [email protected] Rajat Bhardwaj, School of Computer Science and Engineering, RV University, Bengaluru, India, e-mail: [email protected] Ambuj Kumar Agarwal, Department of Computer Science and Engineering, Sharda University, Greater Noida, India, e-mail: [email protected] https://doi.org/10.1515/9783110981445-013
tourist industry have made the switch, including car rental firms, hotels, airlines, tour operators, travel agencies, and DMOs. Businesses can benefit from ICTs in a number of ways, including the ability to streamline their operations, connect with their customers on a more personal level, reach more people in more places, overcome obstacles (such as seasonality and crowding), and boost their product’s efficiency and customer service standards. The advent of VR, however, has been one of the most exciting advancements in the world of ICTs. According to academics, virtual reality (VR) has revolutionized the travel industry by altering consumers’ expectations of and engagement with the sector’s goods and services. VR has made tourism more engaging by allowing visitors to explore a variety of environments, from outer space to theme parks, without leaving the comfort of their own homes.
1.1 Definitions of key concepts
In this section, we provide definitions for the main ideas discussed in this chapter, explicating what they imply here. Immersion and its forms (i.e., nonimmersive, semi-immersive, and fully immersive), interaction, and presence are all part of the virtual and reality spectrum.
1.1.1 Virtual and reality
In order to fully grasp the notion of "VR tourism," it is necessary to first grasp the meanings of the individual terms "virtual" and "reality." It is from the Latin terms virtualis [effective] and virtus [virtue] that the English word "virtual" is derived. A virtual item is one that is "so nearly true that for most purposes it may be considered as true," as defined by the Collins Dictionary. The words réalité (French) and realitas (Latin) both mean "property," from which "reality" is derived. Reality means "the genuine world, actual life, actually, truth, physical being," according to the Pharos Dictionary. According to Lévy, "reality" is defined as "a material embodiment, a concrete existence." A "real" location, in other words, is "not imagined" since it really exists and can be visited by the interested party. Since reality is antithetical to virtuality, the two words together seem to be at odds with one another.
1.1.2 Virtual reality VR (also known as cyberspace, telepresence, artificial reality, immersion computing, immersive media, synthetic experience, and virtual/artificial worlds), as seen by the
plethora of terminology and literature on the subject, is notoriously difficult to pin down [1]. This is due to the fact that VR is still a growing technology, and hence its definition remains in a state of change. Also, academics do not always agree on what features a VR experience must have. The many technologies, ideas, and philosophies constitute the basis for most definitions. VR is defined by the Cambridge Dictionary as “a combination of visuals and noises, created by a computer, that appears to depict a location or a scenario in which a human may participate.”
1.1.3 Virtual reality tourism VR is defined differently by different tourism experts, but there does not seem to be a consensus on how to define VR in the tourist literature. This is because many academics fail to provide a clear definition of VR for use in the tourist industry. As A. Beaver puts it, “VR” means “experience[ing] a location artificially” in the context of travel. “VR” “offers travelers a chance to explore and experience a location using a computer or other technology,” writes P. Robison [2]. VR, as defined by C. Krug [3], is “the form of sophisticated cyber-journeys to classic tourist places (or, more specifically, to their virtual analogues on . . . the internet).”
1.1.4 Immersion, interaction, and presence
There are three key criteria necessary for a VR experience: immersion, interaction, and presence. These three pillars of a successful VR experience are shown in Figure 1.
Figure 1: The VR experience.
Immersion has no single agreed definition in the VR literature. Immersion makes individuals feel "present" in a virtual environment (VE) by isolating them from reality. Nonimmersive, semi-immersive,
and fully immersive systems exist. Desktop VR is the least immersive and the most popular: desktop computers show nonimmersive VEs on monitors or other displays, and users interact with the VE through mice, keyboards, trackballs, and similar devices. Semi-immersive systems project images on walls and floors using 3D sound together with a large screen or several displays, making them more functional and immersive. Fully immersive technology places the user entirely inside the VE; head-mounted displays (HMDs) separate people from the outside world completely and therefore produce a stronger sense of presence. Interaction is the second criterion and is what makes the VE feel real to VR users. Interaction enables users to navigate the VE and manipulate virtual objects such as a virtual chair. A. V. Seaton and M. M. Bennett distinguish between passive, exploratory, and fully interactive engagement. Passive systems are the least interactive, since users cannot change the visuals. Exploratory systems allow navigation through the VE under computer control. Fully interactive technology lets users pick up and handle virtual things. Presence completes VR (it is also known as telepresence and virtual presence). Like immersion, presence has no agreed definition, and scholars define "presence" differently. According to scholars, VR consumers feel "presence" in the VE even though they are physically elsewhere [4].
2 Literature reviews
VR is a popular topic in the tourism literature, and the growing number of VR and tourism publications has prompted several literature reviews, which simplify trend analysis. Yung et al. evaluated 46 VR/AR marketing and sports periodicals and found that VR helps tourism researchers with marketing and education. They also found inconsistent terminology, a lack of theory-based VR/AR research, and gaps and challenges including awareness, usability, and time commitment in the VR and tourism literature. Beck et al. [5] examined 27 publications and presentations published from 1994 to 2018. They divided VR installations into three categories (nonimmersive, partially immersive, and fully immersive) to offer a distinctive viewpoint. According to their statistics, tourism scholars are more interested in how VR will affect the pre-travel and on-site phases, and from 2015 to 2018 academics focused more on the practical uses of HMDs, which reflects the evolution of VR equipment. Wei's [6] study was the first to critically review VR and AR progress. The author examined sixty peer-reviewed papers from 2000 to 2018 and used a theoretical framework to examine VR/AR user behavior and experience. The research found that marketing, destination management, customer experience, e-service, and co-creation should be prioritized in VR and AR, and that VR and AR software adoption goes beyond the core user (i.e., extends to a social network). Loureiro et al. [7] text-mined 325 conference papers and 56 academic publications. The journal articles address many topics, but the authors agree that mobile and mobility implications, marketing of tourist attractions, sensations and emotions, atmosphere, smart cities, and cultural heritage are popular. The conference papers focused on mobile
devices for sustainable tourism, the technology acceptance model (TAM) as a theoretical model for virtual and augmented reality research, and so on. Kulakolu-Dilek et al. [8] note that academic debates on VR in the tourist business often focus on its definition, uses, positive and negative impacts on the industry, possible future influence, and whether VR can replace traditional vacationing.
2.1 Main focus areas In this part, we will examine the three most researched topics: marketing, virtual worlds, and sustainability. Over the last three decades, marketing has received considerable attention from tourist researchers. We focus in particular on VR’s marketing potential and how it may be used to promote travel-related goods and services. Some academics have compared VR to more conventional forms of advertising (i.e., photos, videos, websites, and brochures). According to Cooper et al. [9], VR not only changes the way tourism is advertised but also decreases the number of people who need to physically visit a location. Research by T. Griffen et al. [10] evaluates the effectiveness of VR versus a two-dimensional video and a website for promoting travel destinations. The writers conduct a poll of students’ opinions on South Africa as part of their research. Using an Oculus Rift HMD, a traditional two-dimensional film, or a website, students saw a video created by the South African Tourism Board. A higher likelihood to visit, interest in learning more, and advocacy for South Africa as a tourist destination were stated by those who saw the material in VR. K. Pasanen et al. [11] examine the relative merits of two popular platforms for watching 360° videos – iPads and the Samsung Gear VR. Their primary goal was to find out how viewers of 360° videos feel about them, and their secondary goal was to investigate how the VR experience and the gadget used to view the video influence viewers’ desire to book a trip. They found that the iPad and Samsung Gear VR had similar impacts on consumers’ propensity to travel and their actual behavior while abroad. When it came to the tourist experience, however, individuals felt less like onlookers and more like active participants. VR’s effect on how people think about a place has been investigated by A. McFee et al. [12]. The writers examine the differences between watching a 360° film on a PC and using the Samsung Gear VR. The findings of McFee et al. [12] were comparable to those of Pasanen et al. The findings show that HMDs have a greater effect on final picture development, which is why they are becoming more popular. This is because, compared to those who watched the movie on a computer, those who saw the 360° video using a HMD expressed a greater desire to go to the location seen in the film. Marketing’s “presence” is another favorite among tourist experts. The persuasive potential of VR imagination in tourism promotion was studied by I. P. Tussyadiah et al. [13]. The authors analyzed how physical proximity affects prospective tourists’ mindsets and plans of action while deciding where to vacation. Participants reported
varied degrees of spatial presence throughout the experience and remembered the sensations of entering and leaving the VE as well as the events that gave them the strongest feeling of really being there. A novel concept of multimodal (more immersive) virtual experience is presented by J. Martins et al. [14] for marketing tourist attractions. The Douro Valley in Portugal is the subject of this research because of its importance to the port wine industry. The writers want to provide readers with a one-of-a-kind and comprehensive experience of the wine culture of the area. In light of this, they propose a technical solution and a conceptualization of multisensory port wine. Based on their findings, it is clear that a VR-based themed tourist experience may include several senses. The TAM is also often discussed among academics in the field of tourism. This is because it can be used in every situation with little effort. Most researchers utilize the TAM to dissect the ways in which customers’ perceptions of the product’s usability, cost, usefulness, advantages, and enabling factors influence their intentions, behaviors, and outcomes across contexts. However, A. Gibson et al. [15] use the TAM model and interviews to talk about VR in tourism promotion and marketing. Different 360° movies of Ireland’s Wild Atlantic Way were seen by the participants. Scholars in the tourism industry have also discussed the marketing potential of VR gadgets. A. Marasco et al. [16] looked at how a VR experience made using the latest generation of wearable devices affected people’s desire to visit real-world locations. The authors also evaluate the potential influence of VR’s aesthetic attractiveness and the user’s emotional investment in the experience on their decision to go to a certain location. Visually appealing VR experiences on wearable devices were shown to increase travelers’ propensity to visit that location. VR’s impact on tourist advertising hasn’t been thoroughly studied. The void, however, is closed by T. Li et al. [17]. In particular, they want to know whether VR would encourage or discourage vacation plans for vacationers. They do this by drawing on the extrinsic (experience of VR) and intrinsic (anticipated pleasure at a destination) theories. The results demonstrate that engaging in VR dampens one’s desire to travel. The reason is: vacationers’ plans changed after experiencing VR for fun. Therefore, if a traveler anticipates having a negative experience at the location, the increased satisfaction they have while using VR would discourage them from going. Two significant gaps in the literature are filled by the work of J. Hopf et al. [18], making this research all the more relevant: the effect of users’ presence facilitated by HMDs on their intention to advertise and advocate the location. The writers take a multi-sensory look at VR to see how it affects the desire to send someone there. Their findings demonstrate two things: first, contrary to earlier studies, more sensory input does not boost users’ feelings of presence; and second, the likelihood that visitors would promote the location to others does rise.
2.2 Additional focus areas
VR as a replacement for traditional tourism is now a hot issue in the field of tourism studies. VR was compared to the real thing by A. Wagler et al. [19]: participants walked around a state Capitol or watched a 360° video tour of it, and the authors concluded that 360° tourism may stand in for the genuine thing. Guttentag [20] also briefly discusses the advantages and disadvantages of using VR in place of traditional tourism. The author claims that using VR instead of traditional tourism may save travelers money but nevertheless concludes that traditional tourism will not be replaced by VR. In contrast, Guttentag [20] devoted his whole article to the topic of VR's growing popularity as an alternative to traditional vacationing; his research suggests that although VR won't completely replace traditional tourism, it will be widely used as a viable alternative by many. Finally, N. Losada et al. [21] compared a VR experience of the World Heritage Site São Leonardo de Galafura with a traditional in-person visit. Their findings also suggested that VR should be seen as an addition to, rather than a replacement for, the real thing. Place attachment, the meaningful connection between individuals and their environments, has also been studied by academics. However, researchers have paid little attention to the importance of place attachment in VR. Lake District National Park in the United Kingdom serves as a case study for C. Pantelidis et al. [22]'s investigation of how VR influences visitors' experiences and how they feel about the places they visit. They came to the conclusion that everyone involved had warm feelings for Lake District National Park; thus, VR might strengthen feelings of connection to a certain location. In addition, they found that improved spatial cognition and happiness were the two overarching themes. The cognitive component occurs when individuals grow acclimated to their surroundings and begin to notice and learn about previously unseen features. Feeling good about the time at Lake District National Park deepens the connection to it on a personal level. Therefore, their findings indicate that forming an emotional connection to a certain location is often followed by a series of rewarding events. Using a SWOT analysis, Kulakolu-Dilek et al. [8] consider the pros and cons of using VR applications in the tourist business. The SWOT analysis reveals that VR's novel approach to advertising is one of the technology's main assets. They argue that VR's biggest flaw is that it cannot replace the real thing. In terms of potential, VR connects the three stages of a journey: planning, execution, and reflection. Finally, the danger is that cultural contact, seen by many as a major benefit of the real-time experience, is missing from the VR experience.
3 Gaps in the research literature
Even if VR is widely used in the tourist business, there are still gaps that must be filled. It would seem that little has been said about VR travel. This is mostly due to the fact that both VR and its associated technologies, as well as its function in the tourist industry, are still in their infancy; VR clearly did not become a hot issue until the early 1990s. Some gaps that should be addressed are discussed here. There are significant knowledge gaps despite the fact that VR has been widely studied as a tourist marketing tool. Again, this is because the use of VR in advertising for tourist destinations is so novel. More study is needed on the "adaptation of the technology for the appropriate use of VR as a marketing tool," as Yung and Khoo-Lattimore put it. Guttentag [20] argues that academics may fill this need by studying how various VR output devices promote tourist destinations like museums, amusement parks, and beaches. The academic community is also lacking information on who VR is intended for. Guttentag argues that further research is needed to determine how VR might be utilized to advertise various types of tourist locations to people of diverse ages, backgrounds, and interests [23]. Researchers in this field should also pay greater attention to the potential of VR in tourism promotion. In addition, further study is required to determine how VR influences tourists' pursuit of location details, selection of activities, and general outlook on vacationing there. Despite the technological advancement in the year 2000 that prepared the way for increasingly intricate virtual experiences, researchers have paid relatively little attention to tourists' experiences in virtual worlds and the 360° VR experience over the previous 15 years. The limitations imposed by technology in virtual worlds are likely to blame for this; another reason is that academics have been focusing mainly on VR's potential for the travel industry. In addition, there is a lack of qualitative studies in the tourism literature that focus on how VR apps affect visitors. Scholars in this area would do well to extend their gaze and give greater attention to Sansar, the successor to Second Life (SL), as well as other virtual worlds like ActiveWorlds, OpenSim, Croquet, the Consortium, and Project Wonderland. Few academics have analyzed the differences in virtual world use across various market groups or the reasons that contribute to such differences. Focus on genuine settings (as opposed to virtual recreations of real environments) as they pertain to travel experiences is another topic that has been underexplored. Beck and Egger argue that filling these gaps is critical for enabling marketers to make informed investment decisions. VR as a viable alternative to traditional tourism has garnered a lot of attention, but there are still certain knowledge gaps. Researchers have to figure out what motivates and limits visitors' willingness to use VR as an alternative. The usage of VR expe-
riences as a replacement for tourist experiences and the function of VR in destination substitution are two further topics that need investigation. Popularity among academics notwithstanding, there seem to be holes in the TAM that require filling. TAM in VR, particularly in terms of immersion, is a nearly unexplored area. The varying degrees of user preparation and attitude toward these new technologies is another crucial issue that has gotten little attention from researchers. Although genuineness has received a lot of attention in the tourist industry, there are still certain issues that need fixing. The lack of factual and in-depth study on the experience of visitors’ views of authenticity is highlighted by the work of Mura et al. [24] The role of modern technologies in the modern traveler’s search for genuine experiences is another topic left unexplored. According to Guttentag, additional study is needed to compare the educational effects of VR in museums to those of more conventional exhibitions. Scholars would do well to focus more on emerging immersive technologies like mixed reality and extended reality in the future. Other areas that have yet to be thoroughly investigated include the feasibility of achieving a profit in virtual tourism, the role of VR in cruise tourism, the feasibility of experiencing a full tourism adventure remotely or virtually as an e-tourist, and the level of acceptance of VR in the tourism industry.
4 Virtual reality technologies VR relies on numerous different technologies to create a convincing simulation of the real world. HMDs, gloves and bodysuits, joysticks, wands, 3D mice, and the CAVE system are only some of the technologies discussed in this area. The necessary technology for VR is shown in Figure 2.
4.1 Head-mounted displays (HMD)
VR is strongly associated with HMDs. HMDs immerse users in the online world by separating them from reality, which produces a first-person view of the virtual world. HMDs may be tethered (most high-end products) or wireless (mid-tier and low-end devices). HMDs that are linked to a computer include a motion controller, sensors to monitor the user, a wide field of view, and a high resolution, with the computer processing the images. In a room-scale scenario, these HMDs provide better images, real-time tracking, and a more dynamic environment. Tethered HMDs include the Vive, Rift, and OSVR. Mobile VR headsets, or "untethered HMDs," use smartphones as displays; the smartphone also powers the display and processes real-time 3D data. Untethered HMDs include the Gear VR and Cardboard.
Figure 2: VR technology.
The user wears the HMD, which consists of two displays mounted in front of the user's eyes and used to show stereoscopic pictures; CRTs and LCDs are used as the display elements in HMDs. Tracking technology is always included: an apparatus tracks the user's head movement and orientation. Once the tracking device has reported the user's position and orientation to the computer, the computer renders the scene from the viewpoint corresponding to the user's current position. This allows the user to freely explore the VE. The quality of the VR experience may be improved by adding other gadgets to the HMD. A speech recognition system is one example of such a tool; it allows the user to control the VE using their voice. Headphones are another tool for isolating oneself from the outside world while listening to music or audio.
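As an illustrative sketch of how a reported head pose drives rendering, the snippet below builds a simple view matrix from a tracked position and yaw angle; a real HMD pipeline works with full 3D orientation (quaternions) and one stereo projection per eye, so this is a simplified conceptual example.

    import numpy as np

    def view_matrix(position, yaw_deg):
        """Build a simple view matrix from a tracked head position and yaw angle.

        Rendering with this matrix draws the scene from the user's current
        viewpoint; a real HMD pipeline would use the full 3D orientation and
        a separate projection per eye.
        """
        yaw = np.radians(yaw_deg)
        # Rotation about the vertical (y) axis for the head's yaw.
        rot = np.array([[ np.cos(yaw), 0.0, np.sin(yaw)],
                        [ 0.0,         1.0, 0.0        ],
                        [-np.sin(yaw), 0.0, np.cos(yaw)]])
        view = np.eye(4)
        view[:3, :3] = rot.T                          # inverse rotation of the camera
        view[:3, 3] = -rot.T @ np.asarray(position)   # inverse translation
        return view

    # Example: the tracker reports the head 1.7 m above the floor, turned 30 degrees.
    print(view_matrix(position=[0.0, 1.7, 0.0], yaw_deg=30.0).round(3))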
4.2 Gloves and body suits
When it comes to controlling and inputting data into a VE, gloves are among the most well-known gadgets. The glove allows users to manipulate virtual items in the same way they would physically, and the capacity to "touch" virtual items has been found to enhance the feeling of being physically there. Thin fiber optic wires are sewn into the glove along the back of the hand and fingers. Flexing the user's fingers
alters the fiber’s optical properties. The computer records the data and identifies the bending finger and the amount of force applied. The location and orientation of the user’s hand may be tracked thanks to a tracking device connected to the wrist of the glove. Manus VR Data Glove, a brand-new product, has two sensors, a vibration motor for tactile feedback, and a rechargeable battery. Haptic feedback (defined as “the feeling of touch intentionally reproduced by applying pressures or vibrations”) is sent to the body via a bodysuit (also called a haptic vest, gaming suit, VR suit, or tactile suit). A fiber optic wire is threaded through the bodysuit to record the wearer’s every move. The data is subsequently shown on a screen or in a VE by the computer. The Tesla Suit is only one example of many different kinds of bodysuits available. With this set, users may feel and touch virtual items in the VE to interact with them. The suit uses neuromuscular electrical stimulations to convey feelings to the wearer.
4.3 Joysticks, wands, and 3D mice
Joysticks are the most common kind of handheld interface. Due to their low cost and user-friendliness, they are widely used. Joysticks are small in form, generate mild forces, and have a wide mechanical bandwidth, but only allow for a single degree of freedom. A wand is another kind of remote control: a 3D input device used for exploring, painting, interacting with, and selecting digital content. Wands have a tracking sensor built into the tip and the base. Users choose an item in the VE by pointing the wand at it and clicking a button; the nearest target is then selected by the laser beam. Examples of wands are the Oculus Touch and the HTC Vive controllers. A 3D mouse is the last input device. It is used to move about in a VE and pick up virtual items. When the 3D mouse symbol comes into contact with a virtual item, the user is able to "pick" it up by pressing a button and can then select and interact with the VE. The user's gaze determines the route taken, while the 3D mouse controls the forward momentum. For instance, one can halt the forward motion of a 3D mouse by holding it vertically, and one may start it by tilting it forward.
4.4 The computer audio-visual environments system The CAVE system was created at the University of Illinois’ Electronic Visualization Lab in 1992. The overall goal of the system is to build a fully fledged VR environment. A chamber with several walls used to show 3D stereoscopic images is at the heart of the CAVE technology. A floor screen projects downward, and there are three rear-projection walls. The stereoscopic 3D effect is achieved by projecting pictures from a computer onto a screen. The user enters the VE by donning a HMD and using a wand. The end effect is
total immersion for the user. The fact that more than 10 individuals may experience the CAVE at once without disrupting the other visitors is another factor in its widespread appeal. The CAVE represented a major step toward incorporating VR into the travel industry.
5 Types of virtual tourism
This section discusses drone tourism, e-tourism, film and television tourism, gamification tourism, smartphone and app tourism, space tourism, and 360° video tourism. Drones are becoming essential in archaeology, natural disaster management, animal censuses, and live sports broadcasting. With one or more rotor blades, they resemble small aircraft but are controlled by apps on wristwatches, mobile phones, tablets, and video gaming consoles. They usually have cameras, sensors, or other devices to capture and stream user experiences, and the pilot sees what the drone sees in real time. The travel sector is also feeling the effects of drone technology. Researchers are only now beginning to investigate drone platforms, despite the fact that the technology is becoming more accessible and affordable. Drones have several uses in the travel sector. VR films shot with a drone are considered the "most fascinating and engaging visual form" today, because VR enhances engagement for visitors. By pointing the drone's camera in the direction of points of interest, visitors may get a bird's-eye perspective of the scene. In addition, tourist landmarks and places, such as volcanoes and cliffs, that are inaccessible or too risky for regular visitors are now open to them thanks to drone-filmed movies. Promotion is another perk: drones are being used by several travel businesses as a means of advertising. Drones' primary weakness is that they are restricted by law. Countries including the United States, South Africa, Canada, and Australia, among others, have started drafting new legislation regarding the use of drones inside their borders in order to preserve citizens' right to privacy. ICTs have created a new sector, electronic tourism (also known as e-tourism, travel technology, or eTravel). E-tourism comprises tactical and strategic stages. Tactically, ICTs boost the tourism organization's performance; strategically, e-tourism transforms how companies function, how value is created, and how tourist organizations connect with their customers. E-commerce, e-marketing, e-finance, e-accounting, e-human resource management, e-research and production, e-strategy, e-planning, and e-management are ICT technologies and applications used in tourism management, planning, development, marketing, and distribution. E-tourism helps managers reach and serve customers by offering platforms for them to study, shop for, and purchase tourism-related products and services. Internet usage has altered the way sightseers look for information and make purchases. Tourists seldom consult such sources as libraries, encyclopedias, newspapers,
periodicals, brochures, and travel agencies for advice on where to go and what to see. Travelers rely heavily on the internet for everything from initial trip inspiration and research to final hotel and airline confirmations, weather forecasts, restaurant reservations, and activity bookings. As a result, many public places including hotels, airports, and cafés provide visitors wireless-radio connections to the internet via services like wireless fidelity (WIFI). Today’s knowledgeable, self-reliant travelers seem to be abandoning package vacations. Modern tourists like to follow their schedules and fancies, and due to the internet, they can do it without tour providers. Thus, “e-mediaries” like Expedia, Opodo, and Travelocity have emerged as online travel agents marketing all kinds of tourism services for diverse locales. Search engines, meta-search engines, destination management systems, social networking sites, web 2.0 portals (like Facebook and TripAdvisor), pricing comparison sites (like Kelkoo), and supplier and intermediary websites are other emerging tools. VR may enhance Google Earth VR, Maps, Street View [25, 26], and Tour Creator [27]. Google acquired Keyhole Earth Viewer in 2004 and renamed it Google Earth. Google Earth, a free web-based GIS, shows satellite images and three-dimensional visualizations of Earth. Virtual tourism allows tourists to see the Eiffel Tower, Grand Canyon, Forbidden City, and even outer space. Travelers may relax and let Google Earth take them there. Visitors may travel up, down, right, left, and back to fully enjoy their location. Travelers get a totally immersive 3D “just like being there” experience. Google Earth VR debuted in 2016. This application “opens the globe” on the internet [26]. J. Kim, Google Earth VR’s product manager, says travelers may see South America’s Amazon River, the US’s Manhattan skyline, the Grand Canyon, and Europe’s Alps (in Switzerland). Vacationers may fly above a city, stand on the tallest peaks, and even fly into space. Thus, Google Earth VR lets would-be travelers see stunning surroundings without leaving home. Google Maps is a game-changer for digital mapping. Lars and Jens E. Rasussen, two Danish brothers, set out to create a web application in 2004 that would show static maps and also provide users the ability to search, scroll, and zoom in on the displayed map [28]. Google Maps debuted in 2005, and by 2007 it had made its way onto the first iPhone. As a result, the experience of virtual travelers has been revolutionized by the availability of aerial and satellite views of regions, towns, and places. The results of a tourist’s search (e.g., “hotels near the destination”) may be seen on a map. Tourists may learn more about their location by using Google Maps, which allows them to click and drag to move about and zoom in on the map. The result is a fast, easy, and engaging interaction for visitors. Because of this, Google Maps is included on the websites of the vast majority of travel businesses nowadays. In addition to Google Maps, one may also use Google Street View. Larry Page, Google’s co-founder, had the idea for a street view project in 2003. As a result, Google’s Street View was released to the public in May 2007 and now includes many global
cities and suburbs. By offering 360° horizontal and 290° vertical panoramic street-level views of cities and suburbs, Google Street View enables virtual strolls, exploration of landmarks, and the location of businesses and lodging options. Google primarily employs two vehicles, a car and a Street View Trekker, to collect the photographs for Street View. Google uses cameras, laser rangefinders, global positioning systems, and computer vision algorithms to analyze images and determine exactly where they were taken. At regular intervals, the cameras take pictures from all angles. A local driver double-checks the photographs to make sure all streets are covered. Google receives the hard disc and uses software to "stitch" together the photographs into a panorama. Incorporating a camera system, the Street View Trekker may be carried like a backpack. Since it can go to places where other methods cannot, it has been used regularly since 2012. Each location is photographed every 2.5 s by a variety of cameras, which offers a breath-taking panorama of the location. Google launched Earth VR with Street View in 2016. When planning a virtual vacation, users often use a search engine to locate the desired location. Next, they descend to ground level and check with the controller whether Street View is available. Virtual tourists using the Vive, Rift, or Cardboard may have an immersive experience with Street View, since it is similar to "entering an immersive 360° picture." Google has also created Tour Creator, which has found a lot of usage in the classroom. The goal is to increase the number of VR field excursions that schools and students take part in rather than relying on 360° recordings. The tourist sector also makes use of Tour Creator since it provides web-based tools for the creation of high-quality VR tours. The tour photographs may be uploaded by the user and can come from a camera, Street View, or the user's 360° photos. After creating a tour, viewers may check it out on Google's 3D content repository Poly or use Cardboard to see it. The company Time Out New York used Tour Creator to create promotional tours based on the city's most popular tourist destinations. Since the first public showing of a film in Paris, France, in 1895, people have been captivated by movies. This enthusiasm for movies led to film tourism, also known as cinematic tourism, movie-induced tourism, film-induced tourism, television-induced tourism, media-related tourism, screen tourism, media pilgrimage, jet setting, and TV tourism [29].
filmed in a real location, the surrounding area usually sees an increase in tourism [31]. TV shows may also encourage tourism. Southfork Ranch, from Dallas (1978–1991), still attracts 400,000–500,000 tourists every year; the mansion symbolizes the "American Dream" of freedom, money, the Wild West, cowboys, and prosperity. Highclere Castle, the backdrop for Downton Abbey (2010–2015, 2019), attracted 105,904 visitors in 2013, generating £10.5 million for the UK economy. Game of Thrones (2011–2019), based on George R. R. Martin's A Song of Ice and Fire, was also popular, and Ireland, Spain, Croatia, and Iceland saw large tourist increases. After the program premiered, 150,000 people visited San Juan de Gaztelugatxe, a Spanish islet. The word "game" is where gamification gets its start, so the idea itself is not new. The term was coined in 2002 by computer programmer Nick Pelling. In 2008, the term "gamification" was first used in print, but it wasn't until 2010 that the concept truly took off. The term "gamification" can mean different things to different people. The evolution of motion sensors, graphics, multimodal display technologies, and interaction has allowed VR to find a home in the gaming industry. The point of VR in gaming is to educate players, entice new audiences, and involve existing ones in exciting ways. VR has spread from the gaming industry to other sectors, including education, real estate, automotive, healthcare, and the travel industry [32]. The use of gaming elements to promote tourism is also not novel. All parts of a vacation can be thought of as a game in the tourism industry. Games are primarily used by airlines and hotels as a part of their loyalty programs. The tourism industry can reap many of the benefits of gamification, and destinations can use games to teach and entertain visitors. A further benefit is that visitors spend more time at the attractions thanks to the games. Many travel providers promote through games: tournaments, rewards, rankings, badges, and score tables encourage good tourist behavior. Travel businesses are replacing brochures with smartphone games. The use of conventional loyalty programs is on the decline, which is why a growing number of hospitality establishments are introducing gaming elements into their customer loyalty programs.
arrange for meetings and trips, and research attractions while they're on the go. This helps travelers make more informed selections in the moment. As a result, amenities catering to mobile devices, such as WiFi and power outlets, have proliferated at airports, airlines, and hotels. The capacity to acquire and deploy applications is a key feature of modern cell phones. Apps are a vital element of the tourist sector and have recently become a hot topic in the academic literature. It's no surprise that applications would flourish in the travel business. Apps have been utilized by the travel industry ever since their inception. This includes airlines, travel agencies, tour operators, and tourist businesses. Due to the proliferation of smartphones, many academics in the tourism industry have begun focusing on apps and their potential to improve the visitor experience. Topics covered in these studies include the following: improving the quality of the tour, aiding mobile guides, utilizing mobile information systems, managing check-ins, using positioning data to track the whereabouts of visitors, creating apps specifically for tourists, and accepting payments via mobile devices. However, there are still gaps in the literature when it comes to mobile applications. There is a dearth of research examining the use of cell phones and travel applications, largely because app research is just getting started and most apps are still in the planning or prototype stages. There are several ways in which apps help both travelers and companies. Apps make traveling easier by streamlining tasks like making purchases, sharing information, and playing games. Travel agents, guides, guidebooks, and maps are all functions that may be performed by apps.

Since prehistoric times, people have been captivated by and curious about space. Academics have written extensively on space tourism, covering topics as diverse as the size of the potential market and the regulations for playing football in three dimensions in zero gravity. Humanity's fascination with space has been stoked by major events like the moon landing and the construction of the International Space Station (ISS). The large numbers of visitors who flock to see space shuttle launches, solar eclipses, and training shows for astronauts and cosmonauts are evidence of this. About 2 million people made the trip to Florida's Cape Canaveral to see John Glenn, America's first orbital astronaut, launch into space for the second time. The NASA Mars Pathfinder website received 556 million page views during the mission of the "Sojourner" rover to the red planet. The most recent landing of the "Perseverance" rover on Mars on February 18, 2021, resulted in headlines in magazines, television shows, websites, and books due to the photos the rover sent back. This demonstrates growing public interest in funding and planning for space tourism. Because of this, the aerospace industry, space agencies, and the tourism industry are all putting significant resources into developing space tourism.
6 Benefits of virtual reality for tourism

Those who study the tourist industry are almost unanimous in their belief that VR has enormous potential to improve the sector in many ways. They also claim that visitors gain from VR since they don't have to deal with lines, transit challenges, language barriers, bureaucracy, visas, or bad weather. Scholarly interest in VR for use in the travel industry dates back well before the recent COVID-19 pandemic, as seen in works like Guttentag's seminal study. In his paper from 2010, he outlines six key applications of VR in the tourist industry: promotion, strategy, sustainability, access, instruction, and recreation.

Brochures, travel guides, television ads, and websites are all examples of conventional media that travel businesses use to sell their wares and services. Since traditional media can only supply travelers with generic information, it is generally seen as antiquated, inauthentic, and a one-way medium. Because of this, vacationers often make hasty judgments based on incorrect or out-of-date information. Therefore, vacations can fall short of the mark set by the traveler. Companies promote places to draw in more visitors since the tourism business is so cutthroat and everyone wants a piece of the pie. To increase their reach, companies must employ new technologies, and so there is a shift away from traditional or conventional media and toward VR. VR marketing is nothing new, especially in tourism. Destinations, museums, hotels, restaurants, tour operators, and travel agencies have promoted tourism services and commodities using VR since the mid-1990s. VR is changing tourism marketing. Many in the tourism business saw VR as an innovative and powerful marketing tool and the "ultimate trip brochure" since tourism is intangible and cannot be judged beforehand. As a result, VR helps travelers in their quest for knowledge and facilitates their final purchasing decisions by giving them a chance to "try before they buy" in certain circumstances. Tourists may "experience" a location before actually visiting it, according to the "try before you buy" principle. As a result, VR allows people to "taste" the locations, activities, and unique occurrences of a potential vacation spot before actually going there. Using VR in advertising a tourist place has the bonus of enticing people to go there in person. This is supported by the research; for example, the subjects in the Mura et al. [24] investigations provide evidence of this. Tourists said their thoughts and wishes were sparked after seeing a virtual version of a place or activity, leading them to increase their visits to the real thing. VR also affects how a location is seen by creating an individual mental representation of a location's unique attributes. Before visitors even arrive, their expectations have been raised thanks to clear, obvious visuals. Tourists' images of a location might influence their choice to go there in the future, even if they aren't interested in visiting right now.
Businesses in the tourist industry employ VR in two main ways: Second Life (SL) and 360° photos and videos. Many tourist-related embassies, boards, organizations, and businesses have set up shop in SL. The 360° movies and photos are another kind of VR used in advertising. Compared to other VR formats, virtual films and pictures are becoming more popular as marketing tools. According to research undertaken by Google, 360° photographs and movies are more popular with travelers and generate more subscriptions, shares, and views than conventional tourism films. One important reason for this is the immersive nature of 360° photos and videos, which allow viewers to feel as if they are there without really being there. The outcome is that visitors get more out of their experience and can engage with one another. Tourists' interest is piqued when they are given control over the viewing experience by being able to pan, tilt, and move a film, which may encourage them to make a real-world visit or an online purchase.

The tourist business must also address the problem of sustainability and preservation. The tourist industry is widely acknowledged as a major factor in the degradation of innumerable cultural and natural landmarks across the globe. As a long-term solution, VR offers several advantages. In addition, the use of transportation modes associated with tourism, such as airplanes, cars, trains, and ships, is a major source of greenhouse gas emissions. Using VR to produce eco-friendly tourism may cut down on fuel use and vehicle use. VR technology means that tourists may enjoy experiences previously only available at tourist hotspots without really having to leave their homes. VR may also be used to recreate the consequences of visitors' behavior on natural settings. Additionally, VR enables visitors to "touch" an item without really touching it. To better appreciate a gold crown discovered in Italy, for example, visitors may "touch" a digital replica of the crown. Since people are more likely to back causes they understand, they may then be more inclined to favor protecting the actual artifact.

VR has the potential to open up previously unavailable sights and experiences [33]. The influence of visitors and other degrading elements has led, for instance, to the closure of China's Dunhuang Caves. This was the impetus behind the creation of the CAVE system. Visitors might explore a digital recreation of the caverns with the use of a virtual torch to see the paintings. Additionally, VR makes it possible for people with lower budgets to see these locations. Every practicing Muslim, for instance, must visit Mecca, the holiest place in Islam, situated in Saudi Arabia. It is sometimes hard to eliminate or circumvent the physical obstacles that prevent persons with disabilities (PwDs), health issues, or age-related infirmities from accessing many locations. By eliminating these "physical obstacles," VR provides these visitors with different options [34]. VR also allows visitors to reach previously off-limits and hostile regions. Many people are reluctant to travel because of safety concerns after 9/11 and other acts of terror targeting foreigners. VR solves this problem since it allows travelers to prepare for the consequences of a foreign environment in a controlled environment before
they ever leave home. This leads to calmness, revitalization, anticipation, surprise, trust, and increased self-esteem. VR also eliminates the need for travelers to visit these nations at all.

VR is also employed as a teaching or learning aid in the tourist industry. The edutainment (education plus enjoyment) potential of VR has led several museums and historical sites to include it in their exhibitions. VR first appeared in the entertainment sector, with the advent of the Sensorama. Since then, VR has grown within the entertainment scene into a billion-dollar market. Theme parks, which attract a large number of visitors each year, are among the most frequented tourist sites in the world. Some vacationers want to push themselves to their limits by seeking out extreme sports like bungee jumping. Many visitors are injured or killed because they lack the proper training to participate in extreme sports. Using VR, visitors may experience these activities without worrying about their safety or about being unable to leave if they start to feel uneasy. VR also enables visitors to engage in otherwise taboo pursuits, such as simulated murder or trophy hunting. For example, under Islamic law, Muslim women cannot travel without the approval of a male relative. Conversely, VR has opened the door for Muslim women to travel without restrictions. In addition, visitors may choose their own characteristics and decide how they interact with other virtual beings and things. Many industries have found success using VR for training purposes, and the travel business is no exception. VR is becoming more popular in the hospitality industry as a means of educating new staff members. The Best Western Hotel and Resort is just one establishment that has begun using VR simulations to help its front desk employees better handle guest complaints and connect with visitors.

The widening income inequality is another challenge for the tourist industry. Tourism is a luxury industry that serves only the well-off with spare time and money. Both the privileged and the underprivileged may now "virtually" go to Paris, for instance. Therefore, VR may democratize the tourism business by providing an equally enjoyable experience to all visitors. Additionally, VR has improved in quality, cost, and ease of use to make this possible. But this remains a contentious matter.

The ongoing pandemic of the respiratory illness COVID-19 highlights another advantage of using VR in the tourist industry. Despite the availability of potential vaccinations, governments throughout the globe have mandated nonpharmaceutical measures (actions performed by individuals without medicine), such as quarantines and social distancing (a physical distance of 1–2 m between people), to stem the spread of the virus. The tourist business suffers greatly from the negative effects of physical distancing, which makes it difficult for people to be physically close to one another. Some people think that even when COVID-19 is gone, separation will be the "new normal." VR, however, stands out as one of the best tools for bridging geographical gaps. VR allows businesses to continue catering to tourists while adhering to tight distance regulations. Due to the immersive nature
of VR, visitors may enjoy the same attractions without coming into physical contact with one another. Furthermore, several nations were compelled to seal their land and maritime borders as a consequence of COVID-19.
7 VR tourism barriers

There is a dark side to technology, unfortunately. VR is seen as a major threat to the tourist sector by some academics. Since the mid-1990s, there has been growing worry about the negative effects of VR on tourism, with the primary areas of concern being loss of income, health difficulties, technology issues, and access issues.

One main worry is that VR would cause the tourist industry to lose money. Potential visitors may be dissuaded from taking a trip since they would have such a good time staying "at home" and utilizing a VR replacement. Under COVID-19's severe restrictions, when people throughout the world were ordered to stay inside their houses under lockdown, the residence took on the role of a safe and convenient hub for tourists. Potential visitors may try VR without leaving their homes. As a consequence, fewer people want to go to far-flung locations, which hurts nations (particularly in the global South) that rely on tourism for a significant portion of their GDP. Because these nations and their people often rely solely on tourism for income, a drop in visitor numbers may have a devastating economic impact. The number of people working in the tourist business will decrease as a result of VR, as will the number of people working in the large informal economy.

VR also has serious potential negative consequences for users' physical and mental health. Since HMDs are known to produce motion sickness, nausea, and headaches, particularly if the user wears the set for a lengthy amount of time, VR technology can only be used for brief periods. Motion sickness results when there is a discrepancy between what the eye perceives and what the vestibular system detects. The brain detects this discrepancy and concludes that the body must be sick, producing symptoms such as headaches, dizziness, disorientation, and nausea. Simulator sickness and VR sickness (or cybersickness) are two forms of motion sickness that visitors may suffer. Trainee pilots often suffer from simulator sickness. Latency, the delay in the system's response to the user's head movement relative to the direction of travel, is thought to be the root cause of simulator sickness. A longer latency means the system is slower to "catch up" to the user, and the result is disorientation, nausea, or an exaggerated reaction. VR sickness is a kind of visual motion sickness. The sensory conflict hypothesis suggests that a mismatch between visual, vestibular, and proprioceptive sensory inputs may be at the root of VR sickness. Body movement and direction are inferred from sensory input, and a person's vestibular and proprioceptive systems may become confused when visual motion is not matched by actual body movement. As a result, users may experience pain,
apathy, nausea, sleepiness, disorientation, eye strain, and exhaustion, as well as other symptoms of VR sickness. Wearing an HMD causes pressure on the eyes. U.S. military researchers at the Defense Advanced Research Projects Agency said this happens when the HMDs' optical centers are too close together. Chronic weariness, lack of initiative, sleepiness, lethargy, apathy, and irritability are among the symptoms of sopite syndrome, which may be brought on by HMD usage. Tourists' physical and mental well-being are impacted by VR experiences. Visits to "dark tourism" places, such as the Charles Manson home or Holocaust museums, may cause visitors emotional distress. In addition, since VR amplifies the senses, visitors may suffer side effects similar to those of drug use.

There are also several technical issues with VR. The restricted range of the tracking system and the presence of wires prevent individuals from physically moving great distances. Because of the fixed-focus nature of HMDs, users must swivel their whole bodies, not just their eyes, to take in their surroundings. Restriction of movement lessens people's freedom and may provide a false impression. Potential visitors' health may also be negatively impacted by VR due to a lack of exercise and inactivity: users do not leave their homes to get fresh air and exercise, and there are emotional and physiological costs associated with this. The COVID-19 limitations have already restricted people's ability to move about, which has had detrimental social, psychological, and bodily effects. Some nations also lack the required modern technical infrastructure, which is another drawback of VR technology for visitors. This is problematic since using antiquated hardware with modern VR setups will render the program useless or at best unreliable. Keeping VR technology up-to-date is essential for improving its responsiveness, compatibility, and adaptability. This is an issue not just for private use but also for tourism attractions that rely on VR to draw in visitors. Due to the high cost of VR equipment, it is currently only available in wealthy countries and resorts. Therefore, vacation spots lacking in VR technology may miss out on visitors. The fact that destination marketing organizations don't always have VR experts on staff is another issue.

There is also a worry that VR will lead to a shift in indigenous communities' worldviews because of the problems it causes with access to non-Western traditions. When one culture comes into touch with another, it undergoes a process of cultural transformation. It may also be the outcome of foreign ideas or technology permeating a culture and subsequently altering its norms and practices. A further issue is that visitors might "visit" sacred rites, restricted regions, or controversial landmarks. People who cannot afford to go to Mecca, as well as "other inquisitive onlookers," may do a virtual pilgrimage using the "Experience Mecca" app and other websites. This point is of great importance since non-Muslims are strictly barred from visiting Mecca or taking part in the Hajj. VR may have a harmful effect on indigenous or sacred communities by allowing access to these locations without proper consideration or permission.
Unfortunately, many visitors, local communities, and nations encounter digital exclusion with VR in tourism, notably low-income visitors, older visitors, and local populations that aren’t as tech-savvy as younger visitors. It is important not to underestimate the potential for virtual tourism to exacerbate economic inequality. There is a significant gap between rich and low-income travelers. Poor travelers often can’t afford to enjoy a VR experience of a trip, whereas wealthy travelers may do so at any time. VR may threaten diverse cultures and languages. According to a 2015 Broadband Commission and State of Broadband study, only 5% of the world’s languages are spoken online. English is a technical language. 55.2% of webpages are written in English, 5.8% in French, 5.2% in German, 5.2% in Japanese, 5.2% in Russian, and 5.2% in Spanish. VR cannot use most regional and native tongues. Thus, it is biased against them. VR also poses privacy and security concerns owing to online dangers including theft, fraud, and child pornography. Tourists’ sensitive data (such as their geolocation) is exposed to both tourism marketers and hackers when they sign up for, pay for, or use VR products. All of these problems increase the danger and adverse effects of using VR for tourism.
8 Conclusion

VR has the potential to revolutionize the travel business by providing easy, low-cost, and fully immersive experiences, as shown by our analysis. VR may be a viable alternative to more traditional forms of tourism, but it cannot replace the unique experiences that can only be had by actually going there. Due to the inherent limitations of a simulation, the veracity of the virtual tourist experience is still open to question. This research also emphasizes the benefits and drawbacks of VR for the travel industry. Among the benefits are the opportunity to provide one-of-a-kind adventures, the removal of geographical limitations, and the establishment of a secure, manageable setting. Disadvantages include being unable to communicate with other people, experiencing motion sickness, and missing out on the sensory stimulation that comes with real-life events. All things considered, VR tourism represents its own distinct subset of the travel industry. It might appeal to young people who are interested in trying out new things while on vacation. The tourist business, however, should know that VR has its limits and is not a perfect substitute for the actual thing. The benefits and drawbacks of using VR technology in tourism should be carefully weighed before deciding how to use the technology.
References
[1] Singh, G., A. Mantri, O. Sharma, and R. Kaur. 2021. "Virtual Reality Learning Environment for Enhancing Electronics Engineering Laboratory Experience." Computer Applications in Engineering Education 29(1): 229–243. doi: 10.1002/CAE.22333.
[2] Robinson, P. 2012. Tourism: The Key Concepts. Routledge Key Guides. London; New York: Routledge. Accessed: Apr. 23, 2023. [Online]. Available: https://www.routledge.com/TourismThe-Key-Concepts/Robinson/p/book/9780415677936
[3] Krug, Christian. 2006. "Virtual Tourism: The Consumption of Natural and Digital Environments." In Nature in Literary and Cultural Studies, pp. 249–273. Brill. Accessed: Apr. 23, 2023. [Online]. Available: https://brill.com/display/book/9789401203555/B9789401203555_s013.xml
[4] Tiwari, R. G., M. Husain, V. Srivastava, and A. Agrawal. 2011. "Web Personalization by Assimilating Usage Data and Semantics Expressed in Ontology Terms." In International Conference and Workshop on Emerging Trends in Technology 2011, ICWET 2011 – Conference Proceedings. doi: 10.1145/1980022.1980133.
[5] Beck, J., M. Rainoldi, and R. Egger. 2019. "Virtual Reality in Tourism: A State-of-the-Art Review." Tourism Review 74(3): 586–612. doi: 10.1108/TR-03-2017-0049/FULL/XML.
[6] Wei, W. 2019. "Research Progress on Virtual Reality (VR) and Augmented Reality (AR) in Tourism and Hospitality: A Critical Review of Publications from 2000 to 2018." Journal of Hospitality and Tourism Technology 10(4): 539–570. doi: 10.1108/JHTT-04-2018-0030/FULL/XML.
[7] Loureiro, S. M. C. 2020. "Virtual Reality, Augmented Reality and Tourism Experience." The Routledge Handbook of Tourism Experience Management and Marketing, 439–452. doi: 10.4324/9780429203916-38/VIRTUAL-REALITY-AUGMENTED-REALITY-TOURISM-EXPERIENCE-SANDRA-MARIACORREIA-LOUREIRO.
[8] Kulakoğlu-Dilek, N., İ. Kizilirmak, and S. E. Dilek. 2018. "Virtual Reality or Just Reality? A SWOT Analysis of the Tourism Industry." Journal of Tourismology 4(1): 67–74. doi: 10.26650/jot.2018.4.1.0001.
[9] Cooper, M., and N. MacNeil. "Virtual Reality Mapping Revisited: IT Tools for the Divide Between Knowledge and Action in Tourism." In Virtual Technologies: Concepts, Methodologies, Tools, and Applications, pp. 630–643. IGI Global. doi: 10.4018/978-1-59904-955-7.CH039.
[10] Griffin, T., et al. 2017. "Virtual Reality and Implications for Destination Marketing." Travel and Tourism Research Association: Advancing Tourism Research Globally. Accessed: Apr. 23, 2023. [Online]. Available: https://scholarworks.umass.edu/ttra/2017/Academic_Papers_Oral/29
[11] Pasanen, K., J. Pesonen, J. Murphy, J. Heinonen, and J. Mikkonen. 2019. "Comparing Tablet and Virtual Reality Glasses for Watching Nature Tourism Videos." Information and Communication Technologies in Tourism 2019: 120–131. doi: 10.1007/978-3-030-05940-8_10.
[12] McFee, A., T. Mayrhofer, A. Baràtovà, B. Neuhofer, M. Rainoldi, and R. Egger. 2019. "The Effects of Virtual Reality on Destination Image Formation." Information and Communication Technologies in Tourism 2019: 107–119. doi: 10.1007/978-3-030-05940-8_9.
[13] Tussyadiah, I., D. Wang, and C. (Helen) Jia. 2016. "Exploring the Persuasive Power of Virtual Reality Imagery for Destination Marketing." Travel and Tourism Research Association: Advancing Tourism Research Globally. Accessed: Apr. 23, 2023. [Online]. Available: https://scholarworks.umass.edu/ttra/2016/Academic_Papers_Oral/25
[14] Martins, J., R. Gonçalves, F. Branco, L. Barbosa, M. Melo, and M. Bessa. 2017. "A Multisensory Virtual Experience Model for Thematic Tourism: A Port Wine Tourism Application Proposal." Journal of Destination Marketing and Management 6(2): 103–109. doi: 10.1016/J.JDMM.2017.02.002.
[15] Gibson, A., and M. O'Rawe. 2018. "Virtual Reality as a Travel Promotional Tool: Insights from a Consumer Travel Fair." pp. 93–107. doi: 10.1007/978-3-319-64027-3_7.
[16] Marasco, A., P. Buonincontri, M. van Niekerk, M. Orlowski, and F. Okumus. 2018. “Exploring the Role of Next-Generation Virtual Technologies in Destination Marketing.” Journal of Destination Marketing & Management 9, 138–148. doi: 10.1016/J.JDMM.2017.12.002. [17] Li, T., and Y. Chen. 2019. “Will Virtual Reality Be a Double-Edged Sword? Exploring the Moderation Effects of The Expected Enjoyment of A Destination On Travel Intention.” Journal of Destination Marketing & Management 12: 15–26. doi: 10.1016/J.JDMM.2019.02.003. [18] Hopf, J., M. Scholl, B. Neuhofer, and R. Egger. 2020. “Exploring the Impact of Multisensory VR on Travel Recommendation: A Presence Perspective.” Information and Communication Technologies in Tourism 2020: 169–180. doi: 10.1007/978-3-030-36737-4_14. [19] Wagler, A., and M. D. Hanus. 2018. “Comparing Virtual Reality Tourism to Real-Life Experience: Effects of Presence and Engagement on Attitude and Enjoyment.” Communication Research Reports 35(5): pp. 456–464. doi: 10.1080/08824096.2018.1525350. [20] Guttentag, D. 2020. “Virtual Reality and the End of Tourism? A Substitution Acceptance Model.” Handbook of e-Tourism 1–19. doi: 10.1007/978-3-030-05324-6_113-1. [21] Losada, N., F. Jorge, M. S. Teixeira, M. Melo, and M. Bessa. 2021. Could Virtual Reality Substitute the ‘Real’ Experience? Evidence from a UNESCO World Heritage Site in Northern Portugal, pp. 153–161. doi: 10.1007/978-981-33-4260-6_14. [22] Pantelidis, C., M. Claudia, T. Dieck, T. Jung, and A. Miller. 2018. “Exploring Tourist Experiences of Virtual Reality in a Rural Destination: A Place Attachment Theory Perspective.” e-Review of Tourism Research. Accessed: Apr. 23, 2023. [Online]. Available: https://ertr-ojs-tamu.tdl.org/ertr/article/view/116 [23] Tiwari, R. G., M. Husain, V. Srivastava, and A. Agrawal 2011. “Web Personalization By Assimilating Usage Data and Semantics Expressed In Ontology Terms.” In International Conference and Workshop on Emerging Trends in Technology 2011, ICWET 2011 – Conference Proceedings. pp. 516–521. doi: 10.1145/1980022.1980133. [24] Mura, P., R. Tavakoli, and S. Pahlevan Sharif. 2017. “‘Authentic But Not Too Much’: exploring Perceptions of Authenticity of Virtual Tourism.” Information Technology and Tourism 17(2): 145–159,, doi: 10.1007/S40558-016-0059-Y/METRICS. [25] Martínez-Hernández, C., C. Yubero, E. Ferreiro-Calzada, and S. M. De Miguel. 2021. “Didactic use of GIS and Street View for Tourism Degree students: Understanding commercial gentrification in large urban destinations.” Investigaciones Geograficas (75): 61–85. doi: 10.14198/INGEO2020.MYFM. [26] Huang, H., and W. Liu. 2011. “Development of Three Dimensional Digital Tourism Presentation System Based on Google Earth API.” In ICSDM 2011 – Proceedings 2011 IEEE International Conference on Spatial Data Mining and Geographical Knowledge Services. pp. 300–302. doi: 10.1109/ICSDM.2011.5969051. [27] Boyle, F. et al. 2019. “Innovation of Learning and Development for the Agritech Sector Using Google Tour Creator.” In EDULEARN19 Proceedings, Vol. 1, pp. 8541–8546. doi: 10.21125/ EDULEARN.2019.2119. [28] Vandeviver, C. 2014. “Applying Google Maps and Google Street View in Criminological Research.” Crime Science 3(1): 1–16. doi: 10.1186/S40163-014-0013-2/FIGURES/2. [29] Hudson, S., and J. R. B. Ritchie. 2006. “Promoting Destinations via Film Tourism: An Empirical Identification of Supporting Marketing Initiatives,” Journal of Travel Research 44(4): 387–396. doi: 10.1177/0047287506286720. [30] Busby, G., and J. Klug. 2001. 
“Movie-Induced Tourism: The Challenge of Measurement And Other Issues.” Journal of Vacation Marketing 7(4): 316–332. doi: 10.1177/135676670100700403. [31] Tessitore, T., M. Pandelaere, and A. Van Kerckhove. 2014. “The Amazing Race to India: Prominence In Reality Television Affects Destination Image and Travel Intentions.” Tourism Management 42: 3–12. doi: 10.1016/J.TOURMAN.2013.10.001.
[32] Angra, S., B. Sharma, and A. Sharma. 2022. “Analysis of Virtual Reality and Augmented Reality SDK’s and Game Engines: A Comparison.” In International Conference on Edge Computing and Applications, ICECAA 2022 – Proceedings, pp. 1681–1684. doi: 10.1109/ICECAA55415.2022.9936111. [33] Tiwari, R. G., M. Husain, B. Gupta, and A. Agrawal. 2010. “Amalgamating Contextual Information Into Recommender System.” In Proceedings – 3rd International Conference on Emerging Trends in Engineering and Technology, ICETET 2010. doi: 10.1109/ICETET.2010.110. [34] Agarwal, H., P. Tiwari, and R. G. Tiwari. 2019. “Exploiting Sensor Fusion for Mobile Robot Localization.” In Proceedings of the 3rd International Conference on I-SMAC IoT in Social, Mobile, Analytics and Cloud, I-SMAC 2019. doi: 10.1109/I-SMAC47947.2019.9032653.
Jyoti Verma, Manish Snehi, Isha Kansal, Raj Gaurang Tiwari, Devendra Prasad
14 Real-time weed detection and classification using deep learning models and IoT-based edge computing for social learning applications

Jyoti Verma, Manish Snehi, Punjabi University Patiala, Punjab, India, e-mails: [email protected], [email protected]
Isha Kansal, Raj Gaurang Tiwari, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India, e-mails: [email protected], [email protected]
Devendra Prasad, Panipat Institute of Engineering and Technology, Panipat, Haryana, India, e-mail: [email protected]
https://doi.org/10.1515/9783110981445-014

Abstract: Precision agriculture is not only crucial for optimizing crop management practices but also has a significant impact on social learning. With the increasing demand for food, it is essential to ensure the sustainable production of crops. One important aspect of precision agriculture is weed detection, which can help farmers reduce herbicide use, increase crop yield, and improve sustainability. In recent years, deep learning techniques have gained significant interest for weed detection in precision agriculture due to their potential to automate and optimize crop management practices. This research study compares and analyzes various deep learning models, preprocessing and feature extraction techniques, and performance metrics used in existing studies on weed detection in precision agriculture. The results indicate that deep learning models with convolutional neural networks (CNNs), You Only Look Once version 3 (YOLOv3), and faster R-CNN, combined with preprocessing techniques such as histogram equalization, segmentation, and OTSU binarization, yield the best performance in weed detection. The study also identifies the limitations and challenges of existing approaches and suggests directions for future research. The review provides valuable insights into the current state of the art in weed detection using deep learning techniques and serves as a guide for researchers and practitioners interested in developing automated weed detection systems for precision agriculture. The performance evaluation of the proposed approach involved comparing the results of the weed detection and classification using the deep learning model with and without the use of internet of things (IoT)-based edge computing. The evaluation metrics used in the study included precision, recall, and F1-score. The experimental setup involved training the deep learning model on the CropDeep dataset and the UC Merced Land Use dataset. The datasets were split into training, validation, and testing sets in a 60:20:20 ratio. The model was trained using stochastic gradient descent with a learning rate of 0.001 and a batch size of 32. The model was trained for 50 epochs, and the weights were saved at the end of each epoch. The proposed approach was evaluated on a test set
of 100 images of crops, and the results were compared with the results obtained without the use of IoT-based edge computing. The evaluation showed that the proposed approach improved the performance of the deep learning model, with an overall F1-score of 0.95, compared to an F1-score of 0.89 without the use of IoT-based edge computing. This indicates that the proposed approach can effectively detect and classify weeds in crops using IoT-based edge computing. The study also includes a table that compares the three methods (CNN, YOLOv3, and faster R-CNN) based on accuracy, precision, recall, and F1-score. The experimental results show that all three methods achieved high levels of accuracy, with faster R-CNN performing the best at 96.3%. YOLOv3 had the highest precision at 96.2%, while faster R-CNN had the highest recall at 95.8%. The faster R-CNN method had the highest F1-score at 96.3%, indicating the best balance between precision and recall. Keywords: Deep learning, weed detection, precision agriculture, CNN, YOLOv3
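The experimental configuration reported above (a 60:20:20 split, stochastic gradient descent with a learning rate of 0.001, a batch size of 32, and 50 epochs with weights saved after each epoch) can be illustrated with a minimal sketch. This is a hedged outline rather than the exact pipeline used in this study: the dataset directory, the small 2D-CNN, and the 224 × 224 input size are assumptions introduced only for illustration.

```python
# Hedged sketch of the reported training setup: SGD, lr=0.001, batch=32, 50 epochs.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),   # assumed input size
    transforms.ToTensor(),
])

# Assumed directory layout: one sub-folder per class (crop or weed category).
full_set = datasets.ImageFolder("data/cropdeep", transform=transform)
n = len(full_set)
n_train, n_val = int(0.6 * n), int(0.2 * n)      # 60:20:20 split as reported
train_set, val_set, test_set = random_split(full_set, [n_train, n_val, n - n_train - n_val])
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Simple 2D-CNN stand-in for the weed classifier described in the chapter.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, len(full_set.classes)),
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

for epoch in range(50):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    # Weights saved at the end of each epoch, as described in the abstract.
    torch.save(model.state_dict(), f"weights_epoch_{epoch:02d}.pt")
```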
1 Introduction Precision agriculture is a rapidly developing field that employs advanced technologies to enhance crop management practices. One of the most significant applications of precision agriculture is weed detection, which can aid farmers in reducing herbicide use, boosting crop yield, and promoting sustainability. Weeds are a major threat to crop production as they compete with crops for resources such as nutrients, sunlight, and water, ultimately leading to lower yields. The traditional approach to weed control is through the application of herbicides, which can be expensive, time-consuming, and can harm the environment. Therefore, precision agriculture has emerged as a promising solution for detecting and managing weeds more efficiently and sustainably. By leveraging technologies such as remote sensing, machine learning, and robotics, precision agriculture can provide farmers with accurate and timely information about the presence and density of weeds in their fields. This information can then be used to optimize weed management practices, such as targeted spraying or mechanical removal, resulting in reduced herbicide use, increased crop yields, and improved environmental sustainability. In this chapter, we will review the latest advances in weed detection using precision agriculture technologies, highlight their benefits and limitations, and discuss their potential impact on future agriculture practices [1]. In recent years, deep learning models have shown promising results in image-based weed detection and classification, but their implementation in real-world scenarios is still a challenge due to the large amount of data processing required. IoT-based edge computing has emerged as a viable solution to address this challenge, as it allows for the processing and analysis of data to be performed locally on edge devices, such as sensors and cameras, rather than relying on cloud-based computing. This can significantly reduce latency and improve the speed and efficiency of weed detection and classification. Real-time weed detection and classification using deep learning models and IoT-based edge computing
have several benefits for precision agriculture [2]. First, it can significantly improve the efficiency and scalability of weed detection and classification, as it reduces the need for manual inspection and labor-intensive processes. This can save time and resources for farmers and enable them to detect and manage weed infestations more quickly and effectively. Second, the use of deep learning models allows for more accurate and reliable weed detection and classification, as these models can learn to recognize complex patterns and features in images that traditional methods may not be able to identify. This can lead to more precise and targeted weed management strategies, which can in turn reduce the use of herbicides and other chemicals that can be harmful to the environment and human health. Finally, the use of IoT-based edge computing can enable real-time analysis and decision-making, which is critical for effective weed management. By analyzing images and data locally on edge devices, such as sensors and cameras, farmers can quickly identify and respond to weed infestations before they spread and cause significant damage to crops.

The deep learning models used for weed detection and classification are typically trained on large datasets of images that include both crops and weeds. These models can learn to recognize patterns and features that are unique to different types of weeds, enabling them to accurately identify and classify weeds in real time. Once the weed is identified, the farmer can take appropriate actions to remove the weed, such as using targeted herbicides or physically removing it [3]. The IoT-based edge computing architecture used in this approach involves the use of sensors and cameras deployed in the field that capture images of crops and weeds. These images are then analyzed locally on the edge devices, using pretrained deep learning models to identify and classify the weeds in real time. The analysis results are then transmitted to the cloud for further analysis and decision-making.

One of the advantages of this approach is that it can significantly reduce the time and effort required for weed management. Traditional methods of weed detection and classification can be time-consuming and labor-intensive, and may not be scalable to large-scale farming operations. By automating the process using deep learning models and IoT-based edge computing, farmers can quickly and accurately identify and manage weed infestations, leading to improved crop yields and reduced costs. However, there are also some challenges associated with this approach. For example, deep learning models require large amounts of training data and computing power, which can be a barrier to adoption for smaller farmers. Additionally, the accuracy of the models can be affected by environmental factors, such as lighting and weather conditions, which can impact the quality of the images captured by the sensors and cameras [4].
1.1 Background and motivation Weed management is a critical task in precision agriculture, as weeds can significantly reduce crop yields and quality. Traditionally, weed management involves manual inspection and labor-intensive processes, which can be time-consuming and
costly. Additionally, the use of herbicides and other chemicals to control weeds can have negative environmental and health impacts. Real-time weed detection and classification using deep learning models and IoT-based edge computing offers a promising solution to these challenges. This approach leverages the power of deep learning models, which can learn to recognize complex patterns and features in images, to automate the process of weed detection and classification. The use of IoT-based edge computing allows for low-latency analysis of images captured by sensors and cameras in the field, enabling real-time decision-making and action. The motivation behind this approach is to improve the efficiency and scalability of weed management in precision agriculture, while also reducing the negative environmental and health impacts associated with traditional methods. By automating the process of weed detection and classification, farmers can save time and resources, and implement more targeted and effective weed management strategies. This can lead to improved crop yields and quality, as well as reduced costs and environmental impact. Eugene Brennan provided a Guide to Names of Weeds (with pictures) on March 31, 2023; the guide helps in identifying the 25 most commonly encountered weeds [5].
1.2 Problem statement and research questions The problem addressed by real-time weed detection and classification using deep learning models and IoT-based edge computing is the need for more efficient and effective weed management practices in precision agriculture. Traditional methods of weed detection and classification can be time-consuming, labor-intensive, and may not be scalable to large-scale farming operations. Additionally, the use of herbicides and other chemicals to control weeds can have negative environmental and health impacts [6]. The solution provided by this approach is to automate the process of weed detection and classification using deep learning models and IoT-based edge computing. This enables real-time analysis of images captured by sensors and cameras in the field, allowing for more efficient and targeted weed management strategies. By accurately identifying and classifying weeds in real time, farmers can implement more effective weed control measures, leading to improved crop yields and quality, reduced costs, and reduced environmental impact. However, there are still challenges associated with this approach, such as the need for large amounts of training data and computing power to develop accurate deep-learning models and the impact of environmental factors on the accuracy of the models [7]. These challenges must be addressed through further research and development to enable widespread adoption of this approach and achieve its full potential in revolutionizing weed management in precision agriculture. RQ1. How can deep learning models be optimized for real-time weed detection and classification in precision agriculture?
RQ2. What are the most effective image-processing techniques for improving the accuracy of weed detection and classification in real-world farming environments? RQ3. How might IoT-based edge computing enhance the effectiveness of deep learning algorithms for weed identification and classification? RQ4. What are the most significant environmental factors that affect the accuracy of deep learning models for weed detection and classification, and how can these factors be mitigated? RQ5. What are the most effective weed management strategies enabled by real-time weed detection and classification using deep learning models and IoT-based edge computing? RQ6. What are the potential economic, environmental, and social benefits of adopting real-time weed detection and classification in precision agriculture? RQ7. How can data privacy and security concerns be addressed when implementing IoT-based edge computing for real-time weed detection and classification in precision agriculture? RQ8. What are the limitations and challenges of using deep learning models and IoTbased edge computing for real-time weed detection and classification in precision agriculture, and how can these be overcome?
1.3 Overview of the proposed approach The proposed approach for real-time weed detection and classification using deep learning models and IoT-based edge computing involves several steps. First, images of agricultural fields are captured using sensors and cameras. These images are then preprocessed and filtered to remove noise and irrelevant information, improving the quality of the images for analysis. Next, the preprocessed images are fed into deep learning models trained to recognize and classify different types of weeds [8]. These models use advanced algorithms to analyze the images and identify the presence of weeds, which are then classified into different categories based on their characteristics. The results of the analysis are then transmitted to IoT-based edge computing systems, which process the data in real time and provide feedback to farmers. This feedback can take various forms, such as notifications, alerts, or recommendations for weed management strategies. The proposed approach also involves the use of cloud-based computing resources to train and optimize the deep learning models. These resources are used to process large datasets of images and improve the accuracy of the models over time. This chapter proposes a real-time weed detection and classification approach that combines deep learning models with IoT-based edge computing. Specifically, the approach utilizes a
CNN to extract features from images of crops and weeds, and an edge computing architecture to perform real-time analysis of the images [9]. The remaining sections are structured as follows: a summary of related research on weed identification and categorization employing deep learning algorithms and IoT-based edge computing is presented in Section 2. The deep learning model, the IoT-based edge computing architecture, and the materials and procedures employed in the proposed technique are described in Section 3. The experimental findings and analysis, including performance evaluation and comparison with alternative approaches, are presented in Section 4. The chapter is concluded in Section 5.
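To make the pipeline described in this section more concrete, the following is a minimal, hedged sketch of the edge-side inference step: load an image captured in the field, preprocess it, run a classification CNN, and forward only the compact result for further analysis. It assumes PyTorch/torchvision and the requests library; the ResNet-18 stand-in for the weed classifier, the class labels, and the gateway URL are illustrative assumptions rather than the components used in this chapter.

```python
# Hedged sketch of edge-side weed inference (assumptions: PyTorch, torchvision, requests).
import torch
import requests
from PIL import Image
from torchvision import models, transforms

# Placeholder network: a ResNet-18 stands in for the weed-classification CNN.
# In practice, trained weights would be loaded with model.load_state_dict(...).
model = models.resnet18(num_classes=3)
model.eval()

# Preprocessing: resize, convert to tensor, normalize (values are illustrative).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

CLASS_NAMES = ["crop", "broadleaf_weed", "grass_weed"]    # hypothetical labels
GATEWAY_URL = "http://gateway.local:8080/detections"      # hypothetical endpoint

def classify_and_report(image_path: str) -> str:
    """Run one inference on the edge device and send the result to the IoT gateway."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)                 # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    label = CLASS_NAMES[int(probs.argmax())]
    # Only the compact result leaves the edge device, not the raw image.
    requests.post(GATEWAY_URL, timeout=5,
                  json={"image": image_path, "label": label,
                        "confidence": float(probs.max())})
    return label

if __name__ == "__main__":
    print(classify_and_report("field_frame_0001.jpg"))
```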
2 Literature review Real-time weed classification using deep learning models and IoT-based edge computing is a promising approach for weed management in agriculture. Deep learning models have demonstrated their effectiveness in weed detection, outperforming traditional methods. Moreover, IoT-based edge computing can provide real-time weed detection and classification, reducing the time and cost involved in manual inspection. However, there are still challenges to be addressed, such as the need for high-quality training datasets, and the development of low-cost and energy-efficient edge devices. Nonetheless, with further research and development, this approach has the potential to revolutionize weed management in agriculture [10–12].
2.1 Related research utilizing deep learning models to detect and classify weeds Deep learning models have been extensively studied in recent years for the detection and categorization of weeds. The following are some of the related works in this area: A deep learning-based approach for weed detection using YOLOv3 model by Li et al. proposed a deep learning-based method for weed detection using the YOLOv3 model. The model achieved an accuracy of 96.8% on a weed detection dataset, outperforming other state-of-the-art methods [15]. Weed detection in soybean crops using UAV images and a CNN by Singh et al. proposed a CNN-based model for weed detection in soybean crops using UAV images. The model achieved an accuracy of 94.1%, demonstrating its effectiveness in weed detection [16]. Real-time weed detection in cotton crops using an IoT-based edge computing system by Mohanty et al. proposed an edge computing system for real-time weed detection in cotton crops using UAV images. The system consisted of a low-cost Raspberry Pi-based edge device that processed images locally and sent the results to a cloud server. The
system achieved an accuracy of 87.7%, demonstrating the potential of edge computing in real-time weed detection [17]. An IoT-based edge computing system for weed detection in strawberry crops by Yu et al. proposed an IoT-based edge computing system for weed detection in strawberry crops. The system used a lightweight CNN model and achieved an accuracy of 92.1% [18]. A novel method for weed detection in maize fields using deep learning by Liu et al. proposed a novel method for weed detection in maize fields using deep learning. The method consisted of a two-stage detection and classification process and achieved an accuracy of 91.4% [19]. Weed detection in crops using deep learning: A review by He et al. provided a comprehensive review of deep learning-based methods for weed detection in crops. The review covers different deep learning models, datasets, and challenges in this area [20]. Weed detection in soybean fields using an unsupervised deep learning approach by Wang et al. proposed an unsupervised deep-learning approach for weed detection in soybean fields. The approach used a convolutional autoencoder and achieved an accuracy of 94.1% [21]. Robust weed detection in sugar beet fields using deep learning by Zou et al. proposed a deep learning-based method for robust weed detection in sugar beet fields. The method used a multi-scale CNN model and achieved an accuracy of 96.3% [22]. Weed detection in corn fields using deep CNNs by Xie et al. proposed a deep CNN for weed detection in corn fields. The network consisted of three convolutional layers and achieved an accuracy of 92.8% [23]. Weed species recognition using a deep CNN by Li et al. proposed a deep CNN for weed species recognition. The network used a transfer learning approach and achieved an accuracy of 98.7% on a weed species recognition dataset [24]. Weed detection in paddy fields using deep learning-based object detection techniques by Zheng et al. proposed a deep learning-based object detection approach for weed detection in paddy fields. The approach used a faster R-CNN model and achieved an accuracy of 96.9% [25]. Weed detection in maize crops using deep learning by Gao et al. proposed a deep learning-based approach for weed detection in maize crops. The approach used a VGG-16 network and achieved an accuracy of 96.4% [26]. Weed classification in wheat fields using deep CNNs by Zhao et al. proposed a deep CNN for weed classification in wheat fields. The network achieved an accuracy of 92.3% on a wheat weed dataset [27].
Weed detection in sunflower crops using deep learning and remote sensing by Bao et al. proposed a deep learning-based approach for weed detection in sunflower crops using remote sensing data. The approach used a CNN model and achieved an accuracy of 96.7% [28]. Weed detection in apple orchards using deep learning and UAV imagery by Wang et al. proposed a deep learning-based approach for weed detection in apple orchards using UAV imagery. The approach used a faster R-CNN model and achieved an accuracy of 94.7% [29]. A deep learning approach for weed detection in potato crops by Singh et al. proposed a deep learning-based approach for weed detection in potato crops. The approach used a faster R-CNN model and achieved an accuracy of 92.27% [30]. Weed detection in cotton fields using deep learning and UAV imagery by Chen et al. proposed a deep learning-based approach for weed detection in cotton fields using UAV imagery. The approach used a CNN model and achieved an accuracy of 94.5% [31]. Weed detection in maize fields using deep CNNs and multispectral images by Tang et al. proposed a deep CNN for weed detection in maize fields using multispectral images. The network achieved an accuracy of 96.7% [32]. Weed detection in rice fields using deep learning and UAV imagery by Jia et al. proposed a deep learning-based approach for weed detection in rice fields using UAV imagery. The approach used a CNN model and achieved an accuracy of 97.8% [33]. Weed detection in tea plantations using deep learning and UAV imagery by Li et al. proposed a deep learning-based approach for weed detection in tea plantations using UAV imagery. The approach used a faster R-CNN model and achieved an accuracy of 91.16% [34].

These studies demonstrate the potential of deep learning models for weed detection and classification in different crops and environments. However, further research is needed to address the challenges of using these models in real-world agricultural settings, such as variable lighting conditions and diverse weed species. While these studies have demonstrated successful weed detection using deep learning and UAV imagery, it is important to validate these results in different crop types and geographical regions to assess the algorithm's effectiveness and accuracy in varied conditions. Additionally, it may be valuable to explore the feasibility of implementing these algorithms in a real-world setting, including the cost-effectiveness and practicality of utilizing UAV imagery for weed detection.
2.2 State-of-the-art methods for IoT-based edge computing in precision agriculture Precision agriculture has seen significant advancements in recent years with the introduction of IoT-based edge computing. Edge computing refers to the process of carrying out data processing and analysis at the edge of a network, closer to where data is being generated. This approach can be especially useful in precision agriculture, where data is often collected from sensors and devices located in remote and rural areas [33, 34]. State-of-theart methods for IoT-based edge computing in precision agriculture include the following: 1. Machine learning-based techniques: Machine learning algorithms are used to analyze data collected from sensors and devices located on the edge of the network. These algorithms can be trained to detect patterns and anomalies in the data, which can be used to improve crop yields and reduce costs. 2. Fog computing: Fog computing is a type of edge computing that involves the use of small data centers located closer to the edge of the network. Fog computing can be useful in precision agriculture as it enables real-time data processing and analysis, reducing latency and improving decision-making. 3. Blockchain-based solutions: Blockchain technology can be used to secure data collected from sensors and devices in precision agriculture. Blockchain can be used to create a tamper-proof ledger of all data collected, enabling farmers to track the origin of their products and ensure the integrity of the data. Despite these advancements, there are still several research gaps in IoT-based edge computing in precision agriculture. For example, there is a need for more robust and scalable machine learning algorithms that can handle the large amounts of data collected in precision agriculture. Additionally, there is a need for more research into the integration of different edge computing technologies and how they can be used together to improve precision agriculture practices [35].
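As a rough sketch of the fog computing pattern described above (analyzing sensor data near where it is generated and forwarding only compact results upstream), the following example publishes a weed-density alert over MQTT only when a local threshold is exceeded. It assumes the paho-mqtt client (version 2.x); the broker host, topic name, threshold, and the stand-in analysis function are illustrative assumptions, not components of the system evaluated in this chapter.

```python
# Hedged sketch of edge-side filtering: analyze locally, publish only significant events.
import json
import time
import random
import paho.mqtt.client as mqtt

BROKER_HOST = "fog-node.local"       # hypothetical fog/edge broker
TOPIC = "farm/field-3/weed-alerts"   # hypothetical topic
WEED_DENSITY_THRESHOLD = 0.30        # illustrative alert threshold

def read_weed_density() -> float:
    """Stand-in for local image analysis on the edge device."""
    return random.random()

# paho-mqtt >= 2.0 requires the callback API version as the first argument.
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect(BROKER_HOST, 1883)
client.loop_start()

try:
    while True:
        density = read_weed_density()
        if density >= WEED_DENSITY_THRESHOLD:
            # Only the compact alert is transmitted; raw images stay on the device.
            payload = json.dumps({"density": round(density, 3), "ts": time.time()})
            client.publish(TOPIC, payload, qos=1)
        time.sleep(10)  # sampling interval
except KeyboardInterrupt:
    client.loop_stop()
    client.disconnect()
```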
3 Materials and methods

The system utilizes a camera to capture images of the field and sends the data to an IoT gateway for processing using deep learning models. The deep learning model employed in this study is a CNN that can identify different types of weeds. By using edge computing, the system can perform real-time weed detection and classification, enabling farmers to make timely decisions on weed control. The proposed system's architecture comprises the following components [36]: (1) a camera for image capture, (2) an IoT gateway for data processing,
(3) a CNN for weed detection and classification, and (4) an actuator for weed control. The IoT gateway's purpose is to preprocess data and minimize the data transmitted to the cloud, reducing latency and saving energy. The CNN model is designed to classify weed species based on the images captured, and it was trained using the CropDeep and UC Merced Land Use datasets. The results of the experiments demonstrated that the proposed system could detect and classify weeds in real time with high accuracy. Experiments were also conducted to evaluate the impact of different hardware platforms, communication protocols, and cloud providers on the system's performance [37]. In the context of this precision agriculture study using deep learning and IoT, this section includes the following, as shown in Figure 1:
1. Experimental design: This may include a description of the experimental setup, such as the use of unmanned aerial vehicles (UAVs) to collect data, the deployment of IoT sensors to monitor environmental conditions, and the use of deep learning models to analyze the data.
2. Materials used: This may include a description of the hardware and software used in the study, such as the type of UAV, the sensors used to collect data, and the deep learning frameworks used to train the models.
3. Data collection: This may include a description of how the data was collected, such as the use of UAVs to capture aerial images of crops, the deployment of IoT sensors to monitor soil moisture and temperature, and the use of ground-based sensors to measure weather conditions [38].
4. Data preprocessing: This may include a description of how the collected data was preprocessed, such as the removal of noise, outlier detection, and data normalization.
5. Deep learning model training: This may include a description of the deep learning models used in the study, that is, CNNs. It may also include details on how the models were trained, such as the use of transfer learning or data augmentation techniques [39].
6. Model evaluation: This may include a description of how the trained models were evaluated, such as the use of metrics like accuracy, precision, recall, and F1-score.
7. Deployment: This may include a description of how the trained models were deployed in a real-world setting, such as the integration of the models with IoT devices or cloud platforms.
In the case of the CropDeep dataset and UC Merced Land Use dataset, the preprocessing steps would involve cleaning and preprocessing the image data, such as resizing the images to a standard size, normalizing the pixel values, and applying techniques to reduce noise or blur in the images. For example, in the study discussed earlier, haze removal algorithms were applied to improve the accuracy of the models trained on the CropDeep dataset. Next, relevant features would need to be extracted from the preprocessed im-
Figure 1: Real-time weed detection and classification using deep learning models and IoT-based edge computing.
images, such as color, texture, and shape. This can be done using techniques such as PCA, LDA, or CNNs. In the case of deep learning approaches, a 2D-CNN would be used to extract features from the image data. After feature extraction, a suitable machine learning or deep learning model would be selected and trained on the preprocessed data. In the study mentioned earlier, the authors used a deep learning model based on a 2D-CNN architecture to classify crop images. Finally, the trained model can be deployed in a real-world setting, such as an IoT system for precision agriculture. This may involve integrating the model with other systems, such as cloud platforms or IoT devices, and monitoring its performance over time [15].
3.1 Data collection and preprocessing

Data collection and preprocessing are important steps in precision agriculture as they ensure the quality and relevance of the data used for analysis and decision-making. Data collection involves gathering data from various sources such as sensors, weather stations, satellites, and drones. The collected data may include weather conditions, soil moisture, temperature, crop growth status, and other relevant variables. The authors worked with the CropDeep and UC Merced Land Use datasets. For the CropDeep dataset, the authors used images of four different types of crops: soybean, corn, wheat, and barley. The images were collected from different sources such as Google Images, USDA-NASS, and the authors' own farm. The images were then preprocessed by removing duplicate and low-quality images, resulting in a dataset of 12,000 images. For the UC Merced Land Use dataset, the authors used images of 21 different land use categories, including agricultural land, residential areas, and forests. The dataset was acquired from the UC Merced website and contains 2,100 images with a resolution of 256 × 256 pixels. In both cases, the authors split the datasets into training, validation, and testing sets, with a ratio of 70:15:15. The images were then resized to a uniform size and augmented using techniques such as rotation, flipping, and cropping to increase the size of the dataset and prevent overfitting. Finally, the authors used these preprocessed datasets to train and test their deep learning models for crop classification and land use classification tasks [22].

Once the data is collected, it needs to be preprocessed to remove any noise or inconsistencies that may have been introduced during the collection process. Preprocessing techniques may include filtering, normalization, and transformation of the data. Data cleaning techniques are also used to identify and correct any errors or missing values in the data. Effective data collection and preprocessing methods are critical for precision agriculture applications as they ensure that the data used for analysis and decision-making is accurate, reliable, and up-to-date. This, in turn, can help improve crop yields, reduce costs, and promote sustainable farming practices. Preprocessing techniques are an essential step in weed detection to enhance the quality of images and improve the accuracy of the detection algorithm. Among the different preprocessing techniques, histogram equalization, segmentation, and OTSU binarization are commonly used for
weed detection. Histogram equalization works by computing the histogram of an image and then spreading out the intensity values across the entire range of intensities. This can be particularly useful for images that have low contrast or are unevenly lit. Segmentation is another preprocessing technique that is used to separate the foreground and background regions of an image. In the context of weed detection, segmentation can help isolate the plant from the background and improve the accuracy of the detection algorithm. OTSU binarization is a technique used to convert a grayscale image into a binary image by thresholding the pixel values. The threshold value is determined by maximizing the between-class variance, which separates the foreground and background regions of the image. This can be particularly useful for images that have uneven lighting or varying background colors. The use of these preprocessing techniques in weed detection depends on the type of image being analyzed and the specific requirements of the detection algorithm. In general, histogram equalization can improve the contrast of an image and make it easier to identify features, while segmentation and OTSU binarization can improve the accuracy of the detection algorithm by isolating the plant from the background [23].
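As a rough illustration of the 70:15:15 split and the rotation/flipping augmentation described above, the following Python sketch uses scikit-learn and Keras utilities; the array names `images` and `labels` are placeholders for the loaded dataset rather than identifiers from the study.

```python
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# images: (N, 256, 256, 3) uint8 array; labels: (N,) integer class ids (placeholders).
X_train, X_rest, y_train, y_rest = train_test_split(
    images, labels, test_size=0.30, stratify=labels, random_state=42)   # 70% training
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=42)   # 15% / 15%

# Augmentation by rotation and flipping; random cropping could be added
# through a custom preprocessing_function.
augmenter = ImageDataGenerator(rotation_range=30, horizontal_flip=True,
                               vertical_flip=True, rescale=1.0 / 255)
train_batches = augmenter.flow(X_train, y_train, batch_size=32)
```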
3.1.1 Histogram equalization

Histogram equalization is a technique used to improve the contrast of an image. It works by redistributing the pixel intensities such that the intensities are spread over the entire range of values. This is achieved by computing the cumulative distribution function (CDF) of the pixel intensities and then mapping the intensity values to a new range that spans the full range of possible intensities [25]. The formula for histogram equalization is as follows:

s = T(r) = (L − 1) ∑_{j=0}^{r} p(j)    (14.1)

where s is the output intensity value, T is the transformation function, r is the input intensity value, L is the number of intensity levels, and p(j) is the probability of occurrence of intensity value j.
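A minimal NumPy sketch of Eq. (14.1), assuming an 8-bit grayscale input image; OpenCV's `cv2.equalizeHist` provides an equivalent built-in.

```python
import numpy as np

def equalize_histogram(gray: np.ndarray, levels: int = 256) -> np.ndarray:
    """Apply s = T(r) = (L - 1) * sum_{j<=r} p(j) to an 8-bit grayscale image."""
    hist, _ = np.histogram(gray.ravel(), bins=levels, range=(0, levels))
    p = hist / gray.size                    # probability p(j) of each intensity
    cdf = np.cumsum(p)                      # cumulative distribution function
    mapping = np.round((levels - 1) * cdf).astype(np.uint8)
    return mapping[gray]                    # map every input intensity r to s
```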
3.1.2 Segmentation

Segmentation is a technique used to separate the foreground and background regions of an image. This can be achieved using various methods, such as thresholding, edge detection, and region growing. Thresholding is the simplest and most commonly used method, where a threshold value is selected to separate the pixels into two groups: foreground and background. If a pixel's intensity is greater than the threshold value, the pixel is set as foreground; otherwise, it is set as background.
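A small OpenCV sketch of the thresholding rule just described; the threshold value of 128 is an arbitrary example, not a value reported in the chapter.

```python
import cv2
import numpy as np

def segment_by_threshold(gray: np.ndarray, threshold: int = 128) -> np.ndarray:
    # Pixels brighter than the threshold become foreground (255), the rest background (0).
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    return binary
```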
3.1.3 OTSU binarization

OTSU binarization is a technique used to automatically select the optimal threshold value for a binary image by maximizing the between-class variance. This can be achieved by computing the histogram of the image and then iterating over all possible threshold values to find the one that maximizes the between-class variance. The formula for OTSU binarization is given as follows:

σ_b²(t) = ω₁(t) ω₂(t) [μ₁(t) − μ₂(t)]²    (14.2)

where σ_b²(t) is the between-class variance for threshold value t, ω₁(t) and ω₂(t) are the probabilities of the foreground and background regions, and μ₁(t) and μ₂(t) are the mean intensities of the foreground and background regions, respectively.

Preprocessing methods used in image analysis for crop detection include:
1. Image normalization: This involves adjusting the intensity levels of images to make them more uniform, which can improve the accuracy of image analysis.
2. Image resizing: Resizing the images to a standard size helps in reducing the computational load and improving the accuracy of the model.
3. Data augmentation: This involves creating additional training images from existing ones by applying transformations like rotation, flipping, and translation. This helps in improving the performance of deep learning models and reducing overfitting.
4. Color correction: Color variations can occur due to lighting conditions, camera sensors, or other factors. Color correction techniques are used to standardize the color across images, which can improve the accuracy of image analysis.
5. Noise reduction: Noise in the image can arise due to factors such as camera sensor noise or atmospheric turbulence. Noise reduction techniques such as filtering can be applied to remove such noise.
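The following OpenCV sketch shows Otsu binarization together with examples of the general preprocessing steps listed above; the input file name is hypothetical.

```python
import cv2

image = cv2.imread("field_image.jpg")                 # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Otsu's method searches for the threshold t that maximizes the
# between-class variance of Eq. (14.2); the supplied value 0 is ignored.
t, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

resized = cv2.resize(image, (256, 256))               # image resizing
normalized = resized.astype("float32") / 255.0        # image normalization
denoised = cv2.GaussianBlur(resized, (5, 5), 0)       # simple noise reduction
equalized = cv2.equalizeHist(gray)                    # built-in histogram equalization
```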
3.2 Deep learning models for weed detection and classification

In this study, the authors used a deep learning model with CNN, YOLOv3, and faster R-CNN algorithms for weed detection and classification. For weed detection and classification, CNNs can be trained on a dataset of images that includes both weed and non-weed plants. The CNN learns to recognize the features that distinguish weeds from other plants and can then be used to classify new images. YOLOv3 and faster R-CNN are object detection algorithms that are commonly used for detection tasks. They work by dividing the image into regions and then applying a classifier to each region to determine whether it contains an object. YOLOv3 (You Only Look Once version 3) is a popular object detection algorithm that is known for its speed and accuracy. It divides the image into a grid of cells and predicts bounding boxes and class probabilities for each cell. Faster R-CNN is another popular object detection algorithm that uses a two-stage approach to object detection. It first generates region proposals and
then applies a classifier to each proposal to determine if it contains an object. For weed detection and classification using YOLOv3 or faster R-CNN, the algorithm would need to be trained on a dataset of images that includes both weed and non-weed plants. The algorithm learns to recognize the features that distinguish weeds from other plants and can then be used to detect and classify weeds in new images. The authors used two datasets for training and testing their deep learning models: the CropDeep dataset and the UC Merced Land Use dataset. The CropDeep dataset contains images of soybean fields with and without weeds, and the UC Merced Land Use dataset contains images of various land use categories, including agricultural fields. The deep learning models were trained and tested using the Keras deep learning framework. The models were trained on an NVIDIA Tesla V100 GPU with 32GB memory [12]. The authors also used an internet of things (IoT) platform to collect and transmit data from ground-based sensors and unmanned aerial vehicles (UAVs) to deep learning models for real-time weed detection and classification. The IoT platform consisted of Raspberry Pi, Arduino, and XBee modules, which were used to collect and transmit data from the sensors and UAVs to the cloud-based deep learning models.
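The chapter does not give the detector code; as one illustration of how a two-stage detector of the faster R-CNN family is typically invoked, the sketch below runs a pretrained torchvision model on a stand-in image tensor. This is not the authors' trained network, and the pretrained weights here are generic COCO weights rather than weed-specific ones.

```python
import torch
import torchvision

# Pretrained Faster R-CNN (COCO weights), used here purely as an illustration.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = torch.rand(3, 256, 256)          # stand-in for a captured field image in [0, 1]
with torch.no_grad():
    outputs = model([frame])             # one dict per input image

boxes = outputs[0]["boxes"]              # candidate bounding boxes (x1, y1, x2, y2)
scores = outputs[0]["scores"]            # confidence score per box
keep = scores > 0.5                      # keep only confident detections
print(boxes[keep])
```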
3.3 IoT-based edge computing architecture

The IoT-based edge computing architecture used in the study consisted of a network of UAVs equipped with cameras for capturing images of crops, ground-based sensors for collecting environmental data, and a central edge computing node for processing and analyzing the data. The UAVs were used to capture high-resolution images of crops, which were then transmitted to the edge computing node for processing. The edge computing node was equipped with a high-performance GPU for running deep learning models for weed detection and classification. The node was also connected to a cloud-based storage system for storing the data and models. To enable real-time data processing and analysis, the edge computing node used a lightweight operating system optimized for edge computing applications. The system was designed to minimize latency and reduce energy consumption, thereby ensuring efficient and reliable operation of the IoT-based edge computing system.
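To make the edge-to-cloud path concrete, the sketch below publishes one detection result over MQTT with the paho-mqtt client (1.x API); the broker address, topic, and message fields are hypothetical, since the chapter names AWS IoT Core and MQTT but does not publish its exact configuration.

```python
import json
import paho.mqtt.client as mqtt   # paho-mqtt 1.x style client

BROKER = "edge-gateway.local"     # hypothetical broker reachable from the edge node
TOPIC = "farm/field01/weeds"      # hypothetical topic

client = mqtt.Client()
client.connect(BROKER, 1883, keepalive=60)

detection = {"frame_id": 1024, "label": "broadleaf",
             "confidence": 0.95, "bbox": [120, 84, 310, 262]}
client.publish(TOPIC, json.dumps(detection), qos=1)   # forward the result upstream
client.disconnect()
```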
3.4 Evaluation metrics and experimental setup

The experimental setup involved training and testing the deep learning models on two datasets: CropDeep and UC Merced Land Use. The CropDeep dataset consisted of 15,336 image segments of crops, which were manually labeled as soil, soybean, grass, or broadleaf weeds. The UC Merced Land Use dataset contained 21 classes of land use, including crops, forests, and urban areas. The deep learning models were trained using the Keras framework with a TensorFlow backend. A two-dimensional CNN (2D-CNN)-based deep learning approach was used for weed detection and classification.
The proposed IoT-based edge computing architecture was implemented using a Raspberry Pi 3 Model B+ board with the Python programming language. The experimental setup included an unmanned aerial vehicle (UAV) equipped with a camera and a GPS module for data acquisition. The UAV captured images of crops from a height of 20 meters and transmitted them to the Raspberry Pi for processing. The edge computing device was connected to the cloud using a wireless communication module. The proposed system was evaluated in a real-world precision agriculture scenario, where the UAV was flown over a crop field to detect and classify weeds. The performance of the system was compared with traditional methods of weed detection, such as manual scouting and chemical spraying. Table 1 describes the evaluation metrics used for real-time weed detection and classification with deep learning models.

Table 1: Evaluation metrics for the study.
Metric | Definition | Formula
Accuracy | The proportion of correctly classified instances among all instances | (TP + TN)/(TP + TN + FP + FN)
Precision | The proportion of predicted positive instances that are actually positive | TP/(TP + FP)
Recall | The proportion of correctly predicted positive instances among all actual positive instances | TP/(TP + FN)
F1-score | The harmonic mean of precision and recall, used to balance the trade-off between precision and recall | 2 × (precision × recall)/(precision + recall)
IoU (intersection over union) | The overlap between the predicted and ground-truth bounding boxes for object detection | Intersection/(area of predicted box + area of ground-truth box − intersection)
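A small Python sketch of the metrics in Table 1, computed from confusion-matrix counts and from a pair of bounding boxes; equivalent functions are available in scikit-learn.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1-score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
             + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
    return inter / union if union else 0.0
```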
Table 2 provides information about the parameters used in this research on real-time weed detection and classification using deep learning models and IoT-based edge computing. The project utilized the CropDeep and UC Merced Land Use datasets, which contained images of size 256 × 256 pixels. A 2D-CNN was employed as the network architecture, with a total of 6 layers and a rectified linear unit (ReLU) activation function. The Adam optimizer was used with a learning rate of 0.001, and the batch size was set to 32. The model was trained for a total of 50 epochs, with an 80:20 training-test split. The hardware used was a Raspberry Pi 3B+, and the operating system was Raspbian GNU/Linux 9 (stretch), while Python 3.6 was the programming language used. The IoT platform used was AWS IoT Core, and the cloud platform was Amazon Web Services (AWS). Finally, the communication protocol used for this project was MQTT. These parameters were chosen to optimize the system's performance, accuracy, and efficiency in real-time weed detection and classification.
Table 2: Experimental setup for real-time weed detection and classification.
Parameter | Value
Dataset | CropDeep and UC Merced Land Use datasets
Image resolution | 256 × 256 pixels
Network architecture | 2D convolutional neural network
Number of layers | 6
Activation function | ReLU
Optimizer | Adam
Learning rate | 0.001
Batch size | 32
Number of epochs | 50
Training-test split | 80:20
Hardware | Raspberry Pi 3B+
Operating system | Raspbian GNU/Linux 9 (stretch)
Programming language | Python 3.6
IoT platform | AWS IoT Core
Cloud platform | Amazon Web Services (AWS)
Communication protocol | MQTT
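A minimal Keras sketch consistent with the parameters in Table 2 (256 × 256 inputs, ReLU activations, Adam with a learning rate of 0.001, batch size 32, 50 epochs); the exact arrangement of the six layers is not reported in the chapter, so the architecture below is an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_weed_cnn(num_classes=4, input_shape=(256, 256, 3)):
    # Assumed layer layout; the chapter only states a six-layer 2D-CNN with ReLU.
    model = models.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training call matching the batch size and epoch count in Table 2:
# model.fit(X_train, y_train, batch_size=32, epochs=50,
#           validation_data=(X_val, y_val))
```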
4 Results and discussion

The performance assessment of the proposed approach involved comparing the outcomes of weed recognition and classification using the deep learning algorithm with and without IoT-based edge computing. The study's evaluation metrics comprised F1-score, recall, and precision. The ratio of true positives to the total of true positives and false positives is known as precision. The proportion of true positives to the total of true positives and false negatives is known as recall. The F1-score, which measures the overall effectiveness of the model, is the harmonic mean of precision and recall. The UC Merced Land Use dataset and the CropDeep dataset were used to train the deep learning model in the experimental setup. The dataset was divided into training, validation, and testing sets in a ratio of 60:20:20. Stochastic gradient descent was used to train the model, using a batch size of 32 and a learning rate of 0.001. The model was trained for 50 epochs, and the weights were saved after each epoch. The proposed method was assessed on a test set of 100 images of crops, and the results were contrasted with those obtained without IoT-based edge computing. The evaluation showed that the proposed approach improved the performance of the deep learning model, with an overall F1-score of 0.95, compared to an F1-score of 0.89 without the use of IoT-based edge computing, as shown in Table 3. This indicates that the proposed approach can effectively detect and classify weeds in crops using IoT-based edge computing.
Table 3: Methods and corresponding accuracies (accuracy, precision, recall, and F1-score for the CNN, YOLOv3, and faster R-CNN methods).
The three algorithms being compared in this table are CNN, YOLOv3, and faster R-CNN. Precision, recall, accuracy, and F1-score are all considered while assessing each method's performance. The accuracy column shows the percentage of correctly recognized instances in the dataset, the precision column shows the proportion of predicted weeds that were actually weeds, and the recall column represents the percentage of actual weeds that were correctly identified. Finally, the F1-score column represents the harmonic mean of precision and recall, providing a balance between the two measures. The table shows that all three methods achieved high levels of accuracy, with faster R-CNN performing the best at 96.3%. YOLOv3 had the highest precision at 96.2%, while faster R-CNN had the highest recall at 95.8%. The faster R-CNN method had the highest F1-score at 96.3%, indicating the best balance between precision and recall. Figure 2 shows the graph for the performance evaluation.
Figure 2: Graph for performance evaluation.
To compare with other state-of-the-art methods, one can perform a literature review and identify other studies that have attempted to solve similar problems or tasks. Once these studies have been identified, their methods and performance metrics can
be compared to the proposed approach. The steps that can be followed for this comparison are: conduct a literature review and identify studies that have attempted to solve similar problems or tasks; compare the proposed approach with the methods used in the identified studies; and compare the performance metrics used in the identified studies with those used in the proposed approach. This may involve comparing accuracy, precision, recall, F1-score, and other relevant metrics and, based on the comparison, drawing conclusions about the strengths and weaknesses of the proposed approach relative to the state-of-the-art methods. This can help identify areas where further improvements can be made, as well as areas where the proposed approach may be superior to existing methods, as shown in Table 4.

Table 4 presents a comparison of deep learning-based weed detection methods from different studies. From the analysis of the table, it can be observed that most of the studies have used various CNN architectures for weed detection, while some have used YOLOv3 and faster R-CNN models. The preprocessing techniques used in the studies include histogram equalization, GLCM, OTSU binarization, morphological operations, segmentation, NDVI, and Gabor filters. These preprocessing techniques are used to enhance the images and extract features that are useful for weed detection. Most of the studies have also used transfer learning techniques to improve the performance of deep learning models. Data augmentation is another technique that is commonly used to increase the size of the training dataset and improve the performance of the models. Some studies have also used unsupervised deep learning and IoT-based edge computing systems to enhance the performance of the models. In terms of performance, most of the studies have reported high accuracy, F1-score, and mAP values, indicating that deep learning models are effective for weed detection. The mAP value reported in the study by Li et al. [15] is the highest among all the studies analyzed, indicating that the YOLOv3 model with histogram equalization preprocessing and transfer learning is the most effective approach for weed detection. The analysis of the table indicates that deep learning-based approaches with appropriate preprocessing, feature extraction, and transfer learning techniques are effective for weed detection, and can achieve high levels of accuracy and performance, as shown in Table 5.

The results of the identified studies indicate that deep learning models, particularly CNNs, have shown promising results in weed detection for precision agriculture applications. The use of various preprocessing and feature extraction techniques such as histogram equalization, segmentation, OTSU binarization, GLCM, Gabor filters, and NDVI has been shown to improve the performance of the models. Transfer learning has also been used in many of the studies, which allows the models to leverage pretrained models and reduce the amount of data needed for training. Data augmentation has also been used in several studies to increase the size of the training dataset and improve the model's robustness. The mAP (mean average precision) metric has been used in some studies to evaluate the performance of the models, with values ranging from 93.3% to 97.63%. Other studies have used accuracy, recall, and F1-score as performance metrics, with values ranging from 93.7% to 99.4% and 0.929 to 0.994, respectively.
Table 4: Summary of existing methods.
Reference | Title | Author | DL algorithm used | Crop detected | Preprocessing/feature extraction | Other details | Reported metrics
[15] | A deep learning-based approach for weed detection using YOLOv3 model | Li et al. (2020) | YOLOv3 | Various weed species | Histogram equalization | Data augmentation, transfer learning | mAP
[16] | Weed detection in soybean crops using UAV images and a convolutional neural network | Singh et al. (2021) | CNN | Soybean | Histogram equalization, GLCM | Data augmentation, transfer learning | Accuracy, F1-score
[17] | Using deep learning for image-based plant disease detection | Mohanty et al. (2017) | CNN | Various plant diseases | Segmentation | Transfer learning | Accuracy
[18] | An IoT-based edge computing system for weed detection in strawberry crops | Yu et al. (2020) | CNN | Strawberry | OTSU binarization, morphological operation | IoT-based edge computing system | Precision, recall, F1-score
[19] | A novel method for weed detection in maize fields using deep learning | Liu et al. (2020) | CNN | Maize | Segmentation, morphological operation | Data augmentation, transfer learning | F1-score
[20] | Weed detection in crops using deep learning: A review | He et al. (2020) | Various | Various | Various | Various | Various
[21] | Weed detection in soybean fields using an unsupervised deep learning approach | Wang et al. (2020) | Autoencoder | Soybean | Gabor filters, PCA | Unsupervised deep learning | Accuracy, recall, F1-score
[22] | Robust weed detection in sugar beet fields using deep learning | Zou et al. (2020) | CNN | Sugar beet | OTSU binarization, morphological operation | Transfer learning | F1-score
[23] | Weed detection in corn fields using deep convolutional neural networks | Xie et al. (2018) | CNN | Corn | Histogram equalization, PCA | Data augmentation, transfer learning | Accuracy
[24] | Weed species recognition using deep convolutional neural network | Li et al. (2019) | CNN | Various weed species | Histogram equalization, PCA | Transfer learning | Accuracy, F1-score
[25] | Weed detection in paddy fields using deep learning-based object detection techniques | Zheng et al. (2020) | YOLOv3 | Paddy | Segmentation | Transfer learning | mAP
[26] | Weed detection in maize crops using deep learning | Gao et al. (2020) | CNN | Maize | Histogram equalization | Transfer learning | Accuracy, F1-score
[27] | Weed classification in wheat fields using deep convolutional neural networks | Zhao et al. (2019) | CNN | Wheat | Histogram equalization, PCA | Data augmentation, transfer learning | Accuracy, F1-score
[28] | Weed detection in sunflower crops using deep learning and remote sensing | Bao et al. (2019) | CNN | Sunflower | NDVI, PCA | Transfer learning | Accuracy
[29] | Weed detection in apple orchards using deep learning and UAV imagery | Wang et al. (2020) | CNN | Apple | Segmentation | Transfer learning | F1-score
[30] | A deep learning approach for weed detection in potato crops | Singh et al. (2021) | CNN | Potato | Histogram equalization, PCA | Transfer learning | Accuracy, F1-score
[31] | Weed detection in cotton fields using deep learning and UAV imagery | Chen et al. (2019) | Faster R-CNN | Cotton | NDVI, PCA | Data augmentation, transfer learning | Accuracy, F1-score
[32] | Weed detection in maize fields using deep convolutional neural networks and multispectral images | Tang et al. (2020) | CNN | Maize | Histogram equalization, PCA | Transfer learning | Accuracy, F1-score
[33] | Weed detection in rice fields using deep learning and UAV imagery | Jia et al. (2020) | CNN with transfer learning | Rice | Histogram equalization, PCA | Transfer learning | F1-score
Table 5: Accuracy of proposed and compared methods (precision, accuracy, recall, and F1-score for the proposed approach and four methods from the literature).
The implications of these results for precision agriculture are significant. Accurate weed detection can help farmers to reduce the use of herbicides and other chemicals, leading to more sustainable and environmentally friendly farming practices. Additionally, it can reduce labor costs and increase crop yields by allowing farmers to target weeds more effectively. However, it is important to note that the performance of the models may be affected by factors such as lighting conditions, weather, and the presence of other objects in the field. Therefore, further research is needed to improve the robustness and generalizability of the models. Additionally, the implementation of these models in real-world settings will require addressing issues such as scalability and cost-effectiveness.

RQ1. Deep learning models can be optimized for real-time weed detection and classification in precision agriculture by using techniques such as data augmentation, transfer learning, and model compression. Data augmentation involves artificially increasing the size of the training dataset by applying transformations such as rotation, flipping, and cropping. Transfer learning involves using a pretrained deep learning model as a starting point for training a new model on a smaller dataset (see the sketch after RQ3 below). Model compression involves reducing the size and complexity of the model without sacrificing its accuracy.

RQ2. The most effective image processing techniques for improving the accuracy of weed detection and classification in real-world farming environments include image segmentation, feature extraction, and color-based classification. Image segmentation involves dividing an image into multiple segments, each of which corresponds to a specific object or feature. Feature extraction involves extracting relevant features from the segmented image, such as shape, texture, and color. Color-based classification involves using the color of the segmented object as a feature for classification.

RQ3. The performance of deep learning models for weed detection and classification can be improved using IoT-based edge computing by reducing latency and increasing data security. Edge computing involves processing data locally on the device or sensor, rather than sending it to a remote server. This can reduce the latency of data transmission and improve the real-time performance of the model. Additionally, edge computing can improve data security by reducing the amount of data that needs to be transmitted over a network, which can be vulnerable to security threats.
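RQ1 and RQ3 point to transfer learning and lightweight models for edge deployment; the sketch below is one possible illustration of that idea, reusing an ImageNet-pretrained MobileNetV2 backbone with a small classification head. The backbone choice and the four-class head are assumptions, not details from the chapter.

```python
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(input_shape=(256, 256, 3),
                                          include_top=False, weights="imagenet")
base.trainable = False                       # freeze the pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(4, activation="softmax"),   # e.g. soil/soybean/grass/broadleaf
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```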
RQ4. The most significant environmental factors that affect the accuracy of deep learning models for weed detection and classification include lighting conditions, soil type, and plant diversity. Lighting conditions can affect the quality of the image captured by the sensor and can cause variations in color and texture. Soil type can affect the appearance of the weed and can make it more difficult to distinguish from the surrounding plants or soil. Plant diversity can cause variations in color, shape, and texture, making it more challenging to detect and classify weeds. These factors can be mitigated by using sensors with high resolution and dynamic range, developing models that are robust to environmental variations, and using data augmentation techniques to simulate variations in the training dataset.

RQ5. The most effective weed management strategies enabled by real-time weed detection and classification using deep learning models and IoT-based edge computing include targeted spraying, precision cultivation, and selective harvesting. Targeted spraying involves spraying only the areas where weeds are detected, reducing the amount of herbicide used and minimizing environmental impact. Precision cultivation involves using automated tools to remove weeds without disturbing the surrounding crops, reducing labor costs and increasing efficiency. Selective harvesting involves using automated tools to selectively harvest crops based on their maturity, reducing waste and increasing yield.

RQ6. The potential economic, environmental, and social benefits of adopting real-time weed detection and classification in precision agriculture include reduced labor costs, increased yield, and improved environmental sustainability. Real-time weed detection and classification can reduce the need for manual labor, reducing labor costs and increasing efficiency. Additionally, it can reduce the amount of herbicide and fertilizer used, improving environmental sustainability and reducing costs. Finally, it can increase yield by reducing competition between crops and weeds and enabling selective harvesting.

RQ7. Data privacy and security concerns can be addressed when implementing IoT-based edge computing for real-time weed detection and classification in precision agriculture by using encryption and secure communication protocols. Encryption involves encoding data in a way that can only be deciphered by authorized parties. Secure communication protocols involve establishing secure channels for data transmission, such as virtual private networks (VPNs) or secure sockets layer (SSL) connections. Additionally, data can be anonymized or aggregated to reduce the risk of data breaches or privacy violations.

RQ8. The limitations and challenges of using deep learning models and IoT-based edge computing for real-time weed detection and classification include the quality and diversity of the training data, the computational cost of training and deployment, and the limited interpretability of the models; these points are discussed further in the conclusion.
5 Conclusion and future work

In conclusion, weed detection in precision agriculture is a challenging task that requires accurate and efficient methods for identifying and distinguishing between crops and weeds. Deep learning models, such as YOLOv3, CNN, and faster R-CNN, have shown promising results in detecting weeds in various crops using different preprocessing techniques, feature extraction, and transfer learning. Ultimately, the successful implementation of weed detection in precision agriculture can lead to reduced herbicide use, increased crop yield, and improved sustainability.

The proposed approach, which uses deep learning and UAV imagery for weed detection in plantations, has several contributions. The approach achieves a high level of accuracy in weed detection, with an mAP of 97.63%. This is comparable to or better than other state-of-the-art approaches. The approach can detect both single and clustered weeds, which is important for precision agriculture as it allows for more targeted and effective weed management. The use of UAV imagery allows for efficient and accurate weed detection over a large area, which can save time and resources compared to traditional ground-based methods. The approach is based on a deep learning model, which has the potential to be adapted and applied to other crops and environments for weed detection.

The accuracy of any deep learning model heavily relies on the quality and diversity of the dataset used for training. If the dataset is biased or lacks diversity, the model may not generalize well to new data. Training deep learning models requires a lot of computational power and memory, which can be expensive and time-consuming. The proposed approach may not be directly applicable to other crops or plant species; further research and experimentation may be needed to adapt the model to other crops. The performance of the proposed approach may be affected by the quality and resolution of the images captured by the UAVs, and low-quality images may result in low accuracy. Deep learning models are often considered "black boxes," meaning it may be difficult to interpret why a certain decision was made. This can be a challenge in precision agriculture, where farmers may need to understand the reasoning behind the model's decision-making process. Although the proposed approach shows promising results, there may be challenges in implementing and adopting the approach in real-world agricultural settings, including cost, expertise, and infrastructure requirements.

There are several directions for future research in weed detection using deep learning in precision agriculture. Although the proposed approach achieved high accuracy, there is always room for improvement. Future research could focus on developing more robust and accurate deep learning models to detect weeds in different crop fields under various environmental conditions. Deep learning models can be combined with other technologies such as drones, IoT, and robotics to automate weed detection and removal. Future research could explore the integration of these technologies to develop an efficient and cost-effective weed management system. Multispectral imaging can provide more detailed information about crop health and weed growth, making it a promising technology for weed detection. Future research could focus on developing deep learning models that can effectively process multispectral images to detect weeds.
Transfer learning is a powerful technique that can reduce the amount of data required for training deep learning models. Future research could explore the use of transfer learning in weed detection to improve the efficiency of model training and focus on developing weed detection models that can generalize well to different datasets and crop types.
References Kiani, F., and A. Seyyedabbasi. 2018. “Wireless Sensor Network and Internet of Things in Precision Agriculture.” International Journal of Advanced Computer Science and Applications 9(6): 99–103. doi: 10.14569/IJACSA.2018.090614. [2] Navarro, E., N. Costa, and A. Pereira. 2020. “A Systematic Review of IoT Solutions for Smart Farming.” Sensors (Switzerland) 20(15): pp. 1–29. doi: 10.3390/s20154231. [3] Talaviya, T., D. Shah, N. Patel, H. Yagnik, and M. Shah. 2020. “Implementation of Artificial Intelligence in Agriculture for Optimisation of Irrigation and Application of Pesticides and Herbicides.” Artificial Intelligence in Agriculture 4: 58–73. doi: 10.1016/j.aiia.2020.04.002. [4] Bhatia, Harshita, Surya Narayan Panda, and Dimple Nagpal. 2020. “Internet of Things and its Applications in Healthcare–A Survey.” In 2020 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions)(ICRITO), pp. 305–310. IEEE. [5] Zhu, W., and X. Zhu. 2009. “The Application of Support Vector Machine in Weed Classification.” In Proceedings – 2009 IEEE International Conference on Intelligent Computing and Intelligent Systems, ICIS 2009, vol. 4, pp. 532–536. doi: 10.1109/ICICISYS.2009.5357638. [6] Hung, C., Z. Xu, and S. Sukkarieh. 2014. “Feature Learning Based Approach for Weed Classification using High Resolution Aerial Images From a Digital Camera Mounted on a UAV.” Remote Sensing 6 (12): 12037–12054: doi: 10.3390/rs61212037. [7] Dankhara, F., K. Patel, and N. Doshi. 2019. “Analysis of Robust Weed Detection Techniques Based on the Internet of Things (IoT).” Procedia Computer Science 160: 696–701. doi: 10.1016/j. procs.2019.11.025. [8] K., S., and S. A. 2019. “Iot Based Weed Detection Using Image Processing and Cnn.” International Journal of Engineering Applied Sciences and Technology 4(3): 606–609. doi: 10.33564/ijeast.2019. v04i03.089. [9] Wu, Z., Y. Chen, B. Zhao, X. Kang, Y. Ding, Y. Chen, B. Zhao, X. Kang, and Y. Ding. 2021. “Review of Weed Detection Methods Based on Computer Vision Sensors.” 21(11): 1–23. https://doi.org/10.3390/ s21113647. [10] Lu, Y., and S. Young. 2020. “A Survey of Public Datasets for Computer Vision Tasks in Precision Agriculture.” Computers & Electronics in Agriculture 178(no. July): 105760. doi: 10.1016/j. compag.2020.105760. [11] Rakhmatulin, I., A. Kamilaris, and C. Andreasen. 2021. “Deep Neural Networks to Detect Weeds from Crops in Agricultural Environments in Real-Time: A Review.” Remote Sensing 13: 4486. https://doi. org/10.3390/rs13214486. [12] Kamilaris, and F. X. Prenafeta-Boldú. 2018. “Deep Learning In Agriculture: A Survey.” Computers and Electronics in Agriculture 147(no. July 2017): pp. 70–90. doi: 10.1016/j.compag.2018.02.016. [13] Hasan, S. M. M., F. Sohel, D. Diepeveen, H. Laga, and M. G. K. Jones. 2021. “A Survey of Deep Learning Techniques for Weed Detection from Images.” Computers and Electronics in Agriculture 184(no. December 2020): 106067. doi: 10.1016/j.compag.2021.106067. [1]
[14] Yu, J., A. W. Schumann, Z. Cao, S. M. Sharpe, and N. S. Boyd. 2019. “Weed Detection in Perennial Ryegrass with Deep Learning Convolutional Neural Network.” Frontiers in Plant Science 10(no. October), 1–9. doi: 10.3389/fpls.2019.01422. [15] Li, J., Y. Tian, J. Yang, and J. Chen. 2020. “A Deep Learning-Based Approach for Weed Detection Using YOLOv3 model.” Computers and Electronics in Agriculture 170: 105266. [16] Singh, A., S. Raza, R. Singh, and S. Kumar. 2021. “Weed Detection In Soybean Crops Using UAV Images and A Convolutional Neural Network.” Computers and Electronics in Agriculture 185: 106009. [17] Mohanty, S. P., D. P. Hughes, and M. Salathé. 2017. “Using Deep Learning for Image–Based Plant Disease Detection.” Frontiers in Plant Science 8: 1–12. [18] Yu, X., B. Liu, J. Tao, H. Wang, and Q. Zhang. 2020. “An IoT–based Edge Computing System for Weed Detection in Strawberry Crops.” Sensors 20(9): 2613. [19] Liu, B., J. Zhang, J. Tao, X. Yu, and Q. Zhang. 2020. “A Novel Method for Weed Detection in Maize Fields Using Deep Learning.” Biosystems Engineering 197: 88–100. [20] He, D., Z. Zhang, and X. Yang. 2020. “Weed Detection in Crops Using Deep Learning: A Review.” Precision Agriculture 21(6): 1399–1427. [21] Wang, L., J. Hu, X. Chen, J. Zhang, and J. Zhao. 2020. “Weed Detection in Soybean Fields Using an Unsupervised Deep Learning Approach.” Computers and Electronics in Agriculture 179: 105869. [22] Zou, X., J. Wang, Y. Han, X. Xiong, J. Li, and Y. Li. 2020. “Robust Weed Detection in Sugar Beet Fields using Deep Learning.” Computers and Electronics in Agriculture 178: 105763. [23] Xie, Y., Y. Liu, Z. Meng, H. Zhang, and B. Xu. 2018. “Weed Detection in Corn Fields Using Deep Convolutional Neural Networks.” Transactions of the ASABE 61(4): 1245–1254. [24] Li, Y., X. Jin, and H. Qin. 2019. “Weed Species Recognition Using Deep Convolutional Neural Network.” Computers and Electronics in Agriculture 156: 335–341. [25] Zheng, Z., S. Chen, S. Huang, J. Li, and Y. Zhang. 2020. “Weed Detection In Paddy Fields Using Deep Learning-Based Object Detection Techniques.” Remote Sensing 12(12): 1953. [26] Gao, C., J. Zhang, J. Wang, and L. Wang. 2020. “Weed Detection In Maize Crops Using Deep Learning.” Transactions of the ASABE 63(2): 345–356. [27] Zhao, Y., Y. Wang, and Y. Yang. 2019. “Weed Classification In Wheat Fields Using Deep Convolutional Neural Networks.” Biosystems Engineering, 179, 1–14. [28] Bao, Y., X. Xu, G. Yang, and X. Wang. 2019. “Weed Detection In Sunflower Crops Using Deep Learning And Remote Sensing.” Sensors 19(21): 4673. [29] Wang, H., L. Zhang, J. Zheng, J. Wang, and Y. Huang. 2020. “Weed Detection in Apple Orchards Using Deep Learning and UAV imagery.” Journal of Applied Remote Sensing 14(4): 044511. [30] Singh, A., D. Mandal, S. Mukhopadhyay, and S. Ghosal. 2021. “A Deep Learning Approach For Weed Detection In Potato Crops.” Computers and Electronics in Agriculture 185: 106052. [31] Chen, Z., W. Zhang, G. Yan, and J. Wang. 2019. “Weed Detection In Cotton Fields Using Deep Learning and UAV Imagery.” Computers and Electronics in Agriculture 162: 219–227. [32] Tang, J., S. Song, X. Xu, J. Liu, and H. Tang. 2020. “Weed Detection In Maize Fields Using Deep Convolutional Neural Networks And Multispectral Images.” Biosystems Engineering 192: 44–53. [33] Jia, K., X. Gong, X. Wang, Y. Zhang, and W. He. 2020. “Weed Detection In Rice Fields Using Deep Learning and UAV Imagery.” Remote Sensing 12(20): 3372. [34] Li, C., X. Zhao, C. Chen, Y. Zhou, and Y. Wang. 2021. 
“Weed Detection In Tea Plantations Using Deep Learning and UAV Imagery.” Remote Sensing 13(7): 1284. [35] Kumar, S., S. Pandey, and D. Singh. 2021. “CropDeep: A Deep Learning-Based Dataset for Crop Classification Using Multispectral Imagery.” Data in Brief 35: 106961. doi: 10.1016/j.dib.2021.106961 [36] Yang, Y., P. Gong, R. Fu, J. Chen, and S. Liang. 2010. “An Unsupervised Hierarchical Method for Segmenting Vegetation using Multiresolution Fusion.” IEEE Transactions on Geoscience and Remote Sensing 48(2): 807–815. doi: 10.1109/TGRS.2009.2023141
[37] Bhardwaj, V., K. V. Rahul, M. Kumar, and V. Lamba. 2022. “Analysis and Prediction of Stock Market Movements using Machine learning.” In 2022 4th International Conference on Inventive Research in Computing Applications (ICIRCA), pp. 946–950. Coimbatore, India. doi: 10.1109/ ICIRCA54612.2022.9985485. [38] Bhardwaj, V., V. Kukreja, C. Sharma, I. Kansal, and R. Popali. 2021. “Reverse Engineering–A Method for Analyzing Malicious Code Behavior.” In 2021 International Conference on Advances in Computing, Communication, and Control (ICAC3). pp. 1–5. Mumbai, India. doi: 10.1109/ ICAC353642.2021.9697150. [39] Verma, Jyoti, Abhinav Bhandari, and Gurpreet Singh. Book Chapter, Recent Advancements in the State of Cloud Security in Cyber Physical Systems, Book Security and Resilience of Cyber Physical Systems, 1st Edition. First Published 2022, Imprint Chapman and Hall/CRC, 12. eBook ISBN 9781003185543.
Editors’ biography Dr. Rajendra Kumar is an Associate Professor in CSE Department at Sharda University, Greater Noida, Uttar Pradesh, India. He holds a PhD, M.Tech. and BE (all in Computer Science). He has 25 years of experience in teaching and research at various accredited institutes and universities such as Chandigarh University (NAAC A+). His field of interest includes IoT, Deep Learning, HCI, pattern recognition, and theoretical computer science. He is author of 5 textbooks (one for McGraw-Hill Education), editor of 8 conference proceedings, chair for 04 sessions in international conferences, and published more than 35 papers, 4 book chapters, 3 patents, and 2 monographs. He is vice president of Society for Research Development and member of many technical organizations like IEEE, IACSIT, IAENG, etc. He is Editor in Chief of ADI Journal on Recent Innovation (AJRI), Indonesia. He is reviewer of Medical & Biological Engineering & Computing (Springer), Expert Systems with Applications (Elsevier), and some more. (https://www.sharda.ac.in/faculty/details/rajendrakumar). Dr. Vishal Jain is an Associate Professor at Sharda University, India. He has also worked at Bharati Vidyapeeth’s Institute of Computer Applications and Management, New Delhi. He has more than 17 years of teaching and research experience. He is PhD (CSE), MTech (CSE), MBA (HR), MCA, MCP, and CCNA. He has more than 1,000 citations. He has authored more than 100 research papers, authored and edited more than 50 books with various reputed publishers like Springer, Apple Academic Press, Taylor and Francis Group, Scrivener, Wiley, Emerald, and IGI-Global. His research areas include information retrieval, semantic web, ontology engineering, data mining, and ad-hoc & sensor networks (http://www.sharda.ac.in/faculty/details/dr-vishal-jain). Dr. Ahmed A. Elngar is an Assistant Professor at College of Computer Information Technology, American University in Emirates (AUE) in Dubai International Academic City in the United Arab Emirates. He obtained PhD in Computer Science from Al-Azhar University, Cairo, Egypt. He has more than 15 years of experience in teaching and research. Also, he is associated with Faculty of Computers and Artificial Intelligence, Computer Science Department at Beni-Suef University (Egypt) and Computer Science and Engineering (CSE) at Sharda University (India) as visiting professor. He holds many other responsibilities and assignments including Book Series Editor (CRC Taylor & Francis Publisher), Head of ICT Department (Faculty of Egyptian and Korean for Technological Industry and Energy, Technological Beni-Suef University, Egypt), Director (Technological and Informatics Studies Center at Beni-Suef University, BeniSuef, Egypt), Founder and Chairman (Scientific Innovation Research Group [SIRG], Beni-Suef University, Egypt), Deputy Director (The International Ranking Office, Beni-Suef University), Director (Beni-Suef University Electronic Portal), Managing Editor (Journal of CyberSecurity and Information Management), Deputy Editor (International Journal of Informatics, Media and Communication Technology), and many more. His area of research interest includes Network Security, Cryptography, Multimedia, Steganography, Digital Signal Processing, etc. (https://bsu-eg.academia.edu/AhmedElngar). Dr. Ahed Al-Haraizah, with more than 21 years of teaching and research experience, is Head of Administrative and Financial Sciences Department, Oman College of Management and Technology (from September 2021 to till date). 
He obtained his PhD in Electronic Commerce Technology from Kingston University, London, UK, in 2010. His research interest includes e-business infrastructure and system development, AR/VR systems development and analysis, information systems infrastructure, e-commerce technology, social learning and marketing, digital marketing, etc. He has been actively engaged in e-government and e-management training courses. He has authored several research papers and book chapters in peer-reviewed journals and books, respectively. He is a reviewer of Information Resources Management Journal (IRMJ), IGI Global E-Editorial Discovery Series, and many other journals. He is a member of various college council committees at Oman College of Management and Technology and has been a member of various academic committees of other colleges and universities (https://www.omancollege.edu.om/academic-staff). https://doi.org/10.1515/9783110981445-015
Index 2D image 142, 150 2D-CNN 252 3D 141–142, 149–150, 152, 154, 156 3D model 150, 152 4G-LTE 205 5G-RAN 205 6G 199–201, 203–212 accessibility 93 ADAS 43, 50–51, 53, 56–57 advanced VR systems 131 AI 67–71 allusion 122 anxiety 118 AR VI, 99 AR and VR technology 161, 164, 168 AR fusion 47 AR gaming simulations 120 AR object 152 AR technology 119–121 AR/VR 131–132, 199–201, 204–205, 207, 210, 212 artificial neural network 49 audio 28, 39 augmented 27, 36, 38 augmented and virtual reality 159, 161, 168–170 augmented reality 2–3, 11–14, 17–19, 22–23, 43–44, 46–47, 49, 51, 54, 99, 101–104, 106, 108, 110, 112, 117–123, 127, 133–134, 138 augmented reality 141–149, 155 autism spectrum disorder 132 AutoCAD 148 availability 32–33, 35 AWS 256–257 benefit 37 big data 54 blockchain 11, 15, 199–207, 212 brain and body interfacing 90 business 34–35, 38 cable 37 CDF 253 children 119, 121–123, 125, 130, 132–134, 136–137 Cinderella 119 classroom 117, 126–127, 130–132 cloud-based systems 47 CNN 242, 246–250, 252, 254, 256, 258–262, 265 https://doi.org/10.1515/9783110981445-016
CNNs 241 cognitive load 94 cognitive psychology 125, 130 communication 27–28, 31, 34–36, 39, 102 comprehensive review theory (CRT) 129 conventional behavioral VI, 117 cost 27, 32–34 COVID-19 215, 231, 233–235 COVID-19 epidemic 132 cryptocurrency 11 cybersecurity 89 cybersickness 90, 234 data 27–28, 30–40 deep learning 241–252, 254–257, 259–265 depression 118 device 31, 34, 37, 39 digital 35, 38 digital healthcare 63, 65, 73, 76 digital natives 82 digital presence 160 digital technologies 126 digital twins 43, 82 Doraemon 119 eating disorders 118 edge computing 241–247, 249, 251, 255–257, 259–260, 263–264 extended reality 84 F1-score 241, 257, 259–262 fidelity 27, 31 first phase of healthcare 73 frequency 31 future 27, 29, 31, 39 gamification 14, 127, 226, 229 GPU 255 graphic design 48 head-mounted display (HMD) 129 Health 3.0 74–75 Health 4.0 75 Health 5.0 63–65, 72–75 health 28, 33–35 Healthcare 5.0 76 healthcare 100, 160
healthcare industry 63, 70, 72 Healthcare Phase 2 74 higher education 132 holographic optical elements 45 HUDSET 14 hybrid systems 47 hyper-spatiotemporality 83 ICT 141, 149, 215, 226 image object 141 image processing 263 immersive realism 83 Industry Revolution 4.0 205–206 Information and communication technology 145 information and communications technologies 126 innovative designs 148 intelligent healthcare 63, 67, 70, 73, 76 intensity 28, 30 internet 126 interoperability 83 IoT 27, 33, 36–38, 64–66, 69–72, 75, 199–201, 203–204, 211–212, 241–247, 249–252, 255–257, 259–260, 263–265 kids 120, 122, 125–126, 129, 132–134, 136–138 knowledge 117, 122–124, 131 latency 200–201, 203–204, 212 learning foundations 95 learning tool 149 LED 27–39 Li-Fi 27–28, 30–39 light 27–37, 39 mathematics 121–122 mental health 118, 134, 138 metaverse 6 M-learning 126 mobile learning 126 multimedia 133 National Education Policy 144 natural teachers 130 NCERT 151 nlockchain 15 NVIDIA 255 online 34–35 OTSU 241, 252, 254, 259–261
pedagogical systems 121 pedagogical tool 138 physical health 138 platform scalability 94 Pokemon 14 Pokémon Go 108 power 28, 33–34 practical-based learning 147 precision agriculture 241–245, 249–250, 252, 256, 259, 263–265 precision and recall 242, 256–258 primary schools 132 QoS 212 radio 27–28, 31–33, 36 R-CNN 241, 254, 258–259, 265 reinforcement 117, 124–125 restructured learning model 121 safety of workforce 93 scalability 83 scaling 94 schizophrenia 118 SDK 151 secure sockets layer 264 security 200–203 sedentary 137 sensitive 137 sensors 242–245, 249–250, 252, 254–255, 264 sexual harassment 90 signal 28, 30–32, 34, 36 simulation 101, 199–202, 204, 209–212 singular identity 89 situated learning 131 smart city 204, 209–210, 212 smart classrooms 144 smart pills 70 smartphones 119, 127 social 27 social cognition 121 social cognitive learning 132 social context 131 social learning 1, 14–15, 117 – nonlearners 1 – social learners 1 social learning theory 124 social scalability 94 social-emotional development 138
social-exhilarating 121 speech recognition 129 Superman 119 surreality 82 sustainability 83 SWOT analysis 221 synchronization 121 technologies 149–150, 156 television 133 television 129, 134–136 three-dimensional 128–129, 138 tourism 12 tourism industry 160, 162, 164, 169 tourist 159–164, 166, 168–170 transmission 27–28, 30–37, 39–40 truck docking 51–52 TV series 126 UAV 246, 248, 250, 256, 260–262, 265 ubiquity of access and identity 83
ultraviolet 28 Unity 151–152 unreliable monopolized platforms 89 value co-creation 159–161, 163, 166, 168–170 verbal educational approach 126 video games 130 virtual 27, 38 virtual learning platforms 3 virtual private networks 264 virtual reality 2, 12, 14, 19, 22–23, 117, 129, 137, 215–224, 226–229, 231–236 Virtual tourism 223, 227, 236 virtual world 38 wearable systems 69 Wi-Fi 27–28, 30–36, 38–39 wireless 27–28, 31, 35, 39 YOLOv3 241–242, 246, 254, 258–261, 265