Artificial Intelligence for Virtual Reality
De Gruyter Frontiers in Computational Intelligence
Edited by Siddhartha Bhattacharyya
Volume 14
Artificial Intelligence for Virtual Reality Edited by Jude Hemanth, Madhulika Bhatia and Isabel De La Torre Diez
Editors Dr. Jude Hemanth Karunya Institute of Technology and Sciences Karunya Nagar Coimbatore 641114 Tamil Nadu India [email protected]
Dr. Isabel De La Torre Diez Department of Signal Theory Communication and Telematics Engineering University of Valladolid C/Plaza de Santa Cruz 8 47002 Valladolid Spain
Dr. Madhulika Bhatia Department of Computer Science and Engineering Amity School of Engineering and Technology Noida 201313 Uttar Pradesh India [email protected]
ISBN 978-3-11-071374-9 e-ISBN (PDF) 978-3-11-071381-7 e-ISBN (EPUB) 978-3-11-071385-5 ISSN 2512-8868 Library of Congress Control Number: 2022941154 Bibliographic information published by the Deutsche Nationalbibliothek The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de. © 2024 Walter de Gruyter GmbH, Berlin/Boston Cover image: shulz/E+/getty images Typesetting: Integra Software Services Pvt. Ltd. Printing and binding: CPI books GmbH, Leck www.degruyter.com
Contents

Abhijit Sarkar, Mayank Bansal, Duvutu Lea and Deepa Bura
1 Virtual reality 1

Yash Verma
2 Video surveillance framework using virtual reality interface 15

Gulpreet Kaur Chadha, Ritu Punhani and Sonia Saini
3 Merging of artificial intelligence (AI), artificial life (AL) and virtual reality (VR) 27

Saksham Goyal
4 Pain relief management 45

Saksham Goyal
5 Intelligent shopping malls 57

Sunishtha S. Yadav, Vandana Chauhan, Nishi Arora, Vijeta Singh and Jayant Verma
6 Challenges and the future of artificial intelligence in virtual reality in healthcare 71

Urvashi
7 Virtual trial room 87

Shubham Sharma and Naincy Chamoli
8 Merging of artificial intelligence (AI) with virtual reality (VR) in healthcare 103

Brief biography 129

Index 131
Abhijit Sarkar, Mayank Bansal, Duvutu Lea and Deepa Bura
1 Virtual reality
A simulated experience: a comprehensive view

Abstract: The demand for virtual reality (VR) is increasing rapidly, and companies are developing new ways to commercialize it as soon as possible. VR has many applications in the present century, for example, in healthcare, education, entertainment, human welfare and especially in the gaming industry, where full immersion will soon be commercially available. It is also used in psychotherapy treatments that are still at the trial and research stage. The entertainment industry will set new standards as VR develops, and other fields will likewise benefit from the technology's progress.

Keywords: augmented reality, virtual reality, full-immersive VR, VR gaming, applications of VR
1.1 Introduction

Computer-generated reality, or virtual reality (VR), is usually defined in terms of its technological hardware. VR's experiential quality rests on the concepts of "presence" and "telepresence," which refer, respectively, to the sensation of being in an environment and the sensation of being in an environment that is produced and mediated by technology. The use of and demand for VR are growing steadily day by day, and there is little doubt that VR will be a major technology in the near future. VR is a kind of technology that permits users to experience a virtual environment in such a way that they feel as though they are in the real world. With each passing day, the capability of VR grows rapidly, and many companies and individuals are developing new devices that will allow users to be more deeply immersed in the experience. This technology will be used in almost every field and industry and will become increasingly important, and more people will use it as it becomes more widely commercialized.
Abhijit Sarkar, Mayank Bansal, Duvutu Lea, Deepa Bura, Department of Computer Science and Engineering, Manav Rachna International Institute of Research and Studies, Faridabad, Haryana, India https://doi.org/10.1515/9783110713817-001
Full-immersive VR

Three elements are needed for the complete VR experience. First, a plausible, richly detailed virtual world to explore: a computer model or simulation, in other words. Second, a powerful computer that can detect our movements and adjust our experience accordingly, in real time, as we change what we do. Third, hardware linked to the computer that fully immerses us in the virtual world as we move around. Usually, we need to put on what is known as a head-mounted display with two screens and stereo sound, and possibly wear one or more sensor-equipped gloves. Alternatively, we could move around inside a room fitted with surround-sound loudspeakers, onto which changing images are projected from outside.
Semi-immersive VR

When people think of VR, they often picture fully immersive systems like the HTC Vive or Oculus Quest. In contrast to those technologies, semi-immersive VR offers users a blend of real and virtual interactive elements. Semi-immersive VR is among the oldest forms of this technology, but today organizations are applying it in new ways. 4D films may be its traditional use, but the technology is now chiefly used for training. Flight simulators featuring a moving cockpit and a simulated environment rendered on screens allow pilots to train without the risks of flying a real airplane. As VR technology improves, engineers can build comparable systems for other complicated tasks and professions.
Non-immersive VR

A highly responsive flight simulator running on a home computer may qualify as non-immersive virtual reality, particularly when used with a large screen, headphones or surround sound, and a decent joystick and other controls. Not everyone wants, or needs, to be fully immersed in an alternative reality. An architect can build a detailed 3D model of a new building for clients to explore by clicking a mouse on a workstation. Most people would describe such a computer-generated model as a kind of virtual reality, even if it does not immerse you completely. In the same way, archaeologists often build 3D reconstructions of long-vanished settlements that you can move around and explore. They do not take you back hundreds or thousands of years or recreate period sounds, smells and tastes, but they give a far richer experience than a few pastel drawings or even an animated movie.
1.2 The path of augmented reality from virtual reality

Tracing the history of VR and AR technologies, the first multisensory simulator dates to 1962, when Morton Heilig built the Sensorama, a simulated motorcycle ride through Brooklyn enriched by several sensory impressions, such as sound, smell and haptic feedback, including headwind, to deliver a realistic personal experience [1, 2]. Over the same period, Ivan Sutherland envisioned a complete display that would engage more senses than the sound, smell and haptic detail the Sensorama offered. Engineers at Philco Corporation built an early head-mounted display (HMD), which preceded Sutherland's Sword of Damocles, a system able to update the visual image by tracking the head position and orientation of the user [3, 4]. During the 1970s, the University of North Carolina produced GROPE, the first force-feedback system, and Myron Krueger created VIDEOPLACE, an artificial reality in which cameras captured the users' body silhouettes and projected them onto a screen [5, 6]; in this way, two users could interact, at least within the 2D virtual space. In 1982, the US Air Force built the Visually Coupled Airborne Systems Simulator, a flight simulator through which the pilot could monitor the route and targets with an HMD. In the 1980s, commercial devices began to appear: in 1985, VPL Research introduced the DataGlove, a glove fitted with sensors, and in 1988 it created the EyePhone, an HMD device intended to completely immerse the user in a simulated environment. At the end of the 1980s, Fake Space Labs created the Binocular Omni-Orientation Monitor (BOOM), a stereoscopic display mounted on a counterbalanced, position-tracked mechanical arm, providing a mobile and wide virtual view.

In comparison with HMD devices, the BOOM provided a steadier image and responded to movements more rapidly. The NASA Ames Research Center, combining the BOOM and the DataGlove, developed the Virtual Wind Tunnel for investigating and monitoring airflow around a simulated airplane or space shuttle. More recently, major videogame companies have improved the design and manufacture of VR devices such as the Oculus Rift and HTC Vive, which offer an increasingly wide field of view and lower latency. In addition, current HMD devices can be combined with eye-tracking systems (FOVE) and motion and orientation sensors (e.g., the Razer Hydra, Oculus Touch or HTC Vive controllers) and other trackers. Meanwhile, at the beginning of the 1990s, Boeing built the first prototype AR system, used to show workers how the wiring of an aircraft was assembled [7–9]. Around the same time, Louis Rosenberg and Steven Feiner developed AR systems for maintenance and repair assistance, showing that the task being performed could be overlaid with computer-generated guidance [10–12]. In 1993, Loomis and colleagues developed a
GPS-based AR system to guide the blind along a supported path by providing spatial audio information [13, 14]. In 1994, Julie Martin created "Dancing in Cyberspace," an AR theater production in which on-screen actors interacted with simulated objects. Some years later, mobile AR systems were created to blend in interactive tourist information [15–17]. From then on, a number of applications followed: ARQuake, a mobile AR computer game, was produced; in 2008, Wikitude was created, which could overlay information about the user's surroundings using the camera image, map and GPS [18, 19]. Several AR tools, such as ARToolKit and SiteLens, were created in 2009 to attach digital information to elements of the user's physical environment. For training purposes, Total Immersion released the D'Fusion AR platform in 2011 [20–22]. Finally, Google released Google Glass in 2013 and Microsoft released the HoloLens in 2015, and both have begun to be tested in several application areas.

Applications:
(a) Games
(b) E-commerce and retail
(c) Interior design, landscaping and urban planning
(d) Tourism and travel
(e) Education and training
1.3 Augmented reality

Like semi-immersive VR, augmented reality (AR) does not completely immerse the user; it differs in that it overlays virtual elements on real ones. AR applications display computer-generated objects over real-time images of physical environments. Some consider AR a separate technology from VR, but on a technical level it falls under the umbrella of computer-mediated reality. AR shows a great deal of promise for several industries. Online retailers can use it to project to-scale 3D images of products into the customer's home. AR goggles can assist workers such as electricians and plumbers by displaying instructions at the edge of their vision. Comparable applications can help teachers make lessons more immersive for students.
Virtual reality concepts

The idea of virtual reality can be traced to the mid-1960s, when Ivan Sutherland, in a seminal essay, described the ultimate display as a window through which a user perceives a world that looks, feels and sounds real
and in which the user could act realistically [23]. Over time, and according to the area of application, various definitions were formulated. For instance, Fuchs and Bishop [7] characterized VR as "real-time interactive graphics with 3D models, combined with a display technology that gives the user immersion in the model world and direct manipulation"; Gigante [6] described VR as "the illusion of participation in a synthetic environment rather than external observation of such an environment. VR relies on a 3D, stereoscopic head-tracked display, hand/body tracking and binaural sound. VR is an immersive, multi-sensory experience"; and Cruz-Neira [8] wrote that "virtual reality refers to immersive, interactive, multi-sensory, viewer-centered, 3D computer-generated environments and the combination of technologies required to build those environments." By level of immersion, VR systems are commonly divided into three kinds:
– Non-immersive systems are the simplest and cheapest kind of VR application; they use desktop displays to reproduce images of the world.
– Immersive systems provide a complete simulated experience with the help of several sensory output devices, such as HMDs that update the stereoscopic view of the world as the user moves, together with audio and haptic devices.
– Semi-immersive systems, such as Fish Tank VR, lie between the two. They present a 3D scene on a monitor using a perspective projection coupled to the position of the observer's head.

The most immersive systems give the closest experience to present reality, leading the user to an illusion of technological non-mediation and to feel that he or she is "being in," or present in, the virtual environment. Beyond the displays themselves, such systems typically add several sensory outputs, so that interactions and actions are perceived as real [4, 13, 14].

Ultimately, the user's VR experience can be assessed in terms of the degree of presence, realism and reality judgment. Presence is a subjective perception of "being there" in VR that combines the feeling of spatial presence with the expectation that stimuli and responses will behave as they would if the user were in actual reality [24]. Related to this, the degree of realism measures the extent to which the stimuli match what the user expects [15, 24]. If the stimuli offered match reality, the expectations of VR users will be confirmed, improving the VR experience; indeed, the closer the simulated stimuli are to reality, the stronger the user's reality judgment of his or her activities will be [15, 24].
1.4 Technologies in virtual reality

Technologically, the devices used in virtual environments play an essential role in creating effective virtual experiences. They can be divided into input devices and output devices. Input devices are those that allow the user to interact with the virtual environment; they range from a simple joystick to a glove that tracks finger movements or a sensor that captures body positions. In more detail, the mouse, trackball and joystick are easy-to-use desktop input devices that let the user issue simple, discrete commands. Other input devices include data gloves that capture hand movements, positions and gestures, pinch gloves that detect finger movements, and trackers that follow the user's movements in the physical world and translate them into the virtual environment. Output devices, by contrast, allow the user to see, hear, smell or touch whatever happens in the virtual environment. Among visual devices, a wide range of options exists, from the simplest and least immersive (a computer monitor) to the most immersive, such as VR glasses and headsets, HMDs or CAVE systems. Furthermore, positional audio through speakers, as well as haptic output devices, can reinforce bodily sensations, giving a virtual experience that feels considerably more real. Haptic devices, for example, can reproduce the sensation of touching and manipulating models.
1.5 Applications of virtual reality

Since its introduction, VR has been used in diverse fields: sports, military training, architectural design, education, learning and social skills training, simulation of surgical procedures and assistance with elderly or psychological treatments are among the areas where VR has a significant effect. A systematic and extensive review by Slater and Sanchez-Vives [17] outlined the main VR applications, including their weaknesses and strengths, in several research areas, for example, science, education, training and physical fitness, as well as social phenomena and moral behavior, and noted that VR could be used in further fields such as travel, meetings, collaboration, industry, news and entertainment. Additionally, a recent review by Freeman et al. [16] focused on VR in mental health, demonstrating the practicality of VR in assessing and treating different mental disorders such as anxiety, schizophrenia, depression and eating disorders. VR makes it possible to enhance or replace real stimuli and to recreate experiences that would be impossible in reality, with high ecological validity. That is why VR is widely used in research on improved methods of delivering psychological
therapy or training, for example, for anxiety-related problems (agoraphobia, fear of flying, etc.), or, more simply, to reinforce conventional motor rehabilitation programs through games that make the exercises engaging. In more detail, virtual reality exposure therapy has shown its effectiveness in psychological treatment, allowing patients to gradually face feared or stressful situations in a protected setting where the therapist can control psychological and physiological reactions.
Virtual reality conferencing

Virtual reality conferencing spaces are intended to provide an innovative solution for conferencing and events in 3D, enabling you and your clients to be immersed in highly interactive, lifelike and engaging experiences within simulated environments. From the virtualization of a meeting or a specific event to real-time collaborative spaces for distributed teams, VR offers a fitting 3D environment for working together online. While we are still in the era of 2D virtual conferencing, VR conferencing and collaboration in 3D present a new paradigm in our relationship with digital collaboration tools. In VR conferencing spaces, users share the same virtual environment in real time; they are no longer separated by their screens but appear in digital form as avatars, in a common shared space, where they can interact with others and with the objects and environment around them.
Future of VR gaming

VR also has great scope in the gaming industry and is starting to amass a global following of people eager to experience games in a 3D-like environment. The ideas of creators and inventors have given us a sense of the full potential of VR technology. At the present stage it is limited to VR headsets and controllers, which capture the movements of the players and reproduce them in the VR environment, while the headset lets users see the virtual world, but the immersion is limited to sight and sound. Movies and fiction, however, suggest how much further this technology could be taken. To date, we have not been able to accomplish a full dive, in which the user loses awareness of the real world and becomes totally immersed in the virtual environment. In such a scenario, the user could sense and control his whole body as though in reality, and to him it would feel real; he would be able to do everything a normal human can do in the real world, and much more. Fictions like Sword Art Online (a Japanese series) describe a boy who buys a VR-immersion headset that lets him leave his real body and be transported to a man-made world, where he fights creatures, gains levels and progresses. From these examples, we
can see that this may eventually become reality, because technology is developing day by day, and we may be able to accomplish this full dive sooner or later. From flight simulators to racing games, VR hovered on the edges of the gaming world for a long time without ever fully transforming the gamers' experience, largely because PCs were too slow, displays lacked full 3D, and decent HMDs and data gloves were unavailable. With the introduction of affordable modern peripherals like the Oculus Rift, that may change.
Virtual reality in maps and globes

Research has investigated various approaches to rendering world geographic maps in VR, comparing: (a) a 3D exocentric globe, where the user's viewpoint is outside the globe; (b) a flat map (rendered to a plane in VR); (c) an egocentric 3D globe, with the viewpoint inside the globe; and (d) a curved map, created by projecting the map onto a section of a sphere that curves around the user. In all four visualizations, the geographic center can be adjusted with a standard handheld VR controller, and the user, wearing a head-tracked headset, can physically move around the visualization. For distance comparison, the exocentric globe is more accurate than the egocentric globe and the flat map. For area comparison, more time is required with exocentric and egocentric globes than with flat and curved maps. For direction estimation, the exocentric globe is more accurate and faster than the other presentations. Study participants showed a weak preference for the exocentric globe. Overall, the curved map had advantages over the flat map, and in almost all cases the egocentric globe was found to be the least effective visualization. Taken together, these results support the use of exocentric globes for geographic visualization in mixed reality.
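The key difference between these four presentations is how latitude and longitude are mapped into the headset's 3D coordinate space. As a minimal sketch of the exocentric-globe case (the function name and the y-up axis convention are illustrative assumptions, not details from the study), a geographic coordinate can be projected onto a sphere like this:

```python
import math

def latlon_to_globe(lat_deg, lon_deg, radius=1.0):
    """Map a geographic coordinate onto a 3D exocentric globe.

    Uses a y-up convention (y is the polar axis), as is common in
    VR engines; returns an (x, y, z) point on the sphere's surface."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = radius * math.cos(lat) * math.cos(lon)
    y = radius * math.sin(lat)                 # poles at y = +/- radius
    z = radius * math.cos(lat) * math.sin(lon)
    return (x, y, z)

# The north pole sits at the top of the globe; the equator lies in the x-z plane.
north = latlon_to_globe(90.0, 0.0)
equator = latlon_to_globe(0.0, 0.0)
```

On this view, the flat map is the degenerate case where the same coordinates are scaled onto a plane, and the curved map projects them onto only the section of the sphere facing the user.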
The scientific perspective in virtual reality

Everything that happens at the molecular scale is practically invisible unless you press your head to the eyepiece of an electron microscope. But suppose you want to design new materials or drugs and you want to experiment with the molecular equivalent of LEGO. With a computer-generated reality application, you could snap simulated molecules together right before your eyes, instead of wrestling with numbers, equations or 2D drawings of molecular structures. Research of this kind began in the 1960s at the University of North Carolina at Chapel Hill, where Frederick Brooks launched GROPE, a project to develop a VR system for exploring the interactions between protein molecules and drugs.
Virtual reality: psychotherapy

This is another interesting use of VR: treating people with post-traumatic stress disorder (PTSD), or other mental illnesses that conventional therapy alone cannot fully resolve. The patient is immersed in a 3D environment where they relive a simulation of their traumatic experience in the hope of overcoming it. They feel as though they are in that exact situation, and if they overcome it in the simulation, they may be able to overcome it in real life as well. Even though this method is not yet mainstream, further research and studies may help it become more accessible and widely accepted. In a similar manner, it can be used to help people with other mental and psychological problems, and the cost should be minimal as this type of technology becomes accessible to all in the near future.
Virtual reality in healthcare

As medical techniques advance and costs decline, VR has seen steady growth in healthcare. Medical schools have begun using VR for treatment training, and the VR literature indexed in the two leading clinical databases, MEDLINE and PsycINFO, is growing fast. Besides its use in surgical training and treatment planning, computer-generated reality also makes telemedicine possible (monitoring, examining or even operating on patients remotely). A genuine extension of this has a specialist in one location immersed in a computer-generated control-room experience while a robot in another location (perhaps a whole continent away) wields the scalpel. The best-known example is the da Vinci surgical robot, of which several thousand have already been installed in healthcare facilities across the globe. Telepresence of this kind also opens the possibility for a whole team of the world's best surgeons to collaborate on a particularly challenging operation. Although still in its early days, VR has already been tried as a treatment for different kinds of mental disorder (e.g., schizophrenia, agoraphobia and phantom-limb pain), and in rehabilitation for stroke patients and those suffering from degenerative diseases such as multiple sclerosis. The healthcare sector is a Petri dish for market concepts in augmented and virtual reality. AR applications are already used in a range of activities, from helping patients recognize their symptoms to enabling surgeons to peer inside the body without making large incisions. Ultra-HD 4K monitors and videogame technologies show how AR can be used in hospitals. By combining VR, gaming and medical imaging procedures, simulation capabilities and accurate maps of the organs are already available, so surgeons can plan ahead to prevent previously unforeseeable complications. This would reduce invasive procedures, limit patient complications and facilitate quicker recoveries. VR has
also proved useful for the diagnosis, treatment or at least alleviation of dementia, depression, PTSD, phobias, autism and other psychological and pathological conditions. VR combined with medical haptic gloves allows quicker, more accurate and even remote diagnosis.
VR therapy

Physical wellbeing is not the only area of health that VR can improve. Treating patients who have mental disorders can be a complicated process, but VR applications help doctors and therapists explore previously unavailable options. Specialists often treat patients with phobias and related disorders through gradual exposure to the source of the problem, a procedure called systematic desensitization or exposure therapy. VR permits these patients to undergo exposure therapy in a controllable, safe and private environment, increasing effectiveness and reducing the concerns associated with traditional forms of this treatment.
Virtual reality in entertainment

VR has advanced furthest in the entertainment field. VR headsets are available at many price points, the cheapest of which connect to our phones to deliver the experience, and there is a wide variety of VR entertainment options. Entertainment productions such as Avatar (2009) and Agents of S.H.I.E.L.D. have also made use of related virtual-production frameworks.
Virtual reality in education

Education sectors are also adopting VR programs. With the involvement of VR, education will advance considerably: medical students, for example, can learn to operate with its help, and it will be a learning ground for biology, history, space science and so on. AR and VR, in particular, can provide a practical psychological and physical experience in a safe environment through immersive simulations, so these systems offer almost limitless teaching and learning opportunities. AR technology improves the efficacy, engagement and quality of schooling. For example, students can watch a 3D galaxy on their tablets or see an extinct animal come to life with AR applications. VR lets students view scientific developments from the perspective of scientists or even make discoveries themselves in a virtual lab. As we have seen, flight simulators were among the earliest VR applications; they trace their history back to the mechanical trainers built by Edwin Link during the 1920s. In a 2008 survey of 735 surgical trainees from 28 different countries, 68% said that the opportunity to
train with VR was "good" or "excellent," and only 2% considered it useless or unsuitable.
Virtual reality in astronomy education

Many topics in astronomy are hard for students to grasp because of their vast scales and complex relationships in space and time. The recent spread and falling cost of immersive technologies such as VR may give astronomy educators new opportunities for effectively conveying these ideas. VR technology not only permits the display of astronomical scenes in 3D but also immerses the student in them, allowing each student to take control and investigate ideas by interacting with objects and changing their viewpoint.
Virtual reality in architecture and industrial design

Architects used to build models out of card and paper; now they are far more likely to build virtual reality computer models that you can walk around and inspect. In the same way, designing cars, planes and other complex, costly structures on a computer screen is usually far cheaper than mocking them up in concrete, acrylic or other real materials. Instead of developing a static 3D visual model for people to examine, you develop a working model that can be tested for its structure, stability or engineering quality.
1.6 Conclusion

The VR sector has grown considerably but still has far to go. There is already a wide range of equipment in the field, including headsets, controllers, headphones, wearables and so on. The field is vast, and the next generation will be all about AR and VR. From the earliest age, the next generation will experience the full extent of VR: shopping, human interaction and even intimacy will be mediated through it. VR provides the convenience to make things better, and people will gain new ways of seeing things in every domain. It may not yet be widely available in the markets, but it will be a huge breakthrough in the future. In time, we will be able to use VR technology that outperforms everything we have at present. We may even realize the capabilities depicted in fiction, which would help us better understand ourselves and the environment around us. At the same time, we should be careful not to abuse this technology, as it may lead to
addiction and a lack of social interaction. The expense, meanwhile, should fall steadily as VR becomes more commercialized, so that almost everyone will be able to access and benefit from its development.
References
[1] Strickland, D. (1997). VR and health care. Communications of the ACM, 40(8), 32.
[2] Orme, C. F., & Uren, M. J. (2015). The art of 3-dimensional content creation – do 3DS and VR share the same visual grammar? IBC 2015 Conference.
[3] Woodford, C. (2018). Virtual Reality. The FREE Online Science and Technology Book, Illinois, USA.
[4] Bohil, C. J., Alicea, B., & Biocca, F. A. (2011). Virtual reality in neuroscience research and therapy. Nature Reviews Neuroscience, 12(12), 752–762.
[5] Baus, O., & Bouchard, S. (2014). Moving from virtual reality exposure-based therapy to augmented reality exposure-based therapy: A review. Frontiers in Human Neuroscience, 8, 112.
[6] Gigante, M. A. (1993). Virtual reality: Definitions, history and applications. In Virtual Reality Systems (pp. 3–14). Academic Press, San Diego, CA.
[7] Fuchs, H., & Bishop, G. (1992). Research Directions in Virtual Environments. University of North Carolina at Chapel Hill, Chapel Hill, NC.
[8] Cruz-Neira, C. (1993, July). Virtual reality overview. SIGGRAPH, 93(23), 1–1.
[9] Sutherland, I. E. (1968, December). A head-mounted three dimensional display. In Proceedings of the December 9–11, 1968, Fall Joint Computer Conference, Part I (pp. 757–764).
[10] Thomas, B., Close, B., Donoghue, J., Squires, J., De Bondi, P., Morris, M., & Piekarski, W. (2000, October). ARQuake: An outdoor/indoor augmented reality first person application. In Fourth International Symposium on Wearable Computers (pp. 139–146). IEEE.
[11] Ware, C., Arthur, K., & Booth, K. S. (1993, May). Fish tank virtual reality. In Proceedings of the INTERACT'93 and CHI'93 Conference on Human Factors in Computing Systems (pp. 37–42).
[12] Lombard, M., & Ditton, T. (1997). At the heart of it all: The concept of presence. Journal of Computer-Mediated Communication, 3(2), JCMC321.
[13] Loomis, J. M., Blascovich, J. J., & Beall, A. C. (1999). Immersive virtual environment technology as a basic research tool in psychology. Behavior Research Methods, Instruments, & Computers, 31(4), 557–564.
[14] Heeter, C. (2000). Interactivity in the context of designed experiences. Journal of Interactive Advertising, 1(1), 3–14.
[15] Baños, R. M., Botella, C., Garcia-Palacios, A., Villa, H., Perpiñá, C., & Alcaniz, M. (2000). Presence and reality judgment in virtual environments: A unitary construct? CyberPsychology & Behavior, 3(3), 327–335.
[16] Freeman, L. C. (1977). A set of measures of centrality based on betweenness. Sociometry, 35–41.
[17] Slater, M., & Sanchez-Vives, M. V. (2016). Enhancing our lives with immersive virtual reality. Frontiers in Robotics and AI, 3, 74.
[18] Godara, D., & Singh, R. K. (2015). Enhancing frequency based change proneness prediction method using artificial bee colony algorithm. In Advances in Intelligent Informatics (pp. 535–543). Springer, Cham.
[19] Godara, D., & Singh, R. K. (2017). Exploring the relationships between design measures and change proneness in object-oriented systems. International Journal of Software Engineering, Technology and Applications, 2(1), 64–80.
[20] Bura, D., Choudhary, A., & Singh, R. K. (2017). A novel UML based approach for early detection of change prone classes. International Journal of Open Source Software and Processes (IJOSSP), 8(3), 1–23.
[21] Godara, D., & Singh, R. (2014). A new hybrid model for predicting change prone class in object oriented software. International Journal of Computer Science and Telecommunications, 5(7), 1–6.
[22] Godara, D., Choudhary, A., & Singh, R. K. (2018). Predicting change prone classes in open source software. International Journal of Information Retrieval Research (IJIRR), 8(4), 1–23.
[23] Sutherland, I. (1965). The ultimate display. Proceedings of the Congress of the International Federation of Information Processing (IFIP), 2, 506–508.
[24] Heeter, C. (1992). Being there: The subjective experience of presence. Presence: Teleoperators and Virtual Environments, 1(2), 262–271.
Yash Verma
2 Video surveillance framework using virtual reality interface

Abstract: Video surveillance plays an important role in the security of any location, whether residential areas, industries or public spaces such as shopping malls, museums and other monuments, banks, offices, building sites, warehouses, airports and railway stations. Monitoring staff are central to the surveillance system. A system was therefore needed to test the cognitive abilities of staff monitoring video surveillance under different conditions and to determine the factors important for the monitoring task, so as to improve the surveillance process. For this purpose, a system has been developed using Unity 3D in which the monitoring staff have to identify people exhibiting suspicious behavioral patterns from queues of people with mixed behavioral patterns. The system has successfully been able to calculate attributes of the security staff, such as their response time in identifying suspicious people. This system can be used in forming different strategies regarding the surveillance process.

Keywords: Video surveillance, virtual reality, security
2.1 Introduction

Today, various security forces rely on video surveillance systems to facilitate their work. This has proved to be a vital tool for security. Its role is even more important in large public spaces such as bus terminals, railway stations, metro stations, popular monuments, shopping complexes and malls, schools and offices. Live monitoring is still done manually by security personnel through live camera feeds, but this approach has a few limitations, one of them being visual and mental fatigue. Research published by "RTI International" [1] for the "Science and Technology Directorate, US Department of Homeland Security" focuses on the "Transportation Security Administration (TSA)" on two fronts, namely "behavior detection visual search" and "X-ray visual search." The goal was to find out what characteristics are required on both fronts and whether the traits of trained personnel from one team would be helpful on the other front. Regression analysis, one-way analysis of variance and Pearson correlation were used to evaluate the relation between the different traits required on the two fronts. Visual and mental fatigue was
Yash Verma, Amity School of Engineering and Technology, Amity University, Noida, Uttar Pradesh, India https://doi.org/10.1515/9783110713817-002
found to play a big role in the performance of both teams; the research showed that much importance should be given to reducing the visual and mental fatigue of security personnel.
During the surveillance process, not all signs of suspicion can be considered equally dangerous: security issues can be of a weak or strong nature. Bouma et al. [2] proposed the concept of strong and weak tags. Strong tags signify a greater threat probability, whereas weak tags signify a small probability of a security-related incident, so the number of strong tags required to raise an alarm is much smaller than the number of weak tags. When using such a system, a trade-off must be made between the number of false hits and the number of ignored suspicious individuals: if the sensitivity of the system is increased, the number of false hits also increases, and the detection rate goes down when one tries to reduce the false hits. Using multiple operators to monitor a single area is also suggested to increase the accuracy of prediction. Signs of suspicion can be detected automatically using different techniques [3–6]. Khan et al. [6] suggested a methodology to convert the actions of humans in a video input into natural language sentences by extracting high-level features from the video stream; the accuracy of these sentences was found to be comparable with sentences produced by actual human beings. The identification of individuals can be done by tracking [7] and by using information about space and time [8].
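The weak/strong trade-off can be made concrete with a simple scoring rule. The sketch below is illustrative only — the chapter's system is built with Unity and C#, and the Python form, the weights and the threshold here are assumptions — but it shows how a single strong tag can raise an alarm while several weak tags are needed:

```python
# Illustrative scoring rule for combining weak and strong suspicion tags
# into an alarm decision; the weights and threshold are hypothetical.

WEAK_WEIGHT, STRONG_WEIGHT = 1, 5   # one strong tag counts as five weak ones

def should_alarm(tags, threshold=5):
    """Raise an alarm once the combined weight of observed tags
    reaches the threshold."""
    score = sum(STRONG_WEIGHT if t == "strong" else WEAK_WEIGHT for t in tags)
    return score >= threshold
```

Raising the threshold reduces false hits but lets more suspicious individuals go unnoticed, which is exactly the trade-off described above.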
To implement automatic suspicious activity detection, the first step is to detect the people present in the scene, which can be done using the different methods developed for object detection: the "Viola–Jones object detection framework" based on "Haar features" [9], the "scale-invariant feature transform (SIFT)" [10], "histograms of oriented gradients (HOG)" [11], region-proposal methods such as "R-CNN [12], fast R-CNN [13], faster R-CNN [14], Libra R-CNN [15]," the "single-shot multibox detector (SSD)" [16], "you only look once (YOLO)" [17], the "single-shot refinement neural network for object detection (RefineDet)" [18], "RetinaNet" [19] and "deformable convolutional networks" [20]. All these methods have advantages and disadvantages over one another; a technique suited for one application may not give equally good results in others. Region-based methods are considered among the most accurate but take much time for processing. The YOLO approach showed results competitive with state-of-the-art models such as fast R-CNN while significantly surpassing them in speed, which allows it to be applied in real-time video surveillance and other video-monitoring systems.
The development tools used are "Unity" along with the "C#" language. Unity is a software development environment used for the development of video games, 3D modeling of different components and vehicles using virtual reality, and work in the film and automotive industries. C# is a general-purpose language that can be used to develop a variety of applications such as mobile apps, Windows Store apps, websites, enterprise applications, office applications, backend services and cloud applications.
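Whichever detector is chosen, its raw output is a set of overlapping candidate boxes that is typically filtered with non-maximum suppression (NMS). A minimal, framework-independent sketch follows (in Python rather than the system's C#; the box format and threshold are assumptions):

```python
# Minimal non-maximum suppression: keep the highest-scoring boxes and
# discard lower-scoring boxes that overlap them too much.
# A box is a tuple (x1, y1, x2, y2, score).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, iou_threshold=0.5):
    """Greedy NMS over (x1, y1, x2, y2, score) boxes."""
    kept = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box, k) < iou_threshold for k in kept):
            kept.append(box)
    return kept
```

Greedy NMS of this kind is the common post-processing step shared by the detector families listed above, whatever their region-proposal or single-shot architecture.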
Given below are some signs of suspicious behavior in public spaces:
– Standing still for a very long time
– Fast movements
– Leaving unattended objects in public spaces
– Standing against a corner
– A cluster of people suddenly breaking apart
– Synchronized movement of many people
– Repeatedly looking back
– Movement of a few individuals in the direction opposite to all others

Typical suspicious behavior in the corporate workplace:
– Oversized clothes on hot summer days
– Sitting nervously, glancing around and muttering
– Repeated entrances and exits from a building
– Leaning to one side while walking or sitting (possibly due to the weight of hidden weapons)
– Keeping hands in pockets as if hiding weapons or other prohibited items
– Outsiders sneakily entering restricted areas
– Analyzing the building access areas
– Carrying unusually large packages or suitcases

The structure of this chapter is as follows. First, the reader is introduced to the domain of video surveillance, the advancements in this domain, its importance, applications and related work. After this, the methodology for the implemented system is explained using different illustrations, followed by the results and discussion. The results and discussion section includes some screenshots from the implemented system along with improvements suggested by the author.
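For illustration, indicators like those above could be encoded as a simple lookup from observed behavior to a suggested default tag. The mapping below is hypothetical — the chapter does not prescribe which indicators are weak or strong — and is written in Python rather than the system's C#:

```python
# Hypothetical mapping from observed behavior to a suggested default tag;
# which behaviors count as "weak" or "strong" is an assumption here.

INDICATORS = {
    "standing_still_long": "weak",
    "fast_movement": "weak",
    "unattended_object": "strong",
    "repeatedly_looking_back": "weak",
    "entering_restricted_area": "strong",
    "carrying_large_package": "weak",
}

def default_tag(behavior):
    """Suggest a tag for a behavior; unknown behaviors are left untagged
    for the operator to judge."""
    return INDICATORS.get(behavior)
```

A table of this form lets the tagging UI pre-select a tag strength while leaving the final decision with the operator.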
2.2 Methodology

2.2.1 Office environment

An office environment has been created in which different nonplayable characters (NPCs) are placed to simulate a real-life office environment virtually. The NPCs are programmed to exhibit random behavioral patterns in line with a regular office environment, some of which are suspicious. Cameras are also placed at different locations to monitor the complete environment, including all the rooms and halls inside the building as well as accessible locations outside the
buildings. The user interacting with this environment is provided with a user interface (UI) offering the functionality to tag any NPC as weakly suspicious (marked with yellow) or strongly suspicious (marked with red). Other functionalities include untagging an individual if it was marked by mistake, changing the active camera view and so on. The environment was made in the Unity game engine using C# as the scripting language. The character models and animations were downloaded and imported from the "Mixamo" website (https://www.mixamo.com/#/). The map used for the development of the environment was taken from the popular multiplayer shooting video game "Counter-Strike: Global Offensive" (https://free3d.com/3d-model/cs-office-6260.html#).
2.2.2 Queues at public places

A simulation environment has been created consisting of three side-by-side queues of people. These people are programmed to exhibit suspicious and nonsuspicious behavioral patterns and are monitored through a camera. The person monitoring them has to tag anyone exhibiting suspicious behavior by left-clicking on the character and then selecting the corresponding action button. On clicking, the individual is made clearly visible by an increase in size. If the user mistakenly selected the wrong individual, he/she can untag the individual by right-clicking, which restores the original size. For simplicity, some common activities like kicking, punching, checking surroundings, being terrified and so on are included.
The concept of weak and strong tags has also been implemented in this model, as suggested by Bouma et al. [2]. This concept helps differentiate urgent threats from less probable ones. A strong tag signifies that urgent action must be taken on the tagged individual, whereas a weak tag signifies that the person must be monitored closely for other signs of suspicion. When multiple suspicious activities are detected for an individual, the weak tags are converted to a strong tag after a certain predefined threshold. Common situations in which an individual must be strongly tagged are when a person attacks another person with strong force, or when an individual is detected possessing or holding a serious weapon such as a gun, sword or dagger. An individual would be weakly tagged when he/she is staring at some other person, swiftly running against the direction of movement of the crowd with a potential weapon such as a metal bar, bat or hockey stick, or frequently looking over his shoulder. The designed system is part of the surveillance framework illustrated in Figs.
2.1 and 2.2.
– As illustrated in Fig. 2.1, the framework starts with the control room staff monitoring the surveillance video after selecting the mode of operation.
– The UI has been implemented for the staff to select one of three modes of operation: "manual," "semi-automatic" and "fully automatic."
– Manual mode is the most basic mode of operation and is used to collect data for training purposes. In this mode, the user has to select the individuals exhibiting suspicious behavior.
– Semi-automatic mode is the most widely used mode. Here the user is supported by the machine: the machine suggests suspected individuals, and the user validates the suggestions, helping the machine improve its accuracy. The user also retains the full functionality of manual mode along with the validation functionality.
– Fully automatic mode is the last mode of operation, in which the machine detects suspicious individuals without the help of the user.
– Based on the mode of operation, the user is directed to the "tag mechanism."
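The three modes can be thought of as a simple dispatch over who makes the tagging decision. The following sketch is hypothetical — the `operator` and `detector` objects and their method names are assumptions, and the actual system is a Unity/C# application:

```python
# Hypothetical dispatch over the three modes of operation. The operator
# and detector objects and their method names are assumptions.

def handle_frame(frame, mode, operator, detector=None):
    """Route one video frame according to the selected mode."""
    if mode == "manual":
        return operator.tag(frame)                      # operator decides alone
    if mode == "semi-automatic":
        suggestions = detector.suggest(frame)           # machine proposes
        return operator.validate(frame, suggestions)    # operator validates
    if mode == "fully-automatic":
        return detector.suggest(frame)                  # machine decides alone
    raise ValueError("unknown mode: " + mode)
```

Semi-automatic mode doubles as a data-collection loop: every validation the operator performs is a labeled example the machine can later be trained on.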
Figure 2.2 illustrates the tag mechanism mentioned in Fig. 2.1.
– It starts with the monitoring of the video by the control room staff.
– If the staff detect any signs of suspicion, they mark the individuals with weak or strong tags.
– Whenever a person receives a strong tag, immediate action must be taken: the ground security staff are informed.
– A weak tag signifies that there is a need to focus more on that person and monitor him/her closely for other signs.
– Weak tags add up to a certain number, denoted here as the threshold. Once the threshold is reached, they are converted to a strong tag.
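The tag mechanism above can be summarized in a small data model. This is a sketch of the logic only (the real implementation is a Unity/C# script, and the threshold value here is an assumption):

```python
# Sketch of the per-individual tag state: weak tags accumulate and are
# converted to a strong tag once a predefined threshold is reached.

class TaggedPerson:
    def __init__(self, threshold=3):    # threshold value is hypothetical
        self.threshold = threshold
        self.weak_tags = 0
        self.strong = False

    def add_weak_tag(self):
        """Record a weak sign; enough weak signs amount to a strong tag."""
        self.weak_tags += 1
        if self.weak_tags >= self.threshold:
            self.strong = True

    def add_strong_tag(self):
        """Urgent: ground security staff must be informed immediately."""
        self.strong = True

    def untag(self):
        """Operator correction for a mis-clicked individual."""
        self.weak_tags = 0
        self.strong = False
```

Keeping the state per individual means the threshold conversion happens automatically as operators keep tagging, matching the flow of Fig. 2.2.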
2.3 Results and discussion

A prototype of the surveillance system has been built using the Unity game engine. Figure 2.3 illustrates the model of the office environment; the positions of the NPCs are highlighted in green to enhance visibility. Figure 2.4 illustrates a different view of the same office model with a white base to highlight the rooms and other features of the map. This map is used to simulate a real office environment so that the user gets the feel of a real environment. The office environment contains different rooms, halls, office furniture, a back alley, a staircase, and equipment such as computers, a fax machine, a projector and so on.
Figure 2.4 also illustrates two individuals standing near the staircase with different types of tags, where yellow tags denote weak tags and red tags denote strong tags. The yellow tag here indicates that the person at the top of the staircase is
[Fig. 2.1 shows a flowchart: Start → monitoring by control room staff using the user interface → select mode (manual / semi-automatic / fully automatic) → tag mechanism.]
Fig. 2.1: Framework design.
showing weak signs of suspicion. The person at the bottom of the staircase has a red tag, which means he must have done something highly suspicious and urgent action has to be taken against him.
The user is able to switch camera views in order to get a clear picture of the same area of the office from different perspectives, as illustrated in Fig. 2.5a and 2.5b, which show the same objects and location from different cameras. In Fig. 2.5a, a closer view of the individual can be observed, whereas Fig. 2.5b gives a wider and more distant view of the same location, covering more area than Fig. 2.5a. This feature lets the operator cover the whole office building as well as the outside premises.
[Fig. 2.2 shows a flowchart: monitoring → detect suspicious behaviour? → if yes, assign a strong or weak tag; a strong tag leads to informing the ground staff, while weak tags accumulate until the threshold is reached and are then converted to a strong tag.]
Fig. 2.2: Tag mechanism.
Fig. 2.3: Map used for the office environment.
Fig. 2.4: Types of tags.
Fig. 2.5a: Camera view 1 showing the same location.
Cameras are placed strategically throughout the map to cover the whole office environment. This helps the user thoroughly monitor each segment: all the rooms, halls and stairways, as well as the open spaces within the office premises such as the back alleys, as shown in Fig. 2.6.
Figure 2.7 illustrates the environment with queues at a public place. This environment is used to simulate airport boarding pass queues, ticket counter queues at theatres, hospitals, supermarkets and so on. The scenario illustrated shows people standing in three queues, waiting for their turn and moving forward slowly. Only a few character models are used repeatedly here, so as to increase the difficulty for the monitoring person and introduce a certain level of visual fatigue, which helps in testing the limits of the monitoring staff. The
Fig. 2.5b: Camera view 2 showing the same location.
Fig. 2.6: Camera capturing the back alleys.
monitoring person has to choose among four different kinds of signs of suspicion, namely kicking, checking surroundings, punching and being terrified, represented using yellow, blue, pink and green colors, respectively. Kicking and punching are very obvious signs of aggression, whereas being terrified and checking the surroundings can often be attributed to the behavior of people trying to hide something or afraid of the consequences of their actions.
It can be seen from Fig. 2.8 that a person in the center queue, in the center of the frame, is turning around. This action is a sign of being anxious or afraid of getting caught. If this action is repeated many times over a short period, the probability of that person doing something he is not supposed to do is very high. It
Fig. 2.7: Queue at public places.
Fig. 2.8: Person turning around repeatedly.
can be tagged by the monitoring person by clicking on the person, which highlights him, and then clicking on the blue "Checking Surrounding" button. In Fig. 2.9, a few individuals are clearly visible in a punching stance and can be seen punching when viewed during the simulation. The user is supposed to click on these persons and then select the pink "Punching" button. These selections are recorded and added up to obtain the final accuracy of the user.
At last, the accuracy of the user is calculated based on his selections and response times, and the final result statistics can be recorded for performance evaluation of the monitoring staff. These results can be used to estimate operators' response times, their capacity for monitoring large volumes of people, and the periods in which their minds are most active. These statistics can be used for the selection of staff and for deciding
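The statistics described here amount to straightforward aggregation over recorded tagging events. A sketch follows (the event format and field names are assumptions, and Python stands in for the system's C#):

```python
# Sketch of operator performance evaluation: accuracy of selections and
# mean response time over recorded tagging events. The event format
# (dicts with 'correct' and 'response_time') is an assumption.

def evaluate_operator(events):
    """Return accuracy and mean response time (seconds) for a session."""
    if not events:
        return {"accuracy": 0.0, "mean_response_time": 0.0}
    correct = sum(1 for e in events if e["correct"])
    mean_rt = sum(e["response_time"] for e in events) / len(events)
    return {"accuracy": correct / len(events), "mean_response_time": mean_rt}
```

Computing these figures per session (or per hour of a shift) is what would reveal the fatigue patterns used to plan shift timings and break times.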
Fig. 2.9: Few people in punching stance.
the policies governing them, such as shift timings, the number of personnel required at a given time, their break times and so on.
The office environment has been successfully implemented with the help of Unity 3D: it simulates an office environment in which the user can tag characters exhibiting suspicious behavior with weak or strong tags. A model has also been proposed to automatically identify suspicious individuals from the live camera video feed. In its current state, the automatic system can predict only a few suspicious patterns, and only in sparsely populated areas; further development and refinement are required for greater accuracy and larger coverage. The system also needs a human agent to monitor and validate its predictions. In the future, however, the system could be scaled to automatically predict more suspicious behavior patterns with high accuracy and to cover more densely populated areas.
References
[1] Behavior Detection Visual Search Task Analysis Project. (2008). RTI International, North Carolina.
[2] Bouma, H., Vogels, J., Aarts, O., Kruszynski, C., Wijn, R., & Burghouts, G. (2013). Behavioral profiling in CCTV cameras by combining multiple subtle suspicious observations of different surveillance operators. In Proceedings of SPIE – The International Society for Optical Engineering, Baltimore, MD, United States.
[3] Hanckmann, P., Schutte, K., & Burghouts, G. J. (2012). Automated textual descriptions for a wide range of video events with 48 human actions. In Fusiello, A., Murino, V., & Cucchiara, R. (eds.), Computer Vision – ECCV 2012. Workshops and Demonstrations. Lecture Notes in Computer Science, vol. 7583. Springer, Berlin, Heidelberg. doi: 10.1007/978-3-642-33863-2_37.
[4] Bouma, H., Burghouts, G., Penning, L. D., Hanckmann, P., Hove, J. M., Korzec, S., Kruithof, M., Landsmeer, S., Leeuwen, C. V., Broek, S. V. D., Halma, A., Hollander, R. D., & Schutte, K. (2013). Recognition and localization of relevant human behavior in videos. In Proceedings of SPIE 8711.
[5] Bouma, H., Hanckmann, P., Marck, J. W., Penning, L., Hollander, R., Ten Hove, J. M., Van der Broek, S. P., Schutte, K., & Burghouts, G. (2012). Automatic human action recognition in a scene from visual inputs. In Proceedings of SPIE 8388.
[6] Khan, M. U. G., Zhang, L., & Gotoh, Y. (2011). Towards coherent natural language description of video streams. In ICCV Workshops (pp. 664–671). IEEE.
[7] Hu, N., Bouma, H., & Worring, M. (2012). Tracking individuals in surveillance video of a high-density crowd. In Proceedings of SPIE 8399.
[8] Bouma, H., Baan, J., Landsmeer, S., Kruszynski, C., Antwerpen, G. V., & Dijk, J. (2013). Real-time tracking and fast retrieval of persons in multiple surveillance cameras of a shopping mall. In Proceedings of SPIE 8756.
[9] Viola, P., & Jones, M. (2001). Robust real-time object detection. International Journal of Computer Vision.
[10] Lowe, D. G. (1999). Object recognition from local scale-invariant features. In Proceedings of the International Conference on Computer Vision, vol. 2 (pp. 1150–1157). doi: 10.1109/ICCV.1999.790410.
[11] Dalal, N., & Triggs, B. (2005). Histograms of oriented gradients for human detection. In International Conference on Computer Vision & Pattern Recognition (CVPR '05) (pp. 886–893). San Diego, United States. doi: 10.1109/CVPR.2005.177.
[12] Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In 2014 IEEE Conference on Computer Vision and Pattern Recognition (pp. 580–587). Columbus, OH. doi: 10.1109/CVPR.2014.81.
[13] Girshick, R. (2015). Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) (pp. 1440–1448).
[14] Ren, S. (2015). Faster R-CNN. Advances in Neural Information Processing Systems. arXiv:1506.01497.
[15] Pang, J., Chen, K., Shi, J., Feng, H., Ouyang, W., & Lin, D. (2019). Libra R-CNN: Towards balanced learning for object detection. arXiv:1904.02701v1.
[16] Liu, W., et al. (2016). SSD: Single shot multibox detector. In Leibe, B., Matas, J., Sebe, N., & Welling, M. (eds.), Computer Vision – ECCV 2016. Lecture Notes in Computer Science, vol. 9905. Springer, Cham. doi: 10.1007/978-3-319-46448-0_2.
[17] Redmon, J. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. arXiv:1506.02640.
[18] Zhang, S. (2018). Single-shot refinement neural network for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4203–4212). arXiv:1711.06897.
[19] Lin, T., Goyal, P., Girshick, R., He, K., & Dollár, P. (2020). Focal loss for dense object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(2), 318–327. doi: 10.1109/TPAMI.2018.2858826.
[20] Dai, J. (2017). Deformable convolutional networks. arXiv:1703.06211.
Gulpreet Kaur Chadha, Ritu Punhani and Sonia Saini
3 Merging of artificial intelligence (AI), artificial life (AL) and virtual reality (VR)

Abstract: Technology has rightly been called the wave of the future. Artificial intelligence (AI), artificial life (AL) and virtual reality (VR) are not exactly new terms, and for a while now we have been experiencing how these technologies are converging. VR lets us produce simulated environments that we can submerge ourselves into; AI works towards equipping technical devices and services with the insight, perception and thought of a responsive being; and AL is the art of examining natural life and humans through simulations with computer models, robotics and biochemistry. Major advances can be made by merging these three technologies to bring about a revolution in the world we live in. Bringing AI, AL and VR together can provide us with incredible opportunities in various domains like travel and tourism, lifestyle, healthcare, BFSI, retail, entertainment and many more. This chapter revolves around the various cases of using AI to augment existing applications and the many areas where AI, AL and VR can be applied.

Keywords: Artificial intelligence (AI), artificial life (AL), virtual reality (VR), technology, robotics, simulation
3.1 Introduction

Technology has rightfully been called the wave of the coming years, and now that wave is here. In this digital era, technology has become an integral part of everyone's life, and we depend on it for our everyday activities. When considering the developments that will happen in the times to come, one cannot imagine a non-digital world. What one can imagine is a more advanced technological world, where human life and our activities are further simplified by the use of various technologies. Technologies like AI, AL, machine learning (ML), VR, data science and analytics, and robotics are contributing to the life-altering change that will be brought to our world in the coming times. A combination of two or more intelligent techniques or tools, represented by independent beings and agents, along with operational means for their graphical depiction and various kinds of interaction, has given rise to a new zone at their meeting point, which we call intelligent virtual environments [1].
Gulpreet Kaur Chadha, Ritu Punhani, Sonia Saini, Amity School of Engineering and Technology, Amity University, Noida, Uttar Pradesh, India https://doi.org/10.1515/9783110713817-003
Implementing an intelligent virtual environment is an interdisciplinary challenge that draws on the understanding of various fields such as the human sensory system, psychology, anatomy, computer science, sensor technology, physics, audio systems and electronics. Intelligent virtual environments (IVEs) merge the technologies behind AI and AL. An IVE is a virtual environment created as a replica of the real world, populated with autonomous intelligent virtual agents (IVAs) that possess a wide range of behaviors. Some types of IVEs are simulations, where the agents interact with one another in an environment that is virtual in nature; training simulators, where the trainee can cooperate with and take into account other agents, such as machines or people, that are also present in the environment; and computer games, where the gamer interacts with various other characters in the real and virtual worlds. Real-time simulation of actual people and other independent agents contributes to making the overall experience more real than virtual. Innovative and improved modeling of realistic people, both individuals and groups, is raising the level of virtual environments, simulations, computer games, training systems and so on [2]. This work focuses on the tools and technologies behind AI, AL and VR, and on how we can combine two or three of these technologies to create IVEs. We also discuss what these technologies hold for us in the future, and how the concepts of VR applications, the virtual world and dynamic travel could be implemented.
3.2 Literature review

Research on virtual and real environments on one side, and on AI and AL on the other, has been carried out by two different groups of people over the last few years. Different groups have different interests and notions; however, some convergence does occur between the two groups, and consequently between the two fields of study [3]. Technology touches our lives in various ways and provides numerous benefits, and the next phase in technological advancement is convergence [4]. As observed in the past, the merging and convergence of two or more technologies has been beneficial. Along the same lines, VR, AI and AL are technologies that stand out, and their convergence has great potential. The emergence of technologies like VR, augmented reality and mixed reality is giving rise to a new environment where virtual and real objects and people can co-exist and integrate with each other at various levels. With the development of portable, advanced devices that are highly interactive and connected to both the real and virtual worlds, the customer landscape for virtual environments is expanding and evolving into different types of hybrid experiences. However, the limitations of these new realities, technologies and experiences have not yet been clearly established by researchers and practitioners [5]. Nowadays, new technological developments are altering the experiences of the masses in virtual and
physical environments. In particular, VR is expected to give rise to many advancements in several industries [6], such as retail [7, 8], tourism [9], education [10], healthcare [11], entertainment [12] and research [13]. The "reality-virtuality continuum" proposed by Milgram and Kishino [14] has, over the years, inspired more and more new ideas and theories about the various realities and environments. The classification places the real and virtual environments at the extremes of the continuum (Fig. 3.1). Real environments (RE) comprise actual views and representations of real-life scenarios [14]. Virtual environments (VE), on the other hand, are completely controlled by computers, and the objects and items displayed in them are not real; these objects are made visible to users on their devices, and users can interact with them in real time through a technological interface. Within this category, virtual worlds (VW), for example Second Life, are uninterrupted virtual environments that are always open, enabling users, represented by virtual avatars, to create, play and interact with each other in real time [15, 16]. VR is, in essence, a computer-controlled ecosystem where users can interact and move, and can trigger real-time actions through their senses, thus providing a sensory immersive experience [17].
Fig. 3.1: Reality-virtuality continuum [14].
The authors [14] observed that, moving towards the right side of the continuum, the number of virtual stimuli and computer-controlled activities increases. The realities lying between the leftmost and rightmost extremes of the continuum have been termed "mixed reality" (MR) environments. MR has therefore been thought of as a broad area of the continuum where real and virtual objects are perceived to be merged [18, 19]. Hence, augmented reality (AR) and augmented virtuality (AV) are part of MR, as shown in Fig. 3.1. AR is focused on modifying the user's actual appearance and surroundings by overlaying images, videos and other virtual elements [20, 21]. A few other proposed classifications have further extended Milgram and Kishino's continuum to explain the new realities that are emerging
30
Gulpreet Kaur Chadha, Ritu Punhani and Sonia Saini
as more specialized technologies. Mann [22] further described about mediation of the actual continuum theory. Mediation is being addressed as the effect through which some devices or objects can alter real or virtual environments by changing the sensory inputs. Schnabel et al. [23] assimilated new elements to the “Reality-Virtuality Continuum”: amplified reality (where an amplified object can control the flow of information), mediated reality [22] and virtualized reality (similar to 360-degree videos). Jeon and Choi [24] also added to the existing theory by proposing the addition of a new sensory dimension that focused on the sense of touch. Their “visuo-haptic reality-virtuality continuum” comprises nine environments ranging from the real world to interactive virtual simulators. In today’s times, the ICT industry is experiencing new launches every day that are introducing the digital world to a new version of MR at every step. Therefore, it has become necessary that clear boundaries are established about the realities that can be created by using technologies, especially the ones that used MR. MR could not be a large part of the continuum that includes AR and AV as it was earlier, as noted by Milgram and Kishino. MR had earlier been classified as an independent dimension that lay between AR and AV and possessed a blend of virtual and real world. Along the line of this approach, Carlos, Sergio and Carlos [5] adjusted the Reality-Virtuality Continuum by distinguishing the impartial dimension of “Pure Mixed Reality” (PMR) (Fig. 3.2). The differences between the realities are depicted in Tab. 3.1.
Fig. 3.2: Pure mixed reality [5]. The continuum runs: Real Environment → Augmented Reality (virtuality overlaps reality) → Pure Mixed Reality (virtuality and reality are merged) → Augmented Virtuality (reality overlaps virtuality) → Virtual Environment.
Users, in today’s world, can interoperate with real and virtual items in actual scenarios and situations, and instantaneously, these items can cooperate with one another. “Environment awareness” means that virtual objects and devices can interact with the actual environment and actual items and devices can interact with virtual elements, irrespective of the location and time. It has been rightly said that the Real Environment is a genuine establishment, where users interact solely with the elements
3 Merging of artificial intelligence (AI), artificial life (AL) and virtual reality (VR)
31
Tab. 3.1: Summary of differences between the reality-virtuality realities. Real Augmented environment reality (AR) (RE)
Pure mixed reality (PMR)
Augmented virtuality (AV)
Virtual environment (VE)
The main ecosystem is the real world (R) or the virtual world (V).
R
R
R
V
V
Users interact with the virtual (V), real (R) or both (R-V) areas in real time.
R
R-V
R-V
R-V
V
Digital content is overlaid on the real ecosystem.
–
√
–
–
–
Real content is overlaid on the virtual ecosystem.
–
–
–
√
–
Digital content is combined into the real world so that both digital and real content can interact in real-time.
–
–
√
of the actual world, whereas Virtual Environment is a completely computer-generated environment, where users can interact solely with virtual objects in real time. Between these extremes, research has found technology-mediated realities where physical and virtual worlds are integrated and can coexist at different levels [5]. AR is illustrated by digital graphics that is overlaid on the users’ actual ecosystem; Augmented Virtuality uses actual content overlaid on the user’s virtual ecosystem. Ultimately, in PMR, users are able to be present in the actual world, and the digital graphics and content are merged with their ecosystem, enabling them to interact with both digital and real contents, and these elements also interact [5]. PMR forms the core of all the digital inventions taking place these days, and this part will remain the focus, in times to come. AI, AR and VR technologies are complementary, and their fusion is the doorstep of transformation and opportunities. While AI has been around for quite some time, a blend of VR-AI and AR-AI will change how we live and do business. AI, AR and VR – these three technologies have the capability to gain traction and penetrate our lives. This research work aims to contribute to the pre-existing work with an attempt to analyze the role of technologies that focus on reality-virtuality and how they can be applied in the real world for a better experience to the customers. This chapter will aid in understanding and analyzing the impact these technologies can have on customers in the real world. It can also aid in coming to various conclusions like
32
Gulpreet Kaur Chadha, Ritu Punhani and Sonia Saini
deciding which technology is the most appropriate to develop and design valuable customer journeys. Lastly, we have outlined how the reality-virtuality technologies and devices can bring about a change in the overall customer experience at different touchpoints and areas by empowering, supporting and developing new experiences. This can benefit companies develop better and more suitable services for the customers and clients with appropriate value-added propositions during various stages of the journey. The chapter concludes with a series of suggestions and offers a future research agenda.
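The classification summarized in Tab. 3.1 is essentially a small data model, and it can be made concrete in a few lines of code. The sketch below is purely illustrative (the class and field names are our own, not from any cited work); it encodes each reality's attributes and lets one query, for instance, which realities merge digital and real content in real time.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Reality:
    """One point on the reality-virtuality continuum (cf. Tab. 3.1)."""
    name: str
    main_ecosystem: str            # "R" (real) or "V" (virtual)
    interaction: str               # "R", "V", or "R-V"
    digital_overlaid_on_real: bool
    real_overlaid_on_virtual: bool
    realtime_merge: bool           # digital and real content interact

# The five realities of Tab. 3.1, ordered left to right on the continuum.
CONTINUUM = [
    Reality("RE",  "R", "R",   False, False, False),
    Reality("AR",  "R", "R-V", True,  False, False),
    Reality("PMR", "R", "R-V", False, False, True),
    Reality("AV",  "V", "R-V", False, True,  False),
    Reality("VE",  "V", "V",   False, False, False),
]

def merged_realities():
    """Realities where digital and real content interact in real time."""
    return [r.name for r in CONTINUUM if r.realtime_merge]
```

Querying this model returns only PMR for the real-time merge criterion, matching the last row of the table.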
3.3 Artificial intelligence-based technologies

In simple words, the much-used term "technology" refers to the use of scientific knowledge for practical implementation. What better combination, then, than AI and technology? AI has a profound influence on almost every sector of our society, and organizations big and small are racing to implement it for operational excellence. Various technologies build on AI. One of the most famous available today is speech recognition: Siri, Alexa and Google Assistant are known to almost every tech-savvy (or even not-so-tech-savvy) person, as they make our lives so much easier. Another example familiar to many is biometrics, which allows a more natural interface between people and technology by identifying the physical characteristics of humans. Fingerprint and retina recognition are used at airports, passport offices, lockers, etc. to provide stronger security and access control. AI can also be used very effectively in decision-making, which corporations can apply for better management, consistent application of rules, profit-making, etc. One of the biggest applications to emerge recently concerns COVID-19, the pandemic that has taken over the whole world [25]: early detection and diagnosis, identifying hotspots, monitoring treatment and developing vaccines are just a few of these approaches. Beyond these, emotion recognition, virtual agents, education, image recognition, robotics and marketing are further examples, and the list goes on. It is fair to say that AI takes technology to another level; we can only begin to imagine what AI-based technologies hold, what wonders they offer if used properly, and how they showcase computational models of intelligence. Figure 3.3 depicts the technology landscape of AI.
AI has given rise to various fields of research and implementation: autonomous systems, machine learning, deep learning, neural networks, pattern recognition, NLP, chatbots, neuromorphic computing, cognitive cybersecurity, robotics, gaming and more.
Fig. 3.3: AI technology landscape.
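Speech recognition assistants such as Siri or Alexa ultimately map a spoken utterance to an intent the system can act on. The toy sketch below illustrates only that final mapping step, using simple keyword overlap; real assistants use trained language models, and all names here are our own invention.

```python
# Toy intent recognizer: a deliberately simplified stand-in for the
# natural-language-understanding step inside assistants such as Siri or
# Alexa. Keyword matching is for illustration only.
INTENTS = {
    "weather":  {"weather", "rain", "temperature", "forecast"},
    "music":    {"play", "song", "music"},
    "reminder": {"remind", "reminder", "schedule"},
}

def recognize_intent(utterance: str) -> str:
    """Return the intent whose keyword set overlaps the utterance most."""
    words = set(utterance.lower().split())
    best, overlap = "unknown", 0
    for intent, keywords in INTENTS.items():
        n = len(words & keywords)
        if n > overlap:
            best, overlap = intent, n
    return best
```

For example, `recognize_intent("Will it rain tomorrow")` maps to the weather intent, while an utterance matching no keywords falls back to "unknown".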
3.4 Artificial life-based simulations

Life on Earth is amazingly complex. Innumerable small things contribute to our living system, and thousands upon thousands of species have been interacting with each other for ages to make the Earth what it is today, and continue to do so. But we cannot turn back the evolutionary clock to examine how the features of this ecosystem grew [26]. This is where AL comes into play. AL can be defined as a kind of artificial intelligence that mimics natural life and evolution, using simulations with robotics, computers and biochemistry to study what humans have been struggling to understand for years. In many cases, AL has the potential to discover life's essential character by making artificial "aliens" and comparing their behavior to real biology. Artificial life is important not only to biologists but also to engineers, who wish to understand the ability of evolution to create marvelous structures and processes that need no human intervention. The general relevance of simulations is so clear that some scientists and scholars have considered them a third way of doing science.
AL is further divided into three kinds. One is biochemical AL, wherein scientists create synthetic DNA in order to study it more effectively. Another is hardware-based AL, which consists of robots that are automatically guided to perform various tasks. The last is software-based AL, in which we strive to model a machine's brain. Many AL-based simulators have come to light over the years; a few examples are Tierra (executable DNA), Polyworld (creating a neural net), Creatures (simulated biochemistry) and ApeSDK (language/social simulation) [27]. The impact of such systems on the next major evolutionary transition is therefore quite evident. Creating life, or a whole living system, from scratch might sound unbelievable, but this is where science has reached. The last and probably biggest challenge for AL is discovering life on another planet, which would change our approach to the question "What is life?" AL is the future of science and biology.

Another area of interest is multi-agent simulation, which requires the development of multi-agent-based artificial life simulations. The aim here is to represent complex real-world systems as dynamic systems with the capability to make decisions. Traditional methods are not suitable for such complicated scenarios, and hence multi-agent simulations are quite necessary today. Examples of multi-agent-based modeling include individual-based modeling (IBM) and agent-based systems (ABS). The basic idea behind an agent-based model is the emergence of social structures and clusters of behaviors from the interactions of single agents. These agents operate in virtual or artificial environments, and simulations of them are only valid when the computational and memory limitations of each agent are taken into account. Table 3.2 shows a comparison between traditional and agent-based modeling.

Tab. 3.2: Comparison between traditional and agent-based modeling.

Traditional modeling              | Agent-based modeling
Focus on continuous time          | Emphasis on discrete time
Mathematical language (equations) | Descriptive model
Aggregate-level granularity       | Personal-level granularity
Top-down approach                 | Bottom-up approach
Pre-defined behavior              | Evolving behavior
Global control                    | Local control
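A classic example of software-based artificial life, and of the bottom-up emergence that agent-based modeling relies on, is Conway's Game of Life: each cell follows a purely local rule, yet global patterns such as oscillators and gliders emerge. A minimal sketch:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life: a cell is alive in the
    next generation if it has exactly 3 live neighbours, or exactly 2
    and is already alive."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A horizontal "blinker": three live cells in a row.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Starting from the horizontal blinker, one step yields its vertical orientation and a second step restores the original: period-2 behavior that no single cell's rule mentions, a small instance of bottom-up emergence.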
3.5 Virtual reality systems

In the last few years, VR and AR have drawn the interest of researchers and technology enthusiasts. The study of VR generally started on the graphics side of computing and has since extended to various subjects. Video games that use VR and are supported by VR tools are more common than in the past, and those tools can prove useful in other areas such as neuroscience, biology and psychology [28], as well.

A crucial aspect of VR is immersion. The degree of immersion depends on the type of VR system being used and can vary from high to low:
– Non-immersive systems: the easiest to use and the most affordable. Desktop applications use them to reproduce images of the virtual environment on an ordinary screen.
– Immersive systems: these offer a totally virtual experience in a simulated environment by using sensory devices such as head-mounted displays.
– Semi-immersive systems: an example is Fish Tank VR, which provides a stereo image of a three-dimensional (3D) scene viewed on the system's monitor by using a perspective projection coupled to the head position of the observer [29].

Advanced immersive systems come closest to reality: they give the user an impression of technological non-mediation and make the user feel present in the virtual environment. Figure 3.4 depicts the architecture of a typical VR system. The input processor controls the devices used to enter data into the computer (mouse, trackers and the voice recognition system) and passes the organized data to the rest of the system with minimal delay. The simulation processor is the core of the VR system: it takes the user input, along with any programmed tasks, and determines the actions that will take place in the virtual world. The rendering processor generates the sensations that are output to the user; distinct rendering processes are used for haptic, visual or auditory sensations. A VR system also has a world database, which stores the objects of the virtual world.
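The four components just described can be sketched as a minimal frame loop. All class and method names below are illustrative stand-ins of our own, not a real VR API.

```python
# Illustrative sketch of the VR architecture in Fig. 3.4.
class WorldDatabase:
    """Stores the objects of the virtual world."""
    def __init__(self):
        self.objects = {"cube": {"position": (0, 0, 0)}}

class InputProcessor:
    """Reads tracker/mouse/voice input with minimal latency."""
    def poll(self):
        return {"move": (1, 0, 0)}   # pretend the tracker moved

class SimulationProcessor:
    """Core of the VR system: applies user input to the world state."""
    def update(self, world, user_input):
        x, y, z = world.objects["cube"]["position"]
        dx, dy, dz = user_input["move"]
        world.objects["cube"]["position"] = (x + dx, y + dy, z + dz)

class RenderingProcessor:
    """Produces output sensations (here, just a textual 'visual' frame)."""
    def render(self, world):
        return f"cube at {world.objects['cube']['position']}"

def frame(world, inp, sim, ren):
    """One iteration of the VR loop: input -> simulation -> rendering."""
    sim.update(world, inp.poll())
    return ren.render(world)
```

Each call to `frame` polls the input, updates the world database through the simulation processor and renders the result, mirroring the data flow of Fig. 3.4.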
In today’s world, VR is very popular when it comes to the gaming industry; however, limited research has been done on this domain. Many technologies that are used in creating and implementing virtual environments can also be explored on the research side of things. For instance, current advances in VR can allow for simpler merging of humans for management and control of complicated, multimodal system of systems. A virtually matched system can be merged into an AIbased system of systems. The outcomes of implementing VR as a simulation environment prove the effectiveness of the device to system of systems researchers [30].
Fig. 3.4: Architecture of a VR system. Position and orientation input flows through the input processor to the simulation processor, which consults the world database; the rendering processor produces the visual, auditory and haptic output.
3.6 Artificial intelligence in virtual reality

Intelligence demonstrated by machines can quite simply be termed AI, while VR is a simulated experience that may or may not resemble the real world: it places the user inside a 3D environment without any of it really happening. AI and VR are two technologies set to merge, considering the pace at which such advances are taking over our lives, and there is no doubt that combining two or more technologies will only make everything better and more efficient. On that basis, AI and VR are one of the best combinations anyone can imagine. AI in VR offers seemingly endless possibilities, which is why about eight of the ten biggest tech firms in the world have invested in this field. Some applications of this integration include:
– Travel and tourism: hotels, airlines, tourist destinations, etc. give their customers a preview of the experience they are about to have.
– Engaging entertainment: a gamer is immersed in a simulated environment that makes them feel as if they are inside the game; one can only imagine the thrill this brings.
– Immersive shopping: customers can try out whatever they wish to purchase before placing an order. This helps not only consumers but also businesses, as it lets them understand their customers better and thereby boost sales. The COVID pandemic has made this concept even more relevant, because people can shop at their convenience without having to visit a physical store.
To conclude, it is not wrong to say that AI in VR presents amazing opportunities, and their convergence is on its way to change our lives indefinitely. This is what we call the future: together, AI and VR bring out the reality of virtual AI.
With time, VR-based systems are increasing, and by layering these systems with AI, previously impossible experiences have been developed. It is believed that, together, AI and VR played a role in accelerating the development of the COVID vaccine.
3.7 Convergence of AI and AL

Artificial life (AL) is a relatively young technical field that aims at analyzing man-made systems displaying behaviors that come instinctively to living organisms in their environment. AL complements the traditional biological sciences, which are focused on analyzing and studying living organisms, by trying to create life-like behaviors within computers or other artificial media. Research in classical artificial intelligence (AI), by contrast, focuses on replicating the most intricate faculties of the human brain, such as problem solving, natural language understanding and logical reasoning [31].
3.8 Merging the three technologies

AL, AI and VR are two-letter acronyms that are changing the world [32]. The combination of these technologies is set to make our lives easier and studies in the field of science more effective. They hold the potential to reinvent the way we presently do things; together, these three technological tools have achieved major milestones in innumerable aspects of life [33]. Be it education, travel, science or culture, every field has tasted their flavor at least once, and there is little that cannot be done with the help of this merging. When humans touch, experience, live and interact with the world, they tend to learn better and comprehend more. Seeing the possibilities that AI, AL and VR can offer, there is no looking back on using them. With the COVID-19 pandemic taking over our lives, the demand and need for them have only risen: situations we could not have imagined handling without stepping out of home are now dealt with from inside it. Though AI/VR/AL systems, headsets, cameras, glasses, etc. are expensive today, in the not-too-distant future [33] they may become as accessible as any basic computer or phone that people are familiar with. With the introduction of 5G networks, one may expect a complete game changer that promises to boost these technologies and bring together the physical and virtual worlds beyond our expectations [33]. The opportunities that the merging of AI, AL and VR offers are endless, and the advancements coming forward are breaking borders left, right and center.
3.9 Future trends

With an ever-increasing number of technological advancements, the Internet of Things (IoT) is digitizing workplace realities everywhere. By augmenting everything with AI, we can create smart realities, which promise to add a layer of real-time knowledge that helps workers master their realities. An intelligent reality is a technologically enhanced reality that improves human cognitive performance and judgment; it can assist a worker by displaying information from the IoT in the physical reality. Since this reality is grounded in real time, streaming analytics is also a pivotal component of intelligent realities. Figure 3.5, adapted from earlier work on intelligent realities, attempts to map this area; it also tries to de-emphasize the role of new headgear in operationalizing analytics and AI.
Fig. 3.5: A systems architecture view of future reality. The diagram spans augmented reality (flat screen, monocle HMD, smartphone/tablet; physically local), mixed reality (local or remote) and fully immersive VR with virtual worlds (remote), arranged along an abstraction axis from physical reality to data reality.
Starting from the right of the diagram: reality can be the actual physical reality, or it may very well be an abstract reality. For example, a machine in front of a technician is a physical reality, while the supply chain for that machine is abstract – a data reality. The left half of the diagram outlines the three key concepts of augmented reality, mixed reality and virtual reality. AR devices do not occupy the user's whole field of view; we can still see the surrounding reality, so AR is always close to the actual physical reality. However, the fact that we can see the surrounding physical reality does not mean that it is relevant to the task at hand. A fully immersive VR headset completely shuts out physical reality; we are thus always remote and isolated from it. Mixed reality sits between AR and VR and can be either local or remote.
Merging technologies like AI, AL and VR will give rise to many new concepts and technologies that will solve many problems of the present and forthcoming world and emerge as useful concepts. A few of them are:
3.9.1 Pedagogical agents

A pedagogical agent is an anthropomorphic virtual character used in an online learning environment for instructional purposes. The design of pedagogical agents changes over time, depending on the objectives desired for them [34]. Figure 3.6 depicts the cognitive models used for multimedia learning with pedagogical agents: the multimedia presentation (spoken words, written words, the agent image and other pictures) reaches sensory memory through the ears and eyes, is integrated in working memory as verbal and pictorial mental models, and connects with prior knowledge in long-term memory.

Fig. 3.6: Cognitive models used for multimedia learning with pedagogical agents.
Fundamentally, a pedagogical agent is a simulated human-like interface between the user and the content in an educational setting, designed to model the kinds of interactions a user would have with another person. Mabanza and de Wet characterize it "as a character enacted by a computer that interacts with the user in a socially engaging way" [35]. A pedagogical agent can be assigned various roles in the learning environment, such as tutor or co-learner, depending on its intended purpose: "A tutor agent plays the role of a teacher, while a co-learner agent plays the role of a learning companion."
– Pedagogical agents can be designed to support cognitive transfer to the learner, functioning as artifacts or collaborating as partners in learning. To support the user's performance of an activity, the pedagogical agent can act as a cognitive tool, provided it is equipped with the knowledge the user needs. The interactions between the user and the pedagogical agent can foster a social relationship, in which the agent fulfils the role of a working partner.
– A pedagogical agent can intervene when the user requests it, provide help with tasks the user cannot solve and, possibly, extend the learner's cognitive reach. Interaction with the pedagogical agent may evoke a variety of emotions in the learner.
– The design of a pedagogical agent frequently starts with its digital representation: whether it will be 2D or 3D, and static or animated. Several studies have created pedagogical agents that were both static and animated, and the relative advantages were then assessed.
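The tutor and co-learner roles described above can be made concrete as a tiny interface. The class below is purely illustrative, not taken from any real learning platform; it only demonstrates how the role assigned to an agent changes its behavior toward the learner.

```python
# Minimal sketch of a pedagogical agent with the two roles described in
# the text (tutor vs. co-learner). Entirely illustrative.
class PedagogicalAgent:
    ROLES = ("tutor", "co-learner")

    def __init__(self, role: str):
        if role not in self.ROLES:
            raise ValueError(f"role must be one of {self.ROLES}")
        self.role = role

    def respond(self, learner_question: str) -> str:
        if self.role == "tutor":
            # A tutor agent intervenes with guidance, like a teacher.
            return f"Hint for: {learner_question}"
        # A co-learner agent behaves like a learning companion.
        return f"Let's work out '{learner_question}' together."
```

The same question thus yields a hint from a tutor agent and an invitation to collaborate from a co-learner agent.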
3.9.2 Virtual worlds

A virtual world is a computer-simulated environment [36], which may be populated by many users who can create a personal avatar and simultaneously and independently explore the virtual world, participate in its activities and communicate with others. These avatars can be textual or graphical representations, or live video avatars with auditory and touch sensations. Some application areas of virtual worlds are:
– Social: users may create characters within the network adapted to the specific world they are interacting with, which can affect the way they think and act. Online friendships and participation in online communities tend, in general, to supplement existing friendships and civic participation rather than replace or diminish such connections.
– Medical: disabled or chronically ill people of all ages can benefit enormously from experiencing the mental and emotional freedom gained by temporarily leaving their disabilities behind and doing, through their avatars, things as simple and ordinarily available to able-bodied people as walking, running, dancing, sailing, fishing, swimming, surfing, flying, skiing, gardening, exploring and other activities which their illnesses or disabilities prevent them from doing in real life. They may also be able to socialize, and form friendships and relationships, much more easily, avoiding the stigma and other obstacles that would ordinarily be attached to their disabilities. This can be much more productive, personally fulfilling and mentally satisfying than passive pastimes such as watching television, playing computer games, reading or more conventional kinds of internet use.
– Commercial: as organizations compete in the real world, they also compete in virtual worlds. The growth in the buying and selling of items online (e-commerce), twinned with the rise in popularity of the web, has forced organizations to adjust to accommodate the new market. Many companies and organizations now use virtual worlds as a new form of advertisement; using VR for ads has many advantages, as the commercial space is attractive and a crowd-puller.
– Education: virtual environments are an exciting new medium for instruction and training that presents numerous opportunities, but also some challenges. Persistence allows continuing and evolving social interaction, which can itself serve as a basis for collaborative education. The use of virtual worlds can offer instructors the opportunity to achieve a greater degree of student participation.
3.10 Conclusion

Artificial intelligence offers reliability and cost-effectiveness, handles complicated problems and decisions, and keeps knowledge from being lost. AI is applied these days in many fields, whether business or design. One of its powerful tools is "reinforcement learning," which relies on trial success and failure to increase the reliability of applications. AL can be described as a type of artificial intelligence that mirrors natural life and evolution by using simulations with robotics, computers and biochemistry to study what humans have been struggling to understand for years. VR lets us create simulated environments that we can immerse ourselves in. The combination of all these technologies is set to bring us an easier life and make studies in the field of science more effective. Our chapter highlights how these three technologies hold the potential to reinvent the way we presently do things. Together, these technological tools have achieved major milestones in innumerable aspects of life; be it education, travel, science or culture, every field has tasted their essence once and will continue to do so in the years to come.
References

[1] Aylett, R., Luck, M., Coventry, M., & Al, C. (2001). Applying artificial intelligence to virtual reality: Intelligent virtual environments. Applied Artificial Intelligence, 14. doi: 10.1080/088395100117142.
[2] Laukkanen, K., Kotovirta, M., & Ronkko (2004). Adding intelligence to virtual reality. Proceedings of the 16th European Conference on Artificial Intelligence (ECAI), pp. 1136–1141.
[3] Luck, M., & Aylett, R. (2000). Applying artificial intelligence to virtual reality: Intelligent virtual environments. Applied Artificial Intelligence, 14(1), 3–32. doi: 10.1080/088395100117142.
[4] Fade, L. (2019). VR and AI: Two technologies set to merge. VR Vision Group.
[5] Flavián, C., Ibáñez-Sánchez, S., & Orús, C. (2019). The impact of virtual, augmented and mixed reality technologies on the customer experience. Journal of Business Research, 100, 547–560. ISSN 0148-2963.
[6] Berg, L. P., & Vance, J. M. (2016). Industry use of virtual reality in product design and manufacturing: A survey. Virtual Reality, 21(1), 1–17.
[7] Bonetti, F., Warnaby, G., & Quinn, L. (2018). Augmented reality and virtual reality in physical and online retailing: A review, synthesis and research agenda.
[8] Van Kerrebroeck, H., Brengman, M., & Willems, K. (2017). Escaping the crowd: An experimental study on the impact of a virtual reality experience in a shopping mall. Computers in Human Behavior, 77, 437–450.
[9] Griffin, T., Giberson, J., Lee, S. H., Guttentag, D., Kandaurova, M., Sergueeva, K., & Dimanche, F. (2017). Virtual reality and implications for destination marketing. Proceedings of the 48th Annual Travel and Tourism Research Association (TTRA) International Conference, Quebec City, Canada.
[10] Merchant, Z., Goetz, E. T., Cifuentes, L., Keeney-Kennicutt, W., & Davis, T. J. (2014). Effectiveness of virtual reality-based instruction on students' learning outcomes in K-12 and higher education: A meta-analysis. Computers & Education, 70, 29–40.
[11] Freeman, D., Reeve, S., Robinson, A., Ehlers, A., Clark, D., Spanlang, B., & Slater, M. (2017). Virtual reality in the assessment, understanding, and treatment of mental health disorders. Psychological Medicine, 47(14), 2393–2400.
[12] Lin, J. H. T., Wu, D. Y., & Tao, C. C. (2017). So scary, yet so fun: The role of self-efficacy in enjoyment of a virtual reality horror game. New Media & Society. doi: 10.1177/1461444817744850.
[13] Bigné, E., Llinares, C., & Torrecilla, C. (2016). Elapsed time on first buying triggers brand choices within a category: A virtual reality-based study. Journal of Business Research, 69(4), 1423–1427.
[14] Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems, 77(12), 1321–1329.
[15] Penfold, P. (2009). Learning through the world of Second Life – A hospitality and tourism experience. Journal of Teaching in Travel & Tourism, 8(2–3), 139–160.
[16] Schroeder, R. (2008). Defining virtual worlds and virtual environments. Journal of Virtual Worlds Research, 1(1).
[17] Guttentag, D. A. (2010). Virtual reality: Applications and implications for tourism. Tourism Management, 31(5), 637–651.
[18] Pan, Z., Cheok, A. D., Yang, H., Zhu, J., & Shi, J. (2006). Virtual reality and mixed reality for virtual learning environments. Computers & Graphics, 30(1), 20–28.
[19] Tamura, H., Yamamoto, H., & Katayama, A. (2001). Mixed reality: Future dreams seen at the border between real and virtual worlds. IEEE Computer Graphics and Applications, 21(6), 64–70.
[20] Javornik, A. (2016). Augmented reality: Research agenda for studying the impact of its media characteristics on consumer behaviour. Journal of Retailing and Consumer Services, 30, 252–261.
[21] Yim, M. Y. C., Chu, S. C., & Sauer, P. L. (2017). Is augmented reality technology an effective tool for e-commerce? An interactivity and vividness perspective. Journal of Interactive Marketing, 39, 89–103.
[22] Mann, S. (2002). Mediated reality with implementations for everyday life. Presence Connect.
[23] Schnabel, M. A., Wang, X., Seichter, H., & Kvan, T. (2007). From virtuality to reality and back. Proceedings of the International Association of Societies of Design Research, pp. 1–15.
[24] Jeon, S., & Choi, S. (2009). Haptic augmented reality: Taxonomy and an example of stiffness modulation. Presence: Teleoperators and Virtual Environments, 18(5), 387–408.
3 Merging of artificial intelligence (AI), artificial life (AL) and virtual reality (VR)
43
[25] Vaishya, R., Javaid, M., Khan, I. H., & Haleem, A. (2020). Artificial Intelligence (AI) applications for COVID-19 pandemic. Diabetes & Metabolic Syndrome: Clinical Research & Reviews, 14(Issue4), Pages 337–339. ISSN 1871–4021. [26] Introduction to Artificial Life for people who like AI. The Gradient. (2019). [27] Ruas, T. L., Marietto, M. D. G. B., de Moraes Batista, A. F., Santos França, R. D., Heideker, A., Noronha, E. A., & da Silva, F. A. (2011). Modeling Artificial Life through Multi-Agent Based Simulation, Multi-Agent Systems – Modeling, Control, Programming, Simulations and Applications. In Alkhateeb, F., Maghayreh, E. A. & Doush, I. A. IntechOpen. doi: 10.5772/ 14313. [28] Pietro, C., Irene Alice Chicchi, G., Mariano Alcañiz, R., & Giuseppe, R. (2018). The past, present, and future of virtual and augmented reality research: a network and cluster analysis of the literature. Frontiers in Psychology, 2, 2086. [29] Ware, C., Arthur, K., & Booth, K. S. (1993). Fish tank virtual reality. In Proceedings of the INTERACT’93 and CHI’93 Conference on Human Factors in Computing Systems, (Amsterdam: ACM), 37–42. doi: 10.1145/169059.169066. [30] Lwowski, M. J., Majumdar, A., Benavidez, P., Prevost, J. J., & Jamshidi, M. The utilization of virtual reality as a system of systems research tool. In 2018 13th Annual Conference on System of Systems Engineering (SoSE), Paris, 2018, pp. 535–540, doi: 10.1109/ SYSOSE.2018.8428750. [31] Meyer, J.-A. Artificial Life and the Animat Approach to Artificial Intelligence, in Handbook of Perception and Cognition, Artificial Intelligence. Academic Press, 1996, 325–354. ISBN 9780121619640. [32] Andrew Fowkes on AI, ML, AR, VR – Are two-letter acronyms changing the world? On Hidden Insights, 29 January 2018. [33] Liu, A. Back to the Future Classroom: VR/AR/AI Transformation, August 22, 2020. [34] Martha, D., & Santoso, H. (2019). The design and impact of the pedagogical agent: A systematic literature review. 
The Journal of Educators Online, 16. doi: 10.9743/ jeo.2019.16.1.8. [35] Mabanza, N., & de Wet, L. (2014). Determining the usability effect of pedagogical interface agents on adult computer literacy training. E-Learning Studies in Computational Intelligence, 528, 145–183. doi: 10.1007/978-3-642-41965-2_6. ISBN 978-3-642-41964-5. [36] Bartle, R. (2003). Designing Virtual Worlds. New Riders. ISBN 978-0-13-101816-7.
Saksham Goyal
4 Pain relief management
Abstract: Pain relief management is a critical aspect of healthcare that plays a crucial role in improving patients' quality of life. With advancements in technology, the integration of artificial intelligence (AI) and virtual reality (VR) has shown promising potential to revolutionize pain management approaches. This chapter presents an overview of the use of AI and VR in pain relief management, highlighting their synergistic benefits and recent advancements. AI-powered systems have been developed to provide personalized pain assessment and treatment plans based on individual patient data, including medical history, physiological parameters and pain response patterns. Machine learning algorithms analyze these data to identify optimal pain relief strategies, allowing healthcare professionals to customize treatment approaches for each patient. Furthermore, AI algorithms can continuously adapt and refine pain management plans based on real-time feedback and patient outcomes.
Keywords: Artificial intelligence (AI), virtual reality (VR), machine learning (ML)
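As a rough illustration of the data-driven personalization idea sketched in the abstract, the snippet below recommends a relief strategy by looking at the k most similar past patients. This is a minimal sketch only: all records, feature choices and strategy names are invented for illustration, and a real system would rely on clinically validated data and models.

```python
import math

# Hypothetical historical records: (patient features, strategy that worked best).
# Features: (age, baseline pain on a 0-10 scale, resting heart rate).
# All data are invented for illustration only.
HISTORY = [
    ((34, 7, 88), "vr_distraction"),
    ((67, 5, 72), "guided_relaxation"),
    ((29, 8, 95), "vr_distraction"),
    ((71, 4, 70), "guided_relaxation"),
    ((45, 6, 85), "vr_plus_analgesia"),
]

def recommend_strategy(patient, k=3):
    """Pick the strategy most common among the k most similar past patients."""
    nearest = sorted(HISTORY, key=lambda rec: math.dist(rec[0], patient))[:k]
    votes = {}
    for _, strategy in nearest:
        votes[strategy] = votes.get(strategy, 0) + 1
    return max(votes, key=votes.get)

print(recommend_strategy((30, 8, 92)))  # young patient, severe pain
```

Real-time feedback, as described above, would correspond to appending each new (features, outcome) pair to the history so later recommendations reflect it.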
Saksham Goyal, Amity University, Noida, Uttar Pradesh, India https://doi.org/10.1515/9783110713817-004

4.1 Virtual reality
The use of computer technology to construct a simulated world is known as virtual reality (VR). In contrast to typical user interfaces, VR creates an artificial environment that gives users an immersive, in-depth experience. Users are immersed in, and able to engage with, 3D worlds rather than watching a screen in front of them. By replicating as many senses as possible, such as vision, hearing, touch and even smell, the computer becomes a gateway to this artificial world. The only constraints on near-real VR experiences are content availability and limited computational power. There are two forms of virtual reality: immersive VR and text-based networked VR (also known as "cyberspace"). Immersive VR adjusts your view as you move your head. While both forms are suitable for teaching, remote learning is best done in cyberspace, and the two categories are often complementary to one another [1, 2]. This chapter focuses mostly on immersive VR. Conventional VR systems provide fictional but realistic visuals, sounds and other sensory stimuli that simulate a user's physical presence in a virtual world, attempting to convince the human senses with the help of virtual reality headsets or multi-projected environments. A person experiencing a VR system may see or feel the virtual environment moving around them and interact with virtual objects, making it seem very realistic to the senses. VR headsets are one of the tools that make this possible; these headsets place a tiny screen in front of the eyes to achieve the maximum immersion effect, though immersion can also be achieved in specially constructed rooms with large displays [3]. Auditory and visual feedback are common in virtual reality, but haptic technology can add further forms of sensory and physical feedback. In medical services, introducing virtual reality technology into ongoing therapy programs may assist the treatment of mental illnesses, particularly post-traumatic stress disorder [4]. Beyond gaming and industrial and commercial uses, VR has the ability to help individuals recover from, reconcile with and comprehend real-world experiences, whether that means helping veterans confront problems in a controlled setting or conquering phobias in tandem with behavioral treatment. Virtual reality is quickly reshaping the healthcare business, changing how patients receive treatment and how hospital professionals provide it. Some expectant mothers have even tried virtual reality headsets to help them cope with the discomfort of childbirth.
Fig. 4.1: VR patient care.
New research from Cedars-Sinai Medical Center backs up the notion that therapeutic virtual reality can safely and effectively relieve intense pain in hospitalized patients. According to a study published in the journal PLOS ONE, virtual reality can considerably reduce people's pain, especially in those who experience severe pain. Because pain management has traditionally relied on pharmaceutical drugs, many of which have the potential to be addictive,
these findings imply that virtual reality might be a safe, effective and drug-free alternative for treating some forms of pain. Between 2016 and 2017, researchers evaluated 120 patients at Cedars-Sinai Medical Center in Los Angeles to see how effectively therapeutic VR reduces pain. Prior to using the VR headsets, the participants had a variety of medical issues and were in moderate-to-severe discomfort. Sixty-one people were given a virtual reality headset, the Samsung Gear Oculus, with access to 21 distinct immersive experiences, such as a simulated helicopter flight over rugged Iceland or guided relaxation while gazing at peaceful ocean or mountain views. Over the course of 48 h, they used the headsets for three 10-min periods every day. The remaining 59 people watched television programs that included guided relaxation techniques such as yoga and meditation, as well as poetry readings. The researchers monitored how the patients' pain ratings changed during the VR and TV sessions. On a scale of 1 to 10, self-reported pain levels decreased by 0.46 points in the group that watched television programs and by 1.72 points in the patients who used VR headsets [5]. Patients with the most severe pain reported the biggest benefits from the VR headsets, with their pain scores dropping three points on average. While a two- or three-point decline may appear small, it indicates a significant reduction in pain levels.
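To put the reported numbers in perspective, a quick back-of-the-envelope calculation follows; only the two mean reductions come from the study, while the comparison itself is ours.

```python
# Mean self-reported pain reductions on the 1-10 scale, as reported above.
tv_drop = 0.46   # group watching relaxation TV programs
vr_drop = 1.72   # group using VR headsets

extra_relief = vr_drop - tv_drop   # additional relief associated with VR
ratio = vr_drop / tv_drop          # relative size of the VR effect

print(f"VR produced {extra_relief:.2f} more points of relief, "
      f"about {ratio:.1f} times the TV effect")
```

The roughly 3.7-fold difference in mean reduction is why the authors describe the VR effect as considerable despite the seemingly small absolute numbers.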
4.2 How VR helps people cope with pain
Researchers are not sure why virtual reality helps people cope with pain so well. Many health specialists believe that virtual reality diverts people's attention away from their discomfort. According to Dr. Medhat Mikhael, a pain management specialist and medical director of the non-operative program at the Memorial Care Orange Coast Medical Center [6, 7], the most widely accepted explanation is the gate theory of attention, which holds that VR reduces pain perception by absorbing and diverting attention away from pain. When people are immersed in an activity, they learn to tune out other inputs, including pain signals from their bodies. Many VR experiences also contain relaxation techniques, such as guided meditations, which are key skills for managing acute and chronic pain, according to Darnall.
4.3 More about the possibilities of VR
A few more questions need to be answered before we can fully comprehend and utilize the potential of virtual reality in the healthcare industry. Additional research is required to study the impact of VR in a range of different surgical procedures, along with its potential therapeutic usefulness in enhancing function, speeding surgical recovery and reducing long-term consumption of pain medication. Researchers want to see if different forms of virtual reality have distinct health consequences, and whether specific personality types respond better to VR solutions [8]. The most pressing question appears to be whether virtual reality can alleviate pain while simultaneously lowering the quantity of opiates that patients require. If future research continues to show that VR can help individuals better manage pain and other symptoms that typically need medication, the potential healthcare cost savings from VR therapies could be tremendous.
4.4 How virtual reality can help seniors with dementia
Virtual reality (VR) technology may be used to enhance health in various ways, including improving the quality of life of dementia patients. A recent study illustrates how VR can aid even individuals with a severe form of dementia such as Alzheimer's disease.
4.4.1 What the study found
A program was run with doctors to help choose appropriately soothing locations. Each participant wore a virtual reality headset and visited five different virtual locations: countryside, a sandy beach, a rocky beach, a church and a forest. Patients were free to pick their own setting. Researchers tracked several 15-min virtual reality sessions and analyzed input from patients and their carers. Some patients preferred to stay in the same setting over and over, whereas others preferred to branch out [9]. According to the team, the use of virtual reality enabled the participants to recollect past experiences by exposing them to new stimuli that they would not otherwise have encountered, owing to illness or inaccessibility. Because the carers were able to learn more about the patients' lives, their social interactions improved. One patient reflected on the VR experience and subsequently created a seashore painting during an art lesson a few weeks following
the session. Researchers concluded that VR had a positive effect on his mood as well as his capacity to enjoy the creative process. The study found that a higher quality of life on the wards reduces anxiety, sadness and hostility. The team had previously tested VR technology on dementia patients in day care centers and residential care settings. They anticipate further study to evaluate the features of virtual worlds that make VR so beneficial, as well as how to use it more efficiently. As it becomes simpler to create virtual worlds, designers may be able to create unique VR settings for patients [10]; allowing them to explore their house or a favorite area is one example.
4.5 Pain from cancer
VR technology has also been found to reduce the pain, discomfort and anxiety caused by painful cancer treatments, such as chemotherapy, lumbar puncture and port access [11]. Schneider and Workman looked at 11 youngsters (ages 10–17) who were given chemotherapy with or without VR. VR therapy was rated better than prior therapies by 82% of the youngsters. With a group of 30 teenagers (ages 10–19 years), Sander Wint et al. explored VR usage during lumbar puncture. Compared with the standard of care, VR distraction was found to be far more effective and engaging in lowering physiological distress, pulse rate and pain ratings [13, 14]. The time-elapse compression effect has been shown to reduce discomfort and the perceived time spent receiving treatment.

Fig. 4.2: VR ease of use.
4.6 Pain relief during labor
Using virtual reality for women in labor could have many benefits, such as:
– lower costs
– few side effects (though it may not be appropriate for those with motion sickness)
– low risk to mother or baby (the most common side effect reported is nausea)
– effective pain relief
– a medication-free option
– choices to empower a mother in her birthing experience
It may also offer relief during post-birth procedures, like stitches for tears or incisions.
Fig. 4.3: Maternity care.
4.7 Burn care
One of the most investigated uses of VR technology is pain and anxiety reduction during burn procedures and burn survivor rehabilitation. Burn-wound treatment clearly causes patients a great deal of pain, worry and discomfort [15]. A case study
published by Hoffman et al. in 2000 compared the efficacy of virtual reality to that of a typical video game for two teenagers (16 and 17 years old) receiving burn wound care [16]. Virtual reality was shown to reduce discomfort and anxiety, along with the time spent thinking about the pain. During burn wound treatment, Das et al. conducted a randomized controlled experiment comparing the standard of care (analgesia) with analgesia plus VR for children (5–18 years old). Analgesia combined with VR proved more effective than analgesia alone in lowering pain and suffering. More recently, a water-friendly VR system was studied during wound debridement for 11 patients (9–40 years old), indicating that VR reduced discomfort and improved enjoyment for those who reported feeling involved in the VR game [17]. Virtual reality has also been studied with burn patients receiving physical therapy. During physical therapy, Hoffman et al. compared pharmaceutical analgesia alone with VR in addition to analgesia; patients in the VR group had less discomfort and a greater range of motion. In another study, Hoffman et al. compared the use of virtual reality with no distraction during physical therapy; patients again reported less discomfort and a greater range of motion after the VR treatment [18, 19]. Sharar et al. analyzed data from three investigations and found that using virtual reality in addition to traditional analgesics lowered pain intensity, unpleasantness and time spent thinking about pain.
Fig. 4.4: Patient assistance.
4.8 Recovery from stroke or head injury
Virtual reality in the medical arena may also aid patients recuperating from strokes or brain trauma. Virtual reality settings can help patients overcome balance and mobility issues: after practicing in virtual surroundings, patients may feel more confident moving around in the actual world. A physical therapist can use virtual reality to manipulate the patient's environment and assess its influence on balance and mobility [20]. They might progressively expose patients to more difficult situations while providing feedback on how to respond. VR games that stimulate regions of the brain affected by a stroke might aid stroke victims in their rehabilitation. Medical experts can trace patients' motions as they play games using 3D motion-tracking cameras. Although further study of these approaches is needed, early studies on the use of virtual reality in therapy for increasing arm and hand movement in patients with cerebral palsy and in stroke patients showed encouraging results.
Fig. 4.5: VR support.
4.9 VR and neurobiology
In functional imaging investigations of the human brain's response to painful stimuli, increased activity is seen in certain areas such as the anterior cingulate gyrus, the insula and the thalamus [21]. The response varies in degree across a variety of aversive and nonaversive stimuli, and with task-related attention, distraction and emotional state. Imaging, using well-controlled psychophysical measurements, has linked the responses of select subsets of these areas to sensation/pain intensity and
pain unpleasantness. However, there is now considerable discussion over how to distinguish between brain areas that process sensory information and those that mediate emotional responses [22, 23]. There is no clear understanding of how the brain's sensory and affective systems engage together along the dimension of pain in response to potentially unpleasant inputs, nor has functional imaging or any other technique of inquiry established an objective, direct brain correlate of the pain experience.
Fig. 4.6: Body illusion.
Virtual reality has been discovered to reduce pain, a phenomenon known as "VR analgesia." Subjective reports of pain reduction with VR have been confirmed by functional MRI (fMRI) research demonstrating smaller increases in activity in areas usually strongly engaged by experimental thermal pain stimulation [24]. Hoffman's research, in particular, is primarily concerned with whether playing VR games lessens the increase in brain activity in the traditional pain regions linked with painful heat stimuli. Another study investigated the impact of VR and opioids on pain-related brain activity, finding that both opioids and VR dramatically decreased pain-related activity in the insula and thalamus, but not in other areas of the pain circuitry.
4.10 Future perspective
With the existing VR applications for various pain treatments, scientists, physicians and educators are only scratching the surface. Historically, VR technology has been expensive, making it accessible to a very select part of the population, used
primarily by researchers and game developers. Virtual reality as a pain treatment technique is still in its early phases of development. VR is swiftly gaining attention as a supplemental pain management technique, due to rapidly improving technology, increased interest in additional nonpharmacological therapies, and the burden and impairment associated with the rising incidence of chronic pain [25, 26]. Professionals such as neuroscientists, clinical researchers and pain management specialists are all interested in what was long regarded purely as high-tech entertainment equipment [27]. Over the next 5–10 years, virtual reality is going to have a remarkable impact on chronic and acute pain management, alongside physical and mental rehabilitation. VR will probably find multiple uses for patients with various acute and chronic medical illnesses as the price of VR technology falls and the customizability of game settings increases. VR will eventually be included in various medical settings as part of the healthcare provider's toolkit for routine painful medical procedures, pain rehabilitation and chronic pain management, and for the treatment of different psychiatric conditions (such as anxiety and post-traumatic stress disorder). VR is a highly effective tool since it can instantaneously transport a patient facing a dreaded circumstance to a virtual environment for distraction, assist diaphragmatic breathing through guided imagery, or support self-hypnosis.
References
[1] Hoffman, H. G., Doctor, J. N., Peterson, D. R., Carrougher, G. J., & Furness, T. A. (2000). Virtual reality as an adjunctive pain control during burn wound care in adolescent patients. Pain, 85, 305–309.
[2] Das, D. A., Grimmer, K. A., Sparon, A. L., McRae, S. E., & Thomas, B. H. (2005). The efficacy of playing a virtual reality game in modulating pain for children with acute burn injuries: A randomized controlled trial. BMC Pediatrics, 5, 1–10. One of the first randomized control trials of VR for pediatric burn care.
[3] Hoffman, H. G., Patterson, D. R., Seibel, E., Soltani, M., Jewett-Leahy, L., & Sharar, S. R. (2008). Virtual reality pain control during burn wound debridement in the hydrotank. The Clinical Journal of Pain, 24(4), 299–304.
[4] Hoffman, H. G., Patterson, D. R., & Carrougher, C. J. (2000). Use of virtual reality for adjunctive treatment of adult burn pain during physical therapy. The Clinical Journal of Pain, 16, 244–250.
[5] Hoffman, H. G., Patterson, D. R., Carrougher, C. J., & Sharar, S. R. (2001). Effectiveness of virtual reality-based pain control with multiple treatments. The Clinical Journal of Pain, 17, 229–235.
[6] Sharar, S. R., Carrougher, G. J., Nakamura, D., Hoffman, H. G., Blough, D. K., & Patterson, D. R. (2007). Factors influencing the efficacy of virtual reality distraction analgesia during postburn physical therapy: Preliminary results from 3 ongoing studies. Archives of Physical Medicine and Rehabilitation, 88(12 Suppl 2), S43–S49.
[7] Carrougher, G. J., Hoffman, H. G., Nakamura, D. et al. (2009). The effect of virtual reality on pain and range of motion in adults with burn injuries. Journal of Burn Care & Research, 30(5), 785–791.
[8] Patterson, D. R., Hoffman, H. G., Palacios, A. G., & Jensen, M. J. (2006). Analgesic effects of posthypnotic suggestions and virtual reality distraction on thermal pain. Journal of Abnormal Psychology, 115(4), 834–841.
[9] Oneal, B. J., Patterson, D. R., Soltani, M., Teeley, A., & Jensen, M. P. (2008). Virtual reality hypnosis in the treatment of chronic neuropathic pain: A case report. The International Journal of Clinical and Experimental Hypnosis, 56(4), 451–462.
[10] Konstantatos, A. H., Angliss, M., Costello, V., Cleland, H., & Stafrace, S. (2009). Predicting the effectiveness of virtual reality relaxation on pain and anxiety when added to PCA morphine in patients having burns dressings changes. Burns, 35(4), 491–499.
[11] Schneider, S. M., & Workman, M. L. (2000). Virtual reality as a distraction intervention for older children receiving chemotherapy. Pediatric Nursing, 26, 593–597.
[12] Sander Wint, S., Eshelman, D., Steele, J., & Guzetta, C. E. (2002). Effects of distraction using virtual reality glasses during lumbar punctures in adolescents with cancer. Oncology Nursing Forum, 29, E8–E15.
[13] Gershon, J., Zimand, E., Pickering, M., Rothbaum, B. O., & Hodges, L. (2004). A pilot and feasibility study of virtual reality as a distraction for children with cancer. Journal of the American Academy of Child and Adolescent Psychiatry, 43, 1243–1249.
[14] Talbot, J. D., Marrett, S., Evans, A. C., Meyer, E., Bushnell, M. C., & Duncan, G. H. (1992). Multiple representations of pain in human cerebral cortex. Science, 255(5041), 215–216.
[15] Coghill, R. C., Talbot, J. D., Evans, A. C. et al. (1994). Distributed processing of pain and vibration by the human brain. The Journal of Neuroscience, 14(7), 4095–4108.
[16] Casey, K. L., Minoshima, S., Morrow, T. J., & Koeppe, R. A. (1996). Comparison of human cerebral activation pattern during cutaneous warmth, heat pain, and deep cold pain. Journal of Neurophysiology, 76(1), 571–581.
[17] Becerra, L. R., Breiter, H. C., Stojanovic, M. et al. (1999). Human brain activation under controlled thermal stimulation and habituation to noxious heat: An fMRI study. Magnetic Resonance in Medicine, 41(5), 1044–1057.
[18] Craig, A. D., Chen, K., Bandy, D., & Reiman, E. M. (2000). Thermosensory activation of insular cortex. Nature Neuroscience, 3(2), 184–190.
[19] Hofbauer, R. K., Rainville, P., Duncan, G. H., & Bushnell, M. C. (2001). Cortical representation of the sensory dimension of pain. Journal of Neurophysiology, 86(1), 402–411.
[20] Derbyshire, S. W., Jones, A. K., Gyulai, F., Clark, S., Townsend, D., & Firestone, L. L. (1997). Pain processing during three levels of noxious stimulation produces differential patterns of central activity. Pain, 73(3), 431–445.
[21] Coghill, R. C., Sang, C. N., Maisog, J. M., & Iadarola, M. J. (1999). Pain intensity processing within the human brain: A bilateral, distributed mechanism. Journal of Neurophysiology, 82(4).
[22] Iadarola, M. J., Berman, K. F., Zeffiro, T. A. et al. (1998). Neural activation during acute capsaicin-evoked pain and allodynia assessed with PET. Brain, 121, 931–947.
[23] Rainville, P., Duncan, G. H., Price, D. D., Carrier, B., & Bushnell, M. C. (1997). Pain affect encoded in human anterior cingulate but not somatosensory cortex. Science, 277(5328), 968–971.
[24] Tolle, T. R., Kaufmann, T., Siessmeier, T. et al. (1999). Region-specific encoding of sensory and affective components of pain in the human brain: A positron emission tomography correlation analysis. Annals of Neurology, 45(1), 40–47.
[25] Gracely, R. H., & Kwilosz, D. M. (1988). The descriptor differential scale: Applying psychophysical principles to clinical pain assessment. Pain, 35(3), 279–288.
[26] Fields, H. L. (1999). Pain: An unpleasant topic. Pain, (Suppl 6), S61–S69.
[27] Bushnell, M. C., Duncan, G. H., Hofbauer, R. K., Ha, B., Chen, J. I., & Carrier, B. (1999). Pain perception: Is there a role for primary somatosensory cortex? Proceedings of the National Academy of Sciences of the United States of America, 96(14), 7705–7709.
Saksham Goyal
5 Intelligent shopping malls
Abstract: The intelligent retail mall is, at its core, a shopping mall management system, essentially a combination of hardware and software. It is equipped with cutting-edge technologies and a smart access system, supplying the user with an all-in-one administration platform. With the smart retail mall system, the admin has access to all available functions. It can integrate all of the devices and systems, making operation much easier for the operator. The technology also guarantees comprehensive security for clients, employees and anybody else connected to it, and helps create a pleasant environment. Accurate traffic information is also provided to support smart business decisions.
Keywords: Artificial intelligence (AI), virtual reality (VR), machine learning (ML)
5.1 Introduction
Shopping malls, as we all know, now offer a variety of weekly and even daily deals and schemes, advertised through tools like magazines and brochures available at the shop's entryway. This conventional approach has limits: if, for example, the stock on which an offer was made is depleted, the retailer cannot reorder the same item or compensate for the shortage with another item. To avoid this, we must replace the conventional approach with innovative, modern methods. The first is smartphones, which are now quite widespread and through which consumers may obtain information about the goods they want to buy, as well as a list of discounted products, via the app of the particular store or mall. The second is attainable by utilizing low-cost sensors to sense presence and position, and to determine the buyer's true desires, which will assist them in traversing the mall in order to visit a given store and take advantage of deals while shopping. Both of these strategies contribute to the creation of an intelligent environment whose primary goal is directing consumers to establishments that provide the greatest bargains so that they may purchase quickly. We assume that offers, together with the items they apply to, are limited to the stores that sell them. This framework offers a number of significant benefits to purchasers. Customers are directed via the app to the deals and items that they are looking for, all at a fair price. On the other side, it aids dealers in locating specific customers
Saksham Goyal, Amity University, Noida, Uttar Pradesh, India https://doi.org/10.1515/9783110713817-005
and presenting them with offers that they are interested in. In cases where many stores provide different offers for comparable items, the store offering the lowest cost receives first consideration. As a result, the merchants offering the best deals are identified, since they become visible to buyers before other dealers.
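The lowest-cost-first matching rule just described can be sketched in a few lines of Python. The store names, items and prices below are hypothetical, invented purely to illustrate the selection logic.

```python
# Hypothetical offers: item -> list of (store, discounted_price).
# Rule from the text: when several stores offer comparable items, the store
# with the lowest cost gets first consideration.
OFFERS = {
    "running shoes": [("SoleMart", 49.99), ("FootHub", 44.50), ("TrackZone", 52.00)],
    "summer dress":  [("StyleCo", 30.00), ("TrendLine", 27.75)],
}

def best_stores(shopping_list):
    """For each wanted item, pick the cheapest store currently offering it."""
    picks = {}
    for item in shopping_list:
        stores = OFFERS.get(item)
        if stores:  # skip items no store currently discounts
            picks[item] = min(stores, key=lambda offer: offer[1])
    return picks

print(best_stores(["running shoes", "summer dress", "umbrella"]))
```

In a full system, OFFERS would be populated from live store data and the result pushed to the customer's mall app as navigation targets.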
Fig. 5.1: Overview of shopping malls.
5.2 Related work
Similar works may be found in the literature. The authors of [1] advocate an indoor location-based recommender system with the goal of recommending suitable stores to clients based on their prior preferences. In the Phoenix Mall, for example, over 10,000 consumers were tracked for a week; this information was then used to create a recommender system that assesses buyer preferences and recommends shops based on their needs, directing the buyer to all of the best stores. Winkler et al. [2] also include a definition of a virtual mall. Their system provides a user interface based on projector phones using augmented reality principles, and is primarily focused on human–computer interaction concerns. In comparison with static guides and even regular or augmented reality mobile applications, projected interfaces provide a number of significant advantages. The authors provide five suggestions for a projector-phone-based shopping mall indoor help system, including shop selection aid, accurate wayfinding, "virtual fitting" of garments and context-aware and ambient advertising.
Furthermore, Yang et al. [3] offer a location-aware recommender system which matches a customer's wants with location-based merchant offers and promotions. Purohit et al. [4] describe the Sugar Trail method for indoor mapping in retail spaces, which eliminates the requirement for active tagging and the need for existing maps. The solution also gives the retailer more precise data than current radio-fingerprinting alternatives by exploiting the structured mobility patterns of customers in retail store surroundings. Olugbara et al. [5] present a work demonstrating how visual material may be utilized to realize a location-based retail recommender system for spontaneously assisting mobile consumers in decision-making. It uses generic Fourier descriptors of image material collected from a picture to rank suggestions, drawing on knowledge stored in the item database and the user's profile database. Our approach has various advantages over the previously described work. First, [1, 2] do not provide INS functionality, while [4] solely provides navigational capabilities. Finally, Winkler et al. [2] concentrate on augmented reality, whereas Yang et al. [3] concentrate on image analysis. These works focus on a few distinct aspects; they make no attempt to provide a strong platform on which to construct application scenarios. This is exactly what we have accomplished by using cellular automata, which allow a well-distributed information management system with adaptive capabilities for the entire system. In comparison with the studies identified in the literature, our privacy methods bring significant additional value to our concept.
Smart malls thrill visitors

Retailers began sending advertising directly to customer smartphones some time ago, but these promotions were limited to the area inside and immediately around a particular store. With the arrival of new technologies, malls can now deploy economical mall-wide data networks that stores can use for promotions, but only via the permission-based mall app. In malls that provide such apps, rapid customer adoption has been observed. The reason is that buyers can be influenced by in-store discounts, coupons and flash deals that they would otherwise overlook. Apps can also give interactive mall maps, as well as propose convenient routes for shopping combinations. For example, the Qwartz shopping mall in Villeneuve-la-Garenne, north of Paris, gives consumers mobile access to digital product information, price comparisons and a search engine covering all items offered in that mall. Personalized reward programs and way finding are also available through the Qwartz app.
60
Saksham Goyal
Consider this scenario

Maria’s smartphone automatically downloads the mall’s app as she walks through the mall’s door. It displays a list of stores that can complete her shopping list, as well as the best deals they have. She picks a shoe store that is offering a 25% discount as well as her favorite summer clothing store. She receives a notification from her chosen restaurant in the food court, “Pad Thai combination 50% off till 12:30,” and she stops for lunch while following the optimal path recommended by the mall’s app. Later, in the restroom, she is pleased to notice that the toilets are clean and fragrant, that the counters are wiped clean and that the soap dispensers are adequately stocked. After a while, another notification arrives informing her that children’s winter clothing is on sale for 55% off. Maria is happy and satisfied at the end of the day, and she is looking forward to her next trip to the mall [6].
Fig. 5.2: How it will work.
5.3 Features of smart shopping mall

1. Smart access: Users will be given smart access with their devices via the smart shopping mall system. All the technologies will be embedded in it, and it will also provide a single-window view.
2. Unified platform: A unified platform is provided through the smart shopping mall system for all the required aspects. All the crucial tangible and intangible areas will also be covered by the system. It provides a single-view platform that helps its users operate and modify the system’s functions.
3. Better security access: The system makes provision for an advanced security system, covering all the important areas that are vulnerable.
4. Smart parking: The parking area can also be managed by the user from his or her seat. A number of parking-related facilities can be incorporated into the system, such as counting, unknown-access detection, vehicle recognition and vendor flow.
5. Data center: All figures, diagrams and statistics, from day-to-day traffic as well as vendor–customer engagement, will be provided through the data center in the desired format [8].
6. Count and go: A count-and-go facility at the entrance will help in managing traffic and improving the customer experience.
7. 24×7 support: A team of skilled, talented and well-trained engineers will help 24×7 in case of any glitch, removing the fear of system crashes.
5.4 Benefits of smart malls

5.4.1 Low cost, high return

The advantages that smart malls provide are appealing: a large number of loyal customers, decreased operating costs and increased mall income. These gains would cover the cost of a typical smart mall refit, which would pay for itself in months rather than years. The smart mall is not a novel concept, Saarijärvi argues; what is new is the ease and cost with which it can be implemented, because the technology has reached a point of maturity. The technology in question is known as wide area mesh, or simply “mesh.” It is essentially a wireless network that smart malls may utilize to send various sorts of data. Messages may be sent to devices, and data collected from tags and sensors of various kinds is gathered automatically. The trouble with such a smart mall approach used to be that it was inherently pricey. A tremendous advancement in signal transmission technology has done much to solve this affordability challenge. Wirepas’ patented beacon signal consumes over 400 times less energy than rival mesh technologies, allowing battery-powered beacons to operate for years without needing to be recharged. Moreover, because the mesh network protocol runs on the same 2.4 GHz hardware layer as Bluetooth, any Bluetooth-enabled smartphone or tablet may receive messages from it. The phone
Fig. 5.3: Advantages.
app’s back end technology may then react instantly to any selections made on shopper devices.
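The relaying behaviour of such a mesh can be illustrated with a toy flooding model, in which every node forwards a newly seen message to each of its neighbours once. All node names and the topology below are invented for illustration; a production protocol such as Wirepas’ handles routing, energy budgets and radio scheduling in far more sophisticated ways.

```python
from collections import deque

def flood_message(links, source):
    """Toy mesh flooding: starting from `source`, each node relays the
    message to its neighbours exactly once. Returns the hop count at
    which every reachable node first receives the message."""
    delivered = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in links.get(node, []):
            if neighbour not in delivered:
                delivered[neighbour] = delivered[node] + 1
                queue.append(neighbour)
    return delivered

# Hypothetical topology: a gateway, three relay beacons, one shopper phone.
mesh = {
    "gateway": ["beacon-1", "beacon-2"],
    "beacon-1": ["gateway", "beacon-3"],
    "beacon-2": ["gateway", "beacon-3"],
    "beacon-3": ["beacon-1", "beacon-2", "shopper-phone"],
    "shopper-phone": ["beacon-3"],
}
hops = flood_message(mesh, "gateway")
# The shopper's phone receives the promotion after 3 hops.
```

Because every node forwards only once, the flood terminates even though the topology contains cycles, which is the property that lets battery-powered beacons relay messages cheaply.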
5.4.2 Sales alerts and way finding

The mall marketing team’s imagination is the only limit to the frequency, nature and configuration of alerts; regular in-mall notifications may be paired with daily or weekly out-of-mall messages that inform customers about sales events wherever they are. For a fee or as part of a lease package, retailers may push out their promotions and display advertisements [9]. Way finding capabilities in mall apps are a wonderful way to navigate malls, especially for individuals who do not visit them often or who are on a tight schedule. They can design a shopping itinerary based on what consumers wish to buy and show them where they are in the mall. The app may be set up to let a shopper select merchants that offer sales on the products on her list, for example. In an emergency, it can also point to the nearest exit.
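The shopping-itinerary idea reduces to shortest-path search over a graph of mall zones. A minimal sketch using breadth-first search follows; the zone names are invented, and a real app would weight edges by walking distance rather than hop count.

```python
from collections import deque

def shortest_route(zones, start, goal):
    """Breadth-first search over the mall's zone graph; returns the
    list of zones a shopper should walk through, or None if the goal
    is unreachable (toy example)."""
    previous = {start: None}
    queue = deque([start])
    while queue:
        zone = queue.popleft()
        if zone == goal:
            route = []
            while zone is not None:      # walk back to the start
                route.append(zone)
                zone = previous[zone]
            return route[::-1]
        for nxt in zones.get(zone, []):
            if nxt not in previous:
                previous[nxt] = zone
                queue.append(nxt)
    return None

mall = {
    "entrance": ["atrium"],
    "atrium": ["entrance", "shoe-store", "food-court"],
    "shoe-store": ["atrium", "clothing-store"],
    "food-court": ["atrium", "clothing-store"],
    "clothing-store": ["shoe-store", "food-court"],
}
route = shortest_route(mall, "entrance", "clothing-store")
# route == ['entrance', 'atrium', 'shoe-store', 'clothing-store']
```

The same search, run toward a set of exit zones instead of a shop, yields the emergency routing mentioned above.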
5.4.3 Maintenance needs

How long do you spend looking for cleaning carts, maintenance equipment, fire extinguishers and delivery carts to transport products to retailers? Is it possible for a retailer’s delivery to be misdirected? Keeping track of all assets is a breeze using mesh. Embedded tags may provide signals that indicate the location of all assets to within a five-square-meter area, regardless of asset density. A single asset may be
found on a map by searching for its allocated ID, and a remotely triggered LED light on the tag can indicate which asset to use.
5.4.4 The perfect environment

Temperature, air quality, humidity, ambient light, surface cleanliness and other environmental parameters may all be measured and accurately controlled using sensors. Sensor data may be used so that HVAC, lighting and other systems react automatically. Electric lighting, for example, may automatically brighten when natural light dims. Sensors can notify employees of things like the need for more towels in the restroom or a spilled beverage on the floor. Clean washrooms, as well as ideal ambient light, temperature and humidity, have been shown in studies to increase the length of a mall visit.
5.4.5 Shopper movement

Visits to the mall may be measured not only in terms of time, but also in terms of location. With an interactive heat map display, mall staff can watch customer movement over hours, days or months. Color-coding is used to show high-traffic regions (red), low-traffic areas (blue) and movement patterns throughout the day. Heat mapping is a highly effective sales technique. It may be used to justify higher rents, or to persuade a store to invest in a certain area by demonstrating beyond doubt how much traffic the place receives and when. It also allows teams to anticipate the type of traffic they will encounter on Mondays as compared to Saturdays, and to estimate how traffic will fluctuate during the day. This can aid mall management in properly positioning kiosks, preparing maintenance workers for food court surges, and allocating extra mall assistance or security personnel, among other things. In-store heat maps may, of course, be utilized for targeted in-store promotions.
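At its simplest, the red/blue color-coding described above amounts to binning each zone’s visit count against thresholds. A minimal sketch, with invented thresholds and counts (a real display would calibrate these per mall and render them graphically):

```python
def heat_colour(count, low, high):
    """Map a zone's visit count to a display colour: blue for low
    traffic, red for high traffic, yellow in between."""
    if count < low:
        return "blue"
    if count >= high:
        return "red"
    return "yellow"

# Hypothetical visit counts for one hour, by zone.
hourly_counts = {"entrance": 480, "food-court": 350, "west-corridor": 40}
heat_map = {zone: heat_colour(n, low=100, high=400)
            for zone, n in hourly_counts.items()}
# heat_map == {'entrance': 'red', 'food-court': 'yellow', 'west-corridor': 'blue'}
```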
5.4.6 Security preparedness

Securing a mall consists of two parts: preventing theft and preparing for fires and other security events. Both can be aided by sensors [10]. In the case of a security breach, apps can automatically indicate the fastest route to safety or the nearest exit. Sensors can detect smoke, and can equally communicate that everything is fine, for instance when an alarm goes off or there is unexplained smoke but no real fire. Sensors can be
used to geofence mall assets against theft, triggering an alarm when they leave a defined region. Mesh technology is a foundational technology with a wide range of applications. It enables advertising, messaging and navigation. It provides the network for asset-tracking anchors and asset tags, as well as for the ambient, illumination and acceleration sensors that help control the environment. Other uses include emergency preparation alarm systems, security alert systems and network-connected cameras, among others. The applications that earn the greatest revenue straight away are those that involve customer contact, although sensors may help boost spending in a variety of ways: shoppers will stay longer and spend more if the mall is made more pleasant, safe and interesting.
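The geofencing check itself can be very simple. The sketch below uses an axis-aligned rectangle as the fenced region, which is a deliberately simplified stand-in for whatever region model a real deployment would use; the asset IDs and coordinates are invented.

```python
def inside(fence, x, y):
    """True if point (x, y) lies within the rectangular geofence."""
    x_min, y_min, x_max, y_max = fence
    return x_min <= x <= x_max and y_min <= y <= y_max

def check_assets(fence, positions):
    """Return the IDs of tagged assets that have left the fenced
    region, i.e. those that should trigger a theft alarm."""
    return [asset for asset, (x, y) in positions.items()
            if not inside(fence, x, y)]

mall_floor = (0.0, 0.0, 120.0, 80.0)            # metres
tags = {"cleaning-cart-7": (15.2, 40.0),
        "delivery-cart-2": (121.5, 30.0)}       # beyond the east wall
alarms = check_assets(mall_floor, tags)
# alarms == ['delivery-cart-2']
```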
5.4.7 Absolute footfall count

1. Within the mall, we can track the visitor’s buying journey.
2. Recognize the visitors’ brand preferences and places of interest.
3. Assist mall management by handling traffic in various areas such as the entrance, stores, passageways and exits, among others.
4. Determine which stores in malls are functioning well by assessing their individual performance.
5. Use heat maps to price space rentals in malls based on “hot areas” or to address less penetrated “cold spots.”
5.4.8 Optimize visitor experience

Footfall tracking reveals:
1. The busiest sections within a mall at different times of the day, by tracking congestion
2. The underutilized parts of the building, by highlighting “cold patches” in the heat map

The data can assist in improving overall shopping center management by:
1. Easing the flow of visitors in popular areas with more efficient management
2. Providing an influx of visitor engagement activities in these areas of visitor interest
3. Triggering personalized offers in the form of discounts, rewards, exhibitions, recommendation lists and others, by taking into account the visitor’s browsing history in malls and museums
5.4.9 “Smart” parking solution for intelligent malls

1. A smart parking lot is one that is well-organized. By having a well-organized parking lot, malls can optimize parking space utilization and increase overall parking efficiency.
2. Using the authorized smartphone app, visitors can reserve a parking place in advance.
3. Provide the mall administrator with a real-time picture of car traffic, parking spot availability and usage.
4. Reduce visitors’ “car-search problems” by directing them to where their automobiles are parked.
5.4.10 Improve operational efficiency

Operational efficiency is a “must-achieve” aspect in all industrial sectors. Footfall data helps to:
1. Improve staff scheduling to account for traffic fluctuations throughout the week.
2. Deploy intensive security at the right moment, focusing on the most frequented places.
3. Enable adequate planning in terms of cleaning and maintenance activity time and frequency.
5.5 Why can it be called an intelligent shopping mall?

The intelligent environment that we are considering is a shopping mall equipped with specific hardware to aid the operation of the proposed context-aware recommender system. We considered a single-story commercial mall; if there are more floors, each is handled similarly. The mall’s floor can be split into zones, with each zone containing a shop, a restroom, a hallway and so on. A cell is assigned to each zone in the model. Each cell includes the following items:
Sensors: These monitor the environment and its inhabitants, that is, the consumers. In essence, the sensors detect the presence of shoppers in that specific zone.
Actuators: These are installed to engage with nearby shoppers. They are output devices that may be used to transmit recommendations to shoppers in a specific zone.
Processors: These collect data from the sensors and create outputs.
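The zone-per-cell arrangement described above, one processor per zone reading its sensors and driving its actuators, can be sketched as follows. The class, zone names, offer text and trigger rule are all illustrative placeholders, not the chapter’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    """One zone of the mall floor: the `shoppers` set stands in for the
    sensor reading, `step` for the processor, and `outbox` for the
    actuator's outgoing recommendations."""
    name: str
    offer: str = ""                      # promotion active in this zone
    shoppers: set = field(default_factory=set)
    outbox: list = field(default_factory=list)

    def step(self):
        # Processor: when the sensor detects shoppers and the cell has
        # an offer, the actuator pushes a recommendation to each.
        if self.offer:
            for shopper in sorted(self.shoppers):
                self.outbox.append((shopper, self.offer))

cells = [Cell("shoe-store", offer="25% off"), Cell("hallway")]
cells[0].shoppers.add("maria")
for cell in cells:                       # one tick of the automaton
    cell.step()
# cells[0].outbox == [('maria', '25% off')]; the hallway stays silent.
```

A full cellular automaton would also let neighbouring cells exchange state on each tick, which is how the adaptive, distributed behaviour mentioned in Section 5.2 arises.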
Fig. 5.4: Proposed methodology.
5.5.1 Smart shelves

A smart shelf is a shelf that has a radio-frequency identification (RFID) reader installed. The reader is either integrated into the shelf or added afterwards, either within or above standard shelves. All of the tagged products may be scanned using the RFID reader on the shelf, which also informs the backend system about the objects currently present. The three primary components of smart shelf systems are an antenna, an RFID tag and an RFID reader. The tags on the objects comprise an integrated circuit and a microchip that transmits data to the RFID reader. The read data is then forwarded to an IT platform, where it is processed and translated into the customer’s preferences, similar to how Kroger, the largest retail outlet in the United States, does it.
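The backend step, turning consecutive scans into stock events, can be sketched as a set difference between two tag reads. The SKU names are invented for illustration.

```python
def shelf_update(previous_tags, current_tags):
    """Diff two consecutive RFID scans of a smart shelf and report
    which tagged items were taken and which were restocked."""
    taken = previous_tags - current_tags
    restocked = current_tags - previous_tags
    return taken, restocked

scan_t0 = {"sku-001", "sku-002", "sku-003"}
scan_t1 = {"sku-001", "sku-003", "sku-004"}   # sku-002 removed, sku-004 added
taken, restocked = shelf_update(scan_t0, scan_t1)
# taken == {'sku-002'}, restocked == {'sku-004'}
```

Streams of such events, aggregated per shopper or per time window, are the raw material the IT platform turns into preference data.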
5.5.2 Digital signage

IoT applications have resulted in a rise in the usage of ‘smart signage’ in retail stores. It has also changed the way people purchase by allowing them to have a personalised shopping experience. According to the biggest digital signage business in South Africa, roughly 41% of consumers were motivated to make a purchase in the store
because of the digital signs. Customers benefit from digital signs since they are not only appealing and relaxing, but they also provide rational and relevant suggestions. This is accomplished by collecting contextual clues and patterns from consumers and developing relationships based on expectations. The programme identifies the primary goods that a consumer groups in the same purchase, along with the time of year when these things were purchased, and uses this information to direct the user to their next probable purchase.
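One simple way to approximate "goods grouped in the same purchase" is to count co-purchases across baskets, as in this sketch; the basket data is invented, and real signage systems would use richer association-mining techniques.

```python
from collections import Counter
from itertools import combinations

def co_purchase_counts(baskets):
    """Count how often each pair of products appears in the same
    purchase; an item's most frequent partners can then drive the
    suggestions shown on the signage."""
    pairs = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            pairs[(a, b)] += 1
    return pairs

baskets = [["bread", "butter", "jam"],
           ["bread", "butter"],
           ["bread", "milk"]]
pairs = co_purchase_counts(baskets)
# ('bread', 'butter') has been bought together twice.
```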
5.5.3 Self-checkout kiosks

Everyone enjoys shopping, but no one enjoys waiting in lines for hours to pay for their purchases. This is where IoT technologies and self-checkout kiosks come into play, and they are changing the game in retail. At a self-checkout kiosk, customers may pay for their goods online or digitally without having to interact with a human. Over time, this technology has advanced rapidly. Retailers may now provide mobile application-based payments using the QR code on the items, as well as contactless purchases. Using self-checkout, customers can avoid huge lines just to pay their bills. Using an AI system, the interactive kiosks allow managers to keep track of customers’ purchasing histories as well as favored promotions. Customers can receive SMS messages along with relevant promotions.
References

[1] Lin, Z. (2013). Indoor location-based recommender system. Ph.D. thesis, University of Toronto, Toronto.
[2] Winkler, C., Broscheit, M., & Rukzio, E. (2011). NaviBeam: Indoor assistance and navigation for shopping malls through projector phones. In CHI 2011 Workshop on Mobile and Personal Projection.
[3] Yang, W. S., Cheng, H. C., & Dia, J. B. (2008). A location-aware recommender system for mobile shopping environments. Expert Systems with Applications, 34(1), 437–445.
[4] Purohit, A., Sun, Z., Pan, S., & Zhang, P. (2013). SugarTrail: Indoor navigation in retail environments without surveys and maps. In 2013 10th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON) (pp. 300–308). IEEE, New York.
[5] Olugbara, O. O., Ojo, S. O., & Mphahlele, M. (2010). Exploiting image content in location-based shopping recommender systems for mobile users. International Journal of Information Technology & Decision Making, 9(05), 759–778.
[6] Bohnenberger, T., & Jameson, A. (2001). When policies are better than plans: Decision-theoretic planning of recommendation sequences. In Lester, J. (ed.), IUI 2001: International Conference on Intelligent User Interfaces (pp. 21–24). ACM, New York.
[7] Asthana, A., Cracatts, M., & Krzyzanowski, P. An indoor wireless system for personalized shopping assistance. In Proceedings of the First IEEE Workshop on Mobile Computing Systems and Applications.
[8] Bell, M., & Koren, Y. (2007). Lessons from the Netflix prize challenge. In Workshop Session: KDD (pp. 75–79).
[9] Mavridis, N., Datta, C., Emami, S., Tanoto, A., BenAbdelkader, C., & Rabie, T. (2009). FaceBots: Robots utilizing and publishing social information in Facebook. In Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction (pp. 273–274).
[10] Kim, W., Choi, D. W., & Park, S.-U. (2004). Intelligent product information search framework based on the Semantic Web. In Proceedings of ISWC 2004 (pp. 7–11).
Sunishtha S. Yadav, Vandana Chauhan, Nishi Arora, Vijeta Singh and Jayant Verma
6 Challenges and the future of artificial intelligence in virtual reality in healthcare

Abstract: Artificial intelligence (AI) has great and very promising potential when it comes to its implementation in biomedical and medical science. It has significantly disrupted the way techniques like radiology will be utilized and resourced in the coming decades. COVID-19 has further shown the advantages of incorporating AI and virtual reality (VR) into the healthcare domain. Both these domains, i.e., AI and VR, have drastically impacted the human understanding of chronic diseases like cancer, cardiovascular disorders, etc., by exploiting all possibilities related to disease pathophysiology. This has opened the doors not just for better disease understanding but also for individualization of therapeutic regimens. However, some issues and concerns must be resolved well in advance of implementing both AI and VR in daily practice in the fields of medicine and biomedicine. In spite of the fact that there are several circumstances where AI can execute health-related assignments, outperforming or performing comparably to human beings, implementation considerations will impede the comprehensive mechanization of medical personnel occupations for a substantial period of time. The present chapter discusses the potential advancements and practical applications of AI in the prospective contemporary healthcare system.

Keywords: Artificial intelligence, virtual reality, healthcare, challenges
6.1 Introduction

With the advancement in almost all domains of life and technology, we are now in an era where artificial intelligence is no longer merely a demand but a necessity for a better life. The period of COVID-19 and the lockdown also proved the importance of artificial intelligence (AI) and virtual reality (VR). All systems across all disciplines, domains and sectors were forced to switch to the most advanced AI and VR methods, which was the demand of that time. From the education sector to finance to healthcare, everything was run with the help of AI and VR, and the human race has witnessed successful implementation and fruitful outcomes too. Technologies were
Sunishtha S. Yadav, Vandana Chauhan, Nishi Arora, Vijeta Singh, Jayant Verma, Centre for Medical Biotechnology, Amity Institute of Biotechnology, Amity University, Noida, Uttar Pradesh, India https://doi.org/10.1515/9783110713817-006
developed and introduced whereby people could consult, from their homes, doctors who could see and examine their patients online. All regular and emergency tasks were carried out very smoothly with the implementation of AI and VR [1–4]. The use of technology in the healthcare sector has expanded the horizon of opportunities, from better disease diagnosis, to disease staging, to better treatment outcomes, meaning lower toxicities and more effective drug responses. However, certain challenges are also associated with the incorporation of technological advancements like AI and VR in this sector [4–6, 13]. Implementation of AI and VR in healthcare broadens multiple aspects of treatment in every sense, as AI is not a single technology but an integration and collection of technologies. A significant point about the application of AI and VR in the healthcare sector is that it finds utilization in a range of applications, depending upon necessity and demand [1–4]. It can lead to more personalized care and a decrease in the percentage of drug-induced toxicities and adverse drug reactions (ADRs). To sum up, all of this lies in the near future of medical and biomedical science. From the point of view of a physician, AI- and VR-based solutions will provide extra and more quantitative information. An AI algorithm offers an easy way of obtaining a second opinion and can therefore serve as a double check of the diagnosis. These machine learning algorithms and their ability to process complex databases will therefore illuminate several new horizons for targeted therapies, tailored to the genetic makeup of the individual patient, and can thus serve as a powerful new hope for better treatment in several forms of cancer.
Furthermore, the user-friendly interface of AI will ease the learning and implementation of AI in disease screening, diagnosis, prognosis and standardization of the treatment regimen for the individual patient. It will empower clinical decision support tools by revolutionizing predictive analytics. The intricacy and surge of statistical records in health services portend that AI will be increasingly implemented in the field of healthcare. Diverse types of AI are currently being utilized by healthcare providers and their sponsors – various biological sciences enterprises. The significant types of practical applications entail recommendations for prognosis and therapy, compliance and participation of the patient, and administrative operations [4–6].
6.2 Areas of AI function in healthcare

The distinctive areas where AI and VR play a very significant role range from diagnosis of the disease, to the selection of the treatment regimen, to patient engagement/management, to health system management. This clearly indicates that AI is a key player in almost all segments of the healthcare domain [4–7]. The respective functions of AI in these segments are described in Tab. 6.1.
Tab. 6.1: Roles and functions of AI in healthcare domain.

S. no. | AI in healthcare                   | Functions/roles
1      | Disease diagnosis                  | AI is employed for a significant understanding of disease pathophysiology, disease staging, identification, etc.
2      | Treatment regimen                  | Precision medicine is the best example of AI implementation in individualizing the treatment regimen. It has successfully led to decreased ADRs and enhanced treatment outcomes in several forms of cancer.
3      | Patient engagement and management  | Patient data collection devices created with AI have made it easier to monitor the health of patients, e.g. digital therapeutics, eHealth, etc.
4      | Healthcare support and simulation  | Support system simulations created with AI help healthcare organizations upgrade as per the demands of the healthcare sector.
6.3 Artificial intelligence and virtual consultation: new era of physician and patient interaction

When we talk about technology and modernization, one thing that often comes to mind is artificial intelligence. Artificial intelligence is a boon in the technological world, having shed its light over nearly every field these days. Artificial intelligence is basically a way to incorporate human intelligence into a machine, so that it can act, think, function and perform every task like a living being. The machine would not only exhibit the basic performance traits of an individual but also the problem-solving and planning abilities, essentially the complete psychological mindset of a human being, in order to work efficiently and meet everyday challenges. Just as computers were once the innovative step in technology, now it is artificial intelligence, as it not only reduces manpower but also provides skilled work without the errors humans make. Artificial intelligence, in other words, is rationalizing human behavior appropriately with the help of a machine [7–9]. In layman’s language, when an individual hears about artificial intelligence, one thing that pops up in the mind is robots, as this is what is most often seen in movies and depicted in stories and novels, but artificial intelligence is far more than just robotics. It primarily includes three basic traits: perception, reasoning and learning. With the evolving technological world, artificial intelligence is also widening its
influence in various fields. In artificial intelligence, machines adopt a cross-disciplinary approach that covers mathematics, psychology, science, language, computing and many others. While using artificial intelligence, basic algorithms are applied for simpler innovations and usage, and complex algorithms for complex work areas, so as to avoid any sort of chaos [10–13]. One of the most rapidly growing areas for the involvement of artificial intelligence in the healthcare industry is hospitals and laboratories, as their use of AI puts human life less at risk and also reduces the chances of errors [14, 15].
6.4 Mind and machine coalesce through the brain–computer interface

The idea of developing a system that can help disabled humans regain sensory function, communication and control is forming a new branch of science called brain–computer interfaces (BCIs) or brain–machine interfaces (BMIs). They are mechanisms that link the human brain to the rest of the world. BCIs already have a wide range of uses. The most basic use of a brain–computer interface is the neuroprosthesis, a piece of hardware that can replace or augment nerves that are not functioning properly. Cochlear implants are the most commonly used neuroprosthesis [16–19]. In the coming years, such devices will be widely used. Methods are currently being tested that can translate brain activity: electrical impulses are translated into signals that the software understands, and the brain activity can be recorded or read in real time by a remote device [18–20]. Medicine pays a great deal of attention to BCIs. Our brain transmits and receives thoughts via a sequence of electrical impulses. Detecting these signals is not novel: monitoring the electrical activity of the brain with an EEG (electroencephalography) and of the muscles with an EMG (electromyography) is already used in medicine to detect illnesses and other nerve disorders from the electrical activity in the patient’s nerves. However, researchers and businesses are now investigating whether those electrical impulses might be decoded to provide insight into an individual’s thinking [18–23]. BCIs may provide a solution for those who have suffered nerve injury, restoring lost functions. For instance, in some spinal injuries, the electrical connection between the brain and the muscles in the limbs is severed, making the patient unable to move his or her arms or legs.
BCIs may be able to assist in such injuries by either transmitting electrical signals directly to the muscles, bypassing the damaged link and allowing people to move again, or by allowing patients to use their thoughts to operate robots or prosthetic limbs that can perform movements for them [17, 18].
Military personnel have expressed interest in BCIs as well: soldiers in the field might patch back in to their teams for further intelligence and communicate silently. At the moment, BCIs are available in two configurations: invasive and non-invasive. Invasive methods use technology that comes in contact with the brain, whereas non-invasive systems use head-worn sensors to pick up brain signals. Each of these two approaches has its pros and cons. Because electrode arrays are in direct contact with the brain, invasive BCI systems can collect far more precise and fine-grained information [13–18]. However, they require brain surgery, and the brain is not always happy about having electrode arrays connected. When the brain reacts to having electrode arrays attached, it undergoes a process called glial scarring, which can make it more difficult for the array to pick up signals. Due to these inherent hazards, invasive technologies are often restricted to medical applications alone. Non-invasive systems, on the other hand, are more consumer friendly because they do not involve surgery. These systems collect electrical impulses from the skin via sensor-equipped caps worn on the head or equivalent hardware worn on the wrist, such as wristbands [18–23].
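As a purely illustrative sketch of turning a stream of electrical samples into a discrete command, the snippet below smooths an amplitude trace with a moving average and applies a threshold. The sample values, window size and threshold are all invented; real BCI decoders operate on much richer features (frequency bands, spatial filters) and use trained classifiers rather than a fixed cutoff.

```python
def moving_average(samples, window):
    """Smooth a noisy amplitude trace with a simple moving average."""
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

def detect_command(samples, window=4, threshold=0.5):
    """Treat a sustained rise in smoothed signal amplitude as an
    intentional 'activate' command (toy decoding rule)."""
    return any(v > threshold for v in moving_average(samples, window))

rest   = [0.1, 0.2, 0.1, 0.15, 0.1, 0.2, 0.1, 0.1]   # background noise
intent = [0.1, 0.2, 0.6, 0.8, 0.9, 0.7, 0.2, 0.1]    # sustained burst
# detect_command(rest) is False; detect_command(intent) is True.
```

The smoothing step matters because non-invasive, skin-level signals are noisy; a single spiky sample should not fire a prosthetic limb.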
6.5 Artificial intelligence in the healthcare industry

The healthcare industry is among the most rapidly growing industries, as it has not only become vital to every aspect of our life but has also established itself significantly in all senses. When we talk about the health sector, a few terms that come to mind are doctors, patients, technicians, lab workers, administration, medicine, etc. Healthcare is not just about the health of an individual but is also relevant for the economy, so it is even referred to as the health economy industry [3–5]. The healthcare industry attracted more interest and got a boost because of the fast-growing network of bacterial and viral infections, as well as other ailments that need to be cured. Many of these diseases and ailments are curable, and some are still under observation while a solution is developed. In the 14th century, the plague gave the healthcare industry cause for concern with respect to both healthy living and the economy, and the situation was worsened by the sector’s less-developed infrastructure. Later, in the year 1961, one of the fast-spreading ailments got its cure through vaccination. The polio vaccine did not just bring relief from the ailment but also gave a boost to the healthcare industry, leading to the development of its infrastructure in every country around the globe. Many discoveries kept enhancing the industry, both for the individual as well as for the economy of a country [5, 6]. One of the most eye-catching and vital technologies that boosted the healthcare sector is artificial intelligence. Artificial intelligence entered not just the pharmaceutical industry but the complete healthcare industry. One of the clearest uses of artificial intelligence at the ground level, apart from its use in pharmaceuticals, was
during the novel coronavirus pandemic in the year 2020. With the help of artificial intelligence, the front-line warriors at the hospitals obtained relief during the pandemic. For instance, robots and machinery were used to reach out to patients in the COVID-19 ward. Medicines, disposables, sanitizers, etc. were delivered to the patients through these mini robots at many hospitals, to offer relief to the hospital staff. Apart from these, artificial intelligence has also helped in proper screening, tracking and predicting the scenario at that moment as well as in the future. One of the key roles of artificial intelligence in the pandemic has been in the detection and diagnosis of infection through various applications and tests. For instance, the Aarogya Setu application developed in India is one of the key examples of artificial intelligence, as it provides its users with proper testing and also helps in assessing the situation in nearby places, ranging from 500 meters up to 10 kilometers. The application also helps the user receive advice on proper precautions by giving all the necessary information for reaching out to the authorities for proper guidance. There were many more such technologies used during the pandemic, from detection to diagnosis and further to finding a cure or a vaccination. Artificial intelligence is nowadays not just limited to computer applications but extends even to healthcare infrastructure. Technology has spread its wings to every platform with the help of artificial intelligence in an excellent way [1, 8, 10, 12].
6.6 AI and virtual consultation: redefining the practice of medicine When we discuss the healthcare industry, consultancy comes along with it. Artificial intelligence offers such support to patients in their own homes, which is especially valuable in an emergency or during a pandemic. Tele-health, or telemedicine, is one field within the healthcare industry that is evolving day by day, with artificial intelligence as a part of it, and its advancement has upgraded the industry as a whole. The International Medical Interpreters Association has linked around 30 members with itself, covering every domain of the healthcare industry [1, 8, 10, 12]. The key question for tele-health is how it can be delivered in a well-defined manner so that it reaches a large number of people. Tele-health takes a patient's information and communicates it through technology to various clinical and educational services and institutions. It is a boon to users in overcoming a variety of barriers such as distance, time zones, and cost, and this holds not only in developing countries but also in developed ones, since these physical barriers are common to both [1, 8, 10, 12]. Due to the increase in the number of chronic diseases and ailments around the globe, there is growing demand for tele-health, as it provides a traffic-free path between the patient and the
6 Challenges and the future of artificial intelligence in virtual reality in healthcare
77
healthcare provider. The increase in ailments has increased both the demand for and the complexity of tele-health, since it must provide interaction and proper support without delays [1, 8, 10, 12]. Historically, tele-health has used two modes of communication: synchronous and asynchronous. Synchronous communication refers to real-time electronic communication, whereas asynchronous communication refers to store-and-forward communication, in which patient information is recorded and reviewed later. The most recent advance in healthcare communication is remote monitoring, which involves the collection of data by distributed devices, including Internet of Things devices. According to the most recent observational survey data from the World Health Organization, four well-embedded tele-health services have emerged significantly across the globe: tele-pathology, tele-radiology, tele-psychiatry, and tele-dermatology. Even so, these services have difficulty replacing real-time, clinician-delivered care, and the same survey made clear that over 60% of respondents lacked information about tele-health clinical practices [1, 8, 10, 12].
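The distinction between the two communication modes can be made concrete with a minimal, hypothetical sketch (the class and function names below are invented for illustration and do not come from any real telemedicine system): a synchronous channel delivers each message to the clinician immediately, while a store-and-forward channel queues messages for review in a later batch.

```python
from collections import deque

class SynchronousChannel:
    """Real-time link: the clinician handles each message as it arrives."""
    def __init__(self, clinician):
        self.clinician = clinician

    def send(self, message):
        # Delivered and answered immediately, like a live video consult.
        return self.clinician(message)

class StoreAndForwardChannel:
    """Asynchronous link: messages are stored and reviewed later in a batch."""
    def __init__(self):
        self.inbox = deque()

    def send(self, message):
        self.inbox.append(message)   # stored; no immediate reply
        return None

    def review_all(self, clinician):
        # The clinician processes the backlog whenever they are available.
        replies = [clinician(m) for m in self.inbox]
        self.inbox.clear()
        return replies

clinician = lambda msg: f"advice for: {msg}"

live = SynchronousChannel(clinician)
print(live.send("chest pain"))          # answered at once

deferred = StoreAndForwardChannel()
deferred.send("skin rash photo")
deferred.send("blood sugar log")
print(deferred.review_all(clinician))   # answered later, as a batch
```

The same split applies to real services: tele-dermatology often works well in store-and-forward mode (images reviewed later), whereas an emergency tele-consultation requires the synchronous mode.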
6.7 AI and management of electronic health records Healthcare and data science are an excellent match: to operate realistically, healthcare organizations require insight into patient data, and the industry generates a plethora of it. The adoption of Electronic Health Records (EHRs), which apply a data science toolset for the benefit of medical operations, emerged from the combination of these two fields. An EHR is a digital repository of all available patient data in a single database: medical history and treatment records such as diagnoses, prescriptions, treatment plans, immunization dates, allergies, radiological images, and lab and test results are all included. Advances in medical imaging and the growth of clinical diagnostics and screening generate enormous amounts of data about patient health [2–4, 21, 22]. The primary disadvantage of EHRs for big, integrated healthcare delivery systems is that they are frequently perceived as rigid, difficult to use, and costly to configure. EHRs are also often incapable of properly capturing data about medical procedures, patients, and administrative processes. In various contexts, the EHR is described as organized data pertaining to the patient's medical history, treatment plans, and drugs; however, if this information is not properly harnessed, the veracity of the information contained in EHRs can be jeopardized [2–4, 21, 22]. Advanced technologies such as artificial intelligence (AI) have an astonishing capacity for decoding the electronic data required to improve healthcare services, and AI approaches offer a viable method for analyzing these multimodal data sets. As AI becomes more adept at categorizing data, it is increasingly viewed as a valuable tool for diagnostic purposes and medical
imaging analysis. Additionally, such tools can use comparable data to make recommendations to clinicians and help shape specific treatment approaches. Beyond the fact that the data is always available to medical professionals, the way medical data is organized in an EHR makes it ideal for various machine-learning-driven data science activities [4, 7, 8, 21, 22]. The application of artificial intelligence to EHR data has been effective in a range of sectors. For instance, cardiology studies have extensively used AI approaches in conjunction with EHR data to detect heart failure early, forecast the onset of congestive heart failure, and improve risk assessment in patients with suspected coronary artery disease. Similarly, in ophthalmology, machine learning classifiers trained on EHR data have been used to predict the risk of cataract surgical complications, improve the diagnosis of glaucoma and age-related macular degeneration (AMD), and estimate the risk of diabetic retinopathy (DR) [5–8]. In general, machine learning is a promising choice for data mining, natural language processing, medical transcription, document search, data analytics, data visualization, predictive analytics, and privacy and regulatory compliance in the EHR. Together, electronic health records and machine learning elevate healthcare operations to a new level [10–13]: on the one hand, they broaden our perspective on patient data by situating it within the wider context of healthcare proceedings; on the other, an EHR powered by machine learning gives doctors a far more capable and transparent platform for data science, resulting in more accurate data and deeper insights.
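As a hedged illustration of the kind of classifier described above, the sketch below trains a tiny logistic regression model, implemented from scratch on entirely synthetic numbers rather than real EHR data, to score a readmission-style risk from two tabular features (a normalized age and a count of prior diagnoses). It is a toy under those stated assumptions, not any specific published model.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression on tabular rows."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # predicted probability
            err = p - yi                         # gradient of log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict_risk(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy, entirely synthetic "EHR" rows: [normalized age, prior diagnoses]
X = [[0.2, 0], [0.3, 1], [0.8, 4], [0.9, 5], [0.4, 1], [0.7, 3]]
y = [0, 0, 1, 1, 0, 1]   # 1 = readmitted within 30 days (made-up labels)

w, b = train_logistic(X, y)
print(round(predict_risk(w, b, [0.85, 4]), 2))  # older patient, many diagnoses: high risk
print(round(predict_risk(w, b, [0.25, 0]), 2))  # young patient, no diagnoses: low risk
```

Real EHR-based studies of course use far richer features, regularization, and careful validation; the point here is only the shape of the pipeline: structured EHR fields in, a calibrated risk score out.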
6.8 Health monitoring through personal devices: the virtual reality Personal devices and wearable systems used for health monitoring, such as smart watches, smart glasses, pedometers, ear-worn devices, virtual reality and augmented reality headsets, activity-tracking bands, smart shoes, smart clothing, mobile electrocardiographs (ECG), mobile blood pressure monitors, and chest, calf and ankle straps or bands, are regarded as the next generation of personal mobile devices for the practice of tele-health [6]. Health monitoring systems are designed to track various types of biometric signals generated by people via skin perspiration (hidrosis), breathing, saliva, and urine [27]. Personal wearable health trackers notify the wearer to exercise more and send the data to AI systems and healthcare providers for further insight into the requirements and practices of the clients or patients. Health monitoring devices can be used both to maintain health and to prevent disease. Rendering prophylactic interventions to the elderly population so as to enhance health outcomes is a significant research topic, and health monitoring personal devices can be employed to tackle the issues associated with uncovering and overseeing
detrimental health effects in elderly people [24]. Monitoring of physical activity and interaction is also accomplished via health monitoring devices. Prolonged sedentary behavior is related to several untoward health consequences; personal wearable devices can be used to monitor students' posture, send reminders to change it, and thereby beneficially impact their health [10]. The effectiveness of smartphones and wearable personal devices has also been assessed for monitoring language patterns: language-tracking personal devices have been used to carry out a Language Environment Analysis (LENA) that captures data on communication between mother and child [4, 5]. Some personal devices are fitted with sensors that identify the status of human physiology, for instance temperature, heartbeat, and blood pressure; these signals can be used to formulate new methods of supervising mental status, such as stress detection [3–5]. Further, personal devices can be extremely helpful in sports medicine, to maximize performance and minimize injury to a sportsperson; in weight management; in public education on health-related topics; in improving the efficiency of patient management in hospitals; in helping cancer survivors improve their health; and in aiding patients with stroke, brain and spinal cord injuries, chronic pulmonary disease, blood disorders, heart disorders, diabetes, Parkinson's disease, autism, and depression [24, 25]. Although the majority of personal wearable devices are still in the prototype phase, challenges such as user acceptance, big data issues, safety, and ethics remain to be resolved in order to reinforce the user-friendliness and functionality of health monitoring personal devices in practical applications.
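Two of the wearable behaviors mentioned above, posture reminders after prolonged sitting and alerts on abnormal vital signs, can be sketched in a few lines. Everything below is hypothetical: the thresholds (120 beats/min, three consecutive sitting samples) and the function name are invented for illustration, not taken from any real device firmware.

```python
def monitor(samples, hr_limit=120, sit_limit=3):
    """Scan a stream of (heart_rate, is_sitting) samples and emit alerts.

    hr_limit: beats/min above which a reading is flagged (illustrative value).
    sit_limit: consecutive sitting samples before a posture reminder fires.
    """
    alerts = []
    sitting_streak = 0
    for t, (hr, sitting) in enumerate(samples):
        if hr > hr_limit:
            alerts.append((t, "elevated heart rate"))
        sitting_streak = sitting_streak + 1 if sitting else 0
        if sitting_streak == sit_limit:
            alerts.append((t, "time to change posture"))
            sitting_streak = 0   # reset the streak after reminding
    return alerts

stream = [(72, True), (75, True), (70, True),   # three sitting samples in a row
          (130, False),                         # heart-rate spike while moving
          (68, True)]
for when, msg in monitor(stream):
    print(when, msg)
```

A real tracker would additionally timestamp samples, debounce noisy sensor readings, and forward the alert history to the AI system or healthcare provider described in the text.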
6.9 Artificial intelligence and precision medicine: exploring new horizons of the treatment regimen The term "precision medicine", also known as "personalized medicine", is comparatively new in medical care; however, the underlying concept has existed in the industry for a very long time. The objective of precision medicine is to devise and enhance the route to therapeutic intervention, prognosis, and diagnosis by utilizing massive, multifaceted biological sets of information that include personal gene variability, environment, and lifestyle. Precision medicine helps medical practitioners ascertain better individualized interventions for patients, taking into account personalized methods, in contrast to a blanket technique applied to all patients [8, 9, 14]. Artificial intelligence takes precision medicine to the next stage and sharpens the precision and prognosis of outcomes for patients. In order to fully utilize the
potential of precision medicine, it is necessary to complement it with artificial intelligence and machine learning: deep learning algorithms can analyze large volumes of data far more rapidly than physicians and medical examiners [8, 9, 14]. AI can more precisely establish outcomes and thereby support conclusions about a patient's medical care alternatives and the feasible effects of a therapy. In addition, AI has the capability to foretell a patient's likelihood of developing illnesses, which is a huge advantage for precision medicine. Through prior comprehension of why diseases occur, and of the conditions and environments in which they occur more frequently, AI can assist in training health professionals to recognize what to look for before a disease presents symptoms. The capability to estimate the risk of an ailment in segments of the patient population is transformative for health services and for the lives of many people; hence, AI holds high promise for preventive medicine. AI has been extremely valuable in cardiovascular medicine; various studies have been carried out, for instance, on the assessment of the risk of cardiac arrest with the aid of artificial neural networks [8, 9, 14, 27, 28]. Additional studies have utilized machine intelligence to detect melanoma, depression, HIV transmission, respiratory virus affinity, colorectal cancer, mortality in smokers, and more. AI could also be practically effective in palliative medicine by alleviating the advancement of disease. Dente et al. [8] utilized machine learning strategies for the predictive modeling of infectious complications. Researchers have also applied AI to address the complications of diabetes and to predict the outcomes of focal epilepsy, thromboembolism, and ischemic stroke.
Many studies have reported enhanced diagnostic performance with computer-assisted recognition of disease, for example in 3-D magnetic resonance imaging of the brain and in thermographic detection of breast cancer. Additional applications in healthcare monitoring include the evaluation of tolerance to surgery, metabolic disorders, or chemotherapy. Nowadays, robots can effectively carry out surgery, software applications can diagnose ailments and disorders better than conventional practice, and personal devices can monitor health and transmit up-to-date information about alarming conditions in near-real time.
6.10 Challenges faced in using AI in the medical sector The complexity of machine learning science, the difficulty of putting AI systems into practice, and the need to consider how people will use them and how they will change social, cultural, and professional habits are all major issues for the use of AI systems in healthcare. The best way to obtain solid evidence is a peer-reviewed clinical assessment conducted as part of a randomized controlled trial, but this is not always possible or acceptable in real life. Performance measurements
should show how well a system works in the real world while still being understandable to the people evaluating it. To make sure that patients neither receive harmful treatments nor miss out on beneficial technologies, regulation that balances the pace of innovation against the risk of harm is needed, along with careful post-market surveillance. There must be ways to compare AI systems directly, such as test sets that are both independent and representative of the area where the systems will be used. People who build AI algorithms must be aware of possible risks, such as dataset shift or unintentionally fitting confounders. They also need to consider how their algorithms will generalize to new groups of people and how new algorithms might have unintended negative health effects [26, 27].
6.10.1 Metrics often do not reflect clinical applicability The term "AI chasm" was coined to emphasize the reality that precision does not always imply clinical efficacy. Despite its widespread use in machine learning research, the area under a receiver operating characteristic curve is not always the optimal statistic for representing clinical applicability, and many physicians struggle to interpret it. Papers should report positive and negative predictive values as well as sensitivity and specificity for a particular model operating point (required to transform continuous model outputs into discrete decision categories). Clinicians should be able to visualize how the proposed calculations could improve patient care within a relevant workflow, but the majority of papers fail to show this; several potential solutions have been suggested, including decision curve analysis, which entails determining the net benefit of using a model to guide subsequent actions. To enhance comprehension, clinical trainees and practicing physicians should be given an easily accessible AI curriculum that enables them to critically analyze, adopt, and securely use AI tools during their training [26, 27].
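The clinically interpretable metrics recommended above can be computed directly from a confusion matrix at one chosen operating point. The short sketch below uses invented counts for a hypothetical screening test on 1,000 patients; the formulas themselves are the standard definitions.

```python
def operating_point_metrics(tp, fp, fn, tn):
    """Clinically interpretable metrics at a single model operating point."""
    return {
        "sensitivity": tp / (tp + fn),   # of the truly ill, fraction flagged
        "specificity": tn / (tn + fp),   # of the healthy, fraction cleared
        "ppv": tp / (tp + fp),           # if flagged, probability of illness
        "npv": tn / (tn + fn),           # if cleared, probability of health
    }

# Invented counts: 100 truly ill, 900 healthy, at one decision threshold
m = operating_point_metrics(tp=80, fp=45, fn=20, tn=855)
for name, value in m.items():
    print(f"{name}: {value:.3f}")
```

Note how the example separates the test's intrinsic performance (sensitivity 0.80, specificity 0.95) from what a clinician actually experiences: with only 10% prevalence, barely two-thirds of positive flags are true cases (PPV 0.64), which a single AUC number would hide.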
6.10.2 Challenges in generalization to new populations and settings The bulk of AI systems are still far from achieving reliable generalizability, let alone clinical applicability, for the majority of types of medical data. Blind spots in a fragile model can result in particularly poor decisions. Technical variation between sites (including differences in equipment, coding definitions, EHR systems, and laboratory equipment and assays), as well as variation in regional clinical and administrative processes, complicates generalization [28, 29].
6.10.3 Digitizing and consolidating data Artificial intelligence initiatives operate largely according to the garbage-in, garbage-out principle: they require massive amounts of relevant and reliable data. Finding reliable sources of information in medical care can be challenging, as healthcare information is typically fragmented and scattered over multiple organizations and information systems, because patients frequently see multiple providers and frequently switch insurers. Sorting, consolidating, and digitizing medical records are all time-consuming processes that require substantial computing resources and the participation of data owners. However, digitized and improved record systems provide increased efficiency and accuracy in medicine. Medical data must be properly stored and processed for AI, which means healthcare stakeholders must improve data consolidation and digitization [28].
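As a toy illustration of the consolidation step, the sketch below merges record fragments of the same patient held by different providers into a single de-duplicated view keyed by patient ID. All field names (`patient_id`, `diagnoses`, `medications`) are invented for this example; real record linkage must additionally resolve conflicting identifiers, codes, and timestamps.

```python
from collections import defaultdict

def consolidate(fragments):
    """Merge record fragments from many providers into one view per patient."""
    merged = defaultdict(lambda: {"diagnoses": set(), "medications": set()})
    for frag in fragments:
        record = merged[frag["patient_id"]]
        # Sets collapse the duplicates that arise when providers overlap.
        record["diagnoses"].update(frag.get("diagnoses", []))
        record["medications"].update(frag.get("medications", []))
    return dict(merged)

# Fragments of the same patient scattered across provider systems
fragments = [
    {"patient_id": "P1", "diagnoses": ["hypertension"], "medications": ["amlodipine"]},
    {"patient_id": "P1", "diagnoses": ["hypertension", "diabetes"]},
    {"patient_id": "P2", "medications": ["salbutamol"]},
]
ehr = consolidate(fragments)
print(sorted(ehr["P1"]["diagnoses"]))   # duplicate diagnosis collapsed
```

Even this toy shows why consolidation is costly: every source must first agree on a shared patient identifier and field vocabulary before any merging is possible.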
6.10.4 Updating regulations Clinical records are protected by strict security and confidentiality laws, so sharing such information even with an AI framework might be construed as a violation of these laws. To guarantee that clinical information can be used for these purposes, consent must be obtained from patients. Regulatory bodies must enact regulations that protect individuals' identities while also allowing healthcare providers to obtain high-quality data for their AI technologies to process. Medical organizations, likewise, must exercise caution in order to comply with these regulations and be accountable for how they collect patient data [29].
6.10.5 Human involvement and cost-effectiveness Clinical experts and patients alike remain skeptical about AI. For instance, radiologists are uneasy about being "replaced by robots," and patients are similarly wary of the technology's capacity to satisfactorily address their individual health concerns. Overcoming the anxieties of health experts and the skepticism of patients toward AI is critical to building an AI-driven medical care framework. There must be a clear understanding that AI is only meant to supplement healthcare providers' diagnostic abilities; this will encourage all parties to adopt AI-assisted medical practices. Cost is also a big impediment for businesses: effective AI implementation necessitates a significant investment, which means that organizations already strapped for cash will be hesitant to finance AI. "AI technologies are crucial for addressing a variety of long-term challenges, including the development of advanced
healthcare systems, a sustainable intelligent transportation infrastructure, and resilient energy and telecommunication networks,” according to the study [26–29].
6.10.6 Electronic healthcare-related challenges Implementing EHR in a medical service is not as simple as it sounds; there are various likely difficulties in deploying an electronic health records framework [21, 22]. Some of the major problems faced while implementing EHR are: 1. Implementation cost It should come as no surprise that EHR implementation is costly. The selection, implementation, and roll-out of EHR will consume a large part of the planned capital expenditure; sometimes, simply finding the financial resources is a major problem. 2. Staff resistance Not everyone on the medical team is on board with the concept of introducing technology into the hospital. Furthermore, some healthcare professionals are skeptical of the effectiveness of electronic health records, and may be hesitant to abandon the existing documentation process. 3. Lack of usability Physicians find it difficult to adapt to an EHR scheme that does not fit their current workflow. The one-size-fits-all rule does not apply to EHR systems, since a dentist's workflow differs from a cardiologist's, and vice versa. Design defects or insufficient training hamper the software's ease of use. 4. Lack of communication To develop an EHR framework that delivers the desired results, effective communication between the healthcare provider and the IT vendor is required. Ensuring that all parties' needs are met is a continuous operation, not a one-time task. Electronic health records in India: India provides quality medical services to worldwide standards at a comparatively low cost and has attracted patients from across the globe. The rules are based on the recommendations of the EMR standards advisory committee, which was constituted by an order of the Ministry of Health and Family Welfare and includes members from the Federation of Indian Chambers of Commerce and Industry.
The rules prescribe a set of guidelines to be followed by various medical service providers in India so that clinical information is compact and effectively portable. For India, with a population of 1.27 billion people and just 160 million internet users, the upkeep of EHR
is an overwhelming undertaking; however, with the interest and backing of the Government of India, it is likely to succeed in time [27–29].
6.11 Conclusion The influence of AI in healthcare, via machine learning and natural language processing (NLP), is remodeling healthcare delivery, and AI is predicted to progress at an accelerated pace in the forthcoming years. The thrust of AI in medical services could integrate functions ranging from the straightforward to the complicated: responding over the phone, analyzing health documents, analyzing people's health conditions and patterns, designing therapeutic drugs and devices, interpreting radiology images, creating clinical prognoses and therapeutic regimens, and even interacting with patients. AI is working to enhance comfort and productivity, to mitigate expenses and inaccuracies, and to enable a greater number of patients to procure medical care. Hence, AI in healthcare benefits patients by providing more choices. It can also offer convenience: faster and easier scheduling of appointments, easier bill payments, less time spent filling out or updating medical forms, and so on. Further, AI in healthcare has the potential to assist in patient care via personalized medicine. Restricted interaction between physicians and patients was among the most important concerns at the beginning of the COVID-19 era, even though it was intended to inhibit viral infection in already immunocompromised groups (patients affected by chronic diseases such as cancer). As a result, a huge number of patients (cardiovascular, vascular, and cancer patients) suffered across the globe, as hospitals delayed or cancelled their surgeries and other procedures, such as chemotherapy and radiation therapy. This happened partly because of insufficient supplies of personal protective equipment (PPE) for healthcare providers; hospital capacity, such as ICU beds, was also limited. Furthermore, limited sero-prevalence data and a lack of point-of-care testing complicated the process.
However, with an interdisciplinary approach that combines technology, computing, and medicine, not only can a solution to this problem be found, but a more precise manner of practicing medicine can be followed. An integration of AI and VR is an excellent way to quantify the diagnosis and adopt the most appropriate treatment, that is, to individualize the therapeutic regimen for better treatment outcomes in chronic diseases.
References
[1] Anastasopoulos, C., Weikert, T., Yang, S., Abdulkadir, A., Schmülling, L., Bühler, C., Paciolla, F., Sexauer, R., Cyriac, J., Nesic, I., Twerenbold, R., Bremerich, J., Stieltjes, B., Sauter, A. W., & Sommer, G. (2020). Development and clinical implementation of tailored image analysis tools for COVID-19 in the midst of the pandemic: The synergetic effect of an open, clinically embedded software development platform and machine learning. European Journal of Radiology, 131, 109233.
[2] Aronson, S., & Rehm, H. (2015). Building the foundation for genomic-based precision medicine. Nature, 526, 336–342.
[3] Choi, Y., Jeon, Y. M., Wang, L., & Kim, K. (2017). A biological signal-based stress monitoring framework for children using wearable devices. Sensors (Basel), 17(9), 1936. doi: 10.3390/s17091936. PMID: 28832507; PMCID: PMC5620521.
[4] Choo, D., Dettman, S., Dowell, R., & Cowan, R. (2017). Talking to toddlers: Drawing on mothers' perceptions of using wearable and mobile technology in the home. Studies in Health Technology and Informatics, 239, 21–27. PMID: 28756432.
[5] Coorevits, P., Sundgren, M., Klein, G. O., et al. (2013). Electronic health records: New opportunities for clinical research. Journal of Internal Medicine, 274, 547–560.
[6] Cowie, M. R., Blomster, J. I., Curtis, L. H., et al. (2017). Electronic health records to facilitate clinical research. Clinical Research in Cardiology, 106, 1–9.
[7] Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), 94–98.
[8] Dente, C. J., Bradley, M., Schobel, S., et al. (2017). Towards precision medicine: Accurate predictive modeling of infectious complications in combat casualties. The Journal of Trauma and Acute Care Surgery, 83, 609–616.
[9] Filipp, F. V. (2019). Opportunities for artificial intelligence in advancing precision medicine. Current Genetic Medicine Reports, 7, 208–213.
[10] Frank, H. A., Jacobs, K., & McLoone, H. (2017). The effect of a wearable device prompting high school students aged 17–18 years to break up periods of prolonged sitting in class. Work, 56(3), 475–482. doi: 10.3233/WOR-172513. PMID: 28282846.
[11] Hatsopoulos, N. G., & Donoghue, J. P. (2009). The science of neural interface systems. Annual Review of Neuroscience, 32, 249–266.
[12] Jiang, M., Chen, Y., Liu, M., et al. (2011). A study of machine-learning-based approaches to extract clinical entities and their assertions from discharge summaries. Journal of the American Medical Informatics Association, 18, 601–606.
[13] Kelly, C. J., Karthikesalingam, A., Suleyman, M., et al. (2019). Key challenges for delivering clinical impact with artificial intelligence. BMC Medicine, 17, 195.
[14] Lee, S. I., Celik, S., Logsdon, B. A., et al. (2018). A machine learning approach to integrate big data for precision medicine in acute myeloid leukemia. Nature Communications, 9, 42.
[15] Li, H., Luo, M., Zheng, J., et al. (2017). An artificial neural network prediction model of congenital heart disease based on risk factors: A hospital-based case-control study. Medicine, 96.
[16] Lin, W. C., Chen, J. S., Chiang, M. F., & Hribar, M. R. (2020). Applications of artificial intelligence to electronic health record data in ophthalmology. Translational Vision Science & Technology, 9(2), 13.
[17] Low, L. L., Lee, K. H., Ong, M. E. H., et al. (2015). Predicting 30-day readmissions: Performance of the LACE index compared with a regression model among general medicine patients in Singapore. BioMed Research International, 2015, 169870.
[18] Olthof, A. W., van Ooijen, P. M. A., & Rezazade Mehrizi, M. H. (2020). Promises of artificial intelligence in neuroradiology: A systematic technographic review. Neuroradiology, 62(10), 1265–1278.
[19] Ramesh, A., Kambhampati, C., Monson, J. R., & Drew, P. (2004). Artificial intelligence in medicine. Annals of the Royal College of Surgeons of England, 86, 334.
[20] Recht, M. P., Dewey, M., Dreyer, K., Langlotz, C., Niessen, W., Prainsack, B., & Smith, J. J. (2020). Integrating artificial intelligence into the clinical practice of radiology: Challenges and recommendations. European Radiology, 30(6), 3576–3584.
[21] Rysavy, M. (2013). Evidence-based medicine: A science of uncertainty and an art of probability. The Virtual Mentor, 15, 4–8.
[22] Santhanam, G., Ryu, S. I., Yu, B. M., Afshar, A., & Shenoy, K. V. (2006). A high-performance brain-computer interface. Nature, 442, 195–198.
[23] Schmidt-Erfurth, U., Bogunovic, H., Sadeghipour, A., et al. (2018). Machine learning to analyze the prognostic value of current imaging biomarkers in neovascular age-related macular degeneration. Ophthalmology Retina, 2, 24–30.
[24] Wolpaw, J. R., & McFarland, D. J. (2004). Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. Proceedings of the National Academy of Sciences of the United States of America, 101, 17849–17854.
[25] Wu, M., & Luo, J. (2019). Wearable technology applications in healthcare: A literature review. Online Journal of Nursing Informatics (OJNI), 23(3).
[26] Yadav, S., Zain, M., Sahai, P., Porwal, S., & Chauhan, V. (2020). Challenges encountered in cancer care and management during COVID-19 in South Asian countries. Asian Pacific Journal of Cancer Care, 5(S1), 101–107.
[27] Zacksenhouse, M., Lebedev, M. A., Carmena, J. M., O'Doherty, J. E., Henriquez, C., & Nicolelis, M. A. (2007). Cortical modulations increase in early sessions with brain-machine interface. PLoS ONE, 2(7), e619.
[28] Lou, Z., Wang, L., Jiang, K., Wei, Z., & Shen, G. (2020). Reviews of wearable healthcare systems: Materials, devices and system integration. Materials Science and Engineering: R: Reports, 140, 100523. ISSN 0927-796X.
[29] Zech, J. R., Badgeley, M. A., Liu, M., Costa, A. B., Titano, J. J., & Oermann, E. K. (2018). Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study. PLoS Medicine, 15(11), e1002683.
Urvashi
7 Virtual trial room Abstract: The emergence of virtual trial rooms has revolutionized the retail industry by providing customers with an immersive and interactive shopping experience. This paper explores the concept of virtual trial rooms, their technology, and their adoption by leading businesses worldwide. Virtual trial rooms utilize virtual reality (VR) technology to enable customers to try on clothes, accessories, and other products virtually, eliminating the need for physical fitting rooms. The technology behind virtual trial rooms involves the use of electronic devices such as goggles or gloves fitted with sensors, creating a computer-generated simulation that allows users to experience and interact with a three-dimensional virtual environment. By simulating the appearance and fit of various products, virtual trial rooms offer customers a realistic preview of how they would look or feel wearing the items. Numerous prominent businesses have embraced virtual trial rooms to enhance the shopping experience for their customers. Companies such as Ralph Lauren, Rebecca Minkoff, and Neiman Marcus have integrated virtual trial rooms into their stores, leveraging technologies like smart mirrors, RFID tagging, and interactive displays. These innovations allow customers to visualize themselves wearing different outfits, customize lighting and backgrounds, and even share images on social media. The advantages of virtual trial rooms are substantial. They enable customers to save time by eliminating the need for physical try-ons, provide a personalized and engaging shopping experience, and offer a wide range of products and styles to choose from. Additionally, virtual trial rooms reduce the need for physical inventory, enhance customer satisfaction, and facilitate better decision-making for online purchases. However, virtual trial rooms also come with certain limitations. Technology requires significant investment in hardware, software, and infrastructure. 
Technical challenges, such as achieving accurate sizing and realistic product representation, need to be addressed. Moreover, some customers may still prefer the traditional in-person fitting experience, and concerns about privacy and data security must be addressed to ensure customer trust. In conclusion, virtual trial rooms offer tremendous potential for the retail industry, providing customers with an immersive and convenient way to try on products virtually. While there are challenges to overcome, the adoption of virtual trial rooms by leading businesses demonstrates their value and their potential to transform the shopping experience. By continuously improving the technology and addressing customer concerns, virtual trial rooms can reshape the future of retail.
Urvashi, Amity University, Noida, Uttar Pradesh, India https://doi.org/10.1515/9783110713817-007
Keywords: Virtual trial rooms, Virtual reality, Immersive technology, Customer experience, Virtual fitting rooms, Technology adoption, Smart mirrors, RFID tagging, Interactive displays, Online shopping, Enhanced visualization, Personalization, Inventory management, Decision-making, Privacy concerns, Data security, Retail innovation, Customer satisfaction, Future of retail
7.1 Virtual reality

Virtual reality is a computer-generated simulation in which a person can experience and interact with a three-dimensional virtual environment, with the help of electronic devices such as goggles with a screen, or gloves fitted with sensors. The main objective of virtual reality is to place participants in a virtual world that gives them the feeling of being there. Virtual reality is, in essence, "virtual" and "real" together: one experiences and interacts with an artificial world that seems quite real, through the use of technology. Computer software creates and serves up the virtual world, which is experienced by a user wearing hardware devices.

To understand virtual reality, consider how the human body builds a perception of any environment. We have five senses, touch, smell, taste, sight and hearing, as well as spatial awareness and balance. The inputs from these senses are processed by the brain, which then produces a response. For instance, if the body touches a very hot object, the touch receptors send a signal to the brain; the brain processes the signal and, judging the object to be hotter than the skin can tolerate, immediately signals back that the object is too hot and the hand must be removed. Similarly, virtual reality attempts to construct a deceptive environment for the brain by presenting fabricated information to the senses, making the mind believe it is real.

Virtual reality has applications in fields such as education, gaming and entertainment. A real-world and very interesting example of virtual reality is the flight simulator. Flight simulators are devices that artificially reproduce aircraft flight and the environment in which an aircraft flies.
A flight simulator looks like a real cockpit, with all the buttons present in a real one, and a virtual screen that stands in for the aircraft's windshield. The virtual screen displays a runway, clouds and surroundings like those visible through a real windshield. The simulator creates the illusion in the brain of flying a real aircraft. It is a 3D device: if someone crashes the simulated airplane, the pilot actually experiences a strong jerk as a result of the crash. The main purpose of these simulators is to train and familiarize trainee pilots before taking them into a real, expensive aircraft. This is how virtual reality works. Virtual reality helps individuals get a real-life experience,
while not actually being there. It is, however, expensive, considering that one pays for software optimization and a better control system, and it also includes motion sensors, lenses and custom display screens.

A VR system has several components, the first being hardware. Hardware devices produce stimuli that override the user's senses; VR hardware accomplishes this using different sensors. It also takes the physical surroundings of the user into account, since hardware and software alone do not constitute the entire VR system.

The second is software. Software plays an equally important role in the VR system: it is responsible for taking input from the user appropriately and responding to it in time, so that the feeling of immersion is not destroyed. All the high-level graphics, the movement of the user inside the VR world and the backend of a 3D game are created and handled by software.

Next is audio. Audio may not be technically complex, but it is very important and plays a vital role in immersion. Stimulating the user's hearing helps keep the brain engaged and creates a better illusion of being mentally as well as physically present in the scene. Most virtual reality headsets give users the option of using their own headphones in conjunction with the headset, and a variety of headsets provide their own integrated headphones. Virtual reality audio works via positional, multi-speaker audio (often called positional audio) as well as spatial audio, so as to give the illusion of a 3D world.

The last and most important component is human perception: it is essential to understand human psychology and optical illusions in order to attain deep immersion without side effects.
Since VR is concerned with simulating a real world, it is important to understand how to convincingly fool the human senses, and which stimuli correspond to which situations. Coordinating all stimuli with the user's activity is also essential for the proper functioning of a VR system.

A major application of virtual reality is making virtual shopping possible. Virtual shopping means sitting at home and buying products of your own choice, which can be delivered to your doorstep without your stepping out of the house. It gives the consumer the freedom to be anywhere in the world and order any item they wish, without the stress of visiting a mall. A major benefit of virtual shopping is that it is available 24/7; one can interact with a brand representative online through the virtual assistant that applications and websites provide to their users. It is also convenient: since most people work during the day and are free only at night, they can place an order at midnight and receive it at their doorstep the next morning. Virtual shopping offers many advantages, from saving quality time to online tracking to saving travel expenses; it is a boon in today's world. In our fast-moving and continuously developing society, virtual shopping is attracting ever greater attention. Placing an order online is considered more convenient than visiting a store, and with advances in technology, online shopping now offers customers a "real experience." The main purpose of virtual shopping is to provide customers with the
convenience of shopping online as well as an experience of virtual reality. A virtual reality shopping application was launched on 25 September 2017 by Mastercard and Swarovski for the Atelier Swarovski home décor line, a collection of functional and ornamental crystal accessories for the home, designed in partnership with some of the world's most esteemed architects and designers. The application offers customers complete immersion in a well-decorated home, where they are free to browse and buy pieces with Masterpass, Mastercard's digital payment service. Customers can place their phone in a compatible virtual reality headset to immerse themselves in the experience.
Fig. 7.1: Immersed in the VR world.
Once the headset is on, customers can explore the store, learn the stories behind every article on display, read through descriptions, check pricing and, in some cases, even watch videos about the craftsmanship.
Fig. 7.2: VR system [1].
Fig. 7.3: VR sample view [1].
7.2 Benefits of virtual shopping

1. Keep business flourishing, no matter where the store teams are! One of the advantages of virtual shopping is that retailers can keep their teams working whether they are at home, in the office or at headquarters. Many retailers have seen a far greater increase in sales online than in store. For instance, the Covid-19 pandemic had a huge impact on the retail industry, but grocery stores were able to flourish even then, because the population preferred ordering groceries online.

2. Online customers love a systematized and personalized shopping experience. With stores running at reduced capacity, customers prefer a personalized and systematic online shop. Online shopping is much better organized, because customers can search for exactly what they want without having to sift through dozens of products. They can specify the color, the size they prefer, the material of the product, and so on, and the application presents a personalized selection matching their choices. For instance, if a customer is looking for a pair of blue jeans, size 28, in denim, all they have to do is use the filtering options, and the application will show all the blue denim jeans in that size [2]. Customers can also share their virtual shopping experience by rating the application and the product on a scale of 1 to 5; this gives the retail store an idea of where it is lacking and what improvements it needs to make.

3. Clients become lifetime customers. Customers share their personal details, for example their email address and phone number, which helps the brand stay in contact with its clients.
Store teams can text or email customers about new product launches and sales, or notify them when a product they requested is back in stock. Stores also provide delivery and tracking details for each item, to ensure a safe online transaction.
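The attribute-based filtering described in point 2 above can be illustrated with a short sketch. The product fields, catalog entries and function name below are hypothetical, not taken from any real store system:

```python
# Illustrative sketch of filter-based product search, as described above.
# Catalog fields and sample data are invented for the example.

def filter_products(catalog, **filters):
    """Return the products whose attributes match every given filter."""
    return [
        p for p in catalog
        if all(p.get(key) == value for key, value in filters.items())
    ]

catalog = [
    {"name": "Slim Jeans",    "color": "blue",  "size": 28, "material": "denim"},
    {"name": "Relaxed Jeans", "color": "black", "size": 28, "material": "denim"},
    {"name": "Chino Pants",   "color": "blue",  "size": 30, "material": "cotton"},
]

# A shopper searching for blue denim jeans in size 28:
matches = filter_products(catalog, color="blue", size=28, material="denim")
print([p["name"] for p in matches])  # ['Slim Jeans']
```

Each filter narrows the personalized selection, exactly as the size, color and material options do in the scenario above.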
7.3 Virtual trial room

With the rise of virtual shopping, a virtual trial room becomes a necessity. A virtual trial room, often called a virtual dressing room, is the online equivalent of an in-store trial room. It enables shoppers to virtually try on clothes in different sizes and styles and check their fit. Augmented reality technologies using depth and color cameras provide robust body recognition and successfully address the fit and suitability aspects of shopping. Precise body scan data is generated automatically in order to guarantee the quality of fit. A 3D avatar-style model of
the customer's body is created, so that items fit on it as they would in real life. The first approach is the size recommendation service, wherein retailers suggest and offer customers a range of sizes based on a combination of factors; they can also use body scanners to identify the customer's body shape and then make size recommendations. Body scanners come in two types: the first uses cameras, webcams or scanners, while the second, Microsoft's Kinect device, uses more advanced technology, where the customer has to come to the scanner in order to try the product. This technology is too expensive and bulky to keep in individual stores, so companies prefer to install it in shopping malls, and customers can later use the results online. A couple of brands in India offer a virtual trial from home. Lenskart is an Indian optical prescription eyewear retail chain. It offers a variety of services to its customers: eye check-ups at your doorstep, different designs of eyewear and brands from basic to luxury, and, as one of its most out-of-the-box and futuristic services, an online 3D glasses try-on. If customers want to see which pair of glasses suits them, they can opt for the 3D try-on. A video of the face is recorded from all angles, after which the system analyzes the face shape and recommends frames that suit it. A 3D avatar of the customer is then created, so the customer can see which frame suits them and proceed to buy it. This is a great example of a virtual trial room where you can try different products while staying at home [3].
Fig. 7.4: VR try-on feature [3].
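The size recommendation service described above could, in its simplest form, map a scanned body measurement to the nearest entry in a size chart. The chart values and the single-measurement approach are illustrative assumptions; real services combine many factors:

```python
# Hypothetical size chart: size label -> nominal waist measurement in cm.
SIZE_CHART = {"S": 71, "M": 81, "L": 91, "XL": 101}

def recommend_size(waist_cm):
    """Recommend the size whose nominal waist is closest to the scanned value."""
    return min(SIZE_CHART, key=lambda size: abs(SIZE_CHART[size] - waist_cm))

print(recommend_size(84))  # M
```

A production system would use a full body scan (many measurements plus body shape) rather than one number, but the nearest-match idea is the same.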
7.4 Intelligent shopping mall

A smart shopping mall is, at its core, a shopping mall management system: a fusion of hardware and software. Smart shopping malls are equipped with technological advancements and smart access systems, and provide their users with a hands-on, all-in-one management platform. With this system, the admin is given access to all the
top-notch features available. It can amalgamate existing devices and systems into one, making the operator's task easy. The system ensures continuous and complete safety for all customers, staff and belongings, and creates a pleasant atmosphere while providing a precise starting point for smart business development and decisions.
7.5 Smart shopping malls thrill visitors and bring in profit

In-store sales can be driven through smartphone notifications, reshaping the customer experience and increasing the money spent. Premium mall rents can be justified with hard traffic data, and operational costs lowered. These are some of the significant benefits of making shopping malls smart. Retailers began beaming promotions directly to shoppers' smartphones some time ago but, until recently, the reach was restricted to the area within a single store. With recent advancements, malls have introduced affordable mall-wide data networks, so that retailers can subscribe to push promotions sent directly to smartphones through a permission-based mall app.
7.5.1 Advantages

Promotion, shopping and wayfinding are just part of the story when it comes to smart malls. Data propagation is matched with comprehensive data collection, which means that customer movement can be tracked, revealing patterns that mall owners can take advantage of. Proximity sensors can be installed, making it easier for maintenance staff to know when supplies need replenishing. "Imagine a dashboard that gives you the big picture," suggests Mirva Saarijärvi of Wirepas, a mesh network technology provider. Such dashboards can display the location of assets, and the temperature, humidity and lighting values in all parts of the mall. Sensor-driven mesh technology makes real-time knowledge an affordable reality [4].

Customers win:
1. Customized updates
2. Marketing and sale notifications
3. Loyalty programs
4. Wayfinding
5. Enhanced surroundings
Retailers win:
1. Strong promotional channel
2. Reach across the entire mall
3. Instant flash sale capability
4. In-store loyalty programs
5. Fast and accurate traffic patterns [5]

Mall teams win:
1. Heat maps displaying traffic patterns
2. Higher rents for high-traffic areas
3. Improved security and safety
4. Tracking and locating of all mall assets
5. Sensor-driven technology [5]
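The dashboard idea quoted above (one view of temperature, humidity and lighting values across the mall) might be sketched as a per-zone aggregation of mesh sensor readings. The zone names, metric names and values are invented for illustration:

```python
from collections import defaultdict

# Hypothetical stream of (zone, metric, value) readings from mesh sensors.
readings = [
    ("food-court", "temperature_c", 23.5),
    ("food-court", "humidity_pct",  48.0),
    ("atrium",     "temperature_c", 21.0),
    ("atrium",     "lighting_lux",  540.0),
    ("food-court", "temperature_c", 24.5),
]

def build_dashboard(readings):
    """Average every metric per zone, giving one value per dashboard tile."""
    sums = defaultdict(lambda: [0.0, 0])      # (zone, metric) -> [total, count]
    for zone, metric, value in readings:
        entry = sums[(zone, metric)]
        entry[0] += value
        entry[1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

dashboard = build_dashboard(readings)
print(dashboard[("food-court", "temperature_c")])  # 24.0
```

A real mesh deployment would stream readings continuously and keep sliding-window averages, but the aggregation step looks much the same.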
7.6 Smart supermarkets bring benefits in five locations

The convenience of a low-footprint system lies in its mobility: the same basic infrastructure supports any number of jobs. "If your major goal is portable beacons," suggests van Doorn, "you can upgrade your mall with beacons first, and add sensor lighting or asset management where, and when, it makes sense" [2].

1. Sales alerts and wayfinding
The frequency, type and structure of notices are limited only by the imagination of the retail team; regular store updates can be complemented by daily or weekly store alerts that raise awareness of sales events. Vendors can issue their advertisements for a fee, or as part of a rental package. The wayfinding features of a mall app are a great way to navigate the mall, especially for those who do not visit often or have a tight schedule. These apps show buyers their present location in the mall, and can create a shopping route based on what they want to buy. Apps can be customized to let customers select the vendors they wish to buy from. In an emergency, the app can point them to the nearest exit.

2. Maintenance requirements
How much time does it take to locate the cleaning vehicles, repair tools, fire extinguishers or carts used to deliver goods to retailers? Are vendor deliveries ever misplaced? Asset tags make keeping tabs on all goods simple. Tags can report an item's position to within a few square meters. An item can be found on the map by searching for its assigned ID, and the LED light on its tag can be triggered to help locate it.
3. Perfect environment
Sensors can be used to maintain accurate temperature, air quality, humidity, ambient light and environmental hygiene. HVAC, lighting and other systems can be configured to respond automatically to the sensor data. As natural light dims, for example, electric light can brighten automatically. Sensors may alert employees to such things as the need to refill towels in the washroom facilities or to spilled liquids. Studies show that clean toilets, along with the right amount of light, temperature and humidity, can positively affect long-term visits to the mall.

4. Shopper movement
Shopping visits can be measured in terms of duration as well as location. Mall teams can track customer movements over hours, days or months through an interactive heat map. Color coding reveals overcrowded areas (red), less crowded areas (blue) and other travel patterns during the day [2]. A heat map is a very powerful marketing tool. It can justify a premium for a rental space, or help lease an area to a retailer, by proving beyond the shadow of a doubt how much traffic that area sees, and when. It also helps in, say, comparing the movement seen on Mondays with how the traffic changes on each following day. This helps the store team set up kiosks more effectively and allocate additional store assistants or staff where needed.
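The red/blue color coding of the heat map can be sketched as a simple threshold rule on visit counts per zone. The thresholds and the intermediate "yellow" band are arbitrary choices made for this illustration:

```python
def heat_color(visits, low=100, high=500):
    """Map a zone's visit count to a heat-map color: red for overcrowded,
    blue for quiet, yellow in between (thresholds are illustrative)."""
    if visits >= high:
        return "red"
    if visits <= low:
        return "blue"
    return "yellow"

# Hypothetical visit counts per floor zone for one day.
zone_visits = {"entrance": 820, "food-court": 460, "back-corridor": 35}
print({zone: heat_color(v) for zone, v in zone_visits.items()})
# {'entrance': 'red', 'food-court': 'yellow', 'back-corridor': 'blue'}
```

In practice the counts would come from the sensor network described earlier, and the thresholds would be tuned to each mall's traffic levels.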
7.7 Intelligent trial room

An intelligent shopping mall is a physical mall with a good management system, embedded with the latest technologies and software to give customers a sense of thrill; it is a blend of both software and hardware. Inside it, screens or mirrors with integrated technology indicate which colors, models and sizes are available directly in the store, in the web shop or in other stores. It all started about a decade ago with RFID tags, which made all stock manageable by digitizing it, among other operations. Attached labels incorporating radio-frequency technology simplify the operation of trial rooms. When a customer is in a changing room whose mirrors read the RFID tags [6], retailers can see all the clothes the customer wants to try; with the help of the virtual screen on the mirror, the customer can check for other colors, sizes or perhaps accessories to go with the outfit. When the trial room is not in use, the screen acts as a mirror or as a display playing videos and pictures.
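The RFID flow described above, where the mirror reads the tags a shopper brings into the changing room and shows other available colors and sizes, could look roughly like this. Tag IDs, garment data and stock records are all invented for the sketch:

```python
# Hypothetical mapping of RFID tag IDs to garments, and a stock index of
# other color/size variants for each garment style.
TAGS = {
    "tag-001": {"style": "A-line dress", "color": "red", "size": "M"},
}
STOCK = {
    "A-line dress": [
        {"color": "red",   "size": "S", "in_store": True},
        {"color": "black", "size": "M", "in_store": True},
        {"color": "red",   "size": "M", "in_store": False},  # web shop only
    ],
}

def mirror_display(scanned_tags):
    """For each garment detected in the fitting room, list the other
    variants available in the store or the web shop."""
    display = {}
    for tag in scanned_tags:
        garment = TAGS[tag]
        display[garment["style"]] = [
            v for v in STOCK[garment["style"]]
            if v["color"] != garment["color"] or v["size"] != garment["size"]
        ]
    return display

print(mirror_display(["tag-001"]))
```

The variant identical to the scanned garment is filtered out, so the mirror only offers alternatives, as in the scenario described above.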
Fig. 7.5: VR shopping experience [7].
There are various types of smart trial rooms on the market, for example, fitting rooms that use 3D technology and augmented reality:
1. They can detect the contours of a person with the help of software built into a scanner.
2. They allow you to wear particular clothes virtually to see how you look, and pictures can be taken to save and share.
3. Clothes can be put in a shopping cart, bought online and paid for through a QR code purchase process.

Additional functionality is possible in intelligent trial rooms, for example, the ability to present content in different languages. Notably, the changing rooms contain no cameras, in order to ensure the privacy of customers. One of the most intelligent trial rooms, or smart mirrors, available on the market is designed by SAP. It is a tool that delivers an innovative, almost magical shopping experience and, most importantly, one completely customized to the customer's needs. Once you are in the smart room and are identified, you can go through the catalog, get recommendations based on the type of fit you like, find a previous search made at home and put products in a basket to try them on in the physical trial rooms. These digital trial rooms enable an extra level of shopping experience, for example, ordering another size or color without leaving the trial room. On such a request, a staff member wearing a smartwatch receives an alert and can take the requested garment to the appropriate trial room.
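The request flow at the end of this section, where a customer asks for another size from the trial room and a staff smartwatch receives an alert, might be modeled as a simple request queue. The field names and message format are illustrative:

```python
import queue

# A shared queue standing in for the store's notification channel.
requests = queue.Queue()

def request_from_trial_room(room, style, size):
    """Customer action: ask for another size or model without leaving the room."""
    requests.put({"room": room, "style": style, "size": size})

def next_smartwatch_alert():
    """Staff action: take the next request and format it as a watch alert."""
    r = requests.get()
    return f"Bring {r['style']} (size {r['size']}) to trial room {r['room']}"

request_from_trial_room(room=3, style="linen blazer", size="L")
print(next_smartwatch_alert())  # Bring linen blazer (size L) to trial room 3
```

A real deployment would push the alert over the store network to the watch, but the producer/consumer pattern is the same.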
7.8 Benefits of using intelligent trial rooms in physical stores

From a brand's or physical store's point of view, the benefits are as follows:
1. They provide a unique and personalized shopping experience that sets the store apart from the competition.
2. They improve in-store customer service.
3. They provide extensive knowledge about the clothes a customer chooses, the fabrics and colors he or she likes, and the items he or she decides to purchase. They also allow, for example, recording the time a customer stays in the store.

With all of this data, the store can apply the kind of big-data analysis used in e-commerce. As a result of this analysis, it may later be possible to strengthen the customer relationship by offering, for example, personalized suggestions.
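The personalized-suggestion idea above could be sketched by counting a customer's recorded fabric and color choices and matching catalog items against the most frequent ones. The history and catalog data are fabricated for illustration:

```python
from collections import Counter

# Hypothetical record of attributes from a customer's past fittings/purchases.
history = [
    {"fabric": "cotton", "color": "navy"},
    {"fabric": "cotton", "color": "white"},
    {"fabric": "linen",  "color": "navy"},
]

def preferred(history, attribute):
    """Return the customer's most frequently chosen value for an attribute."""
    return Counter(item[attribute] for item in history).most_common(1)[0][0]

def suggest(catalog, history):
    """Suggest catalog items matching the customer's top fabric and color."""
    fabric, color = preferred(history, "fabric"), preferred(history, "color")
    return [p["name"] for p in catalog
            if p["fabric"] == fabric and p["color"] == color]

catalog = [
    {"name": "Navy cotton shirt",  "fabric": "cotton", "color": "navy"},
    {"name": "White linen blazer", "fabric": "linen",  "color": "white"},
]
print(suggest(catalog, history))  # ['Navy cotton shirt']
```

Frequency counting is the crudest possible recommender; a store with enough data would move to collaborative filtering or similar methods, but the principle of mining recorded choices is as described above.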
7.9 Why do stores need smart fitting rooms?

While making a trial room more comfortable, interactive technologies help customers make their decisions in a more meaningful and interesting way, for example, by adjusting the ambient lighting and providing detailed information about products, making it easier for customers to decide whether they need a product. An interactive trial room allows customers to look through the available items; customers can call a store assistant to bring different sizes or models, or consult a professional stylist for styling tips. Many such features engage customers, so they come back for a better shopping experience and maintain a long-term relationship with the brand.
7.10 A few successful trial rooms

7.10.1 Ralph Lauren

The Ralph Lauren Polo flagship store on Fifth Avenue in New York uses intelligent trial rooms with touch-screen smart mirrors, crafted by Oak Labs, installed inside the fitting rooms. A customer can interact with the mirror and change the background lighting to see how an outfit looks under different lights. Customers can call a store assistant for help, asking for a different size or another model. During the fitting, the mirror's built-in technology recommends what accessories or items would be good
enough to complete the look. If the buyer, for example, does not have the money at that moment, the item information can be sent directly to their cell phone. According to Ralph Lauren representatives, sales rose up to three times thanks to the intelligent trial rooms, and visitors tend to spend more time in the store than usual [8].
Fig. 7.6: Inside the Ralph Lauren fitting room: item selection and recommendations.
7.10.2 Rebecca Minkoff

Customers visiting the Rebecca Minkoff store can call store assistants to bring other items or sizes into the trial room; the store also provides a feature to adjust the lighting, and one can create a personal profile where the customer's size history is stored for future use. As in the Ralph Lauren flagship, every item in a Rebecca Minkoff store is equipped with an RFID chip. Once a customer enters the trial room with a product, information about the product is displayed on the mirror. In the trial room, the customer can use the huge mirror to look at different promotional materials, install the branded application on their cell phone, and immediately search for items and have them forwarded to the dressing room. Smart fitting rooms are only one part of Rebecca Minkoff's strategy called "the connected store" [9].
7.10.3 Neiman Marcus

The Neiman Marcus store is situated in San Francisco. Its main aim is to make the trial process faster, more interesting and more pleasant for its customers. Here, a customer does not have to re-try clothes; instead, the mirror records a short video, which then allows the customer to compare the different chosen looks. In addition
Fig. 7.7: The mirror in the Rebecca Minkoff flagship store’s fitting room offers to enter the phone number to save the customer profile (left), and gives recommendations based on the current and past fittings (right) [9].
Fig. 7.8: Memory mirror at Neiman Marcus [10].
to this, a customer can save a photo of their look and send it to friends on Facebook or other social media applications [10].
References
[1] Elizabeth Doupnik, WWD. Mastercard and Swarovski team up on new virtual-reality shopping app.
[2] PTI, sub editor. (2014). Walmart starts virtual wholesale stores in Hyderabad. Lucknow.
[3] Lenskart blog. Spectacular-blog-lenskart-com-basic-guide-to-use-lenskart-virtual-ar-tool.
[4] Retail Technology Review. (2019). Smart shopping malls thrill visitors, drive profit.
[5] CAAD Retail Design. (2020). Smart fitting rooms: The perfect complement for the digital transformation of fashion retail.
[6] Silicon Valley Infomedia Pvt. Ltd. Smart Shopping Mall.
[7] Tiffany M'bodj. Really clever: The smart fitting room is appreciated by customers and retailers.
[8] Hilary Milnes. Inside Ralph Lauren's connected fitting rooms.
[9] Hilary Milnes. How tech in Rebecca Minkoff's fitting rooms tripled expected clothing sales.
[10] Neiman Marcus' high-tech "memory mirror" transforms shopping experience.
[11] Zhang, W., et al. (2008). An intelligent fitting room using multi-camera perception. In Proceedings of the 13th International Conference on Intelligent User Interfaces.
[12] Song, Y., & Leung, T. (2006). Context-aided human recognition – Clustering. ECCV.
[13] Tagiev, R. (2017). Smart fitting rooms: How they work and why stores need them.
Shubham Sharma and Naincy Chamoli
8 Merging of artificial intelligence (AI) with virtual reality (VR) in healthcare

Abstract: Digitalization impacts the lives of billions of people in the world and increases the opportunity for every individual to explore and experience different perspectives of life. Digitalization is possible because of continuous innovation and invention in the information technology (IT) sector. IT is growing at an unprecedented pace, and these innovations are not limited to IT but impact every sector, such as retail, healthcare, life sciences and manufacturing. Name any sector, and you will find that somewhere it is using IT innovations to increase productivity, efficiency, user experience (UX), delivery speed, collaboration, marketing or sales, wherever these innovations can give fruitful results. UX is a field that is always in demand, because it attracts the consumer toward the product. Since nothing is constant and technology is ever evolving, the latest innovations change the way people think about UX. These innovations are popularly known as virtual reality (VR), augmented reality (AR) and mixed reality. VR can also be called computer-generated reality. VR, AR and head-mounted displays are altering the way we see and interact with the world, influencing virtually every industry. These innovations permit immersive three-dimensional (3D) visualization and an understanding of anatomy that was never possible before. Clinical applications are wide-reaching and influence every facet of clinical care, from learning gross anatomy and surgical technique to patient-specific pre-procedural planning and intraoperative guidance, as well as diagnostic and therapeutic approaches in rehabilitation, pain management and psychology. The FDA is beginning to approve these approaches for clinical use.
VR changes the way we can experience images in 3D, and AI is known for decision-making, with the capability to mimic human decisions in the absence of humans; so, on the basis of images generated through VR, AI can diagnose disease much faster and help our medical professionals with appropriate recommendations. In this chapter, we summarize the use of AI with VR for healthcare. The history, current utility and future applications of these technologies are described.

Keywords: Artificial intelligence (AI), virtual reality (VR), machine learning (ML), deep learning (DL)
Shubham Sharma, Assistant System Engineer-TCS, India, email: [email protected] Naincy Chamoli, B. Tech CSE, Student-UTU, email: [email protected] https://doi.org/10.1515/9783110713817-008
8.1 Introduction

In this clamorous twenty-first century, we are surrounded by technology that has a huge impact on the lives of every generation; from small children to the elderly, everyone is busy using it. This level of human involvement in technology motivates rapid advancement, with researchers and engineers putting all their effort into solving problems that impact the lives of billions of users. These problems are either conspicuous or abstract. Any advancement on a conspicuous problem is noticed by end users, but an advancement in an abstract technology may not be noticed so easily, even though it plays an equally major role in solving problems that need to be addressed. A simple example: suppose we are using Google Chrome on an Android phone or desktop, and the desired page takes 10 seconds to load. To solve this problem, Google uses new-generation connectivity technology that loads the page in less than 0.1 seconds, and it also changes Chrome's design by using an algorithm to show the top 10 similar searches on the side of the page. Both changes affect the performance of the system, but the end user notices the change in Chrome's design much faster than the change in the underlying networking technology. The heterogeneous nature of future wireless networks, comprising multiple access technologies, frequency bands and cell types, all with overlapping coverage areas, presents wireless operators with network organization, planning and deployment challenges.
Artificial intelligence (AI) and machine learning (ML) can help wireless operators overcome these challenges by analyzing geographic information, configuration parameters and historical data to forecast peak traffic, resource utilization and application types; by optimizing and fine-tuning network parameters for capacity expansion; and by eliminating coverage gaps through interference estimation and inter-site distance information. 5G can be a key enabler driving the integration of ML and AI into the network edge. The figure below shows how 5G enables simultaneous connections to multiple Internet of things (IoT) devices, generating enormous amounts of data. The integration of ML and AI with 5G multi-access edge computing enables wireless operators to offer a high level of automation from the distributed ML and AI architecture at the network edge; application-based traffic steering and aggregation across heterogeneous access networks; dynamic network slicing to address varied use cases with different quality-of-service requirements; and ML/AI-as-a-service offerings for end users. In this chapter, we address 5G technologies, the challenges associated with 5G, and the role of VR, AI and ML in 5G; VR in particular will become more fruitful together with AI, ML, IoT and 5G. These days it becomes possible even for an average user to move into the universe of computer graphics. This fascination with another (ir)reality often starts with computer games and lasts forever. It allows one to see the surrounding world in other dimensions and to experience things that are not
8 Merging of artificial intelligence (AI) with virtual reality (VR) in healthcare
available or not yet even created. Moreover, the world of three-dimensional (3D) graphics has neither borders nor constraints and can be created and manipulated as we wish; we can even enhance it with a fourth dimension: the dimension of our imagination. But that is not enough: people always want more. They want to step into this world and interact with it, rather than just watch an image on a screen. The technology that makes this possible, and that has become overwhelmingly popular and fashionable in the current decade, is called virtual reality (VR). VR is considered to have started in the 1950s; nevertheless, it only became prominent in the late 1980s and 1990s. This can be attributed to the pioneering computer scientist Jaron Lanier, who introduced the world to the term "virtual reality" in 1987. Research into virtual reality continued into the 1990s, and films such as The Lawnmower Man helped raise its profile. Most virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays; they may also include auditory stimulation through speakers or headphones. Users can interact with the virtual environment using devices such as a keyboard, a mouse or a wired glove. The history of virtual reality has largely been a history of attempts to make the experience ever more real. Most of the historical examples are visual and, to a lesser degree, auditory. This is because, of all the human senses, vision provides by far the most information, followed by hearing; probably 90 percent of our perception of the world is visual or auditory. At the start of the 1990s development in the field accelerated sharply, and the term "virtual reality" itself became very popular.
We encounter VR in almost every kind of media, and people use the term very frequently, often misusing it as well. VR is the popular name for an immersive, interactive, computer-mediated experience in which a person perceives a synthetic (simulated) environment by means of special human–computer interface equipment and interacts with simulated objects in that environment as if they were real. Several people can see one another and collaborate in a shared synthetic environment, for example, a battlefield simulation. VR is thus a term used to describe a computer-generated virtual environment that can be traveled through and manipulated by a user in real time. A virtual environment may be displayed on a head-mounted display, a computer screen or a large projection screen. Head and hand tracking systems enable the user to observe, move around and manipulate the virtual environment. The primary difference between VR systems and traditional media (e.g., radio, television) lies in the three-dimensionality of VR. Immersion, presence and interactivity are the distinctive features of VR that set it apart from other display technologies. VR does not mimic real reality, nor does it have a purely illustrative function; people can be unable to distinguish perception, fantasy and illusion. VR has grown into a platform of its own and has become a distinct field in the world of computing. Its utility has already been explored in vehicle design, robot design, medicine, science and education, as well as in
Shubham Sharma and Naincy Chamoli
architectural design and construction. Virtual environments (VEs) present a unified workspace permitting more or less complete functionality without requiring that all functions be located in the same physical space. "Virtual environments [can be defined] as interactive, virtual image displays enhanced by special processing and by nonvisual display modalities, such as auditory and haptic, to convince users that they are immersed in a synthetic space." Less technically, "a virtual world is an application that lets users navigate and interact with a three-dimensional, computer-generated (and computer-maintained) environment in real time. This kind of system has three major elements: interaction, 3D graphics, and immersion" and five elements that influence the realism of a virtual environment for clinical applications:
– Fidelity: high-resolution graphics
– Display of organ properties, for example, deformation from morphing or kinematics of joints
– Display of organ reactions, for example, bleeding from an artery or bile from the gall bladder
– Interactivity between objects, for example, surgical instruments and organs
– Sensory feedback: tactile and force feedback
Realism of the virtual objects alone is not sufficient, however. The human–computer interface must also provide a believable environment with which the user can interact. Today's simulations trade some realism for real-time interactivity given limited computing power, yet the future holds the promise of a virtual cadaver nearly indistinguishable from a real person.
"Initial research over the last 5–10 years in video technology, graphics, computer-aided design (CAD) and virtual reality has given us an insight into some of the requirements for a virtual reality surgical simulator." If realistic images are used, it is estimated that a rate of 5,000,000 polygons/s would be needed for a believable simulation of the abdomen; current high-end graphics workstations generate 60,000–100,000 polygons/s. Using CT or MRI scan images would require considerably more computing power. Algorithms for object deformation and gravity are available and continue to evolve. For motion, at least 30 frames/s are needed to eliminate flicker and response delays; this degree of interactivity is available on standard VR systems.
8.2 Added value of VR in healthcare
Virtual environments and related technologies add value to healthcare in the areas of cost savings, improved services and savings in material resources.
8.2.1 Cost savings
Trauma units in emergency rooms could improve operating efficiency and reduce costs by drawing on specialists who are not physically present. With telepresence, specialists can be linked to remote patients, which is especially effective when distances are great. Since the need for specialists in emergencies cannot be predicted, calling on remote experts in times of scarcity limits the staffing needs of trauma units without limiting their effectiveness. This conserves resources by reducing the need for part-time specialists to be present in trauma units.
8.2.2 Improved services
VR is influencing and improving surgical outcomes. Examples include the use of laparoscopic simulators for training, applications that simulate the human response to medication (e.g., simulator systems that help train anesthesiologists) and imaging tools that guide surgical instruments through brain tissue to the site of a tumor. The military and the entertainment industry were the front-line developers and users of VR, but recent clinical and scientific advances have made VR technologies applicable to other fields. Medical applications of VR began in 1993, when it was applied to mental health treatment. Early on, VR cognitive behavioral therapy (CBT) was used to treat certain phobias, such as fear of heights; it was regarded as a first-choice treatment because it recorded a success rate above 90%. Today, VR applications in advanced healthcare have moved toward surgical procedures: remote surgery can be performed effectively far from the patient, with the procedure simulated by a specialist in the virtual environment and transmitted to a robotic instrument that reproduces the actions. Other applications include clinical treatment, preventive medicine, visualization of databases, skill enhancement and rehabilitation, and medical education and training. Moreover, VR is effective in psychotherapy, distracting patients during painful procedures or providing treatment for a wide range of anxiety disorders, such as posttraumatic stress disorder. Advances in technology allow VR systems to run on personal computers, reducing the implementation cost of a modern system. Furthermore, with the ability to introduce digitized images into a VR system, it is possible to reproduce real places and thereby increase the likelihood of treating mental health disorders (e.g., fear of public speaking and social phobia).
Importing photos of friends, family and coworkers can let patients interact with these figures in the safety of a virtual environment before attempting the interaction in real life.
VR can be used in many sectors, starting with gaming, so it is difficult to cover every sector and area in this chapter. We therefore focus on healthcare. Even within healthcare, VR can be used in many ways; this chapter explains how VR can give medical professionals such as surgeons a 3D image of an affected body part, and how, with the help of that image, AI can classify whether the person has the disease, acting as a helping hand for medical professionals.
Fig. 8.1: Use of VR in healthcare.
Advantages offered by telepresence systems include enhancing task performance in remote manipulation through increased positioning resolution or reach; permitting controlled application of very large or small forces; improving the operator's perception of the task; and facilitating manipulation in hazardous environments by isolating the operator from the environment, or in clean environments by isolating the environment from the operator. Common to all telepresence systems is a human operator in a manual or supervisory control loop overseeing task performance. Application areas include operations in radioactive, underwater, space, surgical, rehabilitation and clean-room environments, as well as the manufacturing and construction industries. There are many areas where VR can be used in healthcare, but it is difficult to cover every disease; for this chapter we select one of the most severe, cancer. Cancer arises because of the abnormal growth of cells, and since the whole body is composed of cells, abnormal growth can occur in any part of the body.
There are therefore many kinds of cancer, but in this chapter we deal only with brain cancer, popularly known as brain tumor. The human brain is responsible for decision-making and plays a vital role in our survival. It is enclosed by the hard skull, so abnormal cell growth inside the brain raises the pressure within the skull, resulting in a brain tumor. In this chapter we approach this problem with deep learning (DL) algorithms, which have the ability to mimic the human brain. In our work we diagnose brain tumors with a DL algorithm, a convolutional neural network (CNN) developed by Shiqi Wang, which takes a CT scan image of the brain as input. We aim at high accuracy for early, automatic detection of brain tumors by the machine. This work can also serve as a helping hand for doctors who must deal with a large number of cases in a single day. Brain tumor is one of the most feared diseases, as it leads to one of the least curable forms of cancer, and it is a known fact that the longer a tumor takes to diagnose, the more threatening it becomes. This chapter addresses the time taken to diagnose a tumor in the brain without waiting for doctors to do the analysis. It proposes an intelligent system that includes a rigorously trained model to diagnose the brain tumor as soon as the reports are generated. The model uses CNN algorithms, which treat data in nonlinear form and build matrices of the data as it flows through the network [1]. The model can be deployed in hospital emergency wards, where a brain tumor can be detected as early as possible without delays that involve human intervention. Nowadays, everyone lives with the risk of being diagnosed with a tumor, owing to heavy environmental pollution and messier, busier lifestyles.
The problem with relying on the older workflow is delay, which is very threatening to human life, as delayed medication can lead to serious illness and even death; whatever the reason for the delay, it always results in people suffering. The system we propose lets people get the medical attention they need as soon as the CT scan is done. It collects data from the CT scan in real time and reports whether the person has a tumor or not [2], allowing people to take the necessary actions regarding their health. The system makes use of a CNN, which can provide a basis for decision-making about personal health.
8.3 Artificial intelligence
In the twenty-first century, AI has become a significant area of research in fields such as engineering, science, education, medicine, business, accounting, finance, marketing, economics, the stock market and law. Artificial intelligence is
at present only partially developed, without advanced capabilities to learn entirely on its own; instead it is given orders to follow. The ultimate future of artificial intelligence is machines that perceive human behavior and emotions and train themselves accordingly. AI also influences larger trends in global sustainability: it can help solve critical problems in sustainable manufacturing (e.g., optimization of energy resources, logistics, supply chain management and waste management). In this context there is a trend in smart production to incorporate AI into green manufacturing processes to meet stricter environmental policies. Indeed, as Hendrik Fink, head of Sustainability Services at PricewaterhouseCoopers, said in March 2019, "If we properly incorporate artificial intelligence, we can accomplish a transformation with respect to sustainability. AI will be the driving force of the fourth industrial revolution." Accordingly, subfields of AI such as machine learning, natural language processing, image processing and data mining have also become significant subjects for today's tech giants. The subject of AI generates considerable interest in the research community by virtue of the continuous growth of the technologies available today. The development of ML as a branch of AI is now rapid. Its use has spread to many fields, with learning machines already used in smart manufacturing, clinical science, pharmacology, agriculture, archaeology, games and business. In light of these considerations, in this work a systematic literature review of research from 1999 to 2019 was performed on AI and ML methods.
It was therefore considered important to build a classification framework for the articles that treat the two subjects together, to obtain greater variety and reflection. Moreover, to gain a deeper understanding, the influence of other factors was investigated, such as the thematic areas and the domains where the technologies are most influential. The main contribution of this work is an overview of the research carried out to date. Various notable frameworks for establishing research methods and reasoning have been discussed for many years; unfortunately, little research and integration across studies exists. In this chapter a common understanding of AI and ML research and its variations is developed. The chapter does not attempt an exhaustive framework for the literature on AI and ML research; rather, it attempts to provide a starting point for organizing knowledge about research in this space and suggests directions for future research. It examines studies in several novel disciplines: environmental pollution, medicine, maintenance, manufacturing and so on. Further research is needed to expand the current body of knowledge in AI by incorporating principles and philosophies of some traditional disciplines into current AI frameworks. The goal of this report is not to trigger a sudden proliferation in an already consolidated area; rather, it is hoped that this study can be a useful scholarly tool both for the
refocusing of existing work and the creation of new scholarly opportunities. This chapter presents significant ideas and perspectives for conducting research on AI and ML. A final aim was to anticipate how the discipline will change in the coming generation; this will be a journey that may change course as new generations of researchers add to the dialogue and to the activity. As noted earlier, this work presents a survey and therefore establishes a framework for future inquiry: it offers a basis for future studies as well as prompting many new questions for investigation. While the topics that might follow from this work are numerous, some are of particularly wide interest or impact. According to John McCarthy, one of the founders of AI, artificial intelligence is "the science and engineering of making intelligent machines, especially intelligent computer programs." In other words, AI is a way of making a computer, a computer-controlled robot or software think intelligently the way humans do. The aim of AI is to create machines and computer programs with the intellectual ability to analyze their environment or situation and make decisions like humans. AI is a vast field with many areas within it, but in the end it is about making decisions intelligently. ML is the foundation, or building block, of AI, concerned predominantly with making the correct decision based on learning. Deep learning is a subset of ML; it is predictive and achieves accuracy, providing metrics or rules to learn on its own without supervision from humans. Artificial intelligence technologies have led to several breakthroughs in the field. From a broader perspective, AI is a science and a set of computational technologies that gives machines and computer systems the ability to think and work like a human.
It further adds capabilities such as speech recognition, visual perception, decision-making, emotion understanding and more to these systems.
8.3.1 Machine learning
ML is a subset of AI that allows machines to learn and improve automatically from experience. Specialized systems are built for this purpose, and no explicit programming is needed to add new definitions to the knowledge base: the machines can learn on their own. It is the progression of computer algorithms and applications that allows systems to ingest enormous amounts of data and learn from them to carry out specific ideas or tasks.
8.3.2 Deep learning
DL is a subset of ML comprising particularly large neural networks and a broad collection of algorithms that can mimic human intelligence. DL provides the ability to learn from datasets that are unlabeled or unstructured. One property that makes deep learning stand out is that its networks can be unsupervised, meaning they keep learning without human intervention.
8.4 Artificial neural network
In computer science and related fields, artificial neural networks are mathematical or computational models inspired by the central nervous system (in particular, the brain) and capable of machine learning as well as pattern recognition. A real nervous system is far more complex than such models, but a system designed along these lines can tackle correspondingly complex problems. Artificial neural networks are generally presented as systems of highly interconnected units: a neural network resembles a web of interconnected neurons, which can number in the millions. With the help of these interconnected neurons, processing is carried out in parallel, the best illustration of parallel processing being the human or animal body itself. An artificial neural network is an arrangement of primitive artificial neurons, grouped into layers that are then connected to one another; how these layers connect is the other part of the "art" of designing networks to solve the complex problems of the real world. Neural networks, with their strong ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques.
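To make the interconnected-neuron idea above concrete, here is a minimal, standard-library-only Python sketch of a single artificial neuron and a small fully connected layer. The weights, biases and inputs are made-up values for illustration, not taken from any model in this chapter.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs passed through a sigmoid activation
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_rows, biases):
    # A fully connected layer: one neuron per row of weights
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two hidden neurons feeding one output neuron, with arbitrary weights
hidden = layer([0.5, -1.0], [[0.8, 0.2], [-0.4, 0.9]], [0.1, -0.1])
output = neuron(hidden, [1.0, -1.0], 0.0)
```

Stacking such layers, and learning the weights from data rather than fixing them by hand, is what turns this toy into the networks described above.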
8.5 Applications of AI
AI is advancing and improving rapidly; many enterprises see the value in adopting AI and ML and are learning how to apply them to their business challenges. Enterprises want a faster and simpler way to bring AI into their organizations, to build an effective, promising human-to-machine partnership and reach improved outcomes. Figure 8.2 presents the promising applications of AI.
Fig. 8.2: Applications of AI.
8.6 Proposed diagram
In general, a medical procedure is performed by making incisions and directly interacting with the organs and tissues. Recent developments in video technology permit direct viewing of internal body cavities through natural openings or small cuts. As with remote surgery, the specialist operates on a virtual image. The manipulation of instruments by the specialists or assistants can be direct or via virtual environments; in the latter case, a robot reproduces the movements of the humans operating virtual instruments. The precision of the operation can be increased by data/images superimposed on the virtual patient, so the specialist's capabilities are enhanced. An overview of our proposed framework is shown in Fig. 8.3. With a VR application, when a medical professional looks at a patient's brain, a 3D image is generated in front of him/her, which helps to examine the subject without delay. The VR application is combined with a DL model that can mimic human decisions, assist medical professionals with theirs and provide appropriate recommendations in more depth. As explained in Fig. 8.3, first, using a VR headset a medical professional examines a subject; the 3D image of the subject then appears in front of the doctor, who can examine the human brain without actually performing any operation or image scanning (CT scan). In the third step, we connect a DL model, which can help medical professionals make better-informed decisions.
Fig. 8.3: Proposed framework.
In this proposed system, we used the DL algorithm developed by Shiqi Wang and his team to diagnose a disease like brain tumor from a CT scan image [3]. Once a CT scan image is produced by the scanner, the system can identify whether the image belongs to the brain tumor category or is benign. For example, if one doctor has to diagnose 500 patients in a day, and more than 400 of them show symptoms of brain tumor, it becomes very difficult for the doctor to examine the CT scan images of all 400 patients in one day, and diagnosis could take weeks. Using a trained DL model, however, the 400 patients are automatically sorted into two categories [4]: those who actually have a brain tumor and those who are benign. Those identified as having a brain tumor are given priority and addressed first, after which the doctor addresses the people who are not affected. Instead of DL models, ML models can also be used for generating intelligence and decision-making power; ML model performance is also discussed in this chapter, and the models that give more accuracy can be used to generate the AI.
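The triage described above (flagged tumor cases examined before benign-looking ones) can be sketched as follows. The patient IDs and probabilities are hypothetical, the 0.5 threshold is an assumption, and the probabilities would in practice come from the trained classifier.

```python
def triage(patients):
    # `patients` is a list of (patient_id, tumor_probability) pairs.
    # Flagged cases are ordered most-likely-first; cleared cases follow.
    threshold = 0.5
    flagged = sorted((p for p in patients if p[1] >= threshold),
                     key=lambda p: p[1], reverse=True)
    cleared = [p for p in patients if p[1] < threshold]
    return flagged + cleared

# Hypothetical day: P2 and P3 look like tumor cases, P1 looks benign
queue = triage([("P1", 0.12), ("P2", 0.91), ("P3", 0.67)])
```

With a real model, the same ordering step would run over all 500 scans of the example above, so the doctor starts with the highest-risk patients.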
8.7 Data flow diagram
A data flow diagram (DFD) is a graphical representation of the flow of data through the DL model. Figure 8.4 shows the implementation of our work. First, we identify all the external sources of information from which we collect the data related to melanoma and normal moles. We then preprocess the data, discard everything unrelated and curate a clean dataset. We divide the data in a 3:1 ratio: 75% of the images go into the training image dataset and 25% into the test image dataset. Using the training images, we extract features with the CNN and train a model [5] on those features; using the test image dataset, we measure how accurately the model was trained; and afterward we use a validation dataset to assess the model's performance. The flow in Fig. 8.4 runs: external sources of information → extract data → data preprocessing → clean dataset → data preparation (training/test images) → feature selection → convolutional neural network → classifier (Y = classifier.predict()); if Y == 0 the image is classified as benign, otherwise as brain tumor.
Fig. 8.4: Data flow diagram.
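The 75%/25% split step in the data flow above might look like this in Python. The filenames and the fixed random seed are illustrative, not part of the chapter's actual pipeline.

```python
import random

def split_dataset(images, train_fraction=0.75, seed=42):
    # Shuffle a copy of the records, then cut into training and test sets (3:1)
    rng = random.Random(seed)
    shuffled = images[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical scan filenames standing in for the real image records
records = [f"scan_{i:03d}.png" for i in range(100)]
train, test = split_dataset(records)
```

Shuffling before the cut matters: if the records were stored grouped by class, an unshuffled split would put most of one class in training and the other in test.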
After training our model, we have to check whether it works well on a new MRI image of the brain, so that diagnosis can happen at an early stage; a misdiagnosed brain tumor affects the patient severely and may even result in death. We therefore validate our model on image datasets as well as on random brain tumor images [6]. An overview of the implementation of our work is shown in Fig. 8.5.
Fig. 8.5: Explanation of deep learning model.
8.8 Result
8.8.1 Convolution neural network
A CNN has different layers, such as convolution layers, max pooling layers and dropout layers, each with a specific function; these layers extract features from the training image dataset, and the model is trained on the extracted features [7]. In our work the total number of parameters is 167,105, and our model is trained on all of them, so the number of trainable parameters is also 167,105; the role of each layer is shown in Fig. 8.6. The training dataset is used to train the model, and the validation dataset is used to measure its performance; the curves of training and validation accuracy [8] are shown in Fig. 8.7. Training loss is the error that occurs while running the network, and validation loss is the error obtained after running the validation dataset through the trained network; this graph is shown in Fig. 8.8. In this chapter we show the importance of technology in healthcare. With the CNN algorithm [9] we are able to diagnose the disease at an early stage and obtain more than 98% accuracy: out of 100 cases, our model predicts correctly 98 times whether the person has a brain tumor, and as the variety of data grows, its accuracy increases automatically, because a DL algorithm has the ability to learn from experience. This will solve the problem of delayed medical attention for people with tumors who require immediate medication. The approach is environment-friendly and does not use infrared radiation, only a simple system. Moreover, it also helps in reducing the
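For readers who want to check how a parameter total such as 167,105 arises, the standard counting rules for convolutional and dense layers can be sketched as below. The example layer sizes are hypothetical and are not the chapter's actual architecture.

```python
def conv2d_params(kernel_h, kernel_w, in_channels, out_channels):
    # Each filter has kernel_h * kernel_w * in_channels weights plus one bias
    return (kernel_h * kernel_w * in_channels + 1) * out_channels

def dense_params(in_features, out_features):
    # A weight per input-output pair plus one bias per output unit
    return (in_features + 1) * out_features

# e.g. a 3x3 convolution mapping a grayscale image to 32 feature maps
small_conv = conv2d_params(3, 3, 1, 32)   # 320 parameters
```

Note that pooling and dropout layers contribute no trainable parameters, which is why the total is dominated by the convolution and dense layers.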
Fig. 8.6: DNN layers’ explanation.
Fig. 8.7: Training and validation accuracy.
Fig. 8.8: Training and validation loss.
delay in medical attention once a brain tumor is detected. This project can be a game changer, leading to fewer brain tumor deaths all over the world, and it can be integrated into the existing medical industry: if this system is integrated with CT scan and MRI machines, the algorithm can flag the person being scanned even before the report is sent to the doctor. The system has future scope because the model is continuously trained on new data; a trained model not only gives the medical results of the patient but, with further work, can also analyze real-time data and propose initial-stage remedies to put the patient onto the medical journey of recovery.
Fig. 8.9: Count of melanoma and non-melanoma.
8.8.2 Machine learning classifier
The dataset contains 3,762 images belonging to two classes: class 1 means the image shows a tumor, and class 0 means it does not. Of the 3,762 images, 2,079 belong to class 0 and 1,683 to class 1. The dataset contains first-order features, which are computed solely from the values of individual pixels in the image and do not express their relationships to other image pixels, for example, the mean/median/maximum/minimum pixel values in the image. Of the 15 features in the dataset, 14 are independent and 1 is dependent. The correlation between these features (class, mean, variance, standard deviation, entropy, skewness, kurtosis, contrast, energy, ASM, homogeneity, dissimilarity, correlation and coarseness) is shown in Fig. 8.10.
Fig. 8.10: Correlation between features.
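First-order features of the kind described above, computed purely from individual pixel intensities, can be sketched with the standard library alone. The tiny pixel list is illustrative; a real feature extractor would run over the full flattened image.

```python
import statistics

def first_order_features(pixels):
    # Features computed from raw pixel intensities, with no spatial information
    return {
        "mean": statistics.mean(pixels),
        "median": statistics.median(pixels),
        "variance": statistics.pvariance(pixels),
        "std": statistics.pstdev(pixels),
        "minimum": min(pixels),
        "maximum": max(pixels),
    }

feats = first_order_features([10, 12, 200, 35, 12])
```

Texture features such as contrast, homogeneity or dissimilarity are second-order: they are computed from co-occurrences of pixel pairs, which is exactly the spatial relationship the first-order features above ignore.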
Fig. 8.11: Scatter plot of homogeneity, entropy and energy.
8.8.3 Gradient boost

Boosting is a strategy for converting weak learners into strong learners: each new tree is fit on a modified version of the original dataset. The gradient boosting machine (GBM) is most easily explained by first introducing the AdaBoost algorithm. AdaBoost begins by training a decision tree in which every observation is assigned an equal weight. After evaluating this first tree, we increase the weights of the observations that are hard to classify and lower the weights of those that are easy to classify. The second tree is then grown on this weighted data; the idea is to improve upon the predictions of the first tree, so our new model is Tree 1 + Tree 2. We then compute the classification error of this two-tree ensemble and grow a third tree to predict the updated residuals. We repeat this process for a specified number of iterations. Subsequent trees help us classify the observations that were not well handled by the previous trees, and the prediction of the final ensemble model is the weighted sum of the predictions made by the individual trees. Gradient boosting likewise trains many models in an additive, sequential manner. The major difference between AdaBoost and the gradient boosting algorithm is how the two identify the shortcomings of the weak learners (e.g., decision trees): while AdaBoost identifies the shortcomings through high-weight data points, gradient boosting does so through the gradients of the loss function (y = ax + b + e; here e deserves special mention as the error term).
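The reweighting scheme described above can be made concrete. The following is a minimal illustrative sketch of AdaBoost with decision stumps on a toy dataset, not the classifier used in this chapter (which operates on the 14 image features):

```python
import numpy as np

def stump_predict(X, feat, thresh, polarity):
    # A decision stump: a one-split tree on a single feature.
    return np.where(polarity * X[:, feat] < polarity * thresh, 1, -1)

def adaboost_fit(X, y, n_rounds=10):
    """y in {-1, +1}. Each round fits the best stump under the current
    observation weights, then up-weights misclassified observations."""
    n = len(y)
    w = np.full(n, 1.0 / n)  # every observation starts with equal weight
    ensemble = []
    for _ in range(n_rounds):
        best, best_err = None, np.inf
        for feat in range(X.shape[1]):
            for thresh in np.unique(X[:, feat]):
                for pol in (1, -1):
                    pred = stump_predict(X, feat, thresh, pol)
                    err = w[pred != y].sum()  # weighted error
                    if err < best_err:
                        best_err, best = err, (feat, thresh, pol)
        err = max(best_err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # this stump's vote weight
        pred = stump_predict(X, *best)
        w *= np.exp(-alpha * y * pred)  # raise weights of hard points
        w /= w.sum()
        ensemble.append((alpha, best))
    return ensemble

def adaboost_predict(ensemble, X):
    # Final prediction: sign of the weighted sum of stump votes.
    score = sum(a * stump_predict(X, *s) for a, s in ensemble)
    return np.sign(score)
```

Gradient boosting replaces the explicit reweighting with fitting each new learner to the gradients (residuals) of the loss.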
Fig. 8.12: Confusion matrix of test data.
True positive: 1,888
True negative: 1,322
False positive: 10
False negative: 42
Accuracy on test data is 98.40%
Fig. 8.13: Confusion matrix of external test data.
True positive: 175
True negative: 311
False positive: 6
False negative: 8
Accuracy on test data is 97.20%
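Accuracy figures like these follow directly from the four confusion-matrix counts; the one-line helper below (illustrative, not from the chapter) makes the arithmetic explicit:

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of all predictions that were correct."""
    return (tp + tn) / (tp + tn + fp + fn)

# Fig. 8.13 counts: (175 + 311) / 500 = 0.972, i.e. 97.20%
```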
8.8.4 XG boost

XGBoost, or extreme gradient boosting, is one of the best-known gradient boosting (ensemble) techniques, offering improved performance and speed among tree-based (sequential decision tree) machine learning algorithms. XGBoost was created by Tianqi Chen and initially maintained by the distributed (deep) machine learning community, and it is among the most widely used algorithms in applied machine learning.
8.8.5 MobileNet

MobileNet is a network model that uses depth-wise separable convolution as its basic unit. A depth-wise separable convolution has two layers: a depth-wise convolution and a point-wise convolution. The Dense1-MobileNet model treats the depth-wise convolution layer and the point-wise convolution layer as two separate convolution layers; that is, the input feature maps of each depth-wise convolution layer in the dense block are the superposition of the output feature maps of the previous convolution layers (Fig. 8.16). Since depth-wise convolution is a single-channel convolution, the number of output feature maps of the central depth-wise convolution layer equals the number of its input feature maps, which is the sum of the output feature maps of all the previous layers.
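A quick way to see why the depth-wise separable unit is attractive is to count its weights. The helper below is a rough illustration (biases and batch-normalization parameters ignored), not taken from the chapter:

```python
def standard_conv_params(k, c_in, c_out):
    # A k x k standard convolution mixes all input channels at once.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depth-wise: one k x k filter per input channel (no channel mixing),
    # point-wise: a 1 x 1 convolution that mixes the channels afterwards.
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    return depthwise + pointwise
```

For a 3 × 3 convolution mapping 32 channels to 64, the standard layer needs 18,432 weights while the separable version needs 2,336, roughly an 8× reduction.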
Fig. 8.14: Confusion matrix of test data.
True positive: 1,892
True negative: 1,335
False positive: 6
False negative: 29
Accuracy on test data is 97.20%
Fig. 8.15: Confusion matrix of external test data.
True positive: 175
True negative: 312
False positive: 6
False negative: 7
Accuracy on test data is 97.20%
DenseNet contains a transition layer between two consecutive dense blocks. The transition layer reduces the number of feature maps by using a 1 × 1 convolution kernel and halves their spatial size by using a 2 × 2 average pooling layer; both operations ease the computational load of the network. Unlike DenseNet, the Dense1-MobileNet model has no transition layer between two consecutive dense blocks, for two reasons: (1) in MobileNet, batch normalization is carried out after every convolution layer, and the last layer of a dense block is a 1 × 1 point-wise convolution layer, which can reduce the number of feature maps; (2) moreover, MobileNet shrinks the feature map by using a strided convolution layer rather than a pooling layer, that is, it directly convolves the output feature maps of the previous point-wise convolution layer with stride 2 to reduce the size of the feature map.
Fig. 8.16: Brain tumor images.
In our work, the model has 2,259,265 parameters in total, of which 1,281 are trainable; the role of each layer is shown in Fig. 8.17.
Fig. 8.17: MobileNet model summary.
Confusion matrix
True positive: 349
True negative: 290
False positive: 51
False negative: 63
MobileNet accuracy on test data is 84.86%
8.9 Conclusion

A framework consists of tools, technologies and processes, and such frameworks can be used to solve domain-specific problems. In this chapter, we discussed combining VR with AI to solve problems in the healthcare domain; for example, quick diagnosis of brain tumors helps save patients' lives. VR can ease the process of viewing a 3D image of a human brain without any physical operation to examine the patient. This framework will help medical professionals, acting as an intelligent assistant for more informed decisions. The merger of VR with AI is a revolutionary idea that can also be applied to problems in other domains. Several AI models were proposed here to classify brain tumors.
Brief biography

Dr. D. Jude Hemanth received his BE in ECE from Bharathiar University in 2002, ME in communication systems from Anna University in 2006 and PhD from Karunya University in 2013. His research areas include computational intelligence and image processing. He has authored more than 180 research papers in reputed SCIE-indexed international journals and Scopus-indexed international conferences; his cumulative impact factor is more than 270. He has published 37 edited books with reputed publishers such as Elsevier, Springer and IET. He has been serving as associate editor/scientific editor of SCIE-indexed international journals such as IEEE Journal of Biomedical and Health Informatics (IEEE-JBHI), Soft Computing (Springer), IET Image Processing, Mathematical Problems in Engineering, PeerJ Computer Science and Dyna (Spain), and he holds associate editor/guest editor positions with many Scopus journals. He serves as series editor of the "Biomedical Engineering" series (Elsevier), editorial board member of the ASTI series (Springer) and of the "Robotics and Healthcare" series (CRC Press). He has received a project grant of £35,000 from the Government of the UK (GCRF scheme) with collaborators from the University of Westminster, UK, and has completed two funded research projects from CSIR, Govt. of India, and DST, Govt. of India. He also serves as "research scientist" of the Computational Intelligence and Information Systems (CI2S) Lab, Argentina; LAPISCO research lab, Brazil; RIADI Lab, Tunisia; the Research Centre for Applied Intelligence, University of Craiova, Romania; and the e-health and telemedicine group, University of Valladolid, Spain. He has been an organizing committee member of several international conferences across the world, including in Portugal, Romania, the UK, Egypt and China, and has delivered more than 150 keynote talks/invited lectures at international conferences/workshops.
He holds professional membership with the IEEE technical committee on neural networks (IEEE Computational Intelligence Society), the IEEE technical committee on soft computing (IEEE Systems, Man and Cybernetics Society) and ACM. He is an NVIDIA "University Ambassador" and an NVIDIA certified instructor for deep learning courses. His name appears in the "Top 2% leading world scientists 2021" list released by Stanford University, USA. Currently, he is working as a professor at the Department of ECE, Karunya University, Coimbatore, India. He also holds the position of "visiting professor" in the Faculty of Electrical Engineering and Information Technology, University of Oradea, Romania.

Dr. Madhulika Bhatia is working as an associate professor at the Department of Computer Science and Engineering, Amity School of Engineering and Technology, Amity University, Noida. She holds a diploma in computer science and engineering, a BE in computer science and engineering, an MBA in information technology, an MTech in computer science and a PhD from Amity University, Noida. She has a total of 15 years of teaching experience. She has published almost 32 research papers in national and international conferences and journals. She is also the author of two books and has filed two provisional patents. She has attended and organized many workshops, guest lectures and seminars. She is a member of many technical societies such as IET, ACM, UACEE and IEEE. She has reviewed for Elsevier-Heliyon, IGI and the Indian Journal of Science and Technology, and did editorial work for Springer Nature, Switzerland, for a book chapter in data visualization and knowledge engineering. She has guided 5 MTech theses and around 50 BTech major and minor projects, and is guiding PhD scholars.

Dr. Isabel De La Torre Diez is a full professor at the Department of Signal Theory and Communications and Telematics Engineering, the University of Valladolid, Spain.
Her teaching and research interests include the development and evaluation of telemedicine applications, e-health, m-health, EHRs (electronic health records), EHR standards, biosensors, QoS (quality of service), QoE (quality of experience) and machine learning applied to the health field. She is the leader of the GTe Research Group (http://sigte.tel.uva.es) at the University of Valladolid. She is the author of more than 210 papers in SCI journals, peer-reviewed conferences, proceedings, books and international book chapters. She has coauthored 16 registered innovative software products. She has been involved in more than 70 program committees of international conferences until 2022 and has participated in or coordinated 45 funded European, national and regional research projects.
Index

4-D films 2
accuracy 109, 113, 114–116
artificial intelligence 27, 32–33, 36–37, 71, 73–74, 75–79, 82, 109
artificial life 27, 33, 37
assistance 51
augmented reality 4, 29–31, 38, 92, 103
basic 3D obvious test structure 3
Binocular-Omni-Orientation Screen (Effect) 3
classification 29
CNN 109, 115–116
computer generated 1, 4, 7, 31, 87, 103, 105–106
convergence 28, 36, 37
digital air stream 3
fitting rooms 87, 98, 99
head-mounted grandstand 2
healthcare 48, 71–72, 75–78, 82–84, 106–108, 116
HTC Vive 2, 3
machine learning 45, 72, 78, 80, 104, 111, 119
methodology 16–17
neural network 32, 109, 112, 116
non-immersive expanded comprehension 2
Oculus Quest 2
pain 46–53
pilot test programs 2
precision 79–81
segment 22, 72
semi-immersive systems 35
shopping 57–61
– intelligent shopping 66, 93
– shopping malls 57, 93–94
– virtual shopping 89, 92
simulations 1, 7, 9, 10, 18, 22, 28, 33, 34, 87–88, 105
Unity 15–16, 19, 25
video surveillance 15–17
VIDEOPLACE, a phony reality 3
virtual reality 1, 3–11, 45–48, 51–52, 54, 71, 78, 87–90, 103, 105
– immersive VR 2, 38, 45
– non-immersive VR 2
– VR experience 2, 5, 45, 47
virtual trial room 92–93
visually coupled airborne structure test system 3
VR and AR enhancements 3
De Gruyter frontiers in computational intelligence Already published in the series Volume 13: Computational Intelligence in Software Modeling Vishal Jain, Jyotir Moy Chatterjee, Ankita Bansal, Utku Kose, Abha Jain (Eds.) ISBN 978-3-11-070543-0, e-ISBN (PDF) 978-3-11-070924-7, e-ISBN (EPUB) 978-3-11-070934-6 Volume 12: Artificial Intelligence and Internet of Things for Renewable Energy Systems Neeraj Priyadarshi, Sanjeevikumar Padmanaban, Kamal Kant Hiran, Jens Bo Holm-Nielson, Ramesh C. Bansal (Eds.) ISBN 978-3-11-071379-4, e-ISBN (PDF) 978-3-11-071404-3, e-ISBN (EPUB) 978-3-11-071415-9 Volume 11: Cyber Crime and Forensic Computing. Modern Principles, Practices, and Algorithms Gulshan Shrivastava, Deepak Gupta, Kavita Sharma (Eds.) ISBN 978-3-11-067737-9, e-ISBN (PDF) 978-3-11-067747-8, e-ISBN (EPUB) 978-3-11-067754-6 Volume 10: Blockchain 3.0 for Sustainable Development Deepak Khazanchi, Ajay Kumar Vyas, Kamal Kant Hiran, Sanjeevikumar Padmanaban (Eds.) ISBN 978-3-11-070245-3, e-ISBN (PDF) 978-3-11-070250-7, e-ISBN (EPUB) 978-3-11-070257-6 Volume 9: Machine Learning for Sustainable Development Kamal Kant Hiran, Deepak Khazanchi, Ajay Kumar Vyas, Sanjeevikumar Padmanaban (Eds.) ISBN 978-3-11-070248-4, e-ISBN (PDF) 978-3-11-070251-4, e-ISBN (EPUB) 978-3-11-070258-3 Volume 8: Internet of Things and Machine Learning in Agriculture. Technological Impacts and Challenges Jyotir Moy Chatterjee, Abhishek Kumar, Pramod Singh Rathore, Vishal Jain (Eds.) ISBN 978-3-11-069122-1, e-ISBN (PDF) 978-3-11-069127-6, e-ISBN (EPUB) 978-3-11-069128-3 Volume 7: Deep Learning. Research and Applications Siddhartha Bhattacharyya, Vaclav Snasel, Aboul Ella Hassanien, Satadal Saha, B. K. Tripathy (Eds.) ISBN 978-3-11-067079-0, e-ISBN (PDF) 978-3-11-067090-5, e-ISBN (EPUB) 978-3-11-067092-9
Volume 6: Quantum Machine Learning Siddhartha Bhattacharyya, Indrajit Pan, Ashish Mani, Sourav De, Elizabeth Behrman, Susanta Chakraborti (Eds.) ISBN 978-3-11-067064-6, e-ISBN (PDF) 978-3-11-067070-7, e-ISBN (EPUB) 978-3-11-067072-1 Volume 5: Machine Learning Applications. Emerging Trends Rik Das, Siddhartha Bhattacharyya, Sudarshan Nandy (Eds.) ISBN 978-3-11-060853-3, e-ISBN (PDF) 978-3-11-061098-7, e-ISBN (EPUB) 978-3-11-060866-3 Volume 4: Intelligent Decision Support Systems. Applications in Signal Processing Surekha Borra, Nilanjan Dey, Siddhartha Bhattacharyya, Mohamed Salim Bouhlel (Eds.) ISBN 978-3-11-061868-6, e-ISBN (PDF) 978-3-11-062110-5, e-ISBN (EPUB) 978-3-11-061871-6 Volume 3: Big Data Security Shibakali Gupta, Indradip Banerjee, Siddhartha Bhattacharyya (Eds.) ISBN 978-3-11-060588-4, e-ISBN (PDF) 978-3-11-060605-8, e-ISBN (EPUB) 978-3-11-060596-9 Volume 2: Intelligent Multimedia Data Analysis Siddhartha Bhattacharyya, Indrajit Pan, Abhijit Das, Shibakali Gupta (Eds.) ISBN 978-3-11-055031-3, e-ISBN (PDF) 978-3-11-055207-2, e-ISBN (EPUB) 978-3-11-055033-7 Volume 1: Machine Learning for Big Data Analysis Siddhartha Bhattacharyya, Hrishikesh Bhaumik, Anirban Mukherjee, Sourav De (Eds.) ISBN 978-3-11-055032-0, e-ISBN (PDF) 978-3-11-055143-3, e-ISBN (EPUB) 978-3-11-055077-1