Virtual Reality and the Built Environment [2 ed.] 9781317211136, 1317211138


English Pages 152 [161] Year 2018


Table of contents :
Cover
Title
Copyright
Contents
Preface
1 Introduction
1.1 The virtuality continuum
1.2 What are VR systems?
1.3 Characteristics of VR systems
1.4 Changing experiences in the built environment
1.5 Cyber-physical environments and the ‘digital twin’
1.6 Technology choices: information, users and tasks
1.7 The structure of this book
2 User experience in VR systems
2.1 Perceiving digital information
2.2 Shaping user experience
2.3 Development of virtual and augmented reality systems
2.4 The future of VR
3 Visualizing city operations
3.1 1990–1999: early city models
3.2 2000–2009: multi-use urban models
3.3 2010–2019: new cyber-physical interactions and relationships
3.4 2020 onwards: starting with operations
4 Visualizing design
4.1 1990–1999: design through digital media
4.2 2000–2009: design reviews, choosing options and marketing
4.3 2010–2019: transforming design practice
4.4 2020 onwards: towards the future of VR in design
5 Visualizing construction
5.1 1990–1999: from design into construction
5.2 2000–2009: simulating construction and operations
5.3 2010–2019: training operators and augmenting operations on site
5.4 2020 onwards: towards the future of VR in construction
6 Towards digital maturity
6.1 Digital adolescence
6.2 Defining the value proposition for VR systems
6.3 VR strategy: growing and developing capabilities
6.4 The future of VR in the built environment
Index


Virtual Reality and the Built Environment

Like the first edition, the central question this book addresses is how virtual reality can be used in the design, production and management of the built environment. The book aims to consider three key questions. What are the business drivers for the use of virtual reality? What are its limitations? How can virtual reality be implemented within organizations? Using international case studies, it answers these questions whilst addressing the growth in the recent use of building information modelling (BIM) and the renewed interest in virtual reality to visualize and understand data to make decisions. With the aim of inspiring and informing future use, the authors take a fresh look at current applications in the construction sector, situating them within a broader trajectory of innovation.

The new edition expands the scope to consider both immersive virtual reality as a way of bringing professionals inside a building information model, and augmented reality as a way of taking this model and related asset information out to the job-site. The updated edition also considers these technologies in the context of other developments that were in their infancy when the first edition was written, such as laser scanning, mobile technologies and big data. Virtual Reality and the Built Environment is essential reading for professionals in architecture, construction, design, surveying and engineering, and for students on related courses who need an understanding of BIM, CAD and virtual reality in the sector.

Jennifer Whyte is Laing O’Rourke / Royal Academy of Engineering Professor of Systems Integration at the Centre for Systems Engineering and Innovation in the Department of Civil and Environmental Engineering at Imperial College London, UK.

Dragana Nikolić is Lecturer in Digital Architecture in the School of the Built Environment at the University of Reading, UK.

Virtual Reality and the Built Environment Second edition

Jennifer Whyte and Dragana Nikolić

Second edition published 2018
by Routledge
2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
and by Routledge
711 Third Avenue, New York, NY 10017

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2018 Jennifer Whyte and Dragana Nikolić

The right of Jennifer Whyte and Dragana Nikolić to be identified as authors of this work has been asserted by them in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

First edition published by Routledge 2002

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloguing-in-Publication Data
Names: Whyte, Jennifer, author. | Nikolić, Dragana, author.
Title: Virtual reality and the built environment / Jennifer Whyte and Dragana Nikolić.
Description: Second edition. | Milton Park, Abingdon, Oxon ; New York, NY : Routledge, 2018. | Includes bibliographical references.
Identifiers: LCCN 2017040815 | ISBN 9781138668751 (hbk : alk. paper) | ISBN 9781138668768 (pbk : alk. paper) | ISBN 9781315618500 (ebk)
Subjects: LCSH: Building—Data processing. | Municipal engineering—Data processing. | Virtual reality in architecture. | Building information modeling. | Virtual reality—Industrial applications.
Classification: LCC TH437 .W547 2018 | DDC 624.0285—dc23
LC record available at https://lccn.loc.gov/2017040815

ISBN: 978-1-138-66875-1 (hbk)
ISBN: 978-1-138-66876-8 (pbk)
ISBN: 978-1-315-61850-0 (ebk)

Typeset in Charter and FS Albert by Apex CoVantage, LLC

Visit the companion website: www.routledge.com/cw/whyte

Contents

Preface vii

1 Introduction 1
1.1 The virtuality continuum 2
1.2 What are VR systems? 3
1.3 Characteristics of VR systems 4
1.4 Changing experiences in the built environment 7
1.5 Cyber-physical environments and the ‘digital twin’ 8
1.6 Technology choices: information, users and tasks 10
1.7 The structure of this book 12

2 User experience in VR systems 14
2.1 Perceiving digital information 15
2.2 Shaping user experience 23
2.3 Development of virtual and augmented reality systems 33
2.4 The future of VR 39

3 Visualizing city operations 43
3.1 1990–1999: early city models 46
3.2 2000–2009: multi-use urban models 52
3.3 2010–2019: new cyber-physical interactions and relationships 60
3.4 2020 onwards: starting with operations 68

4 Visualizing design 72
4.1 1990–1999: design through digital media 76
4.2 2000–2009: design reviews, choosing options and marketing 81
4.3 2010–2019: transforming design practice 88
4.4 2020 onwards: towards the future of VR in design 98

5 Visualizing construction 103
5.1 1990–1999: from design into construction 107
5.2 2000–2009: simulating construction and operations 111
5.3 2010–2019: training operators and augmenting operations on site 116
5.4 2020 onwards: towards the future of VR in construction 124

6 Towards digital maturity 129
6.1 Digital adolescence 130
6.2 Defining the value proposition for VR systems 134
6.3 VR strategy: growing and developing capabilities 139
6.4 The future of VR in the built environment 143

Index 147

Preface

How can virtual reality be used in the operation, design and construction of the built environment? The first edition of this book came out in the childhood of virtual reality use in the built environment, but the rapid pace of technological development in VR systems and digital platforms presents us with the challenge of making appropriate choices for using VR to obtain real value. In this book we consider some key questions: What is virtual reality? How is it used in the built environment? What is the value proposition? What can we anticipate about its use in the future? These questions are pertinent as the increasing use of digital data through Building Information Modelling (BIM) and related technologies leads to a renewed interest in, and more mature use of, virtual reality to visualize and understand data to make decisions.

In this second edition we examine how the ongoing process of growth and development involved in the uptake of VR often introduces new visualization approaches into practices that are already digital. We expand the scope to consider both immersive virtual reality as a way of bringing professionals inside a building information model, and augmented reality as a way of taking this model and related asset information out to the job-site. We consider these approaches in the context of other technological developments that were in their infancy when the first edition was written, such as laser scanning, mobile technologies and big data.

This new edition would never have been completed without the effort and good will of a very large number of people. We are grateful to editors Catherine Holdsworth and Matt Turpie for their patience and encouragement, Seth Wilberding for his input and advice and Ranjith Soman for his comments on earlier drafts of this book. We would also like to thank the built environment professionals who shared their insights and examples. In addition to those who helped with the first edition, we are indebted to many others including James Bowles, Tom Gunkel, Mats Eliasson, Rachel Hain, Jason Hawthorne, Ricardo Khan, John Messner, Alvise Simondetti and John Taylor.

Jennifer Whyte
Department of Civil and Environmental Engineering, Imperial College London, UK

Dragana Nikolić
School of the Built Environment, University of Reading, UK

December 2017


Chapter 1

Introduction

There is renewed excitement about virtual reality and the built environment. A range of virtual, mixed and augmented reality approaches are increasingly used to represent the built environment and to make decisions about the operation, design and construction of new buildings and infrastructure. Central to this is well-structured digital information, which, as computing becomes ubiquitous, users can access unbound by place, form or device.

In recent decades, the exponential growth in computing power and early experiments with virtual reality (VR) systems have given way to a more systematic use of digital information on buildings and infrastructure and far more powerful and widely distributed VR systems and applications across a range of desktop, laptop and mobile devices. Furthermore, immersive visualization applications using head-mounted displays (HMDs) and augmented reality (AR) devices have become increasingly affordable to consumers and small- and medium-sized enterprises, and can be used both to augment existing settings and to understand future interventions in the built environment. These provide new opportunities for innovation, with many professionals experimenting with these technologies in practice. Yet we are also acutely aware that technological developments tend to outpace our ability to fully understand how to maximize the benefits of using them.

The growing use of immersive and augmented visualization approaches is also transforming our relationship with the built environment, raising new questions for professionals who seek to leverage VR systems in the planning, architecture, engineering, construction and operations fields. It is these questions that we explore in this book, including:

1 How can applying virtual and augmented reality approaches help us better understand the built environment?
2 How can built environment professionals use these approaches to improve the design, operation and maintenance of both new and existing built environment projects?
3 How can we choose between VR systems and visualization approaches for different built environment tasks?


In this chapter, we introduce the concept of the virtuality continuum as a framework for discussing VR systems, including augmented reality. Specifically, we discuss these systems in the context of changing user experiences within the built environment, and consider how, as the environment becomes more ‘cyber-physical’, digital information about buildings and infrastructure can be seen as their ‘digital twin’. We also discuss how, as authors of digital information about the built environment and of digitally mediated experiences, we face choices in how we present information for particular users and tasks.

1.1 The virtuality continuum

To better discern the various types of virtual and augmented reality, and the extent to which they represent aspects of ‘virtual’ and ‘real’ environments, Milgram and Kishino (1994) proposed the concept of a virtuality continuum (Figure 1.1). The continuum spans from entirely real to entirely virtual environments, with mixed reality environments being those that straddle the actual and virtual worlds, combining real and virtual objects. In practice, mixed (or hybrid) reality approaches that supplement actual environments with virtual information are more commonly referred to as augmented reality. Examples of augmented reality (AR) applications include mobile phone applications that overlay virtual directions, retail locations and other information onto views of actual places. On the other hand, augmented virtuality, a term primarily used in a research context, refers to applications in which users predominantly view digital information supported by real contextual data, such as navigational GPS devices.

Thus, in this book, we take the approach that virtual reality systems are not a monolithic concept, and we adopt a more flexible view of designing approaches to visualizing and interacting with information. In the following sections, we discuss the characteristics of virtual reality systems and examine their defining components and attributes. In addition, we situate this discussion of the use of both virtual and augmented reality technologies using examples of various use scenarios, tasks and user experiences. This approach allows us to understand how virtual and augmented reality applications can change user perceptions and influence the understanding of digital information, which can consequently affect the design and operation of built environment projects. To achieve this, we present case studies, user experiences and ongoing research initiatives in order to identify how virtual and augmented reality are used in the architecture, engineering and construction fields.

Figure 1.1 The virtuality continuum. Source: adapted from Milgram and Kishino, 1994.
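As a rough illustration only (this sketch is ours, not the book's; the continuum is conceptual rather than a measured quantity, and all names below are hypothetical), the categories along Milgram and Kishino's continuum might be encoded as positions between fully real and fully virtual:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Display:
    """A display system placed on the virtuality continuum.

    `virtuality` runs from 0.0 (an entirely real environment)
    to 1.0 (an entirely virtual environment); everything in
    between is some form of mixed reality.
    """
    name: str
    virtuality: float  # 0.0 = real, 1.0 = virtual

    def category(self) -> str:
        if self.virtuality == 0.0:
            return "real environment"
        if self.virtuality == 1.0:
            return "virtual environment"
        # Mixed reality: AR keeps a predominantly real-world view,
        # augmented virtuality a predominantly virtual one.
        return ("augmented reality (mixed)" if self.virtuality < 0.5
                else "augmented virtuality (mixed)")


# Illustrative placements only.
examples = [
    Display("site walk-through, unaided", 0.0),
    Display("phone app overlaying directions on a street view", 0.2),
    Display("GPS navigator showing real traffic data", 0.8),
    Display("immersive HMD walk-through of a building model", 1.0),
]

for d in examples:
    print(f"{d.name}: {d.category()}")
```

The design choice here mirrors the book's point that VR systems are not a monolithic concept: a single classification axis, not separate unrelated technologies.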


1.2 What are VR systems?

Figure 1.2 VR system components, including input and output devices, data, software, hardware and users

The goal of virtual reality systems is to give users compelling, intuitively interactive and immersive experiences within virtual environments. In this book, we apply the term VR system to include both virtual environments that lie completely within a virtual world and those that project digital data onto the actual world. As shown in Figure 1.2, we distinguish the various user experiences that are achieved through a combination of input and output devices, which typically connect to a VR model that is developed from one or more models and other data inputs using particular software and hardware. Generally speaking, these systems allow users to interact with and experience information that can either be grounded in physical reality (i.e. actual information) or be entirely computer-generated. However, more than just a type of technology confined to the use of specific hardware, VR facilitates interactions between users and a simulated reality.

The Oxford English Dictionary definition of the term virtual reality emphasizes both the experiential and technological aspects of this approach:

The computer-generated simulation of a three-dimensional image or environment that can be interacted with in a seemingly real or physical way by a person using special electronic equipment, such as a helmet with a screen inside or gloves fitted with sensors. (OED, 2017)

As VR technology has matured, it has seen a shift from a technology-centred ‘goggles and gloves’ view of VR applications based on hardware and software to a broader and more integrative media approach centred on user experience, such as understanding and interacting with proposed design and construction projects. This latter approach allows us to consider virtual reality as a flexible system that presents a choice of input and output devices that can be tailored to how we wish to experience VR information for a broad range of tasks.

As discussed previously, when virtual information is superimposed onto the physical world to augment the actual settings, we use the term augmented reality. Generally, both AR and VR applications use the same system components, but AR affords users the additional capacity to display digital information over “a predominantly real-world view” (Wang et al., 2013: 2). And while AR has a long development history, applications such as the smartphone game Pokémon Go,1 which challenges players to find digital creatures that are ‘hidden’ in physical reality, exemplify a shift in the perception of AR from a niche technology to a broad consumer success.
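The component view sketched in Figure 1.2 can be expressed as a simple configuration object. This is an illustrative sketch of ours, not a specification from the book; the class, its fields and the see-through heuristic are all hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class VRSystem:
    """Illustrative grouping of the components a VR system combines:
    data sources feed a model, which software and hardware render
    through output devices, while input devices carry user actions."""
    data_sources: list[str]    # e.g. BIM models, scans, sensor feeds
    input_devices: list[str]   # e.g. controllers, position trackers
    output_devices: list[str]  # e.g. HMD, large screen, headphones
    software: str              # rendering / interaction software
    hardware: str              # workstation, GPU, tracking rig
    users: list[str] = field(default_factory=list)

    def is_augmented(self) -> bool:
        # Hypothetical heuristic: AR keeps a predominantly real-world
        # view, so the output path includes a see-through display.
        return any("see-through" in d for d in self.output_devices)


design_review = VRSystem(
    data_sources=["building information model"],
    input_devices=["hand-held 6-DOF controller", "head tracker"],
    output_devices=["head-mounted display"],
    software="real-time rendering engine",
    hardware="GPU workstation",
    users=["architect", "client"],
)
print(design_review.is_augmented())  # False: an immersive VR, not AR, setup
```

Note that, as the text says, AR and VR share the same component slots; only the output path differs.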

1.3 Characteristics of VR systems

While the aim of VR applications is the illusion of an unmediated experience, users experience VR models through input and output devices, and this experience is influenced by the hardware and software of the VR system in which the model is displayed. In the following sub-sections, we further consider the key system components that contribute to the user experience.

1.3.1 Input and output devices

The configuration of VR and AR input devices, including control and position tracking, and output devices, such as visual, aural, olfactory, haptic and kinaesthetic systems (Isdale, 1998), largely determines how users view, navigate, interact with and experience simulated environments (Figure 1.3).

Figure 1.3 VR input and output devices

Control devices refer to general-purpose interaction interfaces, typically a wireless controller or some kind of mouse, trackball or joystick, all of which are used to navigate a virtual environment or select and interact with virtual objects. When an interaction device is integrated with a position tracking system, a VR or AR application display responds to users’ positions and head or body movements, adjusting their view as they interact with an environment. Position tracking interfaces ideally include three position measures (i.e. the X, Y and Z axes) and three orientation measures (i.e. roll, pitch and yaw). However, some standard input devices, such as the mouse, do not allow for this full range of measures. In addition, higher-end VR systems have explored the use of ultrasonic, magnetic and optical position trackers to enable all six degrees of position tracking and control. For more specialized VR models, such as training applications for medical professionals, simulated tools such as ‘virtual scalpels’ can be graphically mapped onto special-purpose interaction devices to mimic actual tools and controls and provide a more intuitive interactive experience.

Visual displays range in size, from monitors to large screens, and in configuration, from single to multiple screens. These displays feature either stereoscopic imagery, i.e. imagery that shows a different view to each eye to create the perception of depth, or monoscopic imagery, which presents the same image to both eyes. A head-mounted display, such as the Oculus Rift,2 is an example of an immersive visual display, while non-immersive displays include desktop monitors and VR workbenches.

Aural or auditory interfaces are experienced through hearing, but are not common in the industrial use of virtual reality applications. However, Brooks (1999) explained how, in some cases, audio information can be more important in VR applications than visual inputs.
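The three position and three orientation measures described for position tracking are commonly bundled as a 'six degrees of freedom' (6-DOF) pose. A minimal sketch, our own illustration with hypothetical field names:

```python
from dataclasses import dataclass, fields


@dataclass
class Pose6DOF:
    """A tracked pose: three position measures (x, y, z) and three
    orientation measures (roll, pitch, yaw), as ideally reported
    by a VR or AR position-tracking interface."""
    x: float = 0.0      # position along the X axis (e.g. metres)
    y: float = 0.0      # position along the Y axis
    z: float = 0.0      # position along the Z axis
    roll: float = 0.0   # rotation about the forward axis (degrees)
    pitch: float = 0.0  # rotation about the side-to-side axis
    yaw: float = 0.0    # rotation about the vertical axis


def degrees_of_freedom(pose_type) -> int:
    """A tracker that reports every field provides all six degrees."""
    return len(fields(pose_type))


# A mouse, by contrast, reports only two of these measures (x and y),
# which is why it cannot support the full range described above.
head = Pose6DOF(x=1.2, z=1.7, yaw=90.0)
print(degrees_of_freedom(Pose6DOF))  # 6
```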


Olfactory interfaces are experienced through smell and are even less common in practical applications of virtual reality. The sense of smell is chemical in nature, and thus requires scent-generating sources that are typically linked to applications that control their release and mixture. In practical terms, olfactory interfaces are more challenging to implement in VR than visual or auditory interfaces.

Haptic interfaces are experienced through touch. For example, Brooks (1999) stated that much of the sense of presence and participation inherent in VR vehicle simulators comes from the presence of haptic information, as these simulators allow users to ‘touch’ elements that are accessible on actual vehicles. Haptic interfaces are typically found in cases that involve manual operation of virtual objects, and use data gloves and similar force-feedback devices to apply forces and vibrations that enable tactual perception.

Kinaesthetic interfaces are closely linked to the haptic feedback experienced through bodily motion. Kinaesthetic feedback is typically used in tasks involving navigation and movement through virtual environments. For example, six degrees of freedom hand-held controllers enable physical engagement with a virtual environment and may also provide tactile or force-feedback. Room-sized VR applications often use walking in a physical environment as a way to navigate or ‘walk’ through a virtual environment. For example, artist Char Davies’s 1995 work “Osmose,”3 for which she designed a virtual experience in which viewers move through a space by breathing (Fieldgate, 2017), challenges standard preconceptions of how we interact with and move through VR environments.

In addition, recent VR experiments with different forms of input and output devices engage a range of senses. Some experimental work also explores the mind’s ability to navigate VR environments using headsets that detect brainwaves.

1.3.2 Software and hardware

Recently, improvements in computer hardware processing capabilities, graphics cards, database technologies, software libraries and applications that allow for real-time interaction with virtual environments have expanded the opportunities to explore ever more complex VR environments (Figure 1.4).

Figure 1.4 Software and hardware

Both the amount and type of digital information and VR system configurations govern the potential for real-time viewing and interaction. However, the human perceptual system is attuned to detect any inconsistencies, such as low frame rates and system response lags, that can induce discomfort or even motion sickness. In terms of rendering, 18–24 visual frames per second (fps) is generally the minimum rate at which a viewer can perceive a stream of still images as a smooth visual (Isdale, 1998). However, authoring software applications that generate digital asset information, such as building information modelling (BIM), tend to produce relatively large models that require either advanced computer processing capabilities or model optimization strategies to allow for real-time visualization.

When coupled with immersive displays and user tracking systems, lower frame rates can cause lags in rendering models and can slow a VR system’s response time to user commands, which can disrupt the illusion of an unmediated experience and thus the overall VR experience. Differences in how lags in head-tracking and hand-tracking systems alone may degrade user performance reveal complex interactions between system components and their effects on user experience (Ware and Balakrishnan, 1994). Google designer Jean-Marc Denis illustrated the importance of real-time visualization for virtual reality applications in this way: “In a smartphone app, a dropped frame is just a stutter. In a headset, it’s like an error in your consciousness” (Wilson, 2015).
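The frame-rate figures above translate directly into a per-frame time budget: at a target rate, each frame must be produced within 1/fps seconds, or the display stutters. A small illustrative calculation (our own sketch, not from the book; the 90 fps figure is a typical consumer-HMD refresh rate, used here only as an example):

```python
def frame_budget_ms(target_fps: float) -> float:
    """Time available to render one frame, in milliseconds."""
    return 1000.0 / target_fps


def meets_budget(render_time_ms: float, target_fps: float) -> bool:
    """Does a measured per-frame render time keep up with the target rate?"""
    return render_time_ms <= frame_budget_ms(target_fps)


# At the 24 fps lower bound for smooth motion, a frame may take about
# 41.7 ms; an HMD refreshing at 90 fps allows only about 11.1 ms,
# which is why large BIM models often need optimization before
# immersive viewing.
print(round(frame_budget_ms(24), 1))   # 41.7
print(round(frame_budget_ms(90), 1))   # 11.1
print(meets_budget(render_time_ms=30.0, target_fps=24))  # True
print(meets_budget(render_time_ms=30.0, target_fps=90))  # False
```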

1.4 Changing experiences in the built environment

As discussed previously, digital modes of interaction have become increasingly sophisticated and widespread. Across the globe, many forms of interaction have become mediated through smartphones and social media, and for many of us, using these technologies makes it harder to sustain boundaries between our personal and professional lives. Work correspondence can be read at home, and friends’ updates are read in the office. These changes in how we use digital technology reflect and shape our experiences and the extent, pattern and pace of our interactions with the built and natural environments within which we operate. Fundamentally, the technologies we use to visualize the built environment are no different, as they influence the way we see, understand and ultimately construct this environment. As a result, it is important to explore the connections between digitally mediated visualization and our perceptions in order to guide how we employ VR systems to design, construct and operate built environment projects. In addition, users require a degree of visual and digital literacy to ‘read’ a virtual environment as representative of a built or natural environment.

Historically, significant changes in visualization technologies have coincided with radical changes in human perception. For example, the development of high-quality window glass in the 14th century led to our tendency to view the world increasingly through frames (Mumford, 1934) that allowed certain elements of reality to be more clearly understood and focused attention on sharply bounded fields of view (Foster and Meech, 1995). Furthermore, in the 1400s, the development of more sophisticated lenses and mirrors coincided with the rise of accurate portraiture (Hockney, 2006). Brunelleschi, the engineer of Florence’s famous Santa Maria del Fiore cathedral, may have discovered linear perspective through experimentation with the latest advanced lens and mirror technologies, including glass imported from northern Europe. In the 20th century, many began to view the world dynamically through the frame of the cinema, television, car windscreen, computer monitor and game console, and saw for the first time our viewpoint moving rapidly through the world while we remained static. Years later, our development and use of VR systems draws on these experiences.
As with film, animation and television media, VR applications often use a language of cuts, pans and zooms that must be learned, as these phenomena are not experienced in our physical perception of the actual world. As with travelling by car, our perception is often reduced to a dynamic view through a frame in which our view changes but our body remains still relative to its immediate surroundings. As you read this book, consider how your experiences in virtual environments shape your understanding of, and ultimately the design and construction of, built environments. We now turn to consider the cyber-physical nature of the built environment, and will return more broadly to this question of the relationships between our experiences in virtual, built and natural environments in the final chapter.

1.5 Cyber-physical environments and the ‘digital twin’

Our growing use of digital information and the ability to access that information across an array of personal and professional digital technologies have significantly extended our capability to record, process, manipulate and display information about the built environment. Often, clients that commission building and infrastructure projects now require the delivery not only of the physical infrastructure, but also of its associated digital information. Such digital information has become important to the operation, management and maintenance of the built environment. As a result, accurate data about how the built environment operates has become increasingly important to its efficient operation and maintenance, and to addressing issues of sustainability and resilience.

The term ‘cyber-physical’ has come to describe the increased level of interaction between the operation and use of the physical environment, which is made up of both the built and natural environments, and its digital copy, which may be updated with real-time information from sensors and scans and may be stored in a range of ways. As in manufacturing industries (Glaessgen and Stargel, 2012), digital information about a building or infrastructure can be seen as its ‘digital twin’. In other words, delivery projects no longer produce physical assets as their sole deliverables, but also produce ‘digital twins’. As a result, while VR can be used to examine proposed building and infrastructure projects during their delivery process, after they are complete, both VR and AR may be useful for understanding the status of these physical assets and environments (Figure 1.5).

Thus, unlike the common project delivery approaches of the past that focused only on physical deliverables, such as plans, construction documents and building structures, the design, construction and engineering disciplines now also commonly generate and manage digital asset information in their projects. This enormous shift has changed the nature of these industries, all of which have now taken on new responsibilities to generate, manage and safeguard digital asset information.
Figure 1.5 How VR and AR technologies interface with design and physical assets and their digital twins

As BIM processes become more widely used in both the construction and infrastructure sectors, professionals must learn how to generate and manage large datasets for their projects and develop new understandings of built environment assets and their performance. However, while information lies at the core of built environment projects, the many decisions surrounding the information acquired during the design, construction and operation phases rely heavily on the methods and technologies that allow users to visualize and interact with this information. As a result, it is important to explore how a range of technologies, including VR and AR approaches, can influence how users experience and consequently modify digital information. In response, consumer market hardware components, such as head-mounted displays, are being developed for immersive and augmented visualization capabilities, and connections are being made with other technology domains, such as machine learning and robotics. These interactive visualization technologies extend the standard design and information management applications used in the architecture and construction fields, in that they act primarily as communication media that engage users in intuitively navigating, testing and reviewing information.
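The idea of a digital twin being updated with real-time information from sensors can be sketched minimally as follows. This is an illustration of ours with hypothetical names and data; real twin platforms are far richer:

```python
from dataclasses import dataclass, field


@dataclass
class DigitalTwin:
    """A minimal digital twin: structured asset information plus the
    latest readings reported by sensors on the physical asset."""
    asset_id: str
    attributes: dict                 # static asset data, e.g. from a BIM model
    readings: dict = field(default_factory=dict)

    def ingest(self, sensor: str, value: float) -> None:
        """Record the most recent value from one sensor."""
        self.readings[sensor] = value

    def status(self, sensor: str, limit: float) -> str:
        """Compare a live reading against an operating limit --
        the kind of check an operator might view in VR or AR."""
        value = self.readings.get(sensor)
        if value is None:
            return "no data"
        return "alert" if value > limit else "ok"


bridge = DigitalTwin("bridge-042", {"span_m": 120, "built": 2009})
bridge.ingest("deck_temperature_c", 21.5)
print(bridge.status("deck_temperature_c", limit=40.0))  # ok
print(bridge.status("strain_gauge_3", limit=1.0))       # no data
```

The point the sketch makes is the one in the text: the deliverable is no longer only the physical asset, but also a structured, continuously updated digital record of it.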

1.6 Technology choices: information, users and tasks

Professionals from various disciplines often engage in interdisciplinary work in order to evaluate designs and processes in broader performance contexts. For example, built environment development processes may involve engineers collaborating with designers and facilities operators on the system design and layout of a project to address maintenance procedures and access issues. As a result, the use of building information modelling (BIM) approaches (Eastman et al., 2008) has advanced significantly, allowing professionals to generate much more information about built environment projects. However, the growing use of BIM methods also demands new approaches to bringing visual interfaces to collaborative work that involves users and practitioners from a variety of disciplines. In recent years, VR systems have become increasingly important in providing visual interfaces for digital information. And as a result of new gaming experiences, professional expectations of VR and AR technologies have risen, as users demand ever more sophisticated uses of visualization in both their professional and personal lives.

Figure 1.6 Data capture, analysis and synthesis for VR/AR model development

Underlying any given VR model is an information source that has been generated by capturing, analysing and synthesizing digital data (Figure 1.6). Recent developments, such as laser scanning,4 sensors and photogrammetry,5 have offered design and construction professionals the means to quickly capture physical asset data at project scales ranging from individual buildings to broad urban contexts. When paired with consumer-market devices such as mobile phones, tablets and drones, these data-capture methods can be used to aggregate unprecedented amounts of digital information about physical environments. In addition, data analysis and synthesis methods using design modelling, engineering analysis and operational analytics can now combine and structure data to an unprecedented degree.

In designing, constructing and operating built environment projects at any scale, professionals use this digital information to simulate, test and make informed decisions. They typically create information in discipline-specific applications, which often utilize different representation forms (e.g. diagrams, 2D or 3D deliverables) and data structures. These computational methods are used to develop models that can then be visualized using virtual or augmented reality to help design, evaluate and adjust project performance.

Of course, it is also necessary for professionals to evaluate which visual interfaces afford the most useful experience for any given task. Due to their distinctive features, VR and AR applications may not offer the most effective experience in all cases. Technology choices largely depend on an understanding of the nature of the tasks at hand, including identifying user groups, anticipating what outcomes they require and discerning what information is available. At this point, we broadly distinguish between applications for two user groups: built environment users (e.g. residents, building occupants, tourists, commuters) and built environment professionals (e.g. designers, engineers, operators) (Table 1.1). Both groups may engage in a range of specific outcome-directed tasks, either individually or collaboratively, using any given application.
In addition, for each user group, there may be single-user or multi-user applications available. For example, Pokémon Go is a single-user AR application that is widely disseminated in the consumer market; safety training is an example of a professional single-user application; and design and constructability reviews are examples of multi-user applications, which respectively engage built environment users and built environment professionals. Based on these applications, we explore different ways to further categorize VR systems in Chapter 2.

Table 1.1 Individual and collaborative uses of virtual and augmented reality applications by built environment users and professionals
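The two-way categorization in Table 1.1 can be sketched as a small data structure. This is purely illustrative: the class, field and application names below are our own shorthand for the examples discussed above, not any API from this book.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VRApplication:
    name: str
    user_group: str   # "built environment users" or "built environment professionals"
    multi_user: bool  # collaborative (True) vs single-user (False)

# The example applications named in the text, placed in Table 1.1's categories.
APPLICATIONS = [
    VRApplication("Pokémon Go", "built environment users", multi_user=False),
    VRApplication("Safety training", "built environment professionals", multi_user=False),
    VRApplication("Design review", "built environment users", multi_user=True),
    VRApplication("Constructability review", "built environment professionals", multi_user=True),
]

def by_category(user_group: str, multi_user: bool):
    """Return application names for one cell of the user-group/use-mode table."""
    return [a.name for a in APPLICATIONS
            if a.user_group == user_group and a.multi_user == multi_user]

print(by_category("built environment professionals", True))  # ['Constructability review']
```

A query for one cell of the table then returns, for example, the collaborative professional applications (constructability reviews).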


1.7 The structure of this book

Figure 1.7 Thematic structure of this book

As this chapter indicates, this book explores how virtual and augmented reality can shape, and have already shaped, our experiences with and perception of real and virtual environments. The following chapters present and discuss applications in the operation, design and construction of the built environment. The thematic structure of the book is shown in Figure 1.7. In Chapter 2, we categorize the main types of VR systems, trace the development of selected systems and discuss how VR technologies relate to other emerging built environment applications. Chapter 3 examines how virtual and augmented reality are applied to operating built environment projects across building, neighbourhood, infrastructure and city scales. Chapter 4 discusses the use of virtual reality in the design process and explains the role digital representations play in using these technologies for design-related tasks. In Chapter 5, we review many applications of virtual and augmented reality for construction-related tasks, such as construction reviews, fault detection, progress and site activities monitoring, among others. Finally, in Chapter 6, we conclude by presenting some of the arguments concerning current and potential uses in the context of digital maturity, technology innovation and digital data management.

Notes

1 www.pokemongo.com/
2 www.oculus.com/rift/
3 www.immersence.com/osmose/
4 Laser scanning is a method that uses high-speed lasers to detect surface data points, thus creating a three-dimensional point cloud. The method is also known as a point cloud survey. A similar method that uses both light and radar detection is known as LiDAR.
5 Photogrammetry is a technique that uses photographs or videos to capture and extract three-dimensional geometric information in the form of meshes, rather than point clouds.

References

Brooks, F.P., 1999. What’s real about virtual reality? IEEE Computer Graphics and Applications 19, 16–27.
Eastman, C., Teicholz, P., Sacks, R., Liston, K., 2008. BIM handbook: A guide to building information modeling for owners, managers, designers, engineers, and contractors. Wiley, Hoboken, NJ.


Fieldgate, K., 2017. Osmose by Char Davies [WWW Document], http://immersence.com/publications/2017/2017-KFieldgate.html
Foster, D., Meech, J.F., 1995. Social dimensions of virtual reality, in: Carr, K., England, R. (Eds.), Simulated and virtual realities. Taylor & Francis, Inc., New York, pp. 209–223.
Glaessgen, E., Stargel, D., 2012. The digital twin paradigm for future NASA and US Air Force vehicles. Presented at the 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference; 20th AIAA/ASME/AHS Adaptive Structures Conference; and 14th AIAA, Honolulu, HI, p. 1818.
Hockney, D., 2006. Secret knowledge: Rediscovering the lost techniques of the old masters. Viking Studio, New York.
Isdale, J., 1998. What is virtual reality? A web-based introduction [WWW Document], http://isdale.com/jerry/VR/WhatIsVR/noframes/WhatIsVR4.1.html
Milgram, P., Kishino, F., 1994. A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems 77, 1321–1329.
Mumford, L., 1934. Technics and civilization. Harcourt Brace and World, Inc., New York.
OED (The Oxford English Dictionary), 2017. Virtual, a. (and n.). Oxford University Press, Oxford, UK.
Wang, X., Kim, M.J., Love, P.E.D., Kang, S.-C., 2013. Augmented reality in built environment: Classification and implications for future research. Automation in Construction 32, 1–13. doi:10.1016/j.autcon.2012.11.021
Ware, C., Balakrishnan, R., 1994. Reaching for objects in VR displays: Lag and frame rate. ACM Transactions on Computer-Human Interaction 1(4), 331–356. doi:10.1145/198425.198426
Wilson, M., 2015. 3 Tips on designing for VR, from Google. CoDesign 6, November 2015.


Chapter 2

User experience in VR systems

In the previous chapter, we discussed how the goal of VR systems is to provide immersive and intuitively interactive experiences of simulated worlds, or in other words, to embed users ‘inside’ a simulated world. The resulting sense of being ‘inside’ a virtual world, or perceiving virtual objects as ‘real’, is known as presence, and is a distinctive feature of the virtual reality experience. Whether a user is perceptually and psychologically fully ‘within’ a simulated world or viewing virtual information overlaid on real-world imagery, achieving a sense of presence has been described as creating “an illusion that a mediated experience is not mediated” (Lombard and Ditton, 1997).

Presence is generally determined by the characteristics of a VR medium and its user. These characteristics include those of the VR display, such as size, field of view, resolution and distance (Lombard and Ditton, 1997); the richness, fidelity or plausibility of displayed information (Steuer, 1992); and the degree of natural interaction with a virtual environment (Witmer and Singer, 1998), or broader experiential congruence (Otto, 2002). Although a full discussion of presence is beyond the scope of this book, it is nevertheless important to recognize the role it plays in the VR experience.

Chapter 1 introduced how users experience VR models, which are developed through software and hardware and experienced through input and output devices (see Figure 1.2).
In this chapter, we extend this discussion by:

1) outlining the characteristics of the user experience, drawing from literature on representation and perception, and describing VR as an interactive, spatial and social experience;
2) articulating how specific VR system components contribute to the user experience, and categorizing types of systems;
3) discussing the development of VR systems and associated technologies; and
4) drawing out the implications for designing appropriate VR interfaces for information (BIM) models, in which various VR viewing perspectives, navigation modes, guides and aids can affect the illusion of an unmediated experience and support the use of VR for different applications.


2.1 Perceiving digital information

When we experience a VR environment, we focus our attention largely on the simulation we interact with, rather than on the technology that supports this interaction. As a result, VR users tend to focus on represented information and the tasks at hand, such as evaluating construction safety protocols, as shown in Figure 2.1. In other words, we tend to become aware of VR technology only when it does not function in the way we expect.

Of course, virtual reality cannot be naïvely conceived of as reality, as there are many ways in which virtual reality masks or distorts physical reality. In recent decades, researchers have learned much about human perception and cognition (e.g. Diemer et al., 2015) and social interaction (e.g. Schroeder et al., 2001) by studying user behaviour in virtual environments. Some of this research has counter-intuitive findings, which suggest a complex relationship between people and their environments. In the following sections, we draw on this literature to discuss representation and perception in virtual environments and present VR as an interactive, spatial and social experience.

2.1.1 Representation in virtual reality

Representations are abstractions of real objects or concepts that are created to be meaningful (Scaife and Rogers, 1996). In design and construction practice, we use the term representation to refer to a range of abstractions, from the most abstract diagrams and drawings to more photorealistic images and models. While the goal of many VR systems is to create representational equivalents to real-world scenarios (Otto, 2002), virtual reality is of course not the same as physical reality. For example, VR models exhibit a fundamental lack of absolute correspondence between their representations and the objects they represent, as explored in Box 2.1.

Figure 2.1 A design professional evaluating construction safety protocols in a virtual environment

Put another way, a representation is not a replica of what is represented, but rather “its structural equivalent in a given medium” (Arnheim, 1954: 162). This means that choices are always made about what to include and what to omit in developing a given representation. Digital models of cities, for example, often do not represent the graffiti, litter or weather conditions experienced within actual cities, but rather focus on urban form. Furthermore, our interpretation of any representation also depends upon our experience, abilities, strategies and motivations (Chen and Stanney, 1999).

Depending on what form they take, representations can present different advantages and disadvantages for specific tasks and users. For example, objects in the built environment can be represented in an iconic manner, so that their representations closely resemble the objects to which they refer. Alternatively, objects can be represented in a symbolic way, which requires the user to learn a ‘code’ to understand aspects of their representation. Some examples of iconic representations of the built environment include photographs, digital models and highly photorealistic renderings, and common symbolic representations are sketches, maps and diagrams. While VR applications generally present highly realistic representations, practitioners and researchers are also developing tools and techniques for using VR in more abstract and symbolic ways, or for supplementing iconic representations with symbolic representations. For example, in early design phases, greater abstraction may allow viewers more latitude to question and interpret what is seen through the reduction of visual ‘clutter’ (Radford et al., 1997). In other applications, such as navigating a building, a simple plan may supplement a highly realistic first-person or viewer-centred viewpoint.

Box 2.1 Objects, representations and users

The study of signs, or semiotics, provides a rich foundation for understanding the complex relationships between objects and their representations. These relationships can be considered dyadic, or consisting of signs that link objects and their representations, or triadic, that is, between objects, their signs and interpreters. And just as the same object can be represented in many ways, its representation can be interpreted in multiple ways. Thus, we can think of the process of representation and its interpretation as an act of ‘knowledge construction’ (MacEachren, 1995) or as a complex form of reasoning (Bosselmann, 1999).

This lack of absolute correspondence between reality and its representation can also be seen as a limitation. Any representation, short of an identical copy in the same medium, has “by its very nature its limits, which its user must either accept, or try to transcend by other means” (Gombrich, 1982: 173). Yet, as a way to consider alternatives, it is the symbolic, abstract and partial nature of representations that makes them so useful. Differences between virtual and built environments may provide users with insight into what is important to them about the built environment. In this way, we can see the artifice of virtual reality as a positive feature in design and problem-solving, where that artifice and abstraction allow the user to question options and understand underlying characteristics, rather than something to be resolved through future work.

2.1.1.1 2D REPRESENTATION

Two-dimensional (2D) representations, such as maps and plans, are often used to simplify three-dimensional (3D) phenomena. Translating these phenomena into two dimensions allows us to represent environments that are too large and complex to be viewed directly (MacEachren, 1995). Furthermore, 2D representations can present entire environments from a single vantage point. The ability to look at the world at different scales allows the structures that are most prominent at these scales to be prioritized, and movement between these scales can help shift our attention to specific aspects of the built environment. Such representations are symbolic in nature, and can be used as a relatively easy way of communicating knowledge about the layout of an environment. In other words, they are useful as a sort of ‘design shorthand’.

However, the representation medium of spatial knowledge can affect user understanding. As one researcher pointed out, “To see a map is not to look from some imagined window but to see the world in a descriptive format” (Henderson, 1999: 28). A 2D representation provides an interpretation or description of a phenomenon, rather than direct access to it. In a well-known study, employees who worked in a building were more accurate at estimating directions and route distances within the building than a group of participants who had only studied its floor plan (Thorndyke and Hays-Roth, 1982). However, the non-employees who studied the plan estimated straight-line distances more accurately than new employees working in the building, and about as accurately as those with 1–2 years of experience working there. The authors concluded that while maps and plans provide a means to rapidly assimilate knowledge about the relationships between different building areas, they provide less explicit information about directions and route distances.
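The distinction between straight-line and route distance that underlies the Thorndyke and Hays-Roth finding can be made concrete with a little geometry. The floor-plan coordinates below are hypothetical, chosen only to show how the walked route through a corridor exceeds the direct distance a plan reader can measure:

```python
import math

def straight_line(a, b):
    """Euclidean ('as the crow flies') distance between two plan points, in metres."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

def route_length(waypoints):
    """Walked distance along a sequence of corridor waypoints, in metres."""
    return sum(straight_line(p, q) for p, q in zip(waypoints, waypoints[1:]))

# Hypothetical floor plan: office A to office B via an L-shaped corridor.
a, b = (0.0, 0.0), (30.0, 40.0)
route = [a, (30.0, 0.0), b]       # walk east along one corridor, then north

print(straight_line(a, b))        # 50.0 — what a plan reader estimates directly
print(route_length(route))        # 70.0 — what a building occupant actually walks
```

A plan makes the 50 m straight-line relationship visible at a glance, whereas the 70 m walked route is the quantity building occupants learn through experience.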
The sociologist Latour (1986) argues that 2D representations have an important political role in convincing others and establishing knowledge claims, as they can become stable referents, circulate and become combined with text. Designers may knowingly use media such as sketches that are relatively easily questioned, altered or erased, or, alternatively, make 2D representations such as complex digital plans that may be difficult to question and alter.

2.1.1.2 3D MODELS

Three-dimensional representations, or models, allow design and construction professionals and non-professionals to more easily understand spatial aspects of existing and proposed built environment projects. Since the Renaissance, building models have often been seen as a necessary accompaniment to architectural drawings. For example, the Renaissance architect Filippo Brunelleschi presented a model of his chapel at Santa Croce in Florence to his project patrons, the Pazzi family (Vasari, 1568). In the 17th century, Sir Henry Wotton (1624) argued that buildings should not be constructed on the basis of 2D drawings, such as plans or perspectives, but rather from models of entire structures constructed at the largest possible scale. Recently, the desire to increase public participation in the design process has also prompted the use of 3D models to support collaborative design projects, rather than simply presenting final designs for review (Lawrence, 1987).

Models can be physical (i.e. scale models) or digital. The use of digital models in designing built environment projects is becoming standard industry practice, as professional tools such as BIM are model-based. In contrast to longstanding design processes, when using BIM, 3D models and other project information are typically created first, and 2D drawings are generated from these models. However, due to the characteristics of screen technologies, digital models viewed on 2D screen systems are seen as perspective, axonometric or isometric representations on single or multiple 2D viewing planes. In other words, although these models are created in 3D environments, and may be viewed in stereo, users typically interact with and view them on 2D screens. By contrast, VR models allow for movement through a digital environment and give the user the illusion of an unmediated experience.
The scale of the representation medium can also influence how the user perceives digital space: a mismatch in scale between the observer and a representation can lead to objects being misinterpreted and can result in design errors (Dorta and LaLande, 1998).
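The point that digital models are ultimately seen as projections on a 2D viewing plane can be illustrated with a standard pinhole (perspective) projection alongside an orthographic one. This is a generic computer-graphics sketch, not a method from this book; the function names and focal-length parameter are our own:

```python
def perspective_project(point, focal_length=1.0):
    """Project a 3D point (x, y, z) onto a 2D viewing plane at distance
    focal_length from the eye; z is depth away from the viewer."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the viewer")
    return (focal_length * x / z, focal_length * y / z)

def orthographic_project(point):
    """Axonometric/isometric-style projection: depth is simply discarded."""
    x, y, z = point
    return (x, y)

# The same 1 m lateral offset appears smaller the deeper the point sits,
# which is the foreshortening a perspective view adds and an axonometric
# view deliberately omits:
print(perspective_project((1.0, 0.0, 2.0)))    # (0.5, 0.0)
print(perspective_project((1.0, 0.0, 10.0)))   # (0.1, 0.0)
print(orthographic_project((1.0, 0.0, 10.0)))  # (1.0, 0.0) — no foreshortening
```

Either way, the viewer receives 2D coordinates; what VR adds is continuous, tracked re-projection as the viewpoint moves, which sustains the illusion of an unmediated experience.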

2.1.1.3 COMBINING DIFFERENT VIEWS

While VR is typically represented as a three-dimensional spatial experience, 3D models can often incorporate additional performance aids and guides, such as 2D maps, to assist in navigating VR environments. Later in this chapter, we consider how the issues of representation discussed in this section impact the design of VR interfaces. In addition, the diverse users of VR environments may experience them through different interfaces, which may impact their understanding. Thus, it is not always straightforward or easy to combine different representations and viewing points, as suggested in Box 2.2.


Box 2.2 CALVIN (University of Illinois, US)

The way a model is represented and viewed can impact the viewer’s understanding of that model. An early VR project called CALVIN (Collaborative Architectural Layout Via Immersive Navigation) incorporated two different viewer perspectives: a ‘mortal view’, which was viewer-centred, and a ‘deity view’, which was model-centred, as shown in Figure 2.2 (Leigh and Johnson, 1996). The mortal view was shown to viewers who were completely immersed within a cave automatic virtual environment (CAVE), which is a room-sized cube with stereo images projected on its walls and floor. The deity view, however, was an aerial view of the virtual world presented on a horizontal viewing surface called an ‘immersive workbench’. Even though the workbench displayed a stereo image, the viewing perspective was similar to a plan.

Although the intention of this experience was to allow students, teachers and clients to role-play as ‘mortals’ and ‘deities’, this rigid use of varying viewpoints by different participants was found to inhibit a shared understanding of the model (Leigh and Johnson, 1996). Whilst ‘mortals’ were capable of making finer changes in the model, ‘deities’ were capable of performing broader manipulations and making structural changes to the VR environment. One can imagine how a seemingly minor design change, such as moving a wall, might seem sensible from a first-person perspective within a model, but might look less appropriate from a plan view, where one can see its consequences on an entire design.

Figure 2.2 Images of ‘deity’ and ‘mortal’ views using the CALVIN software application
Source: Electronic Visualization Laboratory, University of Illinois at Chicago, USA.


2.1.2 Perception in virtual environments

Psychologists differ in their understanding of the relationship between human cognition and perception. For some, representations such as the 2D drawings and 3D models discussed previously are reformulated within the brain as mental representations. However, in this book, we follow the perspective of researchers who have argued that cognition occurs with artefacts in the world (Clark, 1998): here cognition is seen not as something that happens in the head, but in the interactions between the head, hands and artefacts. From this perspective, we can consider how a representation of reality is perceived through the senses.

The characteristics that distinguish the experience of virtual reality from actual reality are both intended and unintended. Intended differences include rapid movement through the environment and the power of moving to pre-set locations. Unintended characteristics include implementation errors and technological limitations of the hardware and software (Drascic and Milgram, 1996). Such shortcomings may be more problematic for mixed and augmented reality applications, in which virtual images are overlaid on the real world (Drascic and Milgram, 1996), and hence the conception of both the virtual and the real world is important to the experience.

When fully immersed within a VR environment, viewers can often quickly adapt to miscalibrated systems, such as a viewpoint that is too high for the user or distorts an image. Such a distortion can be seen as analogous to looking at the world through a different lens. However, visual perception plays only a part in how we experience the built environment. The human body is finely tuned to experience its surroundings through the interplay of our senses, which provide us with sight, smell, touch, balance and orientation.
Using these sensory systems, our brains process information to give us continuous spatial orientation in order to coordinate our bodies and movements. For example, it has long been understood that a conflict between our visual sense that perceives motion within a virtual environment and our locomotive system that registers the body as stationary may lead to motion sickness. Recent VR research also suggests that our emotional experience can be affected by a combined perception of our body and environment. For example, users sometimes experience fear and happiness simultaneously as a result of the often disorienting bodily nature of navigating virtual environments.

2.1.2.1 INTERACTIVE EXPERIENCES

In contrast with the 2D and 3D representations discussed previously, some distinguishing characteristics of VR environments make them more interactive, spatial and social. In addition, the degree of interaction virtual environments afford also distinguishes them from animations and 3D movies. In VR, users are not constrained to predetermined viewpoints and can choose how to navigate and what to see within an environment. Furthermore, in a VR environment, users not only typically have navigation capabilities, but they may also have the ability to use devices or gestures to change the parameters of the environment or create objects within it. As with the actual world, people tend to be able to intuitively interact with virtual environments, and often understand the effects of their actions.

However, interaction with a virtual environment is usually not the same as interaction with a physical environment. For example, our perceptions of time and distance in a VR model often differ from our real-world experiences, as navigating a virtual environment is not typically bound to the transportation modes we experience moving through the real world. Put another way, a realistic walking speed may seem slow in VR, and unlike in the real world, users may be able to fly, jump and speed through the environment. Moving between two points in a virtual model is often much faster than would be possible in the real world.

To date, basic applications that allow interaction with simple VR models are available on mobile computing devices, and even greater realism and interaction are possible for users with access to higher-end VR facilities. However, navigating a virtual environment can often be problematic and limited, as body movements in VR environments tend to be constrained and distorted. In some immersive VR applications, the user experience is generally disembodied: our bodies do not move within a model; rather, the virtual world moves in relation to us. This disembodied experience is even more prominent when using head-mounted displays, for which a user’s body is typically not represented in the virtual space.
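The mismatch between realistic and virtual movement speeds can be expressed as a simple navigation-speed mapping. The modes and multipliers below are illustrative assumptions rather than values from any particular VR system:

```python
# Hypothetical navigation modes: multipliers applied to a realistic walking
# speed (~1.4 m/s) when moving through a VR model.
WALKING_SPEED_MS = 1.4
NAV_MODES = {"walk": 1.0, "run": 3.0, "fly": 20.0}

def traversal_time(distance_m, mode="walk"):
    """Seconds needed to cover distance_m in the virtual model for a given mode."""
    return distance_m / (WALKING_SPEED_MS * NAV_MODES[mode])

# Crossing a 700 m virtual site: realistic walking feels slow in VR,
# which is why many applications offer accelerated modes or teleportation.
print(round(traversal_time(700, "walk")))  # 500 s
print(round(traversal_time(700, "fly")))   # 25 s
```

A design review might default to realistic walking to preserve a sense of scale, while switching to an accelerated mode to reposition between review locations.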

2.1.2.2 SPATIAL EXPERIENCE

Unconstrained movement through a VR environment, depth perception (stereopsis) and peripheral vision may also support a VR experience. Furthermore, VR experiences generally attempt to follow the logic of the physical world. As a result, users can typically navigate freely through virtual environments and decide what to interact with and observe. In addition, stereoscopic views and a greater field of vision within the environment are among a range of techniques that VR designers use to provide the illusion of an unmediated spatial experience. Rather than manipulating a 3D representation or virtual model, in an immersive experience a user should feel as if they are within the reality that is represented.

There are also some challenges to the spatial nature of VR in built environment applications. For example, on small screens we tend to interpret spaces as smaller than they are, so game developers typically create over-sized spaces. Yet built environment professionals are not able to change the scale of the real world in their models, which necessitates careful design of spatial experiences to provide realism. Furthermore, choices about VR interfaces often depend on the task at hand. For example, VR users may have the option to reduce the time required to navigate a city model, and depending on the task, it might be appropriate that the physical movements required to progress through the model differ markedly from similar actions in the actual world.

2.1.2.3 SOCIAL EXPERIENCES

Aesthetic practices have become a recent research focus in the organizational studies community (Ewenstein and Whyte, 2007; Strati, 1999). Recently, many studies have examined the organizational consequences of sensory experiences, with a particular focus on visual practices (Ewenstein and Whyte, 2009; Meyer et al., 2013), as discussed in Box 2.3. Drawing on a related sociological tradition, recent work has also been conducted on the social experience of using VR – which highlights the threshold spaces involved in navigating an immersive environment – and social practices, such as changing shoes, which have become common in the use of large-scale collaborative VR environments (Maftei and Harty, 2015). This latter study used an immersive environment “to challenge or surprise the participants as they experience[d] the immersive, full scale version of their own design” (Maftei and Harty, 2015: 53).

Box 2.3 Visual practices

While much work on visual representation has focused on intrinsic qualities of representations, sociologists and organizational scholars have also examined the social practices through which organizations use and interpret visual materials. Key contributions of this work have demonstrated how factors such as expertise, objects, tools, routines and representations are interlinked, and how the introduction of a new medium of representation may change and sometimes cause problems of understanding (Lanzara, 2009). This goes beyond the idea that experts and novices experience representations differently to articulate how the interpretation of visual images is socially based and thus affected by sensory inputs as part of social practice.

For example, Goodwin (2000) described how jurors for the 1992 trial of several Los Angeles policemen who were recorded beating motorist Rodney King lacked the experience and social position to articulate the recording in a consequential way. Television audiences were often outraged by what they saw on the tape, but during the trial, the officers’ attorneys used language, hand gestures and expert testimony to shape how the jury saw the events in a way that helped to exonerate the policemen. After the officers were found not guilty, a riot ensued in Los Angeles, and a later retrial resulted in their conviction. Thus, the researcher concluded that practices involving participants of differing levels of expertise, together with objects, tools, routines and representations, may influence the interpretation of what they see and their subsequent actions.

Figure 2.3 The BIM CAVE at Texas A&M University

Such work has drawn attention to the settings within which VR environments are experienced. A VR viewing space may, for example, be public, such as the BIM CAVE at Texas A&M (Figure 2.3), where visitors can see into the facility and explore its novel experiences. Recently, there has been significant interest in the use of VR for experiential learning, or learning through reflecting on doing, particularly through comparisons of the interpretation of virtual environments by expert practitioners and lay users. The success of any given VR model depends on the given task, the model itself, its users and the context in which it is experienced. Learning and experience are important factors in navigating VR models, and expert VR users tend to be more sophisticated than novices in navigating them.

2.2 Shaping user experience

So far we have discussed VR models as representations, considered issues of perception in virtual and physical environments, and articulated the interactive, spatial and social features of VR. However, the VR systems that support these experiences have also been characterized in terms of their technological components. This section presents how professional VR designers shape user experiences through various types of available VR systems and their components. As discussed previously, VR users typically focus on the environment that is represented virtually, rather than the VR system interface itself. Thus, the final sections consider the choices we face in the design of that interface in relation to VR viewing perspectives, navigation modes and performance aids.

2.2.1 Types of VR systems

Building on our discussion in Chapter 1 of the virtuality continuum and types of VR applications, this section highlights how practitioners and researchers have categorized various types of VR systems. Here we consider factors such as extent of immersion (Bowman and McMahan, 2007) and number of users. Table 2.1 lists a taxonomy of VR system types and examples of their corresponding hardware and software.

2.2.1.1 EXTENT OF IMMERSION

As discussed previously, virtual reality concerns a certain level of user immersion in a simulated environment, and VR systems are designed to give users a sense of presence, or of ‘being’ within an environment (Ijsselsteijn et al., 2000). While this sense of immersion largely depends on a user’s experience, expectations and interest, here we refer to ‘immersion’ as a physical characteristic of a VR system, and ‘presence’ as a psychological response or characteristic of the user experience. Although this feeling of presence is important for many built environment applications, and early researchers reported that a high level of viewer immersion was necessary to experience true virtual reality (Gigante, 1993), full immersion has not been found to be necessary for many simple routine applications (Bowman and McMahan, 2007). Based on the extent of visual immersion they provide, VR systems are broadly classified into three categories, ranging from fully immersive to non-immersive systems:

• Fully immersive systems envelop a user’s field of vision to present an unmediated VR experience. To achieve this sense of full immersion, these systems typically employ specialized hardware, such as head-mounted or surround-screen displays coupled with stereoscopic views and position tracking capabilities. These systems generally require advanced computing power and can generate highly realistic simulations.
• Semi-immersive systems partially envelop a user’s field of view and provide some aspects of an immersive experience (Figure 2.4). For example, these systems may present stereoscopic 3D models without covering a user’s peripheral vision or tracking body movements, or provide a human-scale representation using large screens without necessarily using stereoscopic viewing techniques. Even head-mounted displays that are typically considered fully immersive can provide only partial immersion when their resolution is lower: the Oculus Rift consumer version (CV1) is thus more immersive than the lower-resolution developer’s kits (DK1 and DK2).
• Non-immersive systems are often described as ‘fishtank’ or ‘window-on-a-world’ systems. They typically use more commonplace hardware, such as standard monitors and 3D glasses. These systems use similar software to the first two types, but without covering a user’s field of view, and may incorporate some aspects of immersive systems, such as stereoscopic viewing.

Given that immersion is a key characteristic of virtual reality, it may be tempting to dismiss low- or non-immersive VR as not providing a true VR experience. However, these systems offer users the ability to interact with and manipulate virtual objects for tasks that may not require fully immersive approaches, and are useful when the scale of objects is not relevant.

Table 2.1 A taxonomy of virtual reality types and examples of associated hardware and software

2.4 3D-MOVE setup used in the UK’s Crossrail office


2.2.1.2 SINGLE AND MULTI-USER EXPERIENCE

The user experience of a VR system can be affected by the participation of other users, which offers another way to classify the range of VR systems, from multi-user projection-based systems to single-user head-mounted displays.

• Multi-user projection-based VR systems allow several users to simultaneously experience a simulated environment. They use large-screen display configurations that range from fully immersive experiences, such as enclosed CAVE systems, to more open-footprint semi-immersive applications (Figure 2.5). While they often also feature user tracking capabilities, they can pose challenges for group exploration, as they force users who are not tracked by a VR model to navigate and experience a model through another user’s movements and vantage points. To increase the mobility of these collaborative VR approaches, current initiatives, such as the Mobile Immersive Collaborative Environment (MICE) at Heriot-Watt University,1 explore methods to network multiple head-mounted displays and allow groups of users to simultaneously explore virtual environments.

2.5 A CAVE2 system in the Data Science Institute at Imperial College London

• Single-user head-mounted displays allow users to navigate virtual spaces using either an interactive input device (such as a tracked controller or keyboard) or body movement. Steadily decreasing production costs for VR components have brought an array of new consumer-market wearable VR systems that cater primarily to individual users. These systems, which typically use head-mounted displays and incorporate head movement tracking, can range from fully immersive high-end displays (such as the HTC Vive) to lower-end, partially immersive hardware (such as Google Cardboard). Users of these systems are typically fully immersed in a virtual space and are thus unable to see either their own body or its virtual representation. However, to overcome any feelings of disembodiment, users can often generate ‘avatars’ to represent themselves in a virtual environment. Finally, while these systems are not generally designed to be collaborative, they can be linked to allow multiple users to collectively experience a virtual environment.
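The two classification axes discussed in this section — extent of immersion and number of simultaneous users — can be sketched as a small data structure. This is an illustrative sketch rather than a reproduction of Table 2.1; the example systems and field names are drawn loosely from the chapter’s discussion.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VRSystem:
    name: str
    immersion: str    # "full", "semi" or "non", per section 2.2.1.1
    multi_user: bool  # projection-based group systems vs single-user headsets

# Example systems mentioned in this chapter, classified on both axes
SYSTEMS = [
    VRSystem("CAVE", immersion="full", multi_user=True),
    VRSystem("HTC Vive", immersion="full", multi_user=False),
    VRSystem("Google Cardboard", immersion="semi", multi_user=False),
    VRSystem("Desktop 'fishtank' VR", immersion="non", multi_user=False),
]

def by_immersion(level: str) -> list:
    """Names of the example systems offering a given extent of immersion."""
    return [s.name for s in SYSTEMS if s.immersion == level]
```

For example, `by_immersion("full")` lists the CAVE and the HTC Vive, which sit on opposite sides of the single/multi-user axis.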

2.2.2 Customizing the visual interface

As previously discussed, virtual, augmented and mixed reality applications provide visual interfaces to digital information. As BIM becomes more widely used across the allied built environment industries, digital information has become increasingly extensive, complex and challenging to view using traditional interfaces. There has been substantial development of workflows that bring BIM information into VR systems or directly display native BIM models in VR headsets. Yet considerable effort is required to ensure that users, whether they are novices or built environment professionals, are able to use virtual environments effectively. For example, with respect to visual literacy, we know from studies on representation and perception that lay viewers of digital information can be easily led (as the jury was in the trial of the Los Angeles policemen, discussed in Box 2.3) and may not always grasp the information that is relevant for making beneficial decisions. In a study of VR and safety (Whyte et al., 2012), we found that experienced safety professionals used a CAVE-displayed model differently from graduate students and were able to identify additional safety issues. While most of the graduate students involved in this study identified issues concerning edge protection and openings in the building, many lacked the experience to identify alternative types of equipment, prefabrication or solutions such as the use of permanent staircases as circulation routes during the construction process. In addition, they did not discuss alternative forms of cranes in their individual assessments, with a tower crane identified as a better solution in only two of ten collaborative discussions. This study suggests that novice users are not always able to consider information that is not explicitly modelled, and the visual interface for this model required careful preparation to allow students to interpret and assess the design.
However, VR models can also be made more user-friendly and effective for new users by incorporating displays, menus or toolbars. For example, for the case just described, a palette of different construction equipment options could have helped the novice users to question the model. Similarly, judiciously setting different perspectives that alternate between plan and first-person views can aid novice users in navigation. However, while a VR model’s interface can be used to improve user performance, the types of assistance and tools that are helpful vary with both the users and the given task. Therefore, three aspects should be considered in customizing the visual interface of a VR model: 1) viewing perspectives, 2) navigation modes and 3) further guides and user aids.

2.2.2.1 VIEWING PERSPECTIVES

Immersive, semi-immersive and non-immersive VR systems typically exhibit three types of viewing perspectives. A viewer-centred, or egocentric, perspective allows users to experience VR from the perspective of the human body, similar to how we view the actual world (Figures 2.6a–d).

2.6a–b Viewer-centred (egocentric) perspectives moving through a model of HypoVereinsbank by Perelith. Source: Perilith

2.6c–d Viewer-centred (egocentric) perspectives moving through a model of HypoVereinsbank by Perelith.

In an object-centred, or exocentric, perspective, users are disembodied and see their environment through an object, often an avatar (Figure 2.7). Although still exocentric, this type of viewing perspective can also be model-centred, where users become external observers and manipulate an environment from a static viewpoint (Figure 2.8).

2.7 An object-centred (exocentric) perspective of a peacekeeping mission scenario funded by the US Army that involves soldiers, civilians and vehicles. Source: Boston Dynamics and the Institute of Creative Technology (ICT).

2.8 The exocentric perspective is based outside the model and is centred on the model itself

However, as early research at the University of Illinois demonstrated (Box 2.2), VR users who experienced different viewing perspectives in the same model found collaboration difficult. There are also limitations to the viewpoints that are possible in the physical world; thus in AR, where the viewpoint within a virtual model merges with real-world imagery, it is usually possible only to offer viewer-centred perspectives.
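The difference between these perspectives can be sketched as two ways of placing a virtual camera. This is a hypothetical illustration — the function names, default eye height and coordinate convention (y as the vertical axis) are my own assumptions, not code from any particular VR system.

```python
import math
from dataclasses import dataclass

@dataclass
class Camera:
    eye: tuple     # camera position (x, y, z)
    target: tuple  # point the camera looks at

def egocentric(avatar_pos, eye_height=1.7, yaw=0.0):
    """Viewer-centred: the camera sits at the avatar's eyes, looking where it faces."""
    x, y, z = avatar_pos
    eye = (x, y + eye_height, z)
    target = (x + math.sin(yaw), y + eye_height, z + math.cos(yaw))
    return Camera(eye, target)

def model_centred(centre, distance, azimuth=0.0, elevation=0.0):
    """Exocentric: the camera orbits the model from outside, always facing its centre."""
    cx, cy, cz = centre
    eye = (cx + distance * math.cos(elevation) * math.sin(azimuth),
           cy + distance * math.sin(elevation),
           cz + distance * math.cos(elevation) * math.cos(azimuth))
    return Camera(eye, centre)
```

In the egocentric case the eye moves with the avatar; in the model-centred case only the orbit parameters change while the target stays fixed on the model.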

2.2.2.2 NAVIGATION MODES

A VR application may also support a range of navigation modes. Users may walk, run or jump through movement tracking or interfaces such as menus or wands. Virtual reality headsets, such as the HTC Vive, allow for both tracking and use of a wand for navigation. It is also possible for a VR user to remain stationary and zoom into or rotate a model, as well as ‘jump’ to pre-set viewpoints within a model. However, novice VR users typically experience a greater sense of presence in the environment when they actually walk, rather than ‘walk’ virtually, although the latter action tends to provide a greater sense of presence than ‘flying’ through a VR model (Usoh et al., 1999).
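The distinction between walking, flying and jumping to pre-set viewpoints can be sketched as a simple position update. The mode names and function signatures below are illustrative assumptions, not part of any particular VR toolkit.

```python
def step(position, direction, mode, speed=1.0):
    """Advance a user's position one step, constrained by the navigation mode."""
    x, y, z = position
    dx, dy, dz = direction
    if mode == "walk":  # virtual walking: movement stays on the ground plane
        return (x + dx * speed, y, z + dz * speed)
    if mode == "fly":   # flying: unconstrained movement on all three axes
        return (x + dx * speed, y + dy * speed, z + dz * speed)
    raise ValueError(f"unknown navigation mode: {mode}")

def jump_to(viewpoints, name):
    """'Jump' directly to a pre-set viewpoint instead of travelling through the model."""
    return viewpoints[name]
```

The ‘walk’ branch discards the vertical component of the movement vector, which is one simple way to express the constraint that distinguishes walking from flying.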

2.2.2.3 GUIDES AND USER AIDS

Quite often, unfettered navigation through a VR environment can make users feel disoriented or lost. As a result, the design of any virtual experience should consider model boundaries and levels of detail, and include cues and tools that help users navigate and access information. To enhance performance, many VR applications provide users with a range of tools, including access to maps, exocentric views, markers and system-wide indicators (see e.g. Figure 2.9). In addition, users may be able to display only selected parts of a model and to choose between perspective and orthographic views, with a separate plan view window that can be turned on and off. Some VR applications also provide the ability to measure objects and cut section planes through a model. Compared with conventional maps, however, the information presented in a virtual world can increase the difficulty of performing certain tasks: VR users may become overwhelmed and fail to filter out non-essential information if they are not aware of what is most important in the execution of given tasks (Goerger et al., 1998).

2.9 The interface of a VR application may include navigation aids. This application, developed by Matsushita, enables users to explore a home design before purchase


As VR is relatively new, to date, few established sources exist to assist novice users in navigating complex virtual environments. The inherent differences between moving through virtual and real environments can affect user performance in even simple tasks, such as navigation (Satalich, 1995). For example, novice users may often veer off course, become disoriented or collide with virtual objects. Because of the difficulty users often face in identifying their location in a model, they may also struggle with simply moving through a model and lose focus on the task at hand (Darken and Sibert, 1993). Early graphical user interfaces for immersive VR applications often featured a virtual ‘hand’ to guide users. However, manipulation of this aid was not standardized, and its functionality in pointing or moving towards objects varied across applications. The design of later graphical user interfaces for VR applications was influenced by 2D interfaces, and as a result, familiar elements such as menus often appear in newer VR applications (Sherman and Craig, 1995). The extent to which performance aids are useful also depends on a user’s skill level: users with more advanced verbal abilities may prefer narration features, whilst those with more sophisticated map reading skills may prefer using 3D maps (Chen and Stanney, 1999). Early exploration of VR focused on developing city models for route planning and instruction (Figure 2.10).

2.10 A model of Tokyo, Japan, by Arcus Software, used for 3D route instruction. Source: Arcus Software.

In addition, including landmarks in a model can aid navigation (Ruddle et al., 1997), and user-defined bookmarks have also been suggested as an effective navigational tool (Edwards and Hand, 1997). Tools for marking changes and detecting inconsistencies between design components are also steadily being introduced in professional VR applications. In particular, more sophisticated interfaces that allow users to undo commands and access detailed information about files may assist construction sector users. The ability to add comments to a model or identify all elements of a particular type may also enhance construction performance.
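A landmark or bookmark aid of the kind suggested above can be sketched in a few lines. The bookmark names and positions here are hypothetical; the idea is simply that a named set of reference points gives a disoriented user a cue to where they are.

```python
import math

# User-defined bookmarks (in the spirit of Edwards and Hand, 1997),
# stored as named positions in the model's coordinate system
BOOKMARKS = {"entrance": (0.0, 0.0, 0.0), "atrium": (20.0, 0.0, 5.0)}

def nearest_landmark(position, landmarks):
    """Name of the landmark closest to the user -- a simple orientation cue."""
    return min(landmarks, key=lambda name: math.dist(position, landmarks[name]))
```

For instance, `nearest_landmark((18.0, 0.0, 4.0), BOOKMARKS)` returns `"atrium"`, which could then be shown to the user or offered as a jump target.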

2.3 Development of virtual and augmented reality systems

Over the last few decades, the exponential growth of computing power has made virtual and augmented reality systems smaller, cheaper and more flexible, and has improved the sensitivity of input devices and the resolution of output devices. This growth in computing power has also greatly increased the availability of VR systems for built environment users and has opened new areas of application. Table 2.2 shows a timeline of the major developments and prominent shifts in digital project delivery (Lobo and Whyte, 2017; Whyte and Levitt, 2011), developments in information models and standards, and the hardware, software and interfaces that are used in VR and AR systems for built infrastructure. To illustrate the extent of this growth, Enzer (2016) compiled information on computing costs: the cost of a gigabyte of storage, which was $700,000 in 1981, was $0.04 in 2016; a GFLOP2 of processing power, which cost $1.1 trillion in 1961, was $0.08 in 2016; and a Mbps of transmission speed, which cost $1,200 in 1998, was $0.06 in 2016.
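The scale of these cost reductions is easy to check. Using Enzer’s storage figures from the paragraph above, the sketch below computes the overall fold-change and the implied average yearly improvement factor (the function names are my own).

```python
def fold_change(old_cost, new_cost):
    """How many times cheaper overall."""
    return old_cost / new_cost

def yearly_factor(old_cost, new_cost, years):
    """Average factor by which the cost fell each year."""
    return (old_cost / new_cost) ** (1 / years)

# A gigabyte of storage: $700,000 in 1981, $0.04 in 2016 (Enzer, 2016)
print(fold_change(700_000, 0.04))                            # about 17,500,000-fold cheaper
print(round(yearly_factor(700_000, 0.04, 2016 - 1981), 2))   # about 1.61x cheaper per year
```

A sustained 1.6-fold improvement every year for 35 years is what compounds into a seventeen-million-fold reduction.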

Table 2.2 Development of enabling technologies


2.3.1 Pre-1980: early development of VR-related technologies

Whirlwind, the first computer designed to instantly respond to user commands, was developed by MIT in the late 1940s and early 1950s as part of Project SAGE, a programme that simulated a computer-based air-defence system against Soviet long-range bombers. Whirlwind began as a flight simulator but evolved into a more general-purpose machine (Waldrop, 2000). Although it featured only two banks of 1024 16-bit words of memory, it weighed 10 tons and consumed 150 kW of power. The 1960s saw the development of what became known as the computer ‘mouse’ (English et al., 1967). At that time, pioneering MIT computer scientist Ivan Sutherland wrote in his doctoral thesis that “In the past we have been writing letters to, rather than conferring with our computers” (Sutherland, 1963: 8). Sutherland created an early interactive graphical system called Sketchpad, which allowed users to draw vector lines on a computer screen with a light pen. Later work by Sutherland (1965, 1968) developed the concept of an immersive 3D computer environment viewed through a head-mounted display. Flight simulator research by the US Air Force and NASA also contributed to an understanding of the technical requirements of virtual reality, although this work was not published until much later (Earnshaw et al., 1993; Furness, 1986; McGreevy, 1990). In the 1970s, the first interactive architectural walkthrough system was developed at the University of North Carolina (Brooks, 1986, 1992). At this time, Myron Krueger (1991) developed video projection methods he described as ‘artificial reality’. Now based in Utah, Ivan Sutherland and his colleague David Evans founded the graphics company Evans and Sutherland (E&S), which commercialized the first head-mounted display, and led a team of researchers in exploring how to render an image from a set of geometric data by removing hidden lines and adding colours, textures, lights and shading (e.g. Sutherland et al., 1974).

2.3.2 1980–1989: commercialization of early virtual reality

The first commercial VR packages became available in the late 1980s with the founding of W Industries in the UK and VPL Research in the US. VPL Research’s chief executive, Jaron Lanier, is credited with coining the term virtual reality to describe the immersive, interactive simulations that users experienced through his company’s products, such as the DataGlove, a tracked glove used alongside head-mounted displays. After that, video game applications became popular on early personal computers such as the BBC Micro, Commodore 64 and Atari ST. For example, the game Elite, which ran on 8-bit machines, presented a 3D universe. Also at this time, the VR hardware supplier Silicon Graphics (SGI) was founded by James Clark, a former student of Evans and Sutherland who was by then a professor at Stanford, together with his graduate students.

2.3.3 1990–1999: commercialization of VR peripherals

Many of the peripheral applications associated with virtual reality were first commercialized in the 1990s, when its capabilities increased both for high-end VR facilities and personal computing applications. At the same time, VR software protocols were also being developed. For example, SGI developed the non-proprietary Open Graphics Library, or OpenGL, which provided an application programming interface (API) for rendering 2D and 3D graphics. In terms of high-end VR facilities, the CAVE (or cave automatic virtual environment) was developed at the University of Illinois (Cruz-Neira et al., 1993). Around this time, the VR hardware supplier Fakespace also introduced an immersive table-top display (Figure 2.11). In addition, this decade saw early work to prototype applications of augmented reality (Figure 2.12) (Feiner et al., 1995, 1997). For personal computing applications, the growing video game market drove significant developments in graphics capabilities, improving the ability of personal computers to rapidly update 3D imagery. In the mid-1990s, the Virtual Reality Modelling Language (VRML) was developed for creating virtual worlds networked through the Internet (Bell et al., 1995), and later became the international standard VRML 97 (ISO/IEC 14772–1). Desktop VR also became popular, and the growth in the laptop market raised the potential for a shift towards mobile devices.
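VRML 97 worlds are plain text, so generating one programmatically is straightforward. The sketch below builds a minimal scene — a single red box — starting from the standard ‘#VRML V2.0 utf8’ header; the function and file names are illustrative.

```python
def minimal_vrml_world() -> str:
    """Build a minimal VRML 97 scene containing one red box."""
    return "\n".join([
        "#VRML V2.0 utf8",  # mandatory header line identifying a VRML 97 file
        "Shape {",
        "  appearance Appearance { material Material { diffuseColor 1 0 0 } }",
        "  geometry Box { size 2 2 2 }",
        "}",
    ])

# VRML worlds conventionally use the .wrl extension
with open("hello.wrl", "w") as handle:
    handle.write(minimal_vrml_world())
```

The resulting `.wrl` file could be opened in any VRML 97-capable browser or viewer, which is what made the format suitable for sharing virtual worlds over the early web.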

2.11 Workbenches such as Fakespace’s ImmersaDesk were first commercialized in the 1990s, providing a new way for industrial users to interact with complex VR data Source: Fakespace Inc.


2.12 An augmented reality system at Columbia University’s Computer Graphics and User Interfaces Laboratory, which pioneered AR approaches. The system tracks user movements and conveys 2D and 3D information about an environment through a transparent head-mounted display. Reproduced from Feiner, MacIntyre, Haupt and Solomon (1993).

2.3.4 2000–2009: online VR, 3D movies and mobile phones

In the early 2000s, data input techniques advanced rapidly, particularly in the areas of 3D laser scanning and geometric image and film capture. For the first time, VR models could be constructed from as-built data as well as CAD data. New modelling and development tools also became available: Google SketchUp, a low-cost 3D modelling application with an intuitive interface, was made widely available, and ARToolKit, released by the HITLab in 1999, provided open libraries for the development of augmented reality applications. In terms of VR research, more rapid development was enabled by a culture of sharing code and workflows and demonstrating prototypes online (e.g. Figure 2.13), which was facilitated through the development of creative commons licenses and the GitHub software development platform, as well as the availability of blogs and wikis. In addition, the launch of the first generation iPhone in 2007 and the availability of third generation wireless networks led to a rapid uptake of smartphones and their use in a wide range of applications. Furthermore, 3D movies such as Avatar became more popular in the 2000s and helped make 3D stereoscopic cinema experiences mainstream.

2.13 Le Deuxième Monde is an early virtual world created by Canal + Multimedia, which allowed users to interact online through their avatars

2.3.5 2010–2019: a new generation of VR devices

Recently, a new generation of VR devices has made VR environments increasingly accessible from home or the workplace (Slater, 2014). The crowdfunded development of the Oculus Rift,3 initiated through a campaign in 2012, marked the start of a new generation of headsets with stereoscopic displays and six degrees of freedom. At the same time, new AR hardware was developed: Google released its Google Glass headset,4 which weighed 36g and featured a touchpad, camera and LED display. It was available in developer versions starting in 2013, but was discontinued shortly after 2015. Other headsets followed, with the HTC Vive5 offering an immersive VR experience, and Microsoft’s Hololens6 and Daqri7 headsets further demonstrating the potential for AR. Google Cardboard,8 released in 2014, provided a very low-end VR device that works with smartphones and is made of cardboard and two lenses. In addition, stereoscopic content has become available online through sites such as the 3D YouTube channel.9 Major game developers have also moved into virtual reality products, such as the PlayStation VR10 headset and Samsung Gear VR,11 and Apple has released ARKit, an application programming interface (API) to help developers build AR content.

In the construction industry, interest in VR technologies has grown along with the increasing use of BIM, which has led to the generation of large datasets that often need to be collectively visualized. In addition, research on new multi-user systems has sought to change their design parameters, for example by reducing equipment footprints, noise and heat and enabling functionality in normal room lighting (DeFanti et al., 2011), and by making collaborative VR applications mobile (Parfitt and Whyte, 2014).

Recently, there has been a significant convergence in the growth of digital technologies alongside their simultaneous use, as illustrated in a recent report by the Boston Consulting Group for the World Economic Forum (Figure 2.14). Thus, the use of augmented and virtual reality systems has advanced through integration with BIM; tracking and sensing devices (often discussed as the Internet of Things, including radio-frequency identification [RFID] tags and wireless sensor networks); 3D scanning and data capture (including laser scanning and image and heat recognition); and big data and analytic technologies such as machine learning. Furthermore, the growing use of augmented reality is also benefitting from developments in wearable computing applications, gesture devices, mobile hardware and global positioning systems (GPS). The development of VR systems is thus part of a broader change in a set of related digital technologies. The past decade has seen massive growth in the use of Wi-Fi and mobile devices for watching videos and experiencing virtual reality. The amount of data that we create and copy globally each year has also grown, and is projected to increase from 125 exabytes in 2005 to 44 zettabytes (44 trillion gigabytes) by 2020 (Gantz and Reinsel, 2013).
However, the benefits of digital technologies have also become increasingly questioned in terms of sustainability and security.


2.14 Integration of digital technologies across the design, construction and operations fields Source: adapted from Gerbert et al., 2016.

2.4 The future of VR

Of course, little about the future is certain, but writer and essayist William Gibson’s (1999) maxim that “The future is already here – it’s just not evenly distributed” is a good way to begin to consider the future of virtual reality for built environment projects. As we become more mature users of digital technologies, we have to consider privacy and security issues, as well as increasing technology integration and life-cycle integration. We can imagine potential scenarios in which, rather than viewing digital data on a monitor or smartphone, displays may be fed directly into the retina of a viewer or through specialized glasses. We can also anticipate an increased convergence of VR and related technologies that combine sensors, BIM, big data and artificial intelligence with VR in novel applications. We may also think more about the carbon footprint of these technologies. As we will discuss further in the final chapter, we can anticipate technological developments such as:

• More integration of VR with BIM;
• More wearable and auto-stereoscopic devices;
• More data capture and video;
• More sensory-rich applications;
• New platforms that support a range of uses and applications; and
• Distributed virtual environments and teleoperations.

We also face many choices with respect to the use of such VR technologies. For example, digital technologies could help us better understand the interactions between natural and built environments, giving us a broader view of the physical world, or the built environment itself may become more cyber-physical in nature, making us ever more insulated from our physical surroundings. Throughout the rest of this book, we consider the trajectory of developments and possible futures for the use of VR in the operation, design and construction of the built environment, and we return to what may lie ahead for VR in the final chapter.

Notes

1 https://web.sbe.hw.ac.uk/fbosche/projects-mice.html
2 Giga-floating point operations (GFLOPS) is a measure of computer performance.
3 www.oculus.com/
4 https://en.wikipedia.org/wiki/Google_Glass
5 www.vive.com/
6 www.microsoft.com/hololens
7 https://daqri.com/
8 https://vr.google.com/cardboard/
9 www.youtube.com/user/3D
10 www.playstation.com/playstation-vr/
11 www.samsung.com/us/mobile/virtual-reality/gear-vr/gear-vr-with-controllersm-r324nzaaxar/


References

Arnheim, R., 1954. Art and visual perception. Faber, London.
Bell, G., Parisi, A., Pesce, M., 1995. The Virtual Reality Modeling Language Version 1.0 Specification, http://www.martinreddy.net/gfx/3d/VRML.spec
Bosselmann, P., 1999. Representation of places: Reality and realism in city design. University of California Press, Berkeley.
Bowman, D.A., McMahan, R.P., 2007. Virtual reality: How much immersion is enough? Computer 40, 36–43. doi:10.1109/MC.2007.257
Brooks, F.P., 1992. Six generations of building walkthroughs (Final Technical Report, Walkthrough Project No. TR92–026). National Science Foundation.
Brooks, F.P., 1986. Walkthrough: A dynamic graphics system for simulating virtual buildings. Presented at the Workshop on Interactive 3D Graphics, Chapel Hill, NC.
Chen, J.L., Stanney, K.M., 1999. A theoretical model of wayfinding in virtual environments: Proposed strategies for navigational aiding. Presence: Teleoperators and Virtual Environments 8, 671–685.
Clark, A., 1998. Being there: Putting brain, body, and world together again. MIT Press, Cambridge, MA.
Cruz-Neira, C., Sandin, D.J., DeFanti, T.A., 1993. Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Presented at the 20th Annual Conference on Computer Graphics and Interactive Techniques, Anaheim, CA, pp. 135–142.
Darken, R.P., Sibert, J.L., 1993. A toolset for navigation in virtual environments. Presented at the ACM Symposium on User Interface Software and Technology, Atlanta, GA, pp. 157–165.
DeFanti, T., Acevedo, D., Ainsworth, R., Brown, M., Cutchin, S., Dawe, G., Doerr, K.-U., Johnson, A., Knox, C., Kooima, R., Kuester, F., Leigh, J., Long, L., Otto, P., Petrovic, V., Ponto, K., Prudhomme, A., Rao, R., Renambot, L., Sandin, D., Schulze, J., Smarr, L., Srinivasan, M., Weber, P., Wickham, G., 2011. The future of the CAVE. Open Engineering 1, 16–37. doi:10.2478/s13531-010-0002-5
Diemer, J., Alpers, G.W., Peperkorn, H.M., Shiban, Y., Mühlberger, A., 2015. The impact of perception and presence on emotional reactions: A review of research in virtual reality. Frontiers in Psychology 6, 26.
Dorta, T., LaLande, P., 1998. The impact of virtual reality on the design process. Presented at the Proceedings of the ACADIA Conference 1998, Cincinnati, OH.
Drascic, D., Milgram, P., 1996. Perceptual issues in augmented reality, in: Bolas, M.T., Fisher, S.S., Merritt, J.O. (Eds.), Presented at the SPIE Volume 2653: Stereoscopic Displays and Virtual Reality Systems III, San Jose, CA, pp. 123–134.
Earnshaw, R.A., Gigante, M.A., Jones, H., 1993. Virtual reality systems. Academic Press, London.
Edwards, J.D.M., Hand, C., 1997. MaPS: Movement and planning support for navigation in an immersive VRML browser. Presented at the VRML 97: Second Symposium on the Virtual Reality Modeling Language, Monterey, CA, pp. 65–74.
English, W., Engelbart, D., Berman, M., 1967. Display-selection techniques for text manipulation. IEEE Transactions on Human Factors in Electronics HFE 8, 21–31.
Enzer, M., 2016. Smart infrastructure. Presented at the Cambridge Centre for Smart Infrastructure and Construction (CSIC), London.
Ewenstein, B., Whyte, J., 2009. Knowledge practices in design: The role of visual representations as “epistemic objects.” Organization Studies 30, 7–30.
Ewenstein, B., Whyte, J., 2007. Beyond words: Aesthetic knowledge and knowing in organizations. Organization Studies 28, 689–708.
Feiner, S., MacIntyre, B., Haupt, M., Solomon, E., 1993. Windows on the world: 2D windows for 3D augmented reality, in: Proceedings of the 6th Annual ACM Symposium on User Interface Software and Technology (UIST ’93). ACM, New York, NY, pp. 145–155. doi:10.1145/168642.168657
Feiner, S., MacIntyre, B., Höllerer, T., Webster, T., 1997. A touring machine: Prototyping 3D mobile augmented reality systems for exploring the urban environment. Presented at the International Symposium on Wearable Computers (ISWC) ’97, Boston, MA, pp. 74–81.
Feiner, S., Webster, A., Krueger, T., MacIntyre, B., Keller, E., 1995. Architectural anatomy. Presence: Teleoperators and Virtual Environments 4, 318–325.
Furness, T.A., 1986. The super cockpit and its human factors challenges. Presented at the Human Factors Society, 30th Annual Meeting, Dayton, OH.
Gantz, J., Reinsel, D., 2013. The digital universe in 2020: Big data, bigger digital shadows, and biggest growth in the Far East – United States, https://www.emc.com/collateral/analyst-reports/idc-digital-universe-united-states.pdf
Gerbert, P., Castagnino, S., Rothballer, C., Renz, A., Filitz, R., 2016. The transformative power of building information modeling: Digital in engineering and construction, https://www.bcgperspectives.com/content/articles/engineered-products-project-business-digital-engineering-construction/
Gibson, W., 1999. The science in science fiction. Broadcast on Talk of the Nation, November 30.
Gigante, M.A., 1993. Virtual reality: Enabling technologies, in: Earnshaw, R.A., Gigante, M.A., Jones, H. (Eds.), Virtual reality systems. Addison Wesley, London, pp. 15–28.
Goerger, S.R., Darken, R.P., Boyd, M.A., Gagnon, T.A., Liles, S.W., Sullivan, J.A., Lawson, J.P., 1998. Spatial knowledge acquisition from maps and virtual environments in complex architectural spaces. Presented at the 16th Applied Behavioural Sciences Symposium, Colorado Springs, CO, pp. 6–10.
Gombrich, E.H., 1982. Mirror and map: Theories of pictorial representation, in: Gombrich, E.H. (Ed.), The image and the eye. Phaidon, Oxford, pp. 172–214.
Goodwin, C., 2000. Practices of seeing: Visual analysis: An ethnomethodological approach, in: van Leeuwen, T., Jewitt, C. (Eds.), Handbook of visual analysis. Sage Publications, London, pp. 157–182.
Henderson, K., 1999. On line and on paper: Visual representations, visual culture and computer graphics in design engineering. MIT Press, Boston.
Ijsselsteijn, W.A., de Ridder, H., Freeman, J., Avons, S.E., 2000. Presence: Concept, determinants and measurement. Presented at the SPIE, Human Vision and Electronic Imaging V, San Jose, CA, pp. 3959–3976.
Krueger, M.W., 1991. Artificial reality II. Addison-Wesley, Wokingham.
Lanzara, G.F., 2009. Reshaping practice across media: Material mediation, medium specificity and practical knowledge in judicial work. Organization Studies 30, 1369–1390.
Latour, B., 1986. Visualization and cognition. Knowledge and Society 6, 1–40.
Lawrence, R.J., 1987. House planning: Simulation, communication and negotiation, in: Lawrence, R.J. (Ed.), Housing dwellings and homes: Design theory, research and practice. John Wiley & Sons, Inc., Chichester, UK, pp. 209–240.
Leigh, J., Johnson, A.E., 1996. CALVIN: An immersimedia design environment utilizing heterogeneous perspectives. Presented at MULTIMEDIA ’96, Third IEEE International Conference, Washington, DC.
Lobo, S., Whyte, J., 2017. Aligning and reconciling: Building project capabilities for digital delivery. Research Policy 46, 93–107.
Lombard, M., Ditton, T., 1997. At the heart of it all: The concept of presence. Journal of Computer-Mediated Communication 3. doi:10.1111/j.1083-6101.1997.tb00072.x
Macheachren, A.M., 1995. How maps work: Representation, visualisation and design. The Guilford Press, New York and London.
Maftei, L., Harty, C., 2015. Designing in caves: Using immersive visualisations in design practice. International Journal of Architectural Research: ArchNet-IJAR 9, 53–75.
McGreevy, M.W., 1990. The virtual environment display system. Presented at the 1st Technology 2000 Conference, NASA, Washington, DC, pp. 3–9.
Meyer, R.E., Höllerer, M.A., Jancsary, D., Leeuwen, T.V., 2013. The visual dimension in organizing, organization, and organization research. The Academy of Management Annals 7, 487–553.

41

Otto, G., 2002. An integrative media approach to virtual reality system design and assessment. College of Communications, The Pennsylvania State University, University Park, PA. Parfitt, M., Whyte, J., 2014. Developing a mobile visualization environment for construction applications. Presented at the 2014 International Conference on Computing in Civil and Building Engineering, 23–25 June 2014, Orlando, Florida, pp. 825–832. Radford, A., Woodbury, R., Braithwaite, G., Kirkby, S., Sweeting, R., Huang, E., 1997. Issues of abstraction, accuracy and realism in large scale computer urban models. Presented at CAAD Futures 1997, Munich, Germany, pp. 679–690. Ruddle, R.A., Payne, S.J., Jones, D.M., 1997. Navigating buildings in “desk-top” virtual environments: Experimental investigations using extended navigational experience. Journal of Experimental Psychology: Applied 3, 143–159. Satalich, G., 1995. Navigation and wayfinding in virtual reality: Finding the proper tools and cues to enhance navigational awareness (unpublished Master’s thesis). Washington. Scaife, M., Rogers, Y., 1996. External cognition: How do graphical representations work? International Journal of Human-Computer Studies 45, 185–213. Schroeder, R., Huxor, A., Smith, A., 2001. Activeworlds: Geography and social interaction in virtual reality. Futures 33, 569–587. Sherman, W.R., Craig, A.B., 1995. Literacy in virtual reality: A new medium. ACM SIGGRAPH Computer Graphics 29, 37–42. Slater, M., 2014. Grand challenges in virtual environments. Frontiers in Robotics and AI 1. doi:10.3389/frobt.2014.00003 Steuer, J., 1992. Defining virtual reality: Dimensions determining telepresence. Journal of Communication 42, 73–93. Strati, A., 1999. Organization and aesthetics. Sage Publications, London. Sutherland, I., 1968. A head-mounted three-dimensional display. Presented at the Fall Joint Computer Conference, AFIPS, San Francisco, CA, pp. 757–764. Sutherland, I., 1965. The ultimate display. 
Presented at the IFIP Congress, New York, pp. 506–508. Sutherland, I., 1963. Sketchpad, a man-machine graphical communication system (Ph.D. thesis). MIT, MA. Sutherland, I.E., Sproull, R.F., Schumacker, R.A., 1974. A characterization of ten hiddensurface algorithms. ACM Computing Surveys (CSUR) 6, 1–55. Thorndyke, P.W., Hays-Roth, B., 1982. Differences in spatial knowledge acquired from maps and navigation. Cognitive Psychology 14, 560–589. Usoh, M., Arthur, K., Whitton, M.C., Bastos, R., Steed, A., Slater, M., Brooks Jr, F.P., 1999. Walking> walking-in-place> flying, in virtual environments. Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, ACM Press/Addison-Wesley Publishing Co., pp. 359–364. Vasari, G., 1568. The lives of the artists, translated with an introduction and notes by Julia Conaway Bondanella and Peter Bondanella, 1991. ed. Oxford University Press, Oxford and New York. Waldrop, M.M., 2000. Computing’s Johnny Appleseed. Technology Review January/February, https://www.technologyreview.com/s/400633/computings-johnny-appleseed/ Whyte, J., Levitt, R., 2011. Information management and the management of projects, in: Morris, P., Pinto, J., Söderlund, J. (Eds.), Oxford handbook of project management. Oxford University Press, Oxford, pp. 365–387. Whyte, J., Zhou, W., Sacks, R., Haffegee, A., 2012. Building safely by design. Final Report to IOSH, Reading. Witmer, B.G., Singer, M.J., 1998. Measuring presence in virtual environments: A presence questionnaire. Presence: Teleoperators and Virtual Environments 7, 225–240. Wooton, H., 1624. The elements of architecture . . . from the best authors and examples. Iohn Bill, London.

42

User experience in VR systems

Chapter 3

Visualizing city operations

Cities are places in which people work, rest and play. Most people globally live in urban areas (UN, 2014), and rapid population growth in cities presents built environment professionals with the challenge of responding to growing demands for housing, transportation and resources. Interventions to build new buildings and infrastructure are made in the context of the continually inhabited places within the existing built environment.

Through initiatives to digitally model and simulate the physical environment, its users and services, VR systems provide professionals with visual interfaces to digital information about this environment. Across scales, from buildings and neighbourhoods to infrastructure and cities, virtual models may be used to democratize planning and test a range of future scenarios, or to understand the operation of the built environment and train emergency response, maintenance and operations personnel. This is done by both modelling the physical attributes of the built environment and simulating behaviours within it.

Over the last decades, professionals have been experimenting with uses of VR for operating, using, maintaining and planning the built environment. In this chapter, we consider this rich set of applications and how VR systems are used to present information about urban regions, cities and neighbourhoods, both to improve the operation of existing buildings and infrastructure and to plan new interventions.

In operating the built environment, the city can be seen as a complex system, or rather a system-of-systems (e.g. Kasai et al., 2015), involving different transport, water, waste, power and communication systems. We rely on these systems to work efficiently, and although they are typically designed separately, they are open and interdependent systems, interacting with each other and with the natural environment. As cities seek to integrate data from the diverse utility companies and municipal departments, there is the potential for professionals to experience city operations through virtual and augmented reality, sharing experience across organizational and disciplinary boundaries to make better decisions.


Emergency response management applications, for example, have been considered since the early days of VR for city operations in the 1990s, when they were proposed as a use for an early model of the Los Angeles area (Jepson et al., 2000). In an emergency, rescue services face pressure to work smart and fast, with little available time to assimilate important 3D information regarding buildings, the location of combustibles, underground services or overhead landing patterns. In such instances, professionals just need to be able to ‘see it’, or as one consultant put it:

You don’t just need the data – you need to visualize this entire thing because you need to send the right crew with the right equipment for their own safety, and for the neighbourhood’s safety.

Throughout the chapter, we will look at a number of potential VR applications for training emergency response, maintenance and operations personnel, and increasingly for understanding sustainability in the built environment.

Models used in planning may have quite different information content from the models used to understand instances of built environment operation. For planning, a distinguishing characteristic of VR systems is that they enable one to experience the city both at street level and from an interactive bird’s-eye view. Jacobs (1961) raised the value of experiencing the city at street level in her critique of modernist planning, arguing that the bird’s-eye view of a city is blind to the real beauty and power of great cities. She argued that great cities are not those that are just well zoned with the right amenities, but those in which the streets are interesting places to be. Yet, by constraining the user to a viewpoint outside the model, traditional planning representations ignore the quality of streets as places.
Long before the term ‘virtual reality’ was invented, the desire for participatory planning processes motivated a search for new forms of representation, beyond maps, models and plans, to enable this street-level view. The Environmental Simulation Center in New York, US, experimented with Hollywood-style special effects to facilitate planning decisions. They filmed photorealistic walkthroughs of potential places to facilitate decision-making through brokering and consensus building. Video footage was created using physical models and large-scale machinery for lighting and cameras that moved through the model at eye-level (Figures 3.1 and 3.2).

The question, then, is whether VR systems can engage citizens in this decision-making process and, instead of top-down interventions, encourage a bottom-up strategy for defending plurality within cities. By focusing attention on urban areas viewed at street level from a range of viewpoints, rather than the view from above, VR systems may allow greater participation and better decision-making in the planning process. As we will see in this chapter, some municipalities, not-for-profit planning organizations and companies see the use of virtual reality as part of a strategy to get input from citizens on planning issues, while others are using it to market new properties to potential buyers.


3.1 The gantry-based simulation system used at the Environmental Simulation Center Source: Michael Kwartler, Environmental Simulation Center, Ltd.

3.2 Lighting a model for video presentation, using the gantrybased simulation system at the Environmental Simulation Center Source: Michael Kwartler, Environmental Simulation Center, Ltd.

The sections of this chapter chart the development of approaches to visualizing city operations that accompanied an exponential growth in data and computing power: starting from the early CAD-based VR city models of the 1990s (section 3.1), which used gaming techniques and geo-location to simulate the movement of people, information and goods; through the multi-use urban models used for training and simulating dynamic operations in the 2000s (section 3.2), situated around developments in gaming technology and navigation tools such as Google Earth and Google Street View; to the new cyber-physical interrelationships of the 2010s (section 3.3), with VR as an interface to a new generation of city models used for simulating multiple information flows to achieve smart, sustainable and resilient infrastructure. Over the decades, the professional challenge has grown from one of simply providing information to one of taking a security-minded approach to understanding what information is necessary for decision-making. The chapter concludes (section 3.4) by considering future developments in the use of VR systems for visualizing the city and its operations.

3.1 1990–1999: early city models

The emergence of detailed digital models of cities is a relatively recent phenomenon. Through the 1990s, early experiments with building virtual cities were based in CAD and gaming techniques and used virtual models as interfaces to databases of geographic information, which highlighted the potential for future use.

3.1.1 Developing city models

Large-scale urban CAD models, built as collective projects by architectural students in Glasgow and Bath (UK) in the 1970s and 1980s, were used to create proof-of-concept VR models in the 1990s (Bourdakis and Day, 1997; Ennis et al., 1999). Inspired in part by the intuitive interfaces available in gaming applications, viewing of these proof-of-concept models was made possible by developments in computer workstations with suitable graphics cards and processing capability. As the underlying CAD data were geometrically complex, the data were translated and optimized for viewing in the VR system. Yet such work was manual, involving setting up models with different levels of detail that could be swapped depending on the view, and relatively crude, as objects often continued to be rendered even when they were not in view. The resultant model of Bath, for example, covered 2.5 × 3.0 km of the cityscape and consisted of over three million polygons, with 160 sub-models, each with four levels of detail (Bourdakis and Day, 1997). Such models demonstrated that VR could be useful in the planning process, enabling the visualization of different planning solutions.

At the same time in the US, the Urban Simulation Team at the University of California, Los Angeles (UCLA) took a different approach. The UCLA models were not built exclusively from CAD data; the team instead drew on approaches from the flight and driving simulator communities, using primitive forms and texture mapping to build the models (Jepson et al., 1996; Ligget and Jepson, 1995). The resultant models, like those in gaming applications, were less geometrically complex, allowing real-time visualization and interaction with a wider urban area using the available hardware and software. Working with city planners, the team linked these models with GIS software to look at possible applications. Their ambition was to create a high-resolution model of the entire Los Angeles basin area (Figure 3.3). They argued that dynamic movement through a real-time 3D model may be useful for training emergency personnel, and that its use could revolutionize the rescue process by giving emergency services rapid access to information (Jepson et al., 2000).

3.3 The model of Los Angeles, US, was developed using the MultiGen-Paradigm modelling tool to eventually cover the whole basin, an area comprising more than 4,000 square miles
Source: Bill Jepson, Urban Simulation Team, UCLA School of Arts and Architecture, US.
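The level-of-detail mechanism described above, where each sub-model carries several representations and the viewer swaps between them depending on the view, can be sketched in a few lines. This is a hypothetical illustration only: the class, thresholds and sub-model names are invented, not taken from the Bath or UCLA systems.

```python
from dataclasses import dataclass

@dataclass
class SubModel:
    """A city sub-model holding meshes ordered from most to least detailed."""
    name: str
    centre: tuple   # (x, y) position of the sub-model in metres
    lods: list      # e.g. four meshes, as in the 160 sub-models of Bath

def select_lod(sub_model, viewpoint, thresholds=(100.0, 400.0, 1200.0)):
    """Return the mesh to render: nearer sub-models get finer levels of detail."""
    dx = sub_model.centre[0] - viewpoint[0]
    dy = sub_model.centre[1] - viewpoint[1]
    distance = (dx * dx + dy * dy) ** 0.5
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return sub_model.lods[level]
    return sub_model.lods[-1]   # coarsest mesh beyond the last threshold

# A sub-model with four levels of detail, as in the Bath model
abbey = SubModel("abbey", (500.0, 0.0), ["lod0", "lod1", "lod2", "lod3"])
```

A real-time viewer would additionally cull sub-models outside the view frustum; the early systems' lack of such culling is why objects often continued to be rendered even when not in view.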

3.1.2 Planning and stakeholder engagement

Alongside these university-based research initiatives to develop city-wide models, municipalities, non-profit organizations and firms experimented with VR for stakeholder engagement and planning. In the late 1990s and early 2000s, start-ups began to provide related services such as marketing new locations.

Japanese government developer Kodan created a computer visualization to allow people to view a model and freely navigate through the scheme for the new town of Kizu, near Nara in the Kyoto prefecture in Japan (Figures 3.4a–b). The model was used at the opening ceremony and placed at the station to the new town to allow for public viewing and orientation. To aid navigation, the interface incorporated a map showing the field of view. In addition, the viewpoint was reset every five minutes to allow the next person to view the model.

In the US, the telecommunications and real estate industries funded the development of city models to support firms seeking to relocate (Figure 3.5). The company U-Data Solutions sold or licensed its models to companies that needed to see and integrate 3D objects and their associated links to data. Rather than making the models photorealistic, their models were abstract and symbolic where, for example, green areas signified ‘parks’. Use-cases for professional consultancy included helping the facilities manager to assess vacant offices in relation to the available space, and visualizing transportation and amenities before a move in relation to the location of competitors as well as parking.


3.4a Bird’s eye view of the new town of Kizu, Japan Source: CAD CENTER Corp., Japan.

3.4b View of a street within the new town of Kizu, Japan Source: CAD CENTER Corp., Japan.

3.5 Model of Chicago, US, by U-Data Solutions, showing the location of major firms Source: Urban Data Solutions, Inc., www.u-data.com/.

When staff at the Environmental Simulation Center learned about Walkthrough, software from the project management and engineering group Bechtel that allowed real-time interaction in a 3D environment, they tried to use it as a more flexible way of allowing communities to understand planning alternatives. To do this, during 1992–1993, they transitioned from gantry-based photorealism and cardboard models to high-end computer visualization, as further discussed in Box 3.1.

Box 3.1

Environmental Simulation Center, US

An early digital project was the visualization of a new development for Princeton Junction, an area between New York City and Philadelphia on a commuter and Amtrak line. The aim was to combine words, numbers and images so that visuals were backed by information about the places that they represented. A kit-of-parts approach was taken to allow different groups to query and reuse the model (Figure 3.6). For example, the transit authority might want to query the model to ask: What population do you need to use this transit system? Which transit stops do you want to keep open at night?

The flexibility of this digital model and interface was a significant improvement on the movies that the Center had previously used to involve communities in planning. In developing movies, significant effort was expended on things that did not impact planning decisions, such as ways to take the tops off tall building models so that the gantry would not knock them over, whereas with the VR system, there could be more focus on the planning issues. Change was less costly and the approach was more flexible, making it easier to ask questions like “What would the place look like under different conditions?” Rather than using lighting designed for moviemaking, the shadows in the real environment could be represented and discussed with participants in the planning process.

3.6 Kit-of-parts used on the Princeton Junction project Source: Michael Kwartler, Environmental Simulation Center, Ltd.


A project in the late 1990s aimed to develop alternative growth scenarios for an area of Santa Fe, New Mexico. The Environmental Simulation Center designed and modelled new street and neighbourhood patterns as a series of building blocks that met community goals and could be easily incorporated into the community’s existing development patterns. The model that they created was used to show alternative densities in meetings with local residents. One of the densities was seen as too high, but after using the simulation, the local residents selected a density to which they would not have otherwise agreed. The use of virtual reality allowed them to visualize the implications of density and come to a new understanding of the nature of place (Figures 3.7a–b).

3.7a Comparing different options for Santa Fe – buildings at the front or parking at the front Source: Michael Kwartler, Environmental Simulation Center, Ltd.

3.7b Comparing different options for Santa Fe – activities concentrated at the corner or at the mid-block Source: Michael Kwartler, Environmental Simulation Center, Ltd.


Planning decisions are often made using a 2D layout, before looking at the design implications and appearance using urban massing, isometrics and sketching tools. Alternatively, real-time interactive and spatial tools allow planners to look at urban massing in the context of existing developments. Hall (1993) argued that this approach is more objective than using perspective drawings, as the viewpoint is infinitely variable and can be selected by any stakeholder. Bosselmann (1999), however, emphasized that anyone preparing and using images in decision-making must ensure that the representations are open to scrutiny and independent tests.

3.1.3 Operation of the built environment

Early experiments with VR in the 1990s also sought to simulate behaviours, such as the movement of people, vehicles and goods, to facilitate professional decision-making. Engineering consultants at Mott MacDonald were among the early experimenters in using VR to understand people flows within urban areas. They experimented with off-the-shelf commercial VR software, but ultimately developed their own custom PC-based programme to analyse fire egress and people flow, called STEPS – Simulation of Transient Evacuation and Pedestrian movements. Mott MacDonald has since used STEPS to develop specialist expertise in crowd simulation (Figure 3.8). In crowd simulation models within the STEPS software, individuals have pre-set characteristics, which then define how crowds interact and behave. The software could be used to analyse people flow through office blocks, sports stadia, shopping malls and underground stations – any areas where it is necessary to ensure easy transitions during normal operation and rapid evacuation in the event of an emergency.

3.8 Early version of the STEPS software in use for simulating people flow
Source: Mott MacDonald – images created using STEPS software tool.

Transportation was an early use-case for simulating dynamic operations in VR, which enabled organizations to show and better understand vehicle movement in the design of road and rail infrastructure. One such example is the UK Highways Agency’s early VR work on road alignments in 1997.

Late in the 1990s, Bechtel used virtual reality to understand and communicate the impact of sound in its testing of the environmental performance of airports. The project team working on the Atlanta airport and its commuter runway addition developed a virtual model using engineering data to help them visualize noise contours (Figures 3.9a–b). In this instance, it took two weeks to create the model. While such examples show the potential of VR, they continued to be limited by the challenges of data transfer and model-building, areas that were significantly transformed in the 2000s.

3.9a Bechtel model of Atlanta airport showing noise contours
Source: Bechtel – Advanced Visualization/Virtual Reality, San Francisco, CA, US.
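The agent-based approach used in crowd and egress models of the kind discussed in this section can be sketched minimally: individuals carry pre-set characteristics (here just a walking speed), and an exit with limited capacity shapes the collective evacuation time. All names and numbers here are illustrative assumptions, not drawn from the STEPS software.

```python
class Agent:
    """One pedestrian with a pre-set characteristic: walking speed in m/s."""
    def __init__(self, distance_to_exit, walking_speed):
        self.distance = distance_to_exit   # metres; None once evacuated
        self.speed = walking_speed

def simulate_egress(agents, exit_capacity, dt=1.0):
    """Step agents toward the exit; at most exit_capacity pass per time step.
    Returns the total evacuation time in seconds."""
    time = 0.0
    remaining = len(agents)
    while remaining > 0:
        passed = 0
        for agent in agents:
            if agent.distance is None:
                continue
            agent.distance = max(0.0, agent.distance - agent.speed * dt)
            if agent.distance == 0.0 and passed < exit_capacity:
                agent.distance = None   # through the exit
                passed += 1
                remaining -= 1
        time += dt
    return time
```

With four agents 10 m from an exit that passes two people per second, everyone reaches the exit at 10 s but half must queue one further step, so the simulated egress takes 11 s; this queuing effect, scaled up, is what such models expose in stadia and underground stations.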

3.9b Bechtel model of an alternate view of Atlanta airport showing noise contours
Source: Bechtel – Advanced Visualization/Virtual Reality, San Francisco, CA, US.

3.2 2000–2009: multi-use urban models

In the 2000s, substantial growth in computing power led to consumer developments in online VR, 3D movies and mobile phones. The introduction of tools such as Google Earth and Google Street View, in particular, showed the potential to build VR models from as-built information rather than CAD, providing an easy interface for navigating cities (Figure 3.10). The emergence of these tools, along with the growth in database solutions and in solutions that link to data in geographic information systems (GIS), made models less time-consuming to develop and update.

3.10 Google Street View image of San Francisco


3.2.1 Exploring city models

Before the introduction of Google Earth and Google Street View, a number of start-ups and firms explored the use of VR systems with interfaces similar to 3D games to enhance navigation in the built environment. For example, Skyscraper Digital, the visualization group within architecture firm Little & Associates, created a city model of Charlotte, North Carolina (Figures 3.11a–c), and used it within projects for its parent architecture company, as well as for other companies.

3.11a Wire-frame version of a model representing Charlotte, North Carolina. The models were created by Skyscraper Digital and show the geometry of the environment.
Source: Skyscraper Digital, a division of Little and Associates Architects, Charlotte, NC, USA.

3.11b Massing version of a model representing Charlotte, North Carolina. The models were created by Skyscraper Digital and show the geometry of the environment

3.11c Fully rendered version of a model representing Charlotte, North Carolina. The models were created by Skyscraper Digital and show the geometry of the environment

Prior to Google Street View, a Finnish company, Arcus Software, had developed 3D route instruction applications for mobile and Internet uses, showing the potential for business users to leverage interactive 3D to find corporate headquarters, offices and hotels, navigating the landscape and nearby buildings from a pedestrian (or driver’s) perspective (Figure 3.12).

Google Earth and Google Street View overcame the limitations experienced by such early experiments for many applications. However, modelling urban environments in VR can also enable us to walk through models of buildings and environments that no longer exist. Throughout the 2000s, models of ancient architecture increasingly became available online and were displayed at museums, as shown in Figure 3.13. As discussed earlier in Chapter 2, as well as in Box 3.2, learning is required to interact with such models and to understand the extent to which they are the same as the buildings that they represent. In this example, the overall purpose of the Theatron Project was to explore new possibilities for effective teaching using multimedia technologies, specifically the potential of VR modelling. Focusing initially upon the history of European theatre, the project included a prototype multimedia module offering a new and more effective means of teaching than was previously available. The relationship and correspondence between the digital and real environments was crucial to understanding the historical environments.


3.12 A model of Helsinki, Finland, embedded with 3D route instruction applications Source: Arcus Software.

3.13 An image of ancient architecture taken from an interactive 3D model of the Theatre at Epidaurus, Greece, by Theatron Source: Theatron Ltd.

3.2.2 Planning and stakeholder engagement

Throughout the 2000s, Internet growth affected the use of VR in planning and stakeholder engagement. For example, the Porta Susa project in Turin (Caneparo, 2001) was conceived as a shared distributed project, with participants who could meet online as ‘avatars’ to explore the urban environment. Virtual reality thus became used in the planning of cities and urban areas, offering an eye-level experience of new (as-yet-unbuilt) developments, while helping us understand the dynamic functioning of the city.

In the case of the City Library in Gothenburg (Sunesson et al., 2008), VR was used to present various architectural solutions to a jury in an architectural competition. Although the jury found VR to be useful, for the architects it was more troublesome: the effectiveness of using VR depended on the level of detail and, therefore, the preparation of VR guidelines in the competition setting was important.

As new tools increased the realism and detail of digital cities, it is interesting to revisit a mid-20th-century story by Borges (Box 3.2), in which a map the size of the ‘Empire’ raises questions about which representational details, model structures and forms of verisimilitude help with making decisions about the built environment and its context within the broader natural environment. Whereas, in the Borges story, the map is of interest regardless of its application, we found that built environment professionals are less interested in virtual reality itself than in what they can do with it.

Box 3.2

Representations and reality

Virtual and real are entangled in virtual reality. The problematic nature of this relationship between representations and reality is illustrated in a fictional story by Borges (1946) about the Schools of Cartography, which became increasingly skilful at producing large and accurate maps, until eventually they created a ‘Map of the Empire’ so large that it occupied the whole of a province. Yet this disproportionate map was still not realistic enough for its creators, and they became unsatisfied with it. They therefore built a ‘Map of the Empire’ equal to its actual size and coinciding with it at every point. The Schools of Cartography were very pleased. The inhabitants, however, found that the map had no practical use.

In Borges’ story, written in 1946, the map the size of the ‘Empire’ falls into disuse, as it has no function. Following generations with less interest in cartography abandon the map, which is worn down by weather. While ruins of the map could be found in deserts, inhabited by animals and beggars, there was no trace of it in the rest of the ‘Empire’. Years later, Baudrillard (1983) recast this fable, arguing that, had it been written later, people would live in the map, and it would be the real world and not the map that was left to ruin in the deserts.


3.2.3 Operation of the built environment

One way in which VR can help visualize city operations is by demonstrating them dynamically. In the US, MultiGen-Paradigm developed a virtual model of a 2.5-mile section of the Los Angeles 710 Freeway in 2000 to enable planners in the California Department of Transportation to understand the impact of development and retrofit work before construction began. The visualization model contained existing conditions for the freeway segment, as well as the proposed beautification and retrofit elements, thus allowing the planners to review the appearance, scale and compatibility of the proposed freeway improvements and get immediate feedback (Figure 3.14).

3.14 Virtual reality being used for highway design on the Los Angeles 710 Freeway in California
Source: MultiGen-Paradigm.

Interest in the accurate simulation of vehicle movement has grown alongside concerns about safety. Engineering company WS Atkins had several years of experience in using virtual reality for highway projects when, in a collaboration between the road and rail branches of the company, it created VRail as a ‘proof of concept’ tool to assess whether train drivers could see signals (Box 3.3). VRail is a virtual reality tool that gives an accurate simulation of the driver’s view. Most tools that simulate the movement of trains assume that the driver’s eyes follow the centreline of the track, with a certain sideways offset from it. However, this is not strictly accurate. Railway locomotives and coaches are mounted on two ‘trolleys’ called bogies. The driver’s seat is part of the vehicle and is located in front of the leading bogie. It therefore swings further out on bends; the precise distance is determined by the geometry of the track and the rail vehicle. Whereas most systems interpolate the position along the alignment, VRail calculates it accurately, taking into account the track details and the type of vehicle.


Box 3.3

Proof House Junction, UK

WS Atkins’ VRail tool was used by Atkins Rail to check signal visibility at Proof House Junction near Birmingham, UK. As part of the West Coast Main Line modernization, this junction was the subject of major remodelling in late summer 2000, with the aim of reducing journey times by eliminating some conflicting routes. Atkins Rail was working on the project together with a construction company, Carillion, and the railway track management company, Railtrack. Using design and survey data, a working VR model of the junction was created within six weeks and has been further enhanced several times since (Figures 3.15a–b). On a desktop computer, the model runs trains smoothly at true speed, producing upwards of 15 frames per second. Route selection and signal

3.15a View along the track in WS Atkins’ model of Proof House Junction near Birmingham, UK, in VRail Source: WS Atkins – reproduced from Woods (2000) and Kerr (2000).

3.15b View from the train driver’s seat in the WS Atkins’ model of Proof House Junction near Birmingham, UK, in VRail Source: WS Atkins – reproduced from Woods (2000) and Kerr (2000).


displays are set through a virtual signal box, and signals and points can be changed at the click of a mouse. The segments of track are intelligent objects that can be traversed in either direction, and they carry design speed data. The status bar shows a continuous readout of distance along the route, speed, the name of the next signal and the time in seconds until it is reached. The trains can be stopped or reversed, or their speed can be scaled down to give precise timings. Overhead line electrification equipment presents a major source of obstruction to signal visibility. For realism, the model includes many gantries, hanger assemblies, catenary cables and contact cables. The view can be zoomed in or out, so that it is easy to determine whether a signal is visible, despite a screen resolution that is still far inferior to the human eye. If a signal is obscured, say by a gantry leg, the system allows the operator to drag it vertically and sideways, and reports the new offsets in the status bar. The visibility can then be rechecked in seconds.
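The underlying visibility check can be sketched as a line-of-sight query: cast the segment from the driver’s eye to the signal head and test it against obstructing geometry. The sketch below is a simplification, not WS Atkins’ implementation; it models obstructions as axis-aligned boxes and uses the standard ‘slab’ ray–box intersection test, with all coordinates hypothetical.

```python
def sight_line_clear(eye, signal, obstacles):
    """Return True if the segment eye -> signal misses every obstacle box.

    Points are (x, y, z) tuples; each obstacle is an axis-aligned box
    given as (min_corner, max_corner). Slab method: the segment hits a
    box iff the parameter intervals in which it lies between the box's
    min and max planes overlap on all three axes, within [0, 1].
    """
    for lo, hi in obstacles:
        t_enter, t_exit = 0.0, 1.0
        hit = True
        for axis in range(3):
            d = signal[axis] - eye[axis]
            if abs(d) < 1e-12:
                # Segment parallel to these slabs: it can only hit the
                # box if it starts between them on this axis.
                if not (lo[axis] <= eye[axis] <= hi[axis]):
                    hit = False
                    break
            else:
                t0 = (lo[axis] - eye[axis]) / d
                t1 = (hi[axis] - eye[axis]) / d
                t_enter = max(t_enter, min(t0, t1))
                t_exit = min(t_exit, max(t0, t1))
                if t_enter > t_exit:
                    hit = False
                    break
        if hit:
            return False  # some box blocks the view of the signal
    return True
```

With a gantry leg modelled as a box straddling the sight line, the function reports the signal as obscured; shifting the eye–signal pair sideways (as when the operator drags the signal) clears it, which is exactly the recheck-in-seconds workflow described above.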

Exploring VR to understand operations extended to buildings as well. Construction and real estate development company Mortenson, an early user of VR on the Walt Disney Concert Hall project in Los Angeles, California, in the early 2000s, explained that for the complex projects and systems they build and deliver, they would often bring facility management teams into their VR environment to simulate the correct procedures and methods for running and operating the installed systems. This approach is perceived to help reduce operating costs and system faults due to inadequate operation, and thus also to reduce the number of claims arising during the operations stage.

3.3 2010–2019: new cyber-physical interactions and relationships

Notable developments in data-capture and processing technology in the 2010s have led to more accurate models being developed from laser scans, photogrammetry and satellite imaging, alongside the development of consumer applications of virtual and augmented reality. For built environment professionals, the use of digital information has become pervasive and transformative, bringing traditionally separate practices together, often in unanticipated ways. At the same time, technological integration (shown in Figure 2.14) through embedded sensors, connected through the Internet of Things (IoT), is making the built


environment more cyber-physical in nature. Such integration has led to experiments with advanced forms of visualization across scales, from urban regions and cities to neighbourhoods and individual buildings, as charted in recent articles on the uses of virtual reality (Dang et al., 2012; Portman et al., 2015) and augmented reality (Wang et al., 2013). Some fundamental research also uses digital cities as proxies for real cities (Kuliga et al., 2015), for example to examine the relationship between city shape (morphology) and spatial understanding (Shushan et al., 2016); to prototype new visualization solutions; or to explore new uses and experiences and explain aspects of cognition across the boundaries of physical and virtual realities (Fernandes et al., 2015). These experiments seek to reveal underlying patterns, structures and interdependencies across the built and natural environments, to create alternative utopias, or to increase decision-makers’ appreciation of different stakeholder perspectives and empathy with other users.

3.3.1 Building next-generation city models

Many are now seeking ways to populate a city model to create a ‘smart city’. Internationally, city operators are looking to capture reality using a range of photogrammetry and satellite imaging techniques, rather than relying on bringing information together from legacy CAD systems in which the models might be differently structured. The ability to compare and combine a mesh model with consistent information from an object library database becomes important; for example, one method is to hold information in the CityGML format, or a related classification type. As a result, highly detailed and rendered city models can be created, such as the one developed through a national initiative in Singapore, or in cities such as Adelaide. There is significant activity in this area, with cloud-based solutions such as 4D-Mapper1 beginning to specialize in managing very large datasets, and Geoweb3D2 providing an interface through which 3D visualizations can be generated on the fly by merging data-streams. Current studies examine virtual reality and augmented reality not as standalone models, but as interfaces to rich datasets of urban information that may otherwise be managed in different tools. The development of models is becoming less time-consuming, with direct or seamless access to visualize information through commercial CAD and GIS tools, which now offer 3D and stereo functionality, and through developed workflows for importing these models into gaming engines to provide a smoother real-time experience. Wagstaffs, a UK-based integrated communications agency specializing in digital tools for the built environment, developed a range of city models,3 including the one we use as a cover for this book (also in Figure 3.16), using a range of capture technologies; these are used for planning and for viewing schemes within the broader urban environment.


3.16 Image of a London street Source: Wagstaffs

3.3.2 Virtual reality as an interface to city information

Urban areas seek to use the digital information they collate to operate as ‘smart cities’ (Allwinkle and Cruickshank, 2011). Visualization of city data is an explicit part of ‘smart city’ initiatives, where local government may use a visualization facility, such as the one in Bristol, UK. Digital interfaces are used in the professional management of smart cities and regions alongside photogrammetry, social media, data analytics, cloud computing and the Internet of Things. Municipalities support their citizens by investing in network infrastructure, as well as promoting business-oriented development, social inclusion, high-tech and creative industries, social and relational capital and sustainability (Caragliu et al., 2011). The infrastructure support for digital working is thus concentrated in ‘hubs’ in the global economy that can attract more resources. There are ‘smart city’ initiatives on every continent. These include major cities such as Singapore, New York and London; regions such as the Bay Area in Northern California; and innovative smaller cities such as Bristol (England), Aberdeen (Scotland) and Antwerp (Belgium).

VR systems change the way in which we understand and interact with the built environment, creating new kinds of relationships. Using VR systems may increase and supplement our presence and engagement as we dwell in urban places. Or they may isolate us as users in the virtual world. Pokémon Go was the first game with mass-market appeal to take gaming off the computer screen and into the urban realm, experienced in AR through a phone. This indicates a new way of interacting with the hybrid digital-physical environment that many of us increasingly inhabit, in which the devices that many of us carry or wear are used to project information into our surroundings.
Mobile geolocated AR offers a mediated view of the existing urban environment, whereas immersive VR can engage us in the discussion of interventions. As we find ourselves living in an increasingly hybrid physical and digital environment, such technologies and applications raise new questions about our technology choices
(individual and societal; digital and urban) and profoundly change the available options as we construct our social identity in ways that have broader societal consequences. Cities and major public institutions have begun to use VR in marketing. The landmarks of major world cities have become available on the Internet for viewing in virtual reality; one example is the Belfast Go Explore VR 360 app.4 Such applications enable tourists to gain an appreciation of the city before visiting. Museums have taken a similar approach to enhance a visit to a site of interest with real-time information, where a downloadable application provides visitors with additional augmented reality content as they move around the museum and site grounds. VR is also being used by port authorities to understand operations and city development in the Øresund Region harbour area. The model of this port between Copenhagen, Denmark, and Malmö, Sweden, is used in presentations and exhibitions with an Xbox 360 gaming controller to view flows and movements of rail, road and sea traffic (Figures 3.17a–b).

3.17a–b The Øresund Region harbour area Source: OneReality.


3.18 City council members view the city development project in Jätkäsaari, Helsinki Source: Charles Woodward, VTT.

Wagstaffs took a similar approach, using the Unity game engine and crowd simulation to simulate the effects of a projected increase in the number of users on the time it takes to exit the platform at London’s Bond Street Station.5 Using the 3D model also enables testing of sight lines for evacuation procedures and could prove to be a valuable resource for emergency services in the event of a fire or major incident. Building on a long tradition of work on AR and markerless AR, VTT in Finland worked with urban planners and designers through the 2010s (Olsson et al., 2012; Woodward and Hakkarainen, 2011), using AR to examine proposed developments. In the city development project in Jätkäsaari, Helsinki, which included Kämp Tower, the tallest building planned in Helsinki (33 floors), mobile AR visualization was presented to Helsinki city council members in March 2012, with tablets operated by architects and the decision makers, as shown in Figure 3.18. A collaboration between LinkNode and Heriot-Watt University also developed markerless AR to provide accurate urban location positioning for impact assessments and visualization of designs, reusing BIM and digital design data (Figure 3.19).

3.3.3 Virtual reality in heritage

With fast-paced developments in data-capture technology such as LiDAR and photogrammetry, the cultural heritage field is paying increasing attention to the potential of VR to capture existing cultural and historical landmarks, as well as to recreate in great detail those that have been lost over time. These developments are most visible in the creation of virtual museums, campuses, libraries and various archaeological sites. The increasing use of unmanned aerial vehicles (UAVs), or drones, coupled with laser scanners and photogrammetry allows for a relatively


3.19 An example of a new building visualized in the existing urban environment Source: urbanplanar.com.

easy and fast method of creating highly realistic topography and 3D models. A recent example of this ‘virtual heritage’ is the Rome Reborn project, which provides a visual timeline from the Bronze Age to the Middle Ages and can be widely viewed in the Google Earth application (Frischer, 2008). One recent initiative (see Box 3.4) explores the role of multi-sensory virtual environments in representing the anthropological and social aspects of cultural heritage. A consequence of the pursuit of the realistic aesthetic in gaming (even the ‘realism’ of fantasy places) is the dominance of vision as the primary mode of experiencing a virtual space. In reality, we experience the world through all our senses, where sounds and smells in addition to vision play a vital role in how we perceive the environment around us, generating powerful emotions such as nostalgia, fear or joy. Whereas the visual effects of games and VR apps have become more sophisticated and, in the cinema especially, sound is being used to create atmosphere, the development of contextually specific multi-sensory VR environments remains largely unexplored. This has important implications for research into the built environment, where the potential of VR is being investigated more widely, especially at the design and planning stages.

3.3.4 Management, maintenance and operation of the built environment

A new direction in research is concerned with mapping the dynamic flow of people, environmental factors (wind, temperature, air quality, biodiversity) and use of social media across urban landscapes using real-time information (Batty et al., 2015). There is also an interest in what forms of visualization may help us in considering sustainability and the circular economy (Bennadji et al., 2015; El-Shimy et al., 2015), and promoting community involvement in planning (Al-Kodmany,


Box 3.4

Multi-sensory VR: sensations of Roman life

Experiencing and mapping sounds and smells as they disperse into, through and around buildings is a field of enquiry that has yet to be explored beyond a basic evaluation of nuisance or contamination. This was the context of a research project led by Dr. Ian Ewart at the University of Reading, UK – Sensations of Roman Life – that sought to recreate the sights, sounds and smells of an urban environment, based on available evidence and using modern digital technologies. The extensive excavations of the Roman town of Silchester (Fulford et al., 2006) provide detailed evidence of the architecture of one specific neighbourhood (Insula IX), and combining the material remains with archaeological records and Roman texts offered information about activities in specific spaces. For example, seven cesspits were identified, as were the main cooking house and the primary street for trading. There were also substantial animal remains, especially of dogs, which led the investigators to speculate that they were being bred for slaughter. All this evidence allowed the team to identify key sounds and smells and locate them sensibly in and around the buildings.

With the buildings modelled in SketchUp, the Unity gaming engine was used to bring Insula IX of Silchester to life. The 3D building models were located and textured, suitable background objects such as vegetation, fences, furniture, animals and people were added, and a first-person navigation system was imported. The sound system was created using suitable audio files co-located with the visual sound source (e.g. a dog barking with the dog 3D model) and adjusted for stereo, distance drop-off and an occlusion system that reduced the sound level when blocked by walls. Unity also allows invisible trigger areas to be positioned and sized, and a signal to be output via USB. This was the basis of the smell system, which co-located the smell trigger zones with the visual smell source (e.g.
cesspits, fires, etc. – see Figure 3.20). The Unity output trigger was converted, in a control box running an Arduino microcontroller, into power for one specific fan from a bank of ten, each with a distinct aroma capsule in front of it (using bespoke chemical odours, such as ‘woodsmoke’, available commercially; see Figure 3.21). The result was a full-scale projected image, with gaming-style navigation and reasonably realistic sounds and smells, creating a prototype of a truly multi-sensory model of a built environment. We are just beginning to learn how to better incorporate the senses, and the human experience more generally, into the digital environment, and this project is part of that emerging trend.
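The trigger-zone-to-fan chain described in this box can be sketched as follows. This is not the project’s code: the zone names, coordinates and one-byte serial protocol are assumptions, but the structure — the player’s position checked against invisible trigger footprints, with the matching zone mapped to one of the ten fans — mirrors the description above.

```python
# Hypothetical mapping from smell-source zone to fan index (0-9 in the bank).
FAN_FOR_ZONE = {"cesspit": 0, "hearth": 1, "dog_pen": 2}

def active_fan(player_pos, zones):
    """Return the fan index to switch on, or None if no zone is entered.

    `zones` maps a zone name to ((min_x, min_y), (max_x, max_y)), the
    footprint of an invisible trigger area around the smell source.
    """
    x, y = player_pos
    for name, ((min_x, min_y), (max_x, max_y)) in zones.items():
        if min_x <= x <= max_x and min_y <= y <= max_y:
            return FAN_FOR_ZONE[name]
    return None

def fan_command(fan_index):
    """Encode the single byte the control box would decode to drive one fan."""
    return bytes([fan_index])
```

On the hardware side, the Arduino sketch would read this byte and energize the corresponding fan; here `fan_command(2)` simply yields `b"\x02"`, the byte that would be written to the serial port (e.g. with pyserial).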


3.20 Smell trigger zones in the Unity model of Insula IX of Silchester Source: Dr. Ian Ewart, University of Reading, UK.

3.21 VR simulation using custom-developed and 3D-printed scent-releasing fans connected via Arduino to the Unity model Source: Dr. Ian Ewart, University of Reading, UK.

2002; Billger and Thuvander, 2016; Maffei et al., 2016). Several studies have considered the kinds of gestural input that might make interaction easier for these applications (e.g. Roupé et al., 2014), and detected user hand-gestures in city planning applications (Nguyen et al., 2016). Professional applications include:

•	Climate change: There is growing interest in using VR to simulate the natural environment and the impact of the climate on built infrastructure (Bennadji et al., 2015; Vigier et al., 2015).
•	Managing building estates: In VR applications that are dynamically linked to GIS, clicking on a building or object in the interactive 3D view allows the user to access any data related to that building or object that is stored in the GIS application. This has become more common in managing building estates.
•	Infrastructure management: There are new applications of VR in the management of the existing infrastructure asset-base and in planning maintenance and new projects. For example, the UK-based infrastructure owner and operator Anglian Water, through its @one Alliance, has been using project rehearsal rooms with VR, enabling construction and operating teams to visualize and rehearse future builds and operations. Integrated teams that include delivery supply chains and operating teams are able to collectively experience proposed water and water recycling assets.
•	Proactive maintenance: Continuous capture of data from the built environment may be used to examine changes and degradation where physical access is difficult; for example, it is beginning to be used to inspect oil tankers and deep-sea pipes to look for rusting and changes in physical properties.
•	Emergency response and disaster relief: The need to respond quickly following emergencies and disasters, to assess scenarios and plan the response, has led to a range of new applications being explored, with companies like Pasco6 beginning to consider how urban models can be used in these disaster relief situations.

3.4 2020 onwards: starting with operations

We started with the consideration that visualizing the operations of the existing built environment provides the context for planning the design and construction of new interventions. We also anticipate a greater awareness of the broader context of the natural environment as professionals seek to better understand the outcomes and wider impact of their interventions. We see VR playing a practical role as infrastructure owners and operators reorganize the industry to use operational data more extensively in planning new interventions through project rehearsal rooms. There are still challenges in using VR to zoom across scales, and we anticipate further research to integrate models and enable professionals to engage with information at the building/system level and at broader urban levels to address issues of sustainability and planning. We anticipate developments in this area: the games company Improbable, which developed a platform to use multiple servers and games engines to simulate a single virtual world, has already started exploring the use of such technology for large-scale urban simulations, enabling a level of detail that was previously impossible.7 However, such applications also raise new issues of cybersecurity, which are beginning to be addressed in work on smart cities, smart infrastructure and data sensitivity, using new technologies such as blockchain and the potential of


data analytics. We expect that there will be an increasing divergence between consumer applications and professional applications in terms of the data and the interfaces to access it, as users of urban environments require information for way-finding and engagement in planning decisions, whereas professionals need a broader set of information for decisions about how to build high-quality environments and improve city operations. It is to the questions of design that we turn in the next chapter.

Notes

1 http://4dmapper.com/
2 www.geoweb3d.com/
3 www.vu.city
4 http://visitbelfast.com/home/page/belfast-go-explore-360-virtual-realityapp/
5 http://wagstaffsdesign.co.uk/portfolio/bondstreet/
6 www.pasco.co.jp/eng/
7 https://improbable.io/company/news/2016/03/17/disrupting-cities-throughtechnology-a-new-event-with-wilton-park

References

Al-Kodmany, K., 2002. Visualization tools and methods in community planning: From freehand sketches to virtual reality. Journal of Planning Literature 17, 189–211.
Allwinkle, S., Cruickshank, P., 2011. Creating smarter cities: An overview. Journal of Urban Technology 18, 1–16. doi:10.1080/10630732.2011.601103
Batty, M., Gray, S., Hudson-Smith, A., Milton, R., 2015. Visualizing spatial and social media, in: Halfpenny, P.J., Procter, R. (Eds.), Innovations in digital research methods. SAGE Publications, London and Los Angeles, p. 245.
Baudrillard, J., 1983. Simulations. Semiotext(e), Inc., New York.
Bennadji, A., Laing, R., Gray, D., 2015. Urban planning and climate change mitigation: Using virtual reality to support the design, in: Silva, C.N. (Ed.), Emerging issues, challenges, and opportunities in urban e-planning. Engineering Science Reference, Hershey, PA, p. 210.
Billger, M., Thuvander, L., 2016. In search of visualization challenges: The development and implementation of visualization tools for supporting dialogue in urban planning processes. Environment and Planning B: Planning and Design, 1–24.
Borges, J.L., 1946. Del rigor en la ciencia. Los Anales de Buenos Aires 1, 3.
Bosselmann, P., 1999. Representation of places: Reality and realism in city design. University of California Press, Berkeley.
Bourdakis, V., Day, A., 1997. A VRML model of Bath, in: Coyne, R., Ramscar, M., Lee, J., Zreik, K. (Eds.), Design and the Net. Europia Productions, Paris, France, pp. 13–22.
Caneparo, L., 2001. Shared virtual reality for design and management: The Porta Susa project. Automation in Construction 10, 217–228.
Caragliu, A., Del Bo, C., Nijkamp, P., 2011. Smart cities in Europe. Journal of Urban Technology 18, 65–82.
Dang, A., Liang, W., Chi, W., 2012. Review of VR application in digital urban planning and managing, in: Shen, Z. (Ed.), Geospatial techniques in urban planning. Springer, Berlin, Heidelberg, pp. 131–154.
El-Shimy, H., Ragheb, G.A., Ragheb, A.A., 2015. Using mixed reality as a simulation tool in urban planning project for sustainable development. American Journal of Civil Engineering and Architecture 9, 830–835.
Ennis, G., Lindsay, M., Grant, M., 1999. VRML possibilities: The evolution of the Glasgow model. IEEE Multimedia 7, 48–51.
Fernandes, A.S., Wang, R.F., Simons, D.J., 2015. Remembering the physical as virtual: Source confusion and physical interaction in augmented reality. Proceedings of the ACM SIGGRAPH Symposium on Applied Perception, Tübingen, Germany, pp. 127–130.
Frischer, B., 2008. The Rome Reborn project: How technology is helping us to study history. OpEd, University of Virginia (November 10, 2008).
Fulford, M.G., Clarke, A., Eckardt, H., 2006. Life and labour in late Roman Silchester: Excavations in Insula IX since 1997. Roman Society Publications, London.
Hall, A.C., 1993. Computer visualisation for planning control: Objectivity, realism and negotiated outcomes, in: Beheshti, M.R., Zreik, K. (Eds.), Advanced technologies – architecture, planning, civil engineering. Elsevier, Amsterdam, Netherlands, pp. 303–308.
Jacobs, J., 1961. The death and life of great American cities. Vintage, New York.
Jepson, W., Liggett, R., Friedman, S., 1996. Virtual modeling of urban environments. Presence: Teleoperators and Virtual Environments 5, 72–86.
Jepson, W., Muntz, R., Friedman, S., 2000. A real-time visualization system for managing emergency response in large scale urban environments. Proceedings of the 88th Annual Association of Collegiate Schools of Architecture, Los Angeles, CA.
Kasai, S., Li, N., Fang, D., 2015. A system-of-systems approach to understanding urbanization – state of the art and prospect. Smart and Sustainable Built Environment 4, 154–171.
Kerr, S., 2000. Application and use of virtual reality in WS Atkins. Presented at CONVR – Conference on Construction Applications of Virtual Reality, Middlesbrough, UK, 4–5 September, pp. 3–10.
Kuliga, S.F., Thrash, T., Dalton, R.C., Hölscher, C., 2015. Virtual reality as an empirical research tool: Exploring user experience in a real building and a corresponding virtual model. Computers, Environment and Urban Systems 54, 363–375. doi:10.1016/j.compenvurbsys.2015.09.006
Liggett, R.S., Jepson, W.H., 1995. An integrated environment for urban simulation. Environment and Planning B: Planning and Design 22, 291–302.
Maffei, L., Masullo, M., Pascale, A., Ruggiero, G., Romero, V.P., 2016. Immersive virtual reality in community planning: Acoustic and visual congruence of simulated vs. real world. Sustainable Cities and Society 27, 338–345.
Nguyen, M.-T., Nguyen, H.-K., Vo-Lam, K.-D., Nguyen, X.-G., Tran, M.-T., 2016. Applying virtual reality in city planning. Springer, Berlin, Heidelberg, pp. 724–735.
Olsson, T., Savisalo, A., Hakkarainen, M., Woodward, C., 2012. User evaluation of mobile augmented reality in architectural planning, in: Gudnason, G., Scherer, R. (Eds.), eWork and eBusiness in architecture, engineering and construction, ECPPM 2012. Taylor & Francis Group, London, UK, pp. 733–740.
Portman, M.E., Natapov, A., Fisher-Gewirtzman, D., 2015. To go where no man has gone before: Virtual reality in architecture, landscape architecture and environmental planning. Computers, Environment and Urban Systems 54, 376–384. doi:10.1016/j.compenvurbsys.2015.05.001
Roupé, M., Bosch-Sijtsema, P., Johansson, M., 2014. Interactive navigation interface for virtual reality using the human body. Computers, Environment and Urban Systems 43, 42–50.
Shushan, Y., Portugali, J., Blumenfeld-Lieberthal, E., 2016. Using virtual reality environments to unveil the imageability of the city in homogenous and heterogeneous environments. Computers, Environment and Urban Systems 58, 29–38.
Sunesson, K., Martin, C.A., Heldal, I., Paulin, D., Roupé, M., Johansson, M., Westerdahl, B., 2008. Virtual reality supporting environmental planning processes: A case study. Presented at the 12th International Conference on Knowledge-Based and Intelligent Information and Engineering Systems KES 2008, Zagreb, Croatia, pp. 481–490.
United Nations (UN), 2014. World urbanization prospects, the 2014 revision. https://esa.un.org/unpd/wup/
Vigier, T., Moreau, G., Siret, D., 2015. From visual cues to climate perception in virtual urban environments. Presented at Virtual Reality (VR), 2015 IEEE, Barcelona, Spain, pp. 305–306.
Wang, X., Kim, M.J., Love, P.E.D., Kang, S.-C., 2013. Augmented reality in built environment: Classification and implications for future research. Automation in Construction 32, 1–13. doi:10.1016/j.autcon.2012.11.021
Woods, H., 2000. VRail – virtual reality in the rail environment. Presented at the Institution of Railway Signalling Engineers Younger Members Conference, ‘The Railway as a System’, Birmingham, UK, 20–21 July.
Woodward, C., Hakkarainen, M., 2011. Mobile mixed reality system for architectural and construction site visualization, in: Nee, A.Y.C. (Ed.), Augmented reality: Some emerging application areas. InTech, Rijeka, Croatia, pp. 115–130.


Chapter 4

Visualizing design

The design of the built environment does not start from a blank sheet of paper. It increasingly starts from the operating built environment (discussed in the preceding chapter) and the need to adapt, upgrade or alter its operation. Design should thus be outcome-driven. The act of designing has traditionally been described as having “a reflective conversation” with the drawing (Schön, 1983). Design is an iterative problem-solving process of generating and testing alternatives, characterized as a ‘propose-critique-modify’ progression (Chandrasekaran, 1990) through which designers may go on a journey of discovery. Designers do not need to confine themselves to a single representation, but rather can explore design possibilities by combining multiple data sources and forms of representation across a range of media. In this process, good designers will use as much evidence as they can muster to consider how their designs will use physical resources and affect quality of life, and how they will be built, operated, maintained and disassembled. Design thus sits between use and production.

Designing the built environment is rarely a solo activity. It involves interacting with users and with a professional team, which brings construction contractors, specialist sub-contractors and suppliers together with architects and engineers in a collaborative process in which decisions rely on a shared understanding and conception of the intent of a design. Virtual reality provides a new medium that may be used in this process of designing the built environment. Schrage (2000: 35) describes digital prototypes as means to “create and unearth choices.” Yet, unlike the automobile industry, in which the development costs of any given design can be spread over production runs in the millions, the construction sector has far more variable economics of design.
There is substantial effort to increase the use of standardized, modular components and assemblies, and to automate production processes, yet building and infrastructure projects remain context-based and are designed to


satisfy particular client requirements within a set of programmatic, location and budget constraints. In order to be actively involved in the design process, users must be able to fully understand its possibilities. Virtual reality as a medium can enable non-experts to view design options and dynamically propose changes to designs. By offering an intuitive, almost real-world way to experience designed space, VR can be used with clients, end-users and other stakeholders who have an interest in how the design will perform or appear, to understand requirements, to enable them to share their experience more easily, and thus to critique the emergent design. Unsurprisingly, clients are enthusiastic about the potential of VR applications. One architectural journalist argues that:

    Walking through a virtual building or zooming into any nook and cranny is a lot more useful than taking one of those roller-coaster ‘fly-throughs’ that make you feel sea sick as you watch them inside some developer’s executive suite.
    (Glancey, 2001)

Even following a good briefing process, clients’ needs may change and evolve as they discover new requirements and shift priorities (Kodama, 1995). Given that these participants, across the professional team and the owners, users and other stakeholders, often have varying backgrounds of expertise and are trained to understand project information in various formats, enabling all of them to visualize design information in an intuitive manner that is easy to understand is central to achieving consensus around the design. But how do technologies such as virtual or augmented reality help participants understand and discuss the design? Does VR always support the design of the built environment, and if so, how? In this chapter, we focus on how VR is used in design and some of the important processes (such as conceptual design, design generation, participatory design, design coordination and design review) that it supports.
As the potential of VR in design is recognized, there is now substantial experimentation with the use of VR in architectural design practice. With software and hardware prices dropping, architects and engineers in many organizations are able to experiment with and demonstrate new ways of working, rather than having to first develop a business case for using virtual reality in their practice. Recent developments in 3D-sketching tools such as Google's Tilt Brush and Gravity Sketch have been paired with head-mounted displays to offer designers a highly intuitive way to quickly develop early conceptual design ideas in an immersive environment (Figures 4.1a–b). Many such developments build on a long history of experimentation to make design processes more participatory (see Box 4.1). Although the use of VR is increasing, it is not yet widely adopted in comparison to the more traditional use of rendered still images and choreographed animations.


4.1a–b Architect exploring use of Tilt Brush/HTC Vive to build cities (a) and Oculus Rift to view (b)

Box 4.1

Participatory design

The goal of participatory design is to consider all potentially affected social and user groups and ensure that new building projects in given contexts adequately respond to the needs of their future occupants. The idea of applying intuitive interfaces to architectural design to enable participatory design precedes the availability of 3D computer graphics on personal computers. In the 1970s, the Laboratory of Architectural Experimentation (LEA) in Lausanne was developed to facilitate architectural creativity and simulated built space for part of a co-operative housing plan by building a physical model at full scale (Lawrence, 1987). Lightweight plastic building blocks and moveable platforms were used to create a physical model that allowed for exploration and experimentation with spatial forms and dimensions (Figures 4.2a–b). This design-by-simulation process engaged both architects and residents in shaping proposed housing designs according to their requirements. This process was not seen as design automation or as diminishing the role of design expertise. Rather, simulating design at full scale enabled residents to better participate in the design process and provide designers with feedback about their needs, thus informing the design process. A key finding of LEA’s participatory design work was that prototypes used in early design phases should be simple renditions of buildings that do not inhibit the development of design alternatives. They should facilitate easy

4.2a Use of the Laboratory for Architectural Experimentation (LEA) during the initial stages of the design-by-simulation process Source: Roderick Lawrence – reproduced from Lawrence (1987).


4.2b A system of prefabricated door and window frames was also used in simulation at the LEA Source: Roderick Lawrence – reproduced from Lawrence (1987).

and rapid exploration and evaluation of design proposals and be designed to focus participants’ attention on the size and shape of the rooms and the interrelationships between them (Lawrence, 1987).

This chapter explores how virtual reality is used in such design practices. The following sections chart the development of approaches to visualizing design that followed an exponential growth in computing power: from the early use of 3D CAD and digital media in the 1990s (section 4.1), to their use in design review and marketing in the 2000s (section 4.2), to their use in design coordination and review of complex projects, as well as new experiments in design generation, in the 2010s (section 4.3). The chapter concludes by considering future developments in the use of VR systems for visualizing design (section 4.4).

4.1 1990–1999: design through digital media

In the 1990s, the design of the built environment was revolutionized through the use of digital media (Mitchell and McCullough, 1995). Alongside traditional 2D paper-based practices, a range of overlapping digital techniques such as object-oriented design, 3D scanning, 3D printing, parametric modelling and virtual reality became available. As the choice of medium can affect an understanding of design, Henderson noted that:

Young designers trained on graphics software are developing a new visual culture tied to computer-graphics practice that will influence the way they see and will be different from the visual culture of the paper world.
(1999: 57)

While the advent of CAD applications was initially seen as a way to automate existing processes, design challenges were increasingly addressed using representations that did not emulate paper-based media. Thus, we now have a generation of architects who were trained in the 1990s or later and are highly proficient users of interactive, spatial, real-time environments.

4.1.1 Parametric design

The early 1990s marked a radical change in how architectural forms were created by bringing computer-aided, parametric and algorithmic design into the process. During that time, architect and UCLA professor Greg Lynn coined the term 'blob architecture' to describe the new biomorphic shapes derived by manipulating algorithms and using parametric curves such as splines,1 NURBS2 and freeform surfaces. Also in the 1990s, the development of curvilinear forms in architect Frank Gehry's buildings is attributable to his firm's use of the CATIA package, which enabled not only the design of fluid forms, but also their digital fabrication and digital assembly on site. Many large architectural practices, such as Foster and Partners and Kohn Pedersen Fox (KPF), increasingly used parametric modelling tools to morph and play with 3D forms that could not be easily imagined outside the computer.

Computer-generated forms linked to computer-aided manufacturing techniques further opened up opportunities to fabricate and materialize complex building forms and components. Computer numerically controlled (CNC) machines and 3D printers used alongside advanced CAD and 3D modelling systems enable designers to create physical models from digital data, allowing them to constantly move between the digital and the physical, exploring the evolving design in more than one medium.
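The parametric curves mentioned above can be illustrated with a short sketch. The following is a minimal example of the de Casteljau algorithm, which evaluates a Bézier curve by repeated linear interpolation of its control points; the splines and NURBS used in packages such as CATIA generalize the same idea with knot vectors and weights. The control points here are illustrative values, not taken from any project described in this chapter.

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t (0 <= t <= 1)
    by repeated linear interpolation of its control points."""
    points = [tuple(p) for p in control_points]
    while len(points) > 1:
        # Each pass interpolates between neighbouring points,
        # reducing the set by one until a single point remains.
        points = [
            tuple((1 - t) * a + t * b for a, b in zip(p0, p1))
            for p0, p1 in zip(points, points[1:])
        ]
    return points[0]

# Four 2D control points define a cubic Bezier profile; moving any
# control point reshapes the whole curve, which is what makes the
# representation 'parametric'.
profile = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]

# Sampling the parameter densely yields a smooth curvilinear polyline
# that a modeller could extrude or loft into a surface.
curve = [de_casteljau(profile, i / 20) for i in range(21)]
```

Because the curve interpolates its end control points and is pulled towards the interior ones, a designer can explore families of 'blob-like' forms simply by varying a handful of numbers.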

4.1.2 Role of VR in the design process

One active debate through the 1990s was whether virtual reality could be used in design generation. At the early stages of the design process, few decisions have been made and ideas are imprecise, and therefore architects tend to prefer paper sketches and rough scale models because they support ambiguity, imprecision, incremental formalization of ideas and the rapid exploration of alternatives (Gross and Do, 1996). While for non-professional users, highly realistic representations and virtual walkthroughs are easier to understand, designers can discover hidden features in a representation if they do not become stuck with one single interpretation of it (Suwa et al., 1999), and they routinely shift their attention between different modes of thinking, such as looking alternately at features such as spaces or structures (Lawson and Roberts, 1991). One visualization specialist argued, "If the building design is very sketchy, if the designer just has a vague idea of something then obviously you can't go to 3D because it becomes . . . real."

Although some advocates of VR systems maintained that virtual reality would be useful for design generation, arguing that it supports spatial thinking and rapid exploration of alternatives (Furness, 1987; Osberg, 1997), others argued that in early design phases, plans and sections were better tools for representing the organization of spatial structure than real-time rendered environments. Architects did not unreservedly welcome virtual reality in their practice, and initially few used virtual reality applications in the conceptual design of physical places. Widespread acceptance of VR among the architecture community was hindered by the lack of task-specific interfaces to support the intuitive generation of forms, and of visualization capabilities that allow for abstract and conceptual idea exploration in early design stages.

A range of representations can be generated in virtual reality, using egocentric and exocentric viewing perspectives and varying degrees of abstraction and realism. For design development, particularly in the early stages, representations that allow a whole problem to be seen in a single view may be preferable. To address this issue, researchers focused on developing VR applications that employ simple abstract shapes and limited palettes of colours to support early iterative design thinking. For example, the application Sculptor introduced the concept of the 'space element' (Kurmann et al., 1997), which has no digital 'mass' and carves out a space for itself when it intersects with a solid element in a model. By allowing designers to work with both solids and voids, the application offered a more intuitive approach to architectural design at the conceptual design phase.
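The 'space element' idea can be sketched in very simplified form with a voxel model: a massless void that removes material from any solid it intersects. Sculptor itself operated on geometric solids rather than voxels; the representation and all values below are illustrative assumptions, not a reconstruction of Kurmann's implementation.

```python
def box_cells(min_corner, max_corner):
    """Unit voxel cells covered by an axis-aligned box."""
    (x0, y0, z0), (x1, y1, z1) = min_corner, max_corner
    return {
        (x, y, z)
        for x in range(x0, x1)
        for y in range(y0, y1)
        for z in range(z0, z1)
    }

# A 4 x 4 x 3 solid block, with a 2 x 2 x 3 'space element' pushed into it.
solid = box_cells((0, 0, 0), (4, 4, 3))
space_element = box_cells((1, 1, 0), (3, 3, 3))

# Where the massless void intersects the solid, material is removed,
# carving out a usable space, e.g. a room within a massing block.
carved = solid - space_element
```

The appeal of the concept is that designers manipulate voids as first-class objects, in the same way they manipulate solids, rather than only modelling mass and leaving space as a by-product.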
Other early research efforts explored the use of distributed augmented reality for collaborative interior design applications (Ahlers et al., 1995), and there were rapid developments in material representation in VR so that the effects of glass and light could be shown in interactive, spatial, real-time environments. At the same time, early experiments with VR revealed differences between virtual and physical environments: more realistic virtual representations may not result in a more realistic evaluation of potential spaces.

"The problem with using virtual reality for client review is that you can give wrong ideas so incredibly easily," argued an IT manager from one architectural firm. In this firm, panoramic representations were seen as more effective, as they allow architects far more control over selecting key viewpoints within a digital model. The general view was that there is nothing about a design project that cannot be explained with 'one glance at a decent drawing'.

Advocates of virtual reality have instead argued that VR gives clients, managers and end-users greater ability to explore and understand project designs, as they can experience proposals from an egocentric viewing perspective, i.e. the same way they experience the built environment. Carefully choosing the amount of detail to include in simulated models can help participants focus their attention on and evaluate salient aspects of a design (see Box 4.2).


Box 4.2

The role of design abstractions as well as realistically rendered images

In an example from the first edition of this book, Dutch architecture firm Prent Landman used virtual reality models for design review in their work on the Westeinde Hospital in The Hague, Holland (Figures 4.3a–b). However, rather than presenting a realistically rendered impression, their VR model presented only the key design decisions that made up their scheme. This model, which was developed by US software company Mirage 3D, was successful in drawing viewers' attention to the overall massing and layout of the structure, rather than more minor details. It was used to gain the acceptance of the local community living around the hospital.

The highly interactive and explorative nature of virtual reality introduces additional complexity when used for design reviews. As a result, architects and designers often become curators of viewers' experiences, carefully crafting the appearance of a model and its narrative by navigating to preselected virtual spaces to ensure focused and meaningful feedback. Highly detailed and realistic VR models have been shown to quickly attract users' attention, and the inclusion of materials, colours, light sources, shadows or furniture in these models should be based on considerations of the design development stage, the purpose of the review and what role these components may play in supporting spatial understanding and the sense of scale.

4.3a A VR model of the Westeinde Hospital design by Prent Landman Source: Mirage 3D and architects Prent Landman, Holland.


4.3b Another view of the Westeinde Hospital, Holland, in the VR model Source: Mirage 3D and architects Prent Landman, Holland.

In addition, highly detailed or photorealistic VR models often present more practical challenges, such as reduced real-time rendering speeds and low frame rates, which can equally distract users from an engaging experience during design reviews. Rendering and processing speeds are likely to continue to lag behind the increasing size and complexity of data models. Managing these issues, particularly for large complex projects, often requires dividing large models into a number of smaller, possibly hyperlinked sub-models that allow for easier navigation and model manipulation. Designers thus need to exercise judgment when deciding which elements are critical to include in VR models, to both ensure useful feedback and maintain the level of control they have with standard rendered images or animated walkthroughs.
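One simple way to divide a large model into sub-models, as described above, is to bin elements into a coarse spatial grid by centroid so that each cell can be loaded, navigated and rendered independently. Real viewers use richer schemes (octrees, levels of detail, streaming); the function and element data below are an illustrative sketch, not any particular product's method.

```python
from collections import defaultdict

def partition_by_grid(elements, cell_size):
    """Group model elements into grid cells keyed by the cell index
    of each element's centroid (x, y, z)."""
    cells = defaultdict(list)
    for name, (x, y, z) in elements:
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        cells[key].append(name)
    return dict(cells)

# Hypothetical element names and centroids, in metres.
elements = [
    ("duct-01", (2.0, 3.0, 8.0)),
    ("beam-07", (2.5, 3.5, 8.2)),
    ("stair-02", (40.0, 3.0, 0.5)),
]

# With 10 m cells, nearby elements share a cell and can be streamed
# together as one sub-model; distant elements land in other cells.
sub_models = partition_by_grid(elements, cell_size=10.0)
```

A viewer can then load only the cells near the user's current viewpoint, keeping frame rates acceptable without discarding detail elsewhere in the model.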

Earlier studies have also illustrated how the amount of detail can influence the way participants perceive and understand size and distances in virtual reality. Considering the use of virtual reality to evaluate spaces, Pinet's (1997) study, for example, revealed how the presence of windows or furniture led participants to perceive virtual spaces as larger. On the other hand, even when familiar objects such as a door or chair are included in the virtual model, participants can misinterpret the size of spaces because they tend to prefer certain cues over others, and in doing so overestimate or underestimate their size (Henry and Furness, 1993). These instances reveal the complexity and extent of considerations for using virtual reality for evaluative tasks, where both the VR medium and the representation can affect the outcomes and benefits obtained in design reviews.


4.2 2000–2009: design reviews, choosing options and marketing

In the decade from 2000 to 2009, the main uses of VR were focused on design reviews, option selection and marketing, along with growing experimentation with conceptual design and participatory design.

4.2.1 Collaborative design reviews

Early lessons in participatory design and virtual reality applications prompted studies to look into how different VR system configurations shape the dialogue around virtual design representations. Virtual mock-ups, or full-scale digital representations that are similar to physical mock-ups but are displayed in a range of immersive projection displays, demonstrate the potential of virtual reality approaches to help project teams more easily visualize information and thus better understand the design and engage in a fruitful dialogue (Dunston et al., 2007; Messner, 2006; Whisker et al., 2003).

Comparing how immersive virtual reality displays, such as the CAVE, and non-immersive displays, such as monitors or paper, support various phases of the architectural process, experimental studies (Campbell and Wells, 1997; Dunston et al., 2007) suggest that large immersive virtual reality displays enable a more intuitive design review in terms of how users perceive and understand space. In a similar comparative study, Bochenek et al. (2001) suggested that the combined use of different VR technologies could be more effective than a single choice. Furthermore, the quality of team engagement in collaborative design review processes seems to be affected not only by model content, but also by the medium used to convey information and the extent to which it enables users to interactively explore designs. For example, the ability to annotate the design during the review can contribute to its effectiveness (Verlinden et al., 2003). In their later work, Verlinden and Horváth (2009) propose augmented prototyping, in the form of imagery projected onto physical and rapid prototypes, as a method to enhance users' perceptions of the design.

After a project is complete, ensuring that it responds to occupants' needs relies on the designers knowing, rather than assuming, the requirements of its users.
As a result, it is important to ask project occupants at any given stage about how a design meets their needs, as users tend to evaluate the built environment differently from designers (Zimmerman and Martin, 2001). The need to understand how building projects will operate and be used is particularly evident in instances where, for example, assumptions about building occupancy patterns can affect building operational costs due to over-designed mechanical systems. Involving clients, practitioners from other project disciplines and end-users with diverse knowledge backgrounds in the design review process relies on their ability to accurately visualize and understand information and provide meaningful feedback (Otto et al., 2005). Engineering and project management company Bechtel described its use of virtual reality for client and end-user reviews of complex projects such as airports (see Box 4.3), hospitals and cancer treatment centres as enabling them to fine-tune


Box 4.3

Dubai International Airport, UAE

Virtual reality was used in client and end-user design reviews of the Dubai International Airport project (Figures 4.4a–d). Using a VR model, the airport’s external and internal signage was evaluated to ensure that all signage was easily visible and that the vehicular entrances to the airport and in the terminal building were located to allow for ample time to

4.4a VR model screenshot of the Dubai International Airport project, showing an interior design option Source: Bechtel - Advanced Visualization/Virtual Reality, San Francisco, USA.

4.4b A second screenshot from the VR model of Dubai International Airport, showing another design option for the interior of the building Source: Bechtel - Advanced Visualization/Virtual Reality, San Francisco, USA.


4.4c A third screenshot from the VR model of Dubai International Airport, showing a different design option for the building’s interior Source: Bechtel - Advanced Visualization/Virtual Reality, San Francisco, USA.

4.4d A final screenshot from the VR model of Dubai International Airport, showing yet another design option Source: Bechtel - Advanced Visualization/Virtual Reality, San Francisco, USA.

respond to them. The client’s security personnel reviewed the model to confirm that no conflicts existed with the proposed design from a security point of view. In the early 2000s, a small group of modellers at Bechtel worked for nine months with the project architects and engineers on a model that was used by the project team and with members of the client organization. This


model could be used in a multi-user avatar-based manner, with the engineers from Dubai and the modellers from San Francisco meeting virtually within the model. The model was used to review design features and view different design options. The architects gave samples of carpets and other furnishings to the visualization group that allowed them to incorporate these textures into the models. The model helped the client to consider design options, resulting in a number of design changes before the building was constructed.

these environments "so that the patient is comfortable." For large companies like Bechtel, the use of highly realistic rendered models for client review supplements the use of virtual reality models within their project team. For example, for its London Luton Airport (LLA) project, Bechtel was contracted to work with Berkeley Capital and LLA to construct and operate the airport for a 30-year period. Bechtel created two VR models from the same set of CAD data: one to help coordinate detail design processes (i.e. for internal professional use) and the other for design review (i.e. for broader communication). The models were quite distinct and were maintained separately. The first model included the project's proposed heating, ventilation and air conditioning (HVAC) system, its steel work and the design of its floors and stairways, in order to detect construction conflicts between these subsystems. By contrast, the second model was purely for architectural visualization purposes and showed surface finishes and details.

In the 2000s, using virtual reality in large complex projects involved significant investment. For example, the Bechtel visualization group worked on the LLA project for over a year, but the significant resources invested in creating and maintaining the model were deemed to bring commercial benefits and cost reductions in both the design and construction stages. The model was presented to LLA, Berkeley and airline operators such as Easy Jet and First Choice. As this model differed from the engineering model, it could be optimized to show the internal building layouts and features. Because the whole model contained too many polygons to render smoothly, the Bechtel manager could turn off parts of the geometry to make it run better when demonstrating it to the client.

The need to provide universal access for disabled people drove some developments in participatory design.
Instances in which a project was required to accommodate the needs of aging or disabled users were seen to particularly benefit from using virtual reality approaches to simulate patterns of movement, visibility or access to facilities. At Strathclyde University, researchers used an immersive VR projection screen system with a reactive motion chair to simulate the movement of manual wheelchair users through a virtual built environment model. When wheelchair users use the VR systems to explore new building designs, data about


potential collision points can be incorporated in the design phase to better accommodate their needs (Conway, 2001). In such participatory simulation processes, designers can learn about users’ specific requirements and incorporate them into a design. Participatory design review can also be seen as a programme of ‘reality checks’ throughout project procurement that allow designers to better assess client constraints and protect the interests of end-users (Leaman and Bordass, 2001). However, designers have been described as resistant to these practices, following technological and architectural fashions for their own sake (Derbyshire, 2001).

4.2.2 Choosing design options from standardized libraries

In the 2000s, the use of virtual reality became established in the design review of standard and customized housing and interior projects. Here, design decisions were selected from a narrow palette of pre-determined options, and a library of optimized models could be created virtually to maximize the benefit with limited computing power. Similar techniques were also being used in several consumer goods markets, such as the furniture market. For example, Swedish company IKEA launched a project that paired its 3D furniture catalogue with an augmented reality application, offering consumers the ability to use mobile devices to design and test their own interior spaces by 'placing' virtual furniture in their real homes to evaluate options by type, colour or size. To facilitate space fit-out, the virtual models can be flexibly added to a virtual or even a real room, allowing both professionals and consumers to select from a palette of options and visualize the results instantly before making large furniture purchases. In this case, the extent of model reuse justified the expense of its development.

However, compared to such standardized projects with larger component reuse value, the use of virtual reality for design review on bespoke design projects was less widespread in the 2000s due to a lack of perceived commercial benefits. Designers of small custom projects are not able to benefit from the economies of scale associated with reusing virtual reality models. For example, one architect argued that:

If your client tells you to use [VR] then obviously you would, but in terms of the business benefits of choosing to use it with a client, that is 'all rubbish' – when you are with a client you are telling the client a story and the story is very carefully choreographed.
However, virtual reality was seen as particularly important for value engineering, in which costs are reduced to maintain project value without compromising design quality. One visualization specialist argued that clients often make different choices when they can see the impact of their decisions, and tend to reject lower-cost solutions when they can visualize them, as the difference in quality between proposals becomes visible.


4.2.3 Design marketing

During the 2000s, a motive for using VR in many design firms was marketing. Organizations that work on smaller projects, such as housing developments, were using virtual reality to promote sales to potential customers. Housing developers in the UK operate speculatively and face considerable risks, as they often have no known buyers at the start of the process, so the ability to sell unbuilt projects using virtual reality is a major advantage, as it reduces the risk of development. Often, suitable examples of similar built housing types do not exist in the near vicinity to show prospective clients. Virtual reality allows a house-builder to show their housing types to clients on a computer monitor at any office or show-house (Whyte, 2000). Housing developers also used immersive virtual reality to receive press coverage. For example, the developer Persimmon Homes marketed its prestigious new apartment blocks in Sheffield's city centre by giving public VR tours in a local bar (Figure 4.5). Local press covered the event, which served to promote sales, even though the apartments had not yet been constructed.

For larger projects, such as banks and airports, clients may also want to use virtual reality models to market these facilities. Skyscraper Digital, the visualization division of Little & Associate Architects, worked with two major banks in Charlotte, North Carolina, to develop models of their new downtown headquarters. Each bank used virtual reality models to see how their new headquarters building would appear on the city skyline and from different parts of downtown, the airport and highways. In addition, the banks used their models to obtain zoning approvals and communicate the design at town hall meetings, as well as to raise the profile of the development in marketing and publicity campaigns.

4.5 VR model of the Persimmon Homes development created by the VR supplier Antycip UK Source: Antycip Ltd.


Airports such as Schiphol in the Netherlands and Munich in Germany have also offered virtual tours of their facilities in the CAVE to market them to international businesses and to advertise rental space to potential clients. For the companies involved (SARA, Zegelaar & Onnekes and Hans van Heeswijk), interactively viewing the project in an immersive VR facility was seen to extend their existing technical options, potentially leading to increased market share and profitability. Munich Airport Center marketed its facilities using an online virtual reality model in which visitors choose their own avatar to virtually explore the Center in the networked virtual environment (Figures 4.6a–b). Using a virtual model of their facilities to advertise rental space is highly cost effective, with

4.6a VR model of Munich Airport Center, Germany Source: Phillippe Van Nedervelde, E-SPACES, Germany

4.6b Another VR model of Munich Airport Center, Germany Source: Phillippe Van Nedervelde, E-SPACES, Germany


catalogue requests from potential renters becoming unnecessary. The model also allows graphic objects to be used to visualize event set-ups or office furnishings and to plan events flexibly, quickly and cost effectively. Furthermore, online models of these airports can be viewed and visited remotely, which is of considerable advantage to airport facility providers, as potential renters are often based internationally.

For many virtual reality users, some marketing advantages also come from its novelty. Generally speaking, current approaches to marketing built environment projects focus on glossy images, but virtual reality offers greater opportunity to show clients the spatial layout of new buildings, the potential for change over time and the improved running costs of efficient construction.

4.3 2010–2019: transforming design practice

Recently, there have been significant developments in the computing and gaming industry that have opened up immense possibilities to more easily design rich interactive environments using VR. For example, the Unity game engine is a common platform for developing both virtual and augmented reality applications on devices ranging from mobile phones to high-end immersive VR systems. This integration with game engines increasingly allows the use of VR systems for real-time rendering of aesthetic features of the design, such as texture or lighting. For example, international architecture practice Scott Brownrigg uses BIM models coupled with Geomerics' Enlighten software for real-time dynamic lighting (Figures 4.7a–b), which can then be displayed across different VR platforms. Working alongside Geomerics to test and further develop the Enlighten

4.7a ARM models with simulated dynamic lighting Source: Scott Brownrigg.


4.7b ARM models with simulated dynamic lighting Source: Scott Brownrigg.

software beyond traditional gaming environments, Scott Brownrigg uses fully detailed and federated Revit models, such as one for the new ARM headquarters office building at Peterhouse Technology Park in Cambridge, that combine architectural, structural, engineering and interior information, enabling various stakeholders to experience the volume, space and lighting of designs in virtual environments.

At the moment, Unity is attracting much attention in the built environment community because of its ability to integrate model data from different sources with its own functions and objects library, which allows physical, environmental and other properties of the design to be realistically simulated. Coupled with virtual reality systems such as VR rooms or head-mounted displays, different forms of interactivity, such as logging users' movements through the virtual space or dynamically switching between design options, can engage a broader range of stakeholders in the design process. Teams on large and complex projects, such as infrastructure or healthcare facilities, look specifically to virtual reality approaches to support collaborative design and performance simulation.
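The kind of interactivity mentioned above, logging users' movements through a virtual space, can be sketched simply: record timestamped positions and derive dwell time per room, from which a usage heat map could later be built. In practice this would hook into a game engine's update loop; the function names, sample path and room labels below are illustrative assumptions.

```python
def log_sample(path_log, timestamp, position, room):
    """Append one movement sample to the session log."""
    path_log.append({"t": timestamp, "pos": position, "room": room})

def dwell_time_by_room(path_log):
    """Approximate seconds spent in each room, taken as the interval
    between one sample and the next."""
    totals = {}
    for a, b in zip(path_log, path_log[1:]):
        totals[a["room"]] = totals.get(a["room"], 0.0) + (b["t"] - a["t"])
    return totals

# A short hypothetical walk: timestamps in seconds, eye-height
# positions in metres, and the room the user occupies.
log = []
log_sample(log, 0.0, (0.0, 1.7, 0.0), "lobby")
log_sample(log, 4.0, (6.0, 1.7, 2.0), "ward")
log_sample(log, 9.0, (6.5, 1.7, 5.0), "ward")
log_sample(log, 12.0, (1.0, 1.7, 0.5), "lobby")

dwell = dwell_time_by_room(log)
```

Aggregated over many review sessions, such logs can show designers which spaces stakeholders linger in or bypass, complementing their verbal feedback.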

4.3.1 Design coordination and review in complex projects

Projects with inherent system or design complexity, such as healthcare facilities, data centres or infrastructure projects, tend to benefit from engaging a broad group of users in collaborative VR that offers egocentric views and interactivity, drawing on users' tacit knowledge and experience to inform the design. To address the often quite specific functional considerations inherent in healthcare design, such as operating or patient rooms, virtual reality can


4.8a–b Exploring patient room layouts in a design review Source: Geisinger Health System and Penn State University, 2013.

demonstrably improve client and end-user engagement in reviewing specific operational and access requirements (Dunston et al., 2011). Issues of safety and patient well-being have prompted design approaches such as experience-based or evidence-based design to engage end-users in virtual design (Kumar et al., 2011). Researchers at Pennsylvania State University developed an interactive virtual mock-up to engage clients in reviewing the design of patient rooms for the new Geisinger Health System. Through this process, the clients were able to temporarily hide specific furniture components and review the layout of equipment and electrical outlets, which enabled them to understand the detailed requirements of locating beds and equipment (Figures 4.8a–b).

Similarly, recent research offers detailed accounts of how design teams engage in viewing, evaluating and discussing designs (Berg and Vance, 2017; Maftei and Harty, 2015). Maftei (2015) details how future users, in interacting with each other and the model in a CAVE, initiate changes to a hospital design as they notice elements that do not conform to the client's requirements. This collaborative reflection-in-action process unfolds through both verbal and bodily behaviours as the participants attempt to make sense of both the technology and the representation, asking for specific viewpoints and moving around the space (Maftei, 2015; Maftei and Harty, 2015). The value of using large-display collaborative virtual reality in design has to date been primarily associated with its ability to support communication and understand user feedback on and experience with virtual spaces (Christiansson


Box 4.4 Design reviews using fully immersive and semi-immersive VR

London-based architecture office Cullinan Studio led a research initiative with Holovis International, WMG and Hyde Housing Association to test and establish a practice of using immersive visualization techniques to engage clients and construction practitioners, respectively, in reviewing digital datasets. Head-mounted displays such as the Oculus Rift are now used regularly to engage clients and design teams in reviews, from schematic or massing models in the early design stages to highly realistic models in the later stages (Figure 4.9). The Enscape plug-in for Revit allows the BIM model to be fully rendered and navigated in VR. Any changes made to the BIM model, such as moving components or changing the material appearance, are instantaneously visible in the VR environment (Figures 4.10a–c). Visualization models directly linked to the authoring application allow both designers and users to interactively and dynamically modify, move or otherwise manipulate components with immediate results. While clients typically review the design using immersive VR headsets, interdisciplinary design reviews with engineers, contractors and builders tend to take place in multi-modal interactive spaces that combine large semi-immersive displays for reviewing 3D models with interactive whiteboards that allow members to annotate, mark up and modify design drawings (Figures 4.11a–b).

4.9 In-house design review using Oculus Rift CV1 to navigate the design Source: Cullinan Studio.


4.10a–c Revit models imported into Unity with added interactivity that allows furniture to be added, modified and positioned while visualized in VR Source: WMG, University of Warwick.


4.11a–b An end-user design review session using WMG’s stereoscopic Powerwall facility. The adjacent touch screens allow comments to be recorded directly onto the associated contract drawings. Data immersion in a group setting assists quick decisions by the client and design teams Source: Cullinan Studio and WMG.

et al., 2011; Dunston et al., 2010; Nykänen et al., 2008; Palmer, 2011). To explore how practical applications of virtual reality can help facilitate design decision-making for large infrastructure projects, in 2016, Crossrail set up a collaborative virtual environment in its offices for three months. During this period, project teams gathered to walk through the design of a power station in order to review its layout and safety features. In one instance (Figure 4.12), an engineer’s body language revealed a design conflict that was not easily noticed on a plan view of the design. Virtual reality for collaborative product design is a research focus and nascent area of practical application. The company Bechtel is now experimenting with the use of avatars for remote design coordination meetings, bringing together designers from across international offices to meet around the virtual table and discuss design. Scanning technologies have become so good that colleagues can be realistically rendered into the virtual scene, improving the sense


4.12 Crossrail engineer explains the shape of the gate not visible in plans Source: Dr. Laura Maftei, University of Reading.

of presence and collaboration. Studies continue to explore methods for distributed web-based design development and design reviews in shared VR-based environments (e.g. Hou et al., 2008; Merrick and Gu, 2011; Wang and Dunston, 2013). After testing how an augmented reality approach could support a distributed review of a mechanical design compared with standard commercial software, Wang and Dunston (2013) reported several interesting findings. Most notably, the way users switch between egocentric and exocentric views (e.g. using a mouse to zoom in and out as opposed to leaning forward or back wearing a head-mounted display, or HMD) can alleviate or exacerbate the sense of disorientation and impose a cognitive load in reorienting to the model, at the expense of reviewing it for errors. However, in the design process, virtual reality continues to be used most widely for design review in the later phases. The extent to which virtual reality applications are useful in design review depends in part on project characteristics, such as the complexity of a project and its level of component reuse. Trade-offs


often emerge between the expected benefits and the time required to create virtual reality models. This is slowly changing, and we can expect collaborative design to become a more prevalent use for VR as the link between design authoring applications and VR becomes more direct, reliable and flexible, allowing for a bi-directional flow of information. One plug-in that links Revit with an HMD (see Box 4.4), for example, offers a glimpse into how the design review process can become a more active, collaborative exploration of design options, modifying the design from an egocentric perspective with instantaneous results. This can open new avenues for achieving value in early design as well, not only late in the design process, when VR models gain greater reuse value through design marketing.

4.3.2 Design of way-finding and spatial optimization

Recent work at engineering, planning and design firm Arup explored the potential to perform post-occupancy evaluations before construction has even started. In addition to Arup’s way-finding experts asking sample users to share their experience while navigating the environment, Arup used automated route logging as users moved through a virtual space to identify, for example, slowdown points, and to improve their understanding of how users find their way around. Inspired by games technology and virtual reality, Arup created a virtual environment for the MTR Admiralty Underground Station extension in Hong Kong (Figures 4.13a–c). The design of the station, already one of the busiest in

4.13a–c Design review of user movements through MTR Admiralty Underground Station Source: Arup.


4.13b–c Arup data specialists analyse the recorded journeys through MTR Admiralty Underground Station. Source: Arup

the world, extends it from the current three underground levels and four platforms to six underground levels and doubles the number of platforms, emphasizing the importance of way-finding. The Arup team, assessing the way-finding design first, asked a small group of users to ‘walk’ through the virtual environment and complete specific tasks. In doing so, users provided comments on signage and visibility, and their joystick ‘movements’ through the space were also logged. Arup data specialists could then analyse the recorded journeys represented as traces to understand the effect of the design on movements and inform the Arup way-finding expert, who would provide additional potential reasons behind any slowdowns, such as missing or conflicting information. The feedback and data


supported Arup’s suggestion to the client to relocate or modify 25% of the signage, of which the client agreed to implement half. Realizing the potential of crowd-sourcing this type of feedback, Arup set up a simulation kiosk at the Venice Biennale, collecting and logging over 1,600 journeys. The increase in data traces opened future possibilities: crowd-sourcing design feedback online in partnership with companies such as 3Drepo, applying machine learning to movement patterns, and cyclically testing design scenarios to reveal potential problems and improve decision-making. For similar tasks on the Moorgate Station Capacity Upgrade (MSCU), engineering consultancy CH2M partnered with Igloo Software, a technology start-up company, to deploy a fully immersive projection system to engage project stakeholders in experiencing and reviewing the federated BIM model of the infrastructure project (Figure 4.14). CH2M has used a virtual reality facility called an ‘IGLOO’ on major projects, including the Bond Street and Tottenham Court Road station upgrades in London. To increase capacity at Moorgate station while reducing congestion and optimizing journey times, CH2M used the IGLOO to assist with tasks such as identifying the station entrance location and step-free access (SFA) routes, ensuring sight lines, identifying contraventions of building regulations and other contemporary standards, and identifying relevant retail and advertising areas. These tasks benefited from immersing the users in the virtual model to review and resolve these design aspects. The reported benefits included the ability to evaluate multiple design options and to engage project stakeholders equally (non-experts who are not overly familiar with the project can understand things more clearly in the IGLOO) to provide sensible feedback. CH2M reports a growing maturity in using the technology, which has since become an integral part of its design process.
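Route logging of the kind Arup used can be sketched in a few lines. The example below is a minimal illustration, not Arup’s implementation: all names, units and the speed threshold are assumptions. It records timestamped positions and flags segments where movement speed drops below a walking threshold as candidate slowdown points.

```python
import math
from dataclasses import dataclass

@dataclass
class Sample:
    t: float   # seconds since session start
    x: float   # position in metres (plan coordinates)
    y: float

def slowdown_points(trace, speed_threshold=0.5):
    """Return samples where the user's speed fell below the threshold (m/s).

    A real system would smooth the trace and cluster nearby hits;
    this sketch just flags individual slow segments."""
    hits = []
    for a, b in zip(trace, trace[1:]):
        dt = b.t - a.t
        if dt <= 0:
            continue
        speed = math.hypot(b.x - a.x, b.y - a.y) / dt
        if speed < speed_threshold:
            hits.append(b)
    return hits

# A short simulated journey: steady walking, then a pause near a sign.
trace = [Sample(0, 0, 0), Sample(1, 1.2, 0), Sample(2, 2.4, 0),
         Sample(3, 2.5, 0),   # slowdown: only 0.1 m covered in 1 s
         Sample(4, 3.7, 0)]
print([(s.t, s.x) for s in slowdown_points(trace)])  # → [(3, 2.5)]
```

Aggregating such hits across many logged journeys is what turns individual traces into the heat-map-style evidence a way-finding expert can then interpret.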

4.14 Design review in the IGLOO Source: CH2M.


4.4 2020 onwards: towards the future of VR in design

The decreasing costs of VR hardware and software, paired with a growing range of in-built capabilities, offer easier access to VR in architectural and engineering practice. This allows designers to experiment with applications across a range of processes associated with design: conceptual design, design generation, participatory design, design coordination and design review. The scepticism in the 1990s about the use of VR in the early stages of design has given way to a proliferation of experiments, some of which leverage the pervasiveness of digital information about the built environment as a starting point for design generation. While in-situ collaborative VR practices become more established, a growing number of initiatives explore ways to recreate this form of social presence in remote design reviews through avatar-based multi-user worlds. The first edition of this book appeared just before Second Life3 popularized the massive multi-user online virtual world not only as a communication platform, but also as a new digital playground, wide open for its users to assign it a meaning – whether that was retailing digital items or crafting an alternative virtual existence altogether. We anticipate more extensive use of VR for remote design work in the future, as avatars become live-streamed scanned images of the collaborators at work on the design. Developments in digital content and information layers will enable users to experience other, more intangible aspects of how the design performs in time, through sound or touch. These developments, and the imagination of designers in using the medium, will shape the use of VR in design.
There are questions about how abstract the representation should be: the more aesthetically polished a model is, the more likely it is to be used simply as a vehicle for persuasion and public relations, with significant energy diverted from design to the production of representations (Erickson, 1995). Clients and end-users will generally feel more comfortable proposing design changes if it does not appear that decisions have already been made and designs finalized, an impression that arises more often when they are presented with highly detailed or photorealistic models. With the increasing capabilities of technology and the growing richness of digital information, the task for VR-experience designers will be as much about selecting the relevant pieces of information to show to the relevant users as about the design itself. Information overload remains a tangible risk in an increasingly digitized world and, if not carefully considered, could easily be counterproductive. With the increased use of virtual reality and digital media for both design and presentation, professionals will still need to consider the extent of detail shown in presentations. In design presentation and review, the general lack of an inherent narrative structure in virtual reality applications has been a recurring issue. One concern is that virtual reality models may not provide designers and clients with an appropriate balance of participation and control. Researchers at Penn State illustrated this challenge when users who were reviewing and freely navigating a design


displayed on three large screens began to notice details, such as wall colours or interior materials, as they appeared in the model; these were irrelevant to the given design stage and diverted the discussion from the meeting agenda (Liu et al., 2014). To achieve greater maturity in the use of VR, designers need to address the levels of detail and realism included in VR models so that they support, rather than distract or even mislead, users in providing meaningful feedback. While there is no intrinsic narrative structure in VR, we anticipate a growing maturity in using VR in a structured manner within a wider design discussion. Methods for achieving this include developing a predetermined review agenda of specific design spaces and questions, or sectioning a model into specific scenes to constrain viewer navigation to the sections relevant to a given review. Established users of virtual reality often set up a series of predetermined viewpoints to help guide users through a model, familiarize them with key aspects of the design and direct their attention to key areas in the design review process. However, crafting this structure and focused design exploration requires a careful understanding of user behaviour and additional planning. As we become more sophisticated users of interactive, spatial and real-time virtual environments, we will learn the extent to which designers should control the narrative and representational experience to tell a story and focus client and end-user attention on relevant design issues. We noted at the start of this chapter that design sits between use and production. We turn to this wider production process in the next chapter to discuss how technologies bring together professionals from different parts of the life cycle of buildings and infrastructure.
This process offers new opportunities for designers to visualize and consider tasks such as material supply, manufacture and assembly, and operator safety in assembly, as well as the safety of those who operate and maintain our built environment.
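A predetermined review agenda of the kind described in this chapter can be represented very simply. The sketch below is a hypothetical illustration (names and structure are assumptions, not drawn from any of the tools discussed): it pairs predetermined viewpoints with the questions to be asked at each stop, so a facilitator can step through the model in a fixed order rather than let participants wander into irrelevant detail.

```python
from dataclasses import dataclass, field

@dataclass
class Viewpoint:
    name: str        # e.g. "Reception desk, eye height"
    position: tuple  # (x, y, z) camera target in model coordinates
    questions: list = field(default_factory=list)

# An agenda constrains the review to scenes relevant to this design stage.
agenda = [
    Viewpoint("Main entrance", (0.0, 0.0, 1.6),
              ["Is the entrance visible from the street?"]),
    Viewpoint("Stair core", (12.0, 4.0, 1.6),
              ["Is there clearance for maintenance access?"]),
]

def run_review(agenda):
    """Step through viewpoints in order, yielding camera targets and prompts."""
    for step, vp in enumerate(agenda, start=1):
        yield step, vp.position, vp.questions

for step, target, questions in run_review(agenda):
    print(step, target, questions)
```

In practice each yielded target would drive the VR camera, and responses would be recorded against the agenda item, giving the session the narrative structure that free navigation lacks.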

Notes

1 In computer graphics, a spline is a polynomial curve defined by a number of points; it is named after the mechanical splines that draftsmen used in shipbuilding.
2 Non-uniform rational basis spline, or NURBS, is a mathematical model for creating curves and surfaces, commonly used in computer-aided design and manufacturing.
3 http://secondlife.com/

References

Ahlers, K.H., Kramer, A., Breen, D.E., Chevalier, P.-Y., Crampton, C., Rose, E., Tuceryan, M., Whitaker, R.T., Greer, D., 1995. Distributed augmented reality for collaborative design applications. Computer Graphics Forum 14(3), 3–14.
Berg, L.P., Vance, J.M., 2017. Industry use of virtual reality in product design and manufacturing: A survey. Virtual Reality 21(1), 1–17.


Bochenek, G.M., Ragusa, J.M., Malone, L.C., 2001. Integrating virtual 3-D display systems into product design reviews: Some insights from empirical testing. International Journal of Technology Management 21(3–4), 340–352.
Campbell, D., Wells, A., 1997. A critique of virtual reality in the architectural design process. Unpublished Report, HIT Lab, Seattle.
Chandrasekaran, B., 1990. Design problem solving: A task analysis. AI Magazine 11(4), 59–71.
Christiansson, P., Svidt, K., Sørensen, K.B., Dybro, U., 2011. User participation in the building process. Journal of Information Technology in Construction 16, 309–334.
Conway, B., 2001. Development of a VR system for assessing wheelchair access. Launch of 4th Call and EQUAL Research Network, 13, www.fp.rdg.ac.uk/equal/Launch_Posters/equalslidesconway1/sld001.htm
Derbyshire, A., 2001. Editorial: Probe in the UK context. Building Research & Information 29(2), 79–84.
Dunston, P.S., Arns, L.L., McGlothin, J.D., 2010. Virtual reality mock-ups for healthcare facility design and a model for technology hub collaboration. Journal of Building Performance Simulation 1, 185–195.
Dunston, P.S., Arns, L.L., McGlothin, J.D., 2007. An immersive virtual reality mock-up for design review of hospital patient rooms. Presented at CONVR 2007, Penn State University, University Park, PA, 9.
Dunston, P.S., Arns, L.L., McGlothlin, J.D., Lasker, G.C., Kushner, A.G., 2011. An immersive virtual reality mock-up for design review of hospital patient rooms, in: Wang, X., Tsai, J. (Eds.), Collaborative design in virtual environments. Springer, Berlin, Heidelberg, pp. 167–176.
Erickson, T., 1995. Notes on design practice: Stories and prototypes as catalysts for communication, in: Carroll, J.M. (Ed.), Scenario-based design: Envisioning work and technology in system development. John Wiley & Sons, Inc., Chichester, UK, pp. 37–58.
Furness, T., 1987. Designing in virtual space, in: Rouse, W.B., Boff, K.R. (Eds.), System design: Behavioral perspectives on designers, tools, and organization. North-Holland, New York, NY, pp. 127–143.
Glancey, J., 2001. Fantasy football: Jonathan Glancey builds Leicester City a new stadium in 15 minutes flat. The Guardian 13 May, https://www.theguardian.com/culture/2001/may/14/artsfeatures.arts
Gross, M.D., Do, E.Y.-L., 1996. Ambiguous intentions: A paper-like interface for creative design. Proceedings of the 9th Annual ACM Symposium on User Interface Software and Technology, Seattle, WA, pp. 183–192.
Henderson, K., 1999. On line and on paper: Visual representations, visual culture and computer graphics in design engineering. MIT Press, Boston.
Henry, D., Furness, T., 1993. Spatial perception in virtual environments: Evaluating an architectural application. Proceedings of IEEE Virtual Reality Annual International Symposium, Seattle, WA, pp. 33–40.
Hou, J., Su, C., Zhu, L., Wang, W., 2008. Integration of the CAD/PDM/ERP system based on collaborative design. Presented at the 2008 ISECS International Colloquium on Computing, Communication, Control, and Management, Guangzhou, China, pp. 561–566.
Kodama, F., 1995. Emerging patterns of innovation. Harvard Business School Press, Boston.
Kumar, S., Hedrick, M., Wiacek, C., Messner, J.I., 2011. Developing an experienced-based design review application for healthcare facilities using a 3d game engine. Journal of Information Technology in Construction (ITcon) 16 (Special Issue), pp. 85–104. www.itcon.org/2011/6


Kurmann, D., Elte, N., Engeli, M., 1997. Real-time modeling with architectural space. Presented at CAAD Futures 1997, München, pp. 809–820.
Lawrence, R.J., 1987. House planning: Simulation, communication and negotiation, in: Lawrence, R.J. (Ed.), Housing dwellings and homes: Design theory, research and practice. John Wiley & Sons, Inc., Chichester, UK, pp. 209–240.
Lawson, B., Roberts, S., 1991. Modes and features: The organization of data in CAD supporting the early phases of design. Design Studies 12(2), 102–108.
Leaman, A., Bordass, B., 2001. Assessing building performance in use 4: The Probe occupant surveys and their implications. Building Research & Information 29(2), 129–143.
Liu, Y., Lather, J., Messner, J., 2014. Virtual reality to support the integrated design process: A retrofit case study. Presented at Computing in Civil and Building Engineering, American Society of Civil Engineers, Orlando, FL, pp. 801–808.
Maftei, L., 2015. The use of immersive virtual reality technology in the design process: A reflective practice approach (Ph.D. thesis). School of Construction Management and Engineering, University of Reading, UK.
Maftei, L., Harty, C., 2015. Designing in caves: Using immersive visualisations in design practice. International Journal of Architectural Research: ArchNet-IJAR 9(3), 53–75.
Merrick, K.E., Gu, N., 2011. Case studies using multiuser virtual worlds as an innovative platform for collaborative design. Journal of Information Technology in Construction (ITcon) 16(12), 165–188.
Messner, J., 2006. Evaluating the use of immersive display media for construction planning, in: Smith, I. (Ed.), Intelligent computing in engineering and architecture: Lecture notes in computer science. Springer, Berlin/Heidelberg, pp. 484–491.
Mitchell, M.J., McCullough, M., 1995. Digital design media. International Thomson Publishing, Inc., New York.
Nykänen, E., Porkka, J., Kotilainen, H., 2008. Spaces meet users in virtual reality. Proceedings of ECPPM 2008 Conference on eWork and eBusiness in Architecture, Engineering and Construction: ECPPM 2008, CRC Press, Boca Raton, FL, pp. 363–368.
Osberg, K., 1997. Spatial cognition in the virtual environment. HIT Lab, Seattle.
Otto, G., Messner, J.I., Kalisperis, L.N., 2005. Expanding the boundaries of virtual reality for building design and construction. Presented at the ASCE International Conference on Computing in Civil Engineering, Cancun, Mexico.
Palmer, C., 2011. CAVE-CAD software will help mine human brain to improve architectural design. http://ucsdnews.ucsd.edu/newsrel/general/20110714CAVE-CAD.asp
Pinet, C., 1997. Design evaluation based on virtual representation of spaces. Proceedings of ACADIA’97 Conference, Quebec, Canada, pp. 111–120.
Schön, D., 1983. The reflective practitioner. Basic Books, New York.
Schrage, M., 2000. Serious play: How the world’s best companies simulate to innovate. Harvard Business School Press, Boston, MA.
Suwa, M., Gero, J.S., Purcell, T., 1999. Unexpected discoveries and s-inventions of design requirements: A key to creative designs. Presented at Computational Models of Creative Design IV, Key Centre of Design Computing and Cognition, University of Sydney, Sydney, Australia, pp. 297–320.
Verlinden, J.C., De Smit, A., Peeters, A.W., van Gelderen, M.H., 2003. Development of a flexible augmented prototyping system. Journal of WSCG 11(1), 496–503.
Verlinden, J.C., Horváth, I., 2009. Analyzing opportunities for using interactive augmented prototyping in design practice. AI EDAM 23(3), 289–303.


Wang, X., Dunston, P.S., 2013. Tangible mixed reality for remote design review: A study understanding user perception and acceptance. Visualization in Engineering 1(1), 8.
Whisker, V.E., Baratta, A.J., Yerrapathruni, S., Messner, J.I., Shaw, T.S., Warren, M.E., Rotthoff, E.S., Winters, J.W., Clelland, J.A., Johnson, F.T., 2003. Using immersive virtual environments to develop and visualize construction schedules for advanced nuclear power plants. Proceedings of ICAPP, Cordoba, Spain, pp. 4–7.
Whyte, J., 2000. Virtual reality applications in the house-building industry (Ph.D. thesis). Loughborough University, Loughborough.
Zimmerman, A., Martin, M., 2001. Post-occupancy evaluation: Benefits and barriers. Building Research & Information 29(2), 168–174.


Chapter 5

Visualizing construction

The built environment is produced through a set of activities, often coordinated globally, which take raw materials into a production process, manipulate them in factory environments and assemble them on the construction site. Digital technologies are transforming many of these activities through increased automation, off-site manufacture and on-site assembly. Over the last 30 years, the use of VR has moved from design into construction, with increasing potential for simulating dynamic operations in VR and with uses emerging for training operators and augmenting site operations. Yet, compared with other complex product industries (Gann, 2000; Hobday, 1996), the uptake of VR- and AR-supported production of the built environment has been relatively slow. For example, industries such as oil and gas, aerospace, mining and manufacturing were early users of interactive 3D walkthroughs of complex engineering data to check for constructability or operations (Pajon and Guilloteau, 1995), as shown in Box 5.1. Within construction, the flow of information between members of the project team, suppliers and manufacturers has traditionally been inadequate. Effective coordination that supports a shared understanding of the design ensures that all project components perform, align and fit together, thus reducing constructability problems, delays on the construction site, waste and compromised safety. Design for Manufacture and Assembly (DFMA) is one such coordination approach, testing the constructability of a product to inform the design; it is particularly important in modular, off-site construction strategies. In such instances, VR can be used to test the design assembly, both to find faults in the design or the production process earlier, when they are less expensive to correct, and to explore innovative solutions.
Construction teams and their supply chains are beginning to use VR to visualize and manage increasing volumes of engineering and design data for projects such as airports, hospitals, sport facilities, research laboratories and shopping malls. VR is used on infrastructure projects, including road and railway networks, to help with decisions around positioning traffic signals and visibility of signs or roads for the


Box 5.1 Early use of VR and AR in other complex product industries

Virtual prototyping and simulation quickly became standard practice in car manufacturing and aerospace engineering, inspiring similar collaborative attempts in the construction sector. The aerospace industry was an early investor in virtual reality; for example, the manufacturing company Rolls-Royce used it in the development of its Trent 800 aero engine (Figure 5.1). Jaguar Racing, a company that designs and produces Formula One racing cars, uses a 3D model to optimize design time and to collaborate in identifying design conflicts early in the process (Nevey, 2001). These types of benefits are being sought in the construction sector through the increased use of object-oriented techniques and product modelling. In a petrochemical plant design for ICI and Fluor Daniel in 1993 (Figure 5.2), both companies were interested in using virtual reality not only as a complementary technology to CAD or as a means of replacing costly scale plant models, but also as a mechanism for improving work practices and reducing plant design and total life-cycle costs. While many construction professionals working on fixed-price projects use virtual reality after bidding, owners may also see the provision of 3D information as a way of reducing their costs. Three-dimensional laser scanning was used to obtain accurate information about the Forcados Crude Loading Platform before a major upgrade in 1999, as there was a lack of detailed and accurate as-built drawings of the platform

5.1 Rolls-Royce Trent 800 Engine Source: Virtual Presence Ltd.


5.2 ICI/Fluor Daniel petrochemical plant project Source: Virtual Presence Ltd.

5.3a A view of the Forcados Crude Loading Platform. This model was input into virtual reality using 3D laser scanning techniques. Source: SHELL - MENSI, provided by AG Electro-Optics Ltd - http://www.ageo.co.uk/laser_scanning/

(Figures 5.3a–b). The owner of the facility, Shell Petroleum Development Company of Nigeria (SPDC), felt that providing such information would better communicate the work scope to the contractor teams and thus reduce risk during bidding, resulting in commercially attractive bids.


5.3b Another view of the Forcados Crude Loading Platform. Source: SHELL - MENSI, provided by AG Electro-Optics Ltd - http://www.ageo.co.uk/laser_scanning/

drivers or operators. Virtual reality can be used to coordinate the work of the different professionals involved in project construction and to support a shared understanding of project information by visualizing the underlying engineering data. In enabling users to visualize spatial, temporal or other information about the designed product in a given context, lead users see virtual reality as having the potential to reduce risks, increase technological innovation and improve business processes. At the construction stages, virtual reality has been seen as a way to reduce redesign work and construction delays, review installation sequences, facilitate concurrent engineering processes and increase the quality of the end product. In the preceding chapter, we explored how, by visualizing available engineering and design data in a more intuitive, dynamic and interactive manner, users of VR can relatively quickly prototype and evaluate design alternatives. In the following sections, we focus on how VR is used in construction and look at some representative examples of how technology development progresses towards automating processes, such as fault or clash detection. We also look at examples of how technologies


can inform processes (Zuboff, 1985), providing new insights to knowledge workers to explore how design and construction processes can be improved. Thus, the following sections chart the development of approaches to visualizing construction, from early experiments with using VR and automating clash detection in the 1990s (section 5.1) to the visualization of construction timelines in the 2000s (section 5.2) and the use of data from the site and VR for training in the 2010s (section 5.3). As in previous chapters, we conclude by considering future directions and developments in the use of VR systems in production processes (section 5.4).

5.1 1990–1999: from design into construction

The cost of making design changes increases dramatically once a project reaches the construction stages. By using virtual reality to check for design errors and incompatibilities, the amount of time, materials and money wasted on site can be reduced, lowering the overall design and construction costs. Virtual reality can also be used to improve the robustness and safety of the overall design, reducing the risk of design faults and hence the risk of litigation due to operational failures. In the 1990s, identifying errors and clashes early in the process was an important motive for construction contractors to use VR, as they were increasingly responsible for the spatial coordination of detailed designs. On many fixed-price projects, contractors found themselves responsible for any rework necessitated by incompatibility problems, which could result in significant redesign costs and delays, especially as profit margins were low. As one manager put it, “When we accept bad quality information, we accept risk.” Managing the associated risks by improving the accuracy of construction information was therefore seen as vital. Digitisation of processes did not necessarily make design coordination easier. Slight differences between the design drawings of different professionals may lead to constructability problems even when standard drawing procedures are followed. In the past, contractors employed staff to manually check designs by overlaying paper drawings; however, as CAD can be infinitely precise, digital drawings often went unchecked, as everyone assumed that the information was correct. This type of error is often introduced into the process as different professionals redraw, rather than reuse, CAD data. For the contractor, this could make a digital coordination process worse than the paper-based practice.
In the following example (Figures 5.4a–b), the problem of coordination was compounded by poor communication and an unlogged process. Two different design professionals were alerted to the design conflict, and both made separate changes to their drawings, leading to a redesign that was incompatible in a different way. In collaboration with Salford University, UK contractor Laing Construction developed integrated project databases (Aouad et al., 1997) and used NavisWorks, which allowed them to combine 3D models to check for interference, and

Visualizing construction

107

5.4a Superimposed design drawings showing the walls in slightly different places in heating, ventilation and air conditioning (HVAC) drawings and structural steel engineering drawings Source: Laing Construction

5.4b A door detail from the same set of drawings Source: Laing Construction

comment, redline and otherwise interactively visualize the information. For this contractor, visualization in an interactive, spatial, real-time medium became central to the process of ensuring accurate construction information. They were early users of the NavisWorks software as a clash detection tool on a number of projects, including Basingstoke Festival Place, UK (see Box 5.2).

Box 5.2

Basingstoke Festival Place, UK

More than 100 CAD models were used in the detail design and construction of the Basingstoke Festival Place shopping mall. Once imported into the i3D review software, NavisWorks, the in-built compression reduced the file size, allowing the entire project to be navigated in real time (Figures 5.5a–b).

5.5a The full rendered model of Basingstoke Festival Place, created by Laing Construction in the NavisWorks software package Source: Laing Construction

5.5b The model of Basingstoke Festival Place, created by Laing Construction, with selective loading of subsystems Source: Laing Construction


Sectioning, annotation and clash detection functions in the review tool were used to highlight conflicts, including insufficient clearances for building use, maintenance or construction. For example, in one of the brick stair towers in the centre of the scheme, it was found that there would have been no room to put up scaffolding, and the design was changed. A manager said: “If we had not been able to identify this error prior to the building phase, the project would have been subject to a significant delay resulting in huge costs in terms of time and money.”

In the 1990s, construction contractors were rarely getting 3D building information from architects and engineers, but were rather receiving information in drawing formats. At about the same time, another contractor argued that building a 3D model for design review at the detailed design stage was valuable for enabling greater coordination (and hence reducing the number of change-orders). In this manner, they could simultaneously view CAD models from different consultant designers and fabricators and rapidly identify any problems with the constructability of the design.

The project management and engineering group Bechtel has a track record of using visualization techniques within the oil and gas sector. Although different sectors use domain-specific software packages, in-house skills and a reputation for using these techniques have helped the company become a lead user of virtual reality within the construction sector. Bechtel London Visual Technology Group used virtual reality to help engineers coordinate design and construction on the Luton Airport project in the UK. On this project, there were about 10,000 construction drawings, so the potential for errors was high, and there was a need to rigorously check details. A virtual model of Luton Airport showing the different engineering subsystems, including the HVAC and steel subsystems, was distributed to the engineers on site. The model allowed both design and site engineers to perform visual clash detection and reduce the risk of errors, and thus their liability for any resulting costs.

Although most professionals use computers, different disciplines and users favour different CAD and information authoring packages to support their specialized tasks. Managing and coordinating their activities and the exchange of data, both within the individual organization and across project teams spanning organizational boundaries, is not an easy task.
During the 1990s, companies began to explore methods to more easily detect model discrepancies between different disciplines through automated collision detection, and to investigate how VR could help coordinate work to improve constructability and reduce costly redesign and waste. These developments continued across the 1990s and 2000s, with VR applied to an increasing range of tasks, such as checking the location of equipment and key safety controls within a number of design subsystems to improve safety.
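At its core, the automated collision detection described above is a geometric overlap test between elements drawn from different discipline models. A minimal sketch in Python, assuming each element is reduced to an axis-aligned bounding box (a common first-pass filter before precise geometry checks); the element names, coordinates and tolerance are illustrative, not taken from any of the packages or projects described:

```python
from itertools import combinations

def boxes_clash(a, b, tolerance=0.0):
    """Return True if two axis-aligned bounding boxes overlap.

    Each box is ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    A positive tolerance also flags near-misses ("soft clashes"),
    e.g. insufficient maintenance or construction clearance.
    """
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] - tolerance <= bmax[i] and
               bmin[i] - tolerance <= amax[i]
               for i in range(3))

def find_clashes(elements, tolerance=0.0):
    """Check every pair of elements from federated discipline models."""
    return [(n1, n2)
            for (n1, b1), (n2, b2) in combinations(elements.items(), 2)
            if boxes_clash(b1, b2, tolerance)]

# Illustrative elements: an HVAC duct passing through a steel beam.
model = {
    "HVAC/duct-07": ((0.0, 0.0, 2.5), (6.0, 0.4, 2.9)),
    "STR/beam-12":  ((2.0, -1.0, 2.7), (2.3, 3.0, 3.1)),
    "ARCH/wall-03": ((8.0, 0.0, 0.0), (8.2, 5.0, 3.0)),
}
print(find_clashes(model))  # the duct and beam overlap; the wall is clear
```

Production clash-detection tools refine this pairwise test with precise solid geometry and discipline-aware rules, but the principle — compare every element of one model against every element of another — is the same.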


5.2 2000–2009: simulating construction and operations

During the 2000s, significant advances were made in combining 3D models with construction schedules to simulate construction operations, as well as the operation of completed buildings and infrastructures (discussed more extensively in Chapter 3), in order to inform the construction process.

5.2.1 Checking safety-critical elements

On some projects, virtual reality was used to ensure that safety-critical parts of the design worked within the context of the total design. Major rail initiatives in the UK increasingly saw virtual reality as an effective way to review designs in order to reduce the potential for accidents and possible charges of criminal negligence. For example, Bechtel Visual Technology Group used a virtual reality tool developed in conjunction with the engineering software specialist Infrasoft in the rail sector. In work that began in the 1990s on the Thameslink 2000 rail project to deliver an improved train line from Brighton to Bedford through London, virtual reality was used to help engineers model the position of new signalling equipment from the driver’s eye view, to avoid any risk of a signal at red being overlooked (Figures 5.6a–b). There was a significant investment in this software, with one to two people working on the model throughout the project delivery phase.

5.6a View of Farringdon Station, London, UK, in the Bechtel model of Thameslink 2000 Source: Bechtel London Visual Technology Group


5.6b View of a railway cutting from the driver’s viewpoint in the Bechtel model of Thameslink 2000 Source: Bechtel London Visual Technology Group

The VR model was used in project meetings to allow the client (Railtrack), suppliers, regulators and consultants to review the design and ensure that safety-critical aspects, such as the signalling, were coordinated with the rest of the subsystems (Figures 5.7a–b). As all companies within the consortia working on the project use and visualize the data, issues of intellectual property rights may be expected to arise alongside the benefits of shared data visualization.

Previously, a signal sighting committee, comprising as many as twelve people from rail companies and safety inspectors, would make four or five visits to the track for every signal layout change (Glick, 2001). According to the Thameslink 2000 modelling manager for the engineering group Bechtel, the new software reduced this to a single visit per signal. For all companies involved in railway design, safety is a critical issue. Railtrack claimed that their £150,000 (~$200,000) investment in VR software was increasing safety, speeding up track improvements and saving millions of pounds (Glick, 2001).

Once created, the model continued to yield value, as it was reused by the signal supplier to test the reflectivity of new designs and improve signal design, as well as by train operating companies to train drivers and familiarize them with the route. Thus, checking the location of safety-critical parts reduced the risk of design errors and brought major benefits to the companies involved.


5.7a View of a signal at red from the driver’s viewpoint in the Thameslink 2000 model Source: Bechtel London Visual Technology Group

5.7b View of the same signal at green from the driver’s viewpoint Source: Bechtel London Visual Technology Group

5.2.2 Construction planning and monitoring

In the 2000s, site construction became a focus of visualization research because contractors often deal with complex graph-based data to plan spatial-temporal tasks such as resource utilization, equipment logistics and physical or spatial conflicts. Given that construction progress is typically subject to changes, delays or revisions, graphical representation of discrepancies is critical for important tasks such as monitoring activities and comparing progress against the planned baseline, and for quickly communicating and resolving conflicts during coordination meetings. Thus, at the time when the first edition of this book was published, 4D-CAD models – 3D CAD data linked to scheduling information – were being explored as a method to visualize the construction sequence and allow construction designers to more quickly prototype and test alternative construction plans.

Disney Imagineering Research and Development worked with Stanford University on the Paradise Pier project (Koo and Fischer, 2000) to create a 4D-CAD model to improve engineering sensibility and construction management. Disney Imagineering displayed the 4D-CAD model on a large screen, allowing the general contractor to review the suitability of lay-down areas and orchestrate manpower, verbalizing issues in a practical manner: “I can’t put this in here as that is in the way.” More than just enhancing understanding of the data, Disney Imagineering felt that presence within a virtual environment helped to create connections between people while reviewing common problems. One engineer summed this up as “Problems found together are solved together.” For Disney Imagineering Research and Development, 4D-CAD was a good place to apply a quantitative analytical approach to construction and link this back to design choices.
They own and operate buildings, so they can choose to spend more in capital at the design and build stages rather than in running costs in operation. The long-term vision was to measure and monitor life cycles (e.g. “how many people have walked on this carpet”) and use virtual reality to visualize this information, achieving a demand-driven approach to maintenance.

Since the early 2000s, rapid developments in mobile technology, location tracking, sensors and cameras have pushed 4D-CAD visualization initiatives further by exploring dynamic methods for users to capture, visualize and interact with simulated construction data for site monitoring, hazard recognition or training. The 2000s also saw substantial work using augmented reality for site management and construction-related tasks. Real-world (site) information provides a context that allows augmented reality to quickly locate and superimpose computer-generated information relative to the user (Behzadan and Kamat, 2007). Augmented reality in this instance plays a role in both data capture and visualization. This approach opens opportunities for constructors on site to quickly record actual progress and compare it against the planned sequence to determine discrepancies or defects (Behzadan and Kamat, 2007; Golparvar-Fard et al., 2009). Laing O’Rourke used augmented reality at Cannon Street Station, with markers set out on the concourse to allow them to display the model of the station, as well as other models (as shown in Figure 5.8).
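Conceptually, a 4D-CAD model is a join between geometric elements and schedule activities, so that the state of the site can be rendered for any chosen date. A minimal sketch of that linkage, assuming a simple mapping from activities to the elements they install; the activity names, dates and element IDs are invented for illustration:

```python
from datetime import date

# Each schedule activity installs a set of model elements
# (illustrative: a steel frame, HVAC risers and fit-out walls).
schedule = [
    ("Erect steel frame",   date(2005, 3, 1),  date(2005, 4, 15), {"beam-12", "col-03"}),
    ("Install HVAC risers", date(2005, 4, 10), date(2005, 5, 20), {"duct-07"}),
    ("Fit out walls",       date(2005, 5, 25), date(2005, 6, 30), {"wall-03"}),
]

def site_state(on):
    """Classify elements as built, in progress or not started on a date,
    which a 4D viewer would render as, e.g., solid, highlighted or hidden."""
    state = {"built": set(), "in_progress": set(), "not_started": set()}
    for _name, start, finish, elements in schedule:
        if on >= finish:
            state["built"] |= elements
        elif on >= start:
            state["in_progress"] |= elements
        else:
            state["not_started"] |= elements
    return state

# What would the 4D view show in mid-April 2005?
print(site_state(date(2005, 4, 12)))
```

Stepping such a function through the calendar, frame by frame, is what produces the animated construction sequence that teams review; alternative plans are tested simply by editing the activity dates and replaying.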


5.8 An early use of augmented reality by Laing O’Rourke, displaying a model on the concourse at Cannon Street

5.2.3 Construction equipment operation training

Virtual reality environments developed to support the training of staff to operate machines or vehicles in realistic conditions, such as flight simulators, have remained at a more conceptual stage of development in construction compared to other engineering and manufacturing domains. The appeal of recreating real-life scenarios while minimizing risks and associated costs underpins the attempts to deploy virtual and augmented reality for construction training tasks involving heavy machinery and equipment operation. For construction task scenarios, such as crane or excavator operation, site installation and quality inspections, both virtual and augmented reality offer capabilities to train novices in a realistic environment. In miner training, virtual reality has been explored as a task-based experiential learning environment to improve safe operation of mining equipment such as conveyor belts (Lucas and Thabet, 2008), and to identify safety hazards (Orr and Girard, 2003; Ruff, 2001). In these instances, virtual reality seeks to replicate a work environment that would be too dangerous or expensive to use as a real setting.

The nature of the task for which the novice is being trained may at times require some aspect of the real environment or a contextual setting to be incorporated into the training. In such cases, instead of recreating an entirely virtual environment, augmented reality is seen as a preferred method to retain the contextual authenticity of a real workplace, augmented by relevant virtual training information. For example, the work of Wang and Dunston (2007) builds support for using augmented reality to superimpose virtual training information on real construction equipment, because it allows for more appropriate acquisition of motor skills for operating heavy equipment in real working conditions. In this case, the complexity of the real conditions and the training objectives determine the viability of an augmented reality approach over the flexibility, but also the associated costs, of recreating the conditions using virtual reality.

5.3 2010–2019: training operators and augmenting operations on site

The new generation of VR and AR hardware and software in the current decade has led to substantial experimentation with their use for training operators and for augmenting operations on site.

5.3.1 Automating fault detection

High fatality and accident rates in the construction industry have motivated the development of various sensor-based alert and field safety systems linked to virtual representations of site equipment, such as cranes (e.g. Li and Liu, 2012), to monitor particular construction equipment activity. This approach is expected to generate rich data about operational performance, which can then be visualized and analysed for potential improvements. Examples of data-driven remote monitoring through virtual representations of site operations, such as this one, demonstrate broader attempts to capture and analyse real site data in order to better predict potential safety hazards and give adequate early warning of unsafe operations.

Recent research seeks to use augmented and virtual reality, together with mobile technologies, sensors and machine learning, to proactively monitor for and prevent defects during construction. Automation of data registration and recognition underpins a large body of work aimed primarily at allowing construction teams to monitor construction progress and quality. Ongoing research (e.g. Zhu et al., 2011; Zhu and Brilakis, 2010) is developing methods to automatically recognize component materials from site photography, automating surface quality inspections for defects such as cracks, corrosion or other faults that may compromise structural integrity.
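The sensor-based alert systems described above typically reduce to a proximity check between tracked equipment and tracked workers. A hedged sketch, assuming each tag reports a 3D position in site coordinates; the tag names, positions and danger radius are illustrative only, not drawn from any cited system:

```python
import math

def proximity_alerts(equipment_pos, workers, danger_radius):
    """Return worker tags inside the danger radius of a piece of equipment.

    equipment_pos: (x, y, z) of e.g. a suspended crane load, in metres.
    workers: mapping of worker tag -> (x, y, z) position.
    Results are sorted nearest-first so the most urgent alert comes first.
    """
    alerts = []
    for tag, pos in workers.items():
        dist = math.dist(equipment_pos, pos)
        if dist <= danger_radius:
            alerts.append((tag, round(dist, 1)))
    return sorted(alerts, key=lambda a: a[1])

crane_load = (10.0, 20.0, 15.0)   # suspended load position
crew = {
    "W-041": (12.0, 21.0, 0.0),   # directly below the load
    "W-017": (45.0, 60.0, 0.0),   # far side of the site
}
print(proximity_alerts(crane_load, crew, danger_radius=20.0))
```

A deployed system would run this check continuously against streamed positions and feed the alerts both to the workers' devices and to the virtual representation of the site used for remote monitoring.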

5.3.2 Simulating site operations

Simulating construction sequences in the form of 4D VR models remains a staple use, although we have begun to see a shift towards using VR as an interactive tool for planning construction and safety-related tasks, and not only for reviewing a pre-developed sequence. On a tower project in London, currently under construction, the project team uses a complex 4D model linked to interactive virtual reality to both create and review the construction and logistics plans. The complexity of this 62-storey project, located in an historic urban context with limited site access, is further complicated by logistics planning built around a 4,200-activity schedule. To enable the project team to actively create safety-focused construction plans in VR, London-based company Freeform3D was tasked with developing an interactive 4D model using federated design and construction information from the contractor (Multiplex) and structural engineer (WSP). Over a period of six months, in consultation with the project team, Freeform3D developed a 4D model that incorporated highly detailed site logistics information, including existing piling locations, hoarding lines, access gates, escape routes, tower crane positions, a temporary site staircase and safety handrails (Figures 5.9a–d). With a live link to the construction schedule, project team members used an HTC Vive VR headset to review the resulting safety and site logistics with close to real-life quality and accuracy.

Compared to standard Gantt charts and drawing plans, walking through a virtual construction site and reviewing the construction sequence allowed the team to more easily inspect the plan and quickly identify hazards and locations for appropriate signage, green routes or tether zones. For the contractor, the visible benefit of this approach was the efficiency with which changes to the sequence and site logistics could be implemented, ultimately leading to time and cost savings.

5.9a–b Screenshots from the immersive 4D model allowing team members to create and review the site safety and logistics plan Source: Freeform3D

5.9c–d Screenshots from the immersive 4D model allowing team members to create and review the site safety and logistics plan Source: Freeform3D


Meanwhile, complementing attempts to automate tasks such as remote site monitoring or fault detection, other studies aim to engage users on site in performing field tasks with the help of virtual data overlaid on the real context. Extensive work in simulating site operations (Behzadan et al., 2015; Behzadan and Kamat, 2011) reveals a systems development trajectory towards integrating augmented reality with global positioning systems (GPS) in order to accurately track the position of both the user and CAD objects (e.g. equipment, material, personnel) in an augmented environment. This approach allows the user to visualize and manipulate virtual data while maintaining an awareness of real site activities, and reduces the need to process large amounts of virtual data, as only relevant information is rendered over the existing background. The reduced amount of CAD data to render in turn allows for a real-time response from the system, with interaction updated as the position of the user or the object changes. At present, the accuracy of GPS is still a limitation, and so correct placement of virtual objects in the real world relies on marker-based tracking or image processing.
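Placing a CAD object relative to a GPS-tracked user amounts to converting both geodetic positions into a shared local frame and rendering the object at the resulting offset. A simplified sketch using an equirectangular approximation, which is adequate over the short distances of a building site (a production AR system would use a full east-north-up transform and fuse in marker or image tracking); the coordinates are invented for illustration:

```python
import math

EARTH_RADIUS = 6_371_000.0  # mean Earth radius in metres

def local_offset(user, target):
    """Approximate east/north offset in metres from user to target.

    user, target: (latitude, longitude) in decimal degrees.
    Equirectangular approximation: valid only over short distances,
    e.g. within a single construction site.
    """
    lat_u, lon_u = map(math.radians, user)
    lat_t, lon_t = map(math.radians, target)
    east = (lon_t - lon_u) * math.cos((lat_u + lat_t) / 2) * EARTH_RADIUS
    north = (lat_t - lat_u) * EARTH_RADIUS
    return east, north

# Illustrative positions: a site engineer and a buried utility marker.
engineer = (51.5007, -0.1246)
utility = (51.5009, -0.1240)
east, north = local_offset(engineer, utility)
print(f"Render marker {east:.1f} m east, {north:.1f} m north of user")
```

The centimetre-level placement that convincing overlay demands is exactly where consumer GPS falls short, which is why the text notes the fallback to marker-based tracking and image processing for final registration.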

5.3.3 New developments in augmented reality on site

Augmented reality is primarily explored for operational and coordination tasks concerning construction equipment on site (Chen et al., 2011; Hammad et al., 2009), and for inspection, checking, or comparing as-planned with as-built progress (Kamat et al., 2011), but its practical application and potential for maintenance tasks remain little realized. The potential for augmented reality to help workers ‘see’ underground utilities during excavation activities and avoid accidental damage (Behzadan et al., 2015) could be extended and integrated with performance data, which would equally enable maintenance workers to locate parts and components hidden behind walls or floors that need to be replaced or repaired. The processing power of mobile devices with embedded cameras, including smartphones and tablets, can make them powerful vehicles for providing users with access to asset data in a variety of scenarios.

Bechtel found that augmented reality can be used on site for getting input from remote experts, who can see what is visible to the operatives on site through their glasses. These glasses not only stream video, but can also record it, making site data available for new forms of analysis. Boeing’s Advanced Manufacturing Research Centre (AMRC), in collaboration with the University of Strathclyde’s Advanced Forming Research Centre (AFRC) and modular building designer and manufacturer Carbon Dynamic, is looking into the use of augmented reality for construction tasks. One use is to examine wall components and construction using Microsoft’s HoloLens, which incorporates a number of environment-aware cameras, to reveal a 3D rendering of the plumbing and wiring hidden behind the surface (Figures 5.10a–b). The primary driver is quality assurance for modular construction projects, which results in saved time and costs.

5.10a–b How the physical environment looks in reality (a) and the 3D rendering that the operator of the Microsoft HoloLens sees (b) Source: AMRC

Other developments in mining, such as the highly successful EU-funded project EMIMSAR (Enhanced Miner-Information Interaction to Improve Maintenance and Safety with Augmented Reality) (Horizon 2020, 2014), demonstrate the applicability of augmented reality for scenarios that involve work with heavy machinery in confined spaces at low visibility. In these scenarios, miners can augment the view of their sensor-based equipment using portable or head-mounted displays for operation and maintenance tasks.


Still, the broader application of augmented reality in site settings is constrained by issues of correct and reliable registration of virtual objects, so that they seamlessly blend with the real environment even when the user moves freely around the site (Azuma, 1997; Behzadan et al., 2015). Lag or inaccurate overlay of virtual information still presents a technological challenge that can disrupt the user experience and the illusion of virtual objects blending into the real world. Thus, most current work focuses on resolving these technology-related challenges, with limited usability testing in practical implementations.

5.3.4 Training construction and maintenance operators

VR coupled with game engines increasingly offers capabilities for developing more effective and versatile learning and training environments, which can potentially reduce the overall costs and time involved compared to traditional on-site training. Hong Kong-based Gammon Construction, in collaboration with the University of New South Wales (UNSW) and the University of Hong Kong, implemented a VR training platform in an effort to reduce accidents on site. The Situation Engine,1 developed at UNSW and used as a teaching tool, provides an immersive environment in which construction workers wearing head-mounted displays navigate potentially hazardous 3D building sites (Figures 5.11a–c). Highly detailed and realistic recreations of real sites, down to the level of machinery and signage, offer various safety scenarios through which both students and construction workers can virtually walk to identify risks of electrocution, falling objects or equipment-related issues, among others. The highly realistic representation primarily aims to bring site authenticity to inexperienced workers, as well as to influence a change in behaviours to increase safety practices on site. One benefit that Gammon Construction reported over the six-month period of using the Situation Engine was significantly reduced training times.

5.11a Gammon Construction worker wearing an HMD to walk through the virtual construction site in the Situation Engine Source: Gammon Construction

5.11b–c Screenshots from the virtual construction site in the Situation Engine Source: Gammon Construction, University of New South Wales

Mortenson Construction, a US construction and real estate development company, integrated VR into a range of project tasks, from early project development, collaborative design verification and design space validation to construction planning and digital fabrication. Mortenson first utilized VR while working on the Walt Disney Concert Hall in Los Angeles, CA, in 2000, using Disney’s CAVE facility as part of collaborative team meetings. More recent experimentation involves HMDs, following the entry of consumer versions to the market, to simulate real-world scenarios, with VR integrated into use-cases for design, construction and operations. One effort looked into the benefits of using VR to train facilities maintenance and operations staff to properly operate complex equipment, and thus reduce the chance of faults and litigation after projects are handed over. While the benefits of VR for Mortenson have been established, the next step is to look into augmented reality, which Ricardo Khan, senior director at Mortenson, sees as having the potential to innovate practices such as field layout (Figures 5.12a–b), automated daily logs, situational safety awareness, productivity time capture and knowledge database access, among others.

5.12a–b Envisioning augmented reality applications for site tasks such as field layout and crew verification Source: Mortenson.


5.3.5 Assembly methods prototyping

As more activities are automated or taken off-site, planning how to assemble all the components in a product or building is a critical step in product design and production (Seth et al., 2011). Similar to planning a construction sequence, planning an assembly process involves considerations not only of the optimal assembly sequence and time, but also of tool handling, reach angles and clearances, operations, access to parts and equipment, and worker and operator safety. The term ‘virtual assembly’ describes an approach that uses immersive virtual reality with haptic devices, such as data gloves (e.g. Carlson et al., 2016; Gallegos-Nieto et al., 2017), which allow users to intuitively interact with virtual objects and realistically simulate the assembly of parts.

This approach to testing and evaluating the assembly process is still represented largely by examples from manufacturing and mechanical assembly, but it demonstrates how processes can be prototyped and evaluated in the early stages, which can in turn influence the design itself. The complexity inherent in the computing power and algorithms necessary to provide an integrated interactive experience, with realistically simulated object behaviours and haptic (force and tactile) feedback, still limits similar applications in building construction practice. A broader challenge continues to be the lack of integration between CAD and data authoring systems and virtual assembly environments, which typically requires the development of custom communication channels using proprietary APIs to read object properties. However, this detailed and process-oriented prototyping of the assembly process, used to inform and change the design accordingly, is an example of how human expertise and tacit knowledge can be mobilized via advanced technologies and captured to inform the design and production processes.
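One computational kernel of the assembly planning described above — finding a sequence that respects precedence constraints, so that no part is installed before the parts it attaches to — can be sketched as a topological sort over a precedence graph. The component names and constraints below are invented for illustration (a modular-bathroom example), not taken from any cited system:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Precedence constraints: each part maps to the set of parts that must
# already be installed before it can be placed.
precedence = {
    "floor_cassette": set(),
    "wall_panel": {"floor_cassette"},
    "plumbing_riser": {"floor_cassette"},
    "basin": {"wall_panel", "plumbing_riser"},
    "mirror_cabinet": {"wall_panel"},
}

# static_order() yields one valid assembly sequence in which every part
# appears after all of its prerequisites.
order = list(TopologicalSorter(precedence).static_order())
print(order)
```

A virtual assembly environment layers the geometric and ergonomic checks — reach angles, clearances, tool access, operator safety — on top of such a feasible ordering, and haptic interaction lets planners test each step by hand rather than by inspection alone.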

5.4 2020 onwards: towards the future of VR in construction

In this chapter, we have seen the use of virtual and augmented reality to prototype and simulate operation and construction processes and to engage interdisciplinary team members in testing ideas, verifying attributes and appraising options. Central to these initiatives are increasingly structured 3D (building) information models. The uptake of structured information through BIM has made data exchange between the majority of information authoring and VR applications less problematic. While applications of VR in the 1990s had the goal of eliminating defects and errors and reducing changes late in the process, the emphasis has shifted to the use of VR, alongside a range of other digital technologies, to visualize the data-feeds that come from sensors on site and to review and update information from construction scheduling, monitoring and engineering simulation tools.

However, visualization models are still often recreated following revisions made to the models in native CAD and simulation data formats, and, consequently, changes made to models in a virtual environment are not easily carried back. There are indications that this will change in the future, with more direct VR interfaces to modelling and simulation tools. Rather than involved processes of translation, where models may take a long time to build and optimize, we see new experimentation with direct links between authoring and visualization packages to allow real-time data visualization and manipulation, where non-geometric data is also directly transferred and no longer has to be manually configured. Integrating different types and levels of project data, such as geographic, geometric and other associated parametric data, together with the ability to manipulate and test alternatives against embedded performance metrics, is the current direction in creating information-rich interactive virtual prototypes.

The future directions in the use of VR in construction are part of the transformation of the sector. If VR is used in a compartmentalized way, it may merely automate existing processes. Members of an organization may see no benefits to the use of the technology and may even perceive it as a potential threat to their position within the organization. The danger of this approach is the view of the technology as a way to inadvertently expose flaws in one’s work and increase liability or the scope of work. Thus, to add value, VR must add information to organizational processes to enable professionals to perform their tasks better.

We see VR and AR as interfaces to data that can increasingly be collected and surveyed remotely. Drones, robots and unmanned vehicles equipped with cameras and laser scanners provide construction professionals with opportunities to easily and quickly survey and inspect less accessible sites and facilities, such as bridges, reducing the associated safety risks for on-site workers.
Coupled with VR or AR, site managers can remotely walk through and inspect the surveyed information, with the flexibility to perform work traditionally located on site. The interest of commercial mobile manufacturers in VR and AR makes these capabilities accessible via smartphones, as well as devices tailored for use in construction, making it easier to take VR and AR onto the construction site.

Health and safety, as main concerns in design and construction, are driving much of the integration of data-capture and surveying technologies with virtual reality for training scenarios and, increasingly, with augmented reality as a support for site-related works. Augmented reality has attracted recent attention for its appeal in overlaying essential data onto real objects in heavy equipment operation training, maintenance, remote operation and site progress monitoring. Lingering practical issues, such as accurate and reliable mapping of digital over physical information, as well as the extent to which AR may be distracting or otherwise affect sight and mobility (Sabelman and Lam, 2015), curtail its wider implementation and drive current research and development in adequate hardware and BIM integration workflows. As with VR, the goal of AR technologies is to seamlessly interface with BIM and provide an unencumbered experience to support specific tasks.


Finally, we see early attempts to leverage research and developments in machine learning and context-aware artificial intelligence towards more predictive analytics (Firmatek, 2017), especially in the context of the design, production and operation of smart built environments. The convergence between smart and autonomous technologies, AI, big data and VR/AR as interfaces to data opens opportunities for automation and remote operations, such as concrete pouring or hazard detection, but also for forecasting, scheduling and process optimization for quality checking and assurance based on knowledge accrued over time (Firmatek, 2017). An equally important consideration brought by these developments is the need to plan for adequate IT infrastructure, as technology components and wearable devices require sufficient bandwidth to process and stream volumes of real-time data between the site and the construction office in order to be useful, efficient and ultimately value-adding.

Note

1 Strachan, F., April 2016. Gaming technology transforms construction safety training. UNSW Sydney Newsroom, http://newsroom.unsw.edu.au/news/art-architecture-design/gaming-technology-transforms-construction-safety-training

References

Aouad, G., Child, T., Marir, F., Brandon, P., 1997. Open Systems for Construction (OSCON). Final Report (DOE Funded Project). Department of Surveying, University of Salford, Salford, UK.
Azuma, R.T., 1997. A survey of augmented reality. Presence: Teleoperators and Virtual Environments 6(4), 355–385.
Behzadan, A.H., Dong, S., Kamat, V.R., 2015. Augmented reality visualization: A review of civil infrastructure system applications. Advanced Engineering Informatics 29(2), 252–267.
Behzadan, A., Kamat, V.R., 2007. Georeferenced registration of construction graphics in mobile outdoor augmented reality. Journal of Computing in Civil Engineering 21(4), 247–258.
Behzadan, A.H., Kamat, V.R., 2011. Integrated information modeling and visual simulation of engineering operations using dynamic augmented reality scene graphs. Journal of Information Technology in Construction (ITcon) 16(17), 259–278.
Carlson, P., Vance, J.M., Berg, M., 2016. An evaluation of asymmetric interfaces for bimanual virtual assembly with haptics. Virtual Reality 20(4), 193–201.
Chen, Y.-C., Chi, H.-L., Kang, S.-C., Hsieh, S.-H., 2011. A smart crane operations assistance system using augmented reality technology. Proceedings of the 28th International Symposium on Automation and Robotics in Construction, Seoul, Korea, pp. 643–649.
Firmatek, 2017. The future of work: How virtual reality and artificial intelligence are changing the jobsite. Firmatek, 28 March, https://www.firmatek.com/2017/03/28/virtual-reality/


Gallegos-Nieto, E., Medellín-Castillo, H.I., González-Badillo, G., Lim, T., Ritchie, J., 2017. The analysis and evaluation of the influence of haptic-enabled virtual assembly training on real assembly performance. The International Journal of Advanced Manufacturing Technology 89(1–4), 581–598.
Gann, D., 2000. Building innovation: Complex constructs in a changing world. Thomas Telford, London.
Glick, B., 2001. Virtual reality boosts train safety. Computing, 19 September, https://www.computing.co.uk/ctg/news/1861342/virtual-reality-boosts-train-safety
Golparvar-Fard, M., Peña-Mora, F., Savarese, S., 2009. D4AR: A 4-dimensional augmented reality model for automating construction progress monitoring data collection, processing, and communication. Journal of Information Technology in Construction (ITcon) 14 (special issue), 129–153.
Hammad, A., Wang, H., Mudur, S.P., 2009. Distributed augmented reality for visualizing collaborative construction tasks. Journal of Computing in Civil Engineering 23(6), 418–427.
Hobday, M., 1996. Complex systems vs mass production industries: A new research agenda. Working Paper prepared for CENTRIM/SPRU/OU Project on Complex Product Systems, EPSRC, Technology Management Initiative GR/K/31756.
Horizon 2020, 2014. Augmented reality improves safety and productivity in mines: Horizon 2020 – European Commission. Horizon 2020, http://programmes/horizon2020/en/news/augmented-reality-improves-safety-and-productivity-mines
Kamat, V.R., Martinez, J.C., Fischer, M., Golparvar-Fard, M., Peña-Mora, F., Savarese, S., 2011. Research in visualization techniques for field construction. Journal of Construction Engineering and Management 137(10), 853–862.
Koo, B., Fischer, M., 2000. Feasibility study of 4D CAD in commercial construction. Journal of Construction Engineering and Management 126(4), 251–260.
Li, Y., Liu, C., 2012. Integrating field data and 3D simulation for tower crane activity monitoring and alarming. Automation in Construction 27, 111–119.
Lucas, J., Thabet, W., 2008. Implementation and evaluation of a VR task-based training tool for conveyor belt safety training. Journal of Information Technology in Construction (ITcon) 13(40), 637–659.
Nevey, S., 2001. Oral presentation given at the 3D Design Conference. The Business Design Centre, London.
Orr, T., Girard, J., 2002. Mine escapeway multiuser training with desktop virtual reality. Proceedings of APCOM 02: Computer Applications in the Minerals Industries, Phoenix, AZ, October 7–10.
Orr, T.J., Fligenzi, M.T., Ruff, T.M., 2003. Desktop virtual reality miner training simulator. International Journal of Surface Mining, Reclamation and Environment.
Pajon, J.L., Guilloteau, P., 1995. Geometry simplification for interactive visualization of complex engineering data, in: Hernandez, S., Brebbia, C.A. (Eds.), Visualization and intelligent design in engineering and architecture II. Computational Mechanics Publications, Boston, pp. 51–57.
Ruff, T.M., 2001. Miner training simulator: User’s guide and scripting language documentation. Spokane, WA: U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health, DHHS (NIOSH) Publication No. 2001–136.
Sabelman, E., Lam, R., 2015. The real-life dangers of augmented reality. IEEE Spectrum: Technology, Engineering, and Science News, http://spectrum.ieee.org/consumer-electronics/portable-devices/the-reallife-dangers-of-augmented-reality


Seth, A., Vance, J.M., Oliver, J.H., 2011. Virtual reality for assembly methods prototyping: A review. Virtual Reality 15(1), 5–20.
Wang, X., Dunston, P.S., 2007. Design, strategies, and issues towards an augmented reality-based construction training platform. Journal of Information Technology in Construction (ITcon) 12(25), 363–380.
Zhu, Z., Brilakis, I., 2010. Parameter optimization for automated concrete detection in image data. Automation in Construction 19(7), 944–953.
Zhu, Z., German, S., Brilakis, I., 2011. Visual retrieval of concrete crack properties for automated post-earthquake structural safety evaluation. Automation in Construction 20(7), 874–883.
Zuboff, S., 1985. Automate-informate: The two faces of intelligent technology. Organizational Dynamics 14(2), 5–18.
Zuboff, S., 1988. In the age of the smart machine: The future of work and power. Basic Books, New York.


Chapter 6

Towards digital maturity

Since the ‘birth’ of VR systems in the last century, their reach and use in the built environment have grown significantly. The primary question we face in this century is not one of technology availability, but of realizing the potential and value the technologies can bring to built environment practice. Computational capabilities have become widely accessible through the pervasive use of multifunctional, inexpensive, portable computers. The last three chapters charted the development of VR systems and their use in the operation, management and planning of cities (Chapter 3); the design of new buildings and infrastructure (Chapter 4); and their construction (Chapter 5). Recent applications and documented use-cases illustrate how far we now are from the infancy of VR in the built environment, since the early experiments with architectural walkthroughs and associated challenges reported by Brooks (1986). The quality of real-time visualization of complex information models shows increasing maturity, with benefits realized in particular operation, design and construction settings. The increasing quality and benefits of existing applications are evocative of further potential. Yet the rapid pace of technological development in VR systems and underpinning digital platforms presents us with the challenge of making appropriate choices for using VR to obtain real value.

In this chapter, rather than discussing the uptake of VR systems as a one-off digital transformation, we use the notion of ‘digital maturity’ (Kane, 2017) to examine the development of new visualization processes and practices. This approach differs from that still found in many business cases for new technology, which frame technology as a binary choice, contrasting traditional paper-based practices with a new digital way of doing business.
Rather than comparing and contrasting traditional modes of construction with the potential of VR and AR, we instead emphasize how the ongoing process of growth and development involved in the uptake of VR systems often introduces new visualization technologies into practices that are already digital. The process of gaining digital maturity involves a number of transitions, which often include cycles of practical learning. There may be a significant move, for example, from processes


and practices that involve the circulation of digital documents to those that involve interacting visually using VR and AR across widely distributed settings.

The first edition of this book came out in the childhood of virtual reality in the built environment, when there was early use but nothing pervasive. One built environment professional interviewed in the first edition indicated that, with using VR systems:

As yet there isn’t a set of rules or guidelines . . . and I would imagine that as we go along we will start to develop a much more solid series of guidelines for what works and what doesn’t.

Since then, a series of international conferences1 has brought leading scholars together to examine construction applications, and there has been ongoing experimentation in commercial practice as software and hardware have become more affordable. The examples added to this second edition show that the use of virtual reality in the built environment is maturing, with a better set of guidelines emerging through widespread experimentation and the use of new generations of consumer devices and headsets. These guidelines take the form of heuristics; they are not proofs, but rules of thumb learned through experimentation and observation. We offer such guidelines here as a starting point for the next generation of experimentation with VR systems.

To answer some of the questions that built environment professionals have as they begin to use VR, this final chapter first highlights characteristics of digital adolescence, with issues arising in technology convergence, cybersecurity and life-cycle integration. In the following section, the value proposition for implementing a VR system is considered in relation to its users, the information they need and the tasks they are expected to do.
Broader strategies for ongoing growth and development in the use of VR in organizations are discussed in the context of a rapid pace of innovation in new visualization technologies. The final section of this chapter reflects on the main findings of the book and reviews potential future scenarios for VR and the built environment.

6.1 Digital adolescence

Digital information is increasingly easily shared, stored, searched, remotely accessed and modified. Although the potential of digital information to offer flexible new ways of working is immediately apparent, and the previous chapters provide many examples enabled by VR, it has taken longer to recognize the range of considerations that emerge with the digitization of processes in the operation, design and production of the built environment. Digital information is creating tighter integration between the operation of the built environment and the design and construction of projects that deliver interventions, and is changing the relationship between users of the built environment and built environment professionals.


Figure 6.1 Revisiting the use of VR and AR in the wider digital information and technology ecosystem

These changes affect project delivery models; the business models of infrastructure owners and their consultants, contractors and suppliers; and public sector management strategies for the built environment. They are altering relationships in the delivery supply chain between design, manufacturing and assembly, driving construction to resemble other manufacturing processes. This section revisits the overview diagram of the use of VR and AR in a wider ecosystem (Figure 2.14, redrawn as Figure 6.1) and discusses three issues that arise in this digital adolescence: technology convergence, cybersecurity and life-cycle integration.

6.1.1 Technology convergence

A broad set of technologies is converging, with functions that were once highly distinct becoming linked together in new workflows, new hardware and new applications. The use of VR and AR as a user interface increasingly draws on data from a range of supporting applications used to capture, analyse and synthesize data before the VR model is developed. Although we briefly discussed these components in Chapter 1, we can now start to think about possible ways they converge to inform the information workflow and the choice of visual interface for specific tasks. To summarize:

Data-capture applications that can leverage VR in the built environment include technologies such as laser scanning, photogrammetry and video to generate ‘point clouds’ or images. They can potentially be used to display data captured in real time from the operating built environment through sensors. New developments in ‘continuous surveys’ offer information that is constantly updated to create a ‘digital twin’, making it possible to understand the difference, or ‘delta’, between models of a facility in development or use. In the operational phase, we begin to see the potential for extensive data capture on people movement and usage, e.g. from smartphones.
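The ‘delta’ idea above can be sketched very simply: compare an as-built scan against the as-designed model and flag points that deviate beyond a tolerance. This is an illustrative brute-force sketch with invented coordinates and tolerance; production survey pipelines would use spatial indexes (e.g. k-d trees) and registered, dense clouds.

```python
import math

def nearest_distance(p, cloud):
    """Distance from point p to its nearest neighbour in cloud (brute force;
    real surveys would use a k-d tree or voxel index)."""
    return min(math.dist(p, q) for q in cloud)

def cloud_delta(as_built, as_designed, tolerance=0.05):
    """Points in the as-built scan further than `tolerance` metres from any
    as-designed point: a crude 'delta' between twin and model."""
    return [p for p in as_built
            if nearest_distance(p, as_designed) > tolerance]

# Hypothetical 1 m grid of slab points vs. a scan with one displaced point
designed = [(float(x), float(y), 0.0) for x in range(3) for y in range(3)]
scanned = [(float(x), float(y), 0.0) for x in range(3) for y in range(3)]
scanned[4] = (1.0, 1.0, 0.12)   # 12 cm out of plane
deviations = cloud_delta(scanned, designed)
```

The surviving points are exactly the places where the digital twin and the captured reality disagree, which is what a continuous-survey workflow would surface to the team.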


Data analysis and synthesis applications offer a particularly tight integration of VR and AR with BIM applications, especially with the development of new translators and workflows that integrate VR capabilities into BIM applications. Information about the built environment in CAD and BIM data (design models in vector and object-based model formats), describing the products, subsystems and systems at the facility, neighbourhood, city and region scales, is often held in GIS. This information can be used to understand the impact of various forms of dynamic behaviour, represented in engineering simulation, scheduling and logistics software. Coupled with real-time data capture through sensors and user devices, current initiatives in machine learning open up new directions for synthesizing and analysing data to understand user behaviour, both in existing built environments and in virtual or augmented models. There is increasing maturity in the development of national and international standards for BIM, and an active open source community is developing and releasing VR software and workflows related to the built environment.

Model development/visual interface plays a critical role in this context, allowing relevant groups of users to understand and engage with the increasingly complex and rich data associated with building and infrastructure projects. Data storage, however, remains an issue as live-streamed video, continuous surveys and other data-intensive technologies become more widely used.

6.1.2 Cybersecurity

Controlled access and monitoring of digital information, and the associated legal and moral obligations, distinguish built environment professionals from the broad group of users. Access to digital information about the built environment can be secured and controlled through encryption, user authentication and the monitoring of usage. Professionals who are starting to use VR in teams and for wider engagement need to consider the security requirements associated with the different kinds of digital information involved in the built environment applications on which they work. A team of professionals (or particular professionals within the team) may require controlled access to more detailed digital information in order to accomplish their role, and they have responsibilities to keep digital information to which they have privileged access safe, in the same way that they have responsibilities to ensure safety in the physical realization of built environments.

Digital information, stored on a cloud server, usually in a structured, collaborative form that provides a common data environment (BS 1192:2007+A2:2016, 2016), is central to collaborative practices in designing, delivering, operating and disposing of physical assets. The push for transparency in the collaborative process, and the sharing of information for coordination and validation purposes, also presents additional considerations for managing issues such as intellectual property, version control, authorized access to specific information and safeguarding project-sensitive information. Just as the increasing digitization of private citizens’ financial, medical and other personal data raises issues of security against potential misuse and manipulation, digital transformation in built environment professional practice is equally susceptible to hacking, phishing, theft and data corruption


or data loss. Unmanaged handling of digital files on work or private computers opens the digital network to vulnerabilities, and safety-sensitive projects such as infrastructure, security components, utilities and building systems can easily become visible to broader (unauthorized) viewers and potential competitors. To address issues of cybersecurity in the UK, the Centre for the Protection of National Infrastructure (CPNI) sponsored the development of the recently published “Specification for security-minded building information modelling, digital built environment and smart asset management” (PAS 1192–5:2015, 2015), a set of guidelines that project teams can use to identify, control and manage information-related risks such as malicious acts, as well as the loss and disclosure of intellectual property and commercially sensitive or personally identifiable information. With the introduction of a building asset security (BAS) manager role, these guidelines become a starting point in design and delivery considerations for growing digital built environment initiatives such as smart buildings, smart infrastructure and smart cities. As the digital maturity of the sector increases, we envision security-mindedness becoming an everyday part of the use of VR systems in the built environment, informing the development and viewing of models and how (and whether) they are shared, stored, searched, accessed remotely and modified.
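Two of the security-minded practices discussed here, filtering what each role may see before a model is published to a VR client, and fingerprinting the published payload so tampering can be detected, can be sketched in a few lines. The roles, attribute names and policy below are invented for illustration, not drawn from PAS 1192–5.

```python
import hashlib
import json

# Hypothetical access policy: which element attributes each role may see
POLICY = {
    "public":      {"id", "geometry"},
    "contractor":  {"id", "geometry", "schedule"},
    "bas_manager": {"id", "geometry", "schedule", "security_zone"},
}

def filter_model(elements, role):
    """Strip attributes the role is not authorized to view before the
    model is published to a VR client."""
    allowed = POLICY[role]
    return [{k: v for k, v in e.items() if k in allowed} for e in elements]

def fingerprint(model):
    """SHA-256 over a canonical serialization, for tamper detection."""
    blob = json.dumps(model, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

elements = [{"id": "door-01", "geometry": "box", "schedule": "wk3",
             "security_zone": "restricted"}]
public_view = filter_model(elements, "public")
```

The point of the sketch is that security-mindedness can be built into the model-publishing workflow itself, rather than bolted on after a viewer has been chosen.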

6.1.3 Life-cycle integration

Both virtual and augmented reality approaches are used as interfaces for digital information in decision-making throughout the project life cycle. Visualization is important in generating, simulating, reviewing and verifying information, and provides a vehicle for associated training and marketing across the life cycle as professional teams work together remotely or in co-located settings. Digital information is integrating activities that used to be considered separately across the life cycle. The pervasive use of digital information challenges traditional models of project delivery: no longer is the sole deliverable a physical product or associated services, but rather digital information about the delivered products and services. Thus, as we discussed earlier in the book, in addition to delivering physical assets to owners and operators, project teams are increasingly required to deliver structured digital asset information, or a digital twin. This is profoundly influencing the way that public and private owners and operators conceive of infrastructure delivery projects, leading to a greater focus on outcomes rather than outputs, and a broader digital context within which project data can be situated, for example in the context of ‘smart cities’.

No design process starts with a blank sheet of paper. It is for this reason that we start with ongoing city operations before considering design and construction, rather than presenting the more conventional linear representation that proceeds from design to construction and then operations. We see the development of VR applications across the life cycle as supporting this refocusing: starting with operations and the consideration of outcomes in this broader context. Information received from the behaviour of built environment users, either in participatory practices or through big data initiatives, can influence infrastructure


investment decisions, as operational outcomes, such as reduced congestion and increased capacity, may be achieved by either changing user behaviour or building new infrastructure. The extent of information available about infrastructure usage and asset operation is thus influencing decisions regarding not only maintenance and repair but also new investment. The use of digital information has precipitated government and industry initiatives to change procurement contracts, stage-gate processes, interactions with the client and standards. The resulting integration of capital and operational expenditures to consider total expenditure has far-reaching consequences, permeating many aspects of design, manufacturing, assembly, testing and commissioning, leading to a greater integration between usage, delivery and operations. This is breaking the mould of 1960s approaches to project management, enabling more rapid and agile forms of organizing (Levitt, 2011). New questions arise about how VR may be used within new forms of organizing to support life-cycle integration and more circular economy thinking.

6.2 Defining the value proposition for VR systems

In defining the value proposition for investing in VR systems, organizations need to consider who the primary users of these systems are, and what information they will need for specific tasks; we tend to refer to these considerations as a use-case, or application. In Chapter 1, we discussed how information about the built environment is used to develop a VR model. For built environment applications, developing a value proposition for choosing an appropriate system has to consider the intended user experience of the built environment through selected input and output devices. However, the choice of hardware is often subject to considerations of adequate VR/AR model development workflows in terms of data capture, analysis and synthesis (Figure 6.2). There has been substantial work on the development and testing of these workflows in recent years, most involving game engines


Figure 6.2 Input and output devices shape the user experience

such as Unity as a platform to integrate data into an interactive visualization model or, more recently, a growing range of plug-ins (e.g. Fuzor,2 Enscape3) that allow more direct ways to visualize native CAD and BIM models in VR.
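The translator step in such workflows can be sketched as mapping object-based model elements into a minimal scene description that a viewer could load. The element schema and output format below are invented for illustration; real pipelines target established interchange formats such as FBX or glTF and carry tessellated geometry rather than mesh references.

```python
import json

def to_vr_scene(bim_elements, include_metadata=True):
    """Flatten object-based BIM elements into a minimal, viewer-neutral
    scene description (hypothetical schema, not a real exchange format)."""
    nodes = []
    for e in bim_elements:
        node = {"name": e["name"],
                "mesh": e["geometry"],             # already-tessellated mesh ref
                "transform": e.get("placement", [0, 0, 0])}
        if include_metadata:                        # parametric data for queries
            node["properties"] = e.get("properties", {})
        nodes.append(node)
    return json.dumps({"scene": nodes}, indent=2)

# Hypothetical wall element carried over from a BIM authoring tool
wall = {"name": "Wall-12", "geometry": "wall12.mesh",
        "placement": [4.0, 0.0, 0.0],
        "properties": {"fire_rating": "60min"}}
scene = to_vr_scene([wall])
```

Whether parametric properties travel with the geometry (the `include_metadata` flag here) is exactly the kind of workflow decision that separates models for professional use from models for wider involvement.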

6.2.1 Users’ experience

Factors such as available budget, existing devices, the task and peripherals also influence the choice of VR or AR hardware and software. Although the temptation is often to purchase high-end ‘out of the box’ VR systems, in our conversations with early adopters who have only just begun to develop competence in VR for the built environment, we have advised them to start simple and consider VR as a scalable solution that is upgraded over time. Starting simple may mean setting up a low-cost but nevertheless effective solution with a basic VR output device, such as a 3D projector for larger user groups, or a head-mounted display or smartphone for smaller or networked groups of users. Often, a more involved aspect of using VR is the model development workflow for displaying the content in VR. This means considering what information is needed for the given task, and where and in what format the information will be collated before the visualization model can be loaded into the virtual environment. Given that practitioners may also learn about additional VR uses through initial testing and documenting relevant lessons learned, the system can be gradually upgraded by adding input and output devices to accommodate aspects of immersion and presence deemed important to the task, users and information needs. This recommendation is partly grounded in the speed of technological change (which we discuss in more detail in section 6.3.3), and partly in having observed many expensive VR set-ups being developed and launched and then underused. Additionally, we have seen some real innovation and creativity with simple set-ups.

6.2.2 Information about the built environment

A significant part of defining a value proposition for VR systems has to be around the fit with a broader digital strategy and the workflows for data capture, analysis and synthesis to create a virtual model, as well as for capturing new data from users’ interactions with that model to incorporate back into professional workflows. These workflows and associated resource requirements largely determine both the value and the cost of implementing VR systems (Figure 6.3). Indicative questions include: Is suitable software available? Do the tasks involve primarily geometry-based visualization, or is there a need for additional automation and scripting (e.g. custom user interactions or information query functions)? Does the data come from a single standard application, or are the models federated using different file formats? What data is useful to include in the virtual environment? Thus, questions around how data capture, data modelling and analytics interact and integrate towards developing a VR-ready model as a visual interface to the


underlying data are critical for making a value-based judgment. As noted earlier, models created for use within the professional project team and supply chain may be markedly different from models created for wider interactions with clients, funding institutions, planners and end-users, as roughly exemplified in Table 6.1. As users of VR systems become more sophisticated, they are increasingly using a wide range of models and combining a range of functionality across use-cases.

Virtual reality may be used to convey more abstract information within the project team, explore the detailed interactions between engineering systems and communicate design. Digital information may be filtered according to access rights and may be updated in real time. A viewing perspective from outside the model is often used, and the interface may include design aids and allow free viewing of the model. Virtual reality may also be used to represent (photo)realistic information to more easily explain design to less experienced and non-expert users, or to communicate the physical appearance of the design. Objects in these models are made to look like the real objects that they represent. The models show surface detail, and are often used to explore layout options, way-finding or aesthetic considerations, such as external appearance, interior decoration and furnishings. A viewing perspective of a person within the model is often used, and interaction may be guided and supported by predetermined viewpoints.

Economies of scale and scope may also be achieved in developing VR models, where effort can be leveraged as a result of either:

1 The size of the project: There may be a particular value proposition where there is significant complexity or risk that can justify the additional resource required to use VR to address this complexity. The use of VR and AR for large infrastructure projects in Crossrail is an example of this; or

2 Standardized components: Some designs have a high degree of standardization, allowing standard workflows to be introduced and design effort to be reused across different projects. Building VR models to consider different configurations or customizations may be easier where modules are already represented in VR. IKEA kitchen design is an example of this.

Figure 6.3 Bidirectional workflow for data capture, analysis and synthesis into the VR/AR model
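The standardized-components case lends itself to a small sketch: a VR configuration assembled by instancing pre-modelled modules from a library, so modelling effort is reused across layouts. The module names and dimensions below are hypothetical, loosely in the spirit of the kitchen-configurator example.

```python
# Hypothetical library of pre-modelled VR modules (width in metres)
LIBRARY = {"base-600": 0.6, "base-900": 0.9, "sink-600": 0.6, "oven-600": 0.6}

def layout_run(modules, start_x=0.0):
    """Place library modules side by side along a wall, returning
    (module, x-offset) instances ready to load into a VR scene."""
    placements, x = [], start_x
    for m in modules:
        placements.append((m, x))
        x += LIBRARY[m]
    return placements, x            # total run length as second value

config, length = layout_run(["base-600", "sink-600", "oven-600", "base-900"])
```

Because each module is modelled once and instanced many times, trying a different customer configuration is a matter of changing the list, not remodelling geometry: the essence of the economies-of-scope argument.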

Table 6.1 Attributes often emphasized in models for professional use versus models for wider involvement

|                                   | Professional use                                     | Wider involvement use                                            |
| Representation                    | Abstract; symbolic                                   | (Photo)realistic; iconic                                         |
| What is represented               | Engineering system; geometry + parametric data       | Geometry, surfaces; lights, textures                             |
| Work mode                         | Rapid design changes in real time                    | Fine-tuning of design off-line                                   |
| Primary viewpoint                 | Exocentric viewing perspective                       | Egocentric viewing perspective                                   |
| Aids and guides                   | Design aids; free viewing                            | Navigation aids; controlled viewing                              |
| Publishing model                  | Privileged access to data filtered according to role | Publicly available data only; limited resources behind the model |
| Relationship with underlying data | Associated with the underlying dataset               | A ‘visualization’ derived from project data but not linked to it |
When the first edition of this book came out, the major challenge practitioners faced was how to take virtual reality technology out of the laboratory to explore its potential in transforming practice. Since then, the availability of standard desktop PC software and hardware has made this possible, and there have been examples of use in a range of construction-related firms, particularly in large complex projects that could afford the investment. Similarly, in small projects with extensive reuse of components, the investment in technology and modelling could be spread across a number of projects through libraries of models of parts. As we discovered in previous chapters, there still appears to be an opportunity to define the value proposition in terms of the complexity of large projects, or the reuse of components in modular and standardized solutions.

6.2.3 Tasks

What are the primary tasks users are expected to do that create the value proposition for using VR and AR in the built environment? Studies in human-computer interaction use scenario-based approaches (e.g. Carroll, 2000; Carroll and Rosson, 2003) that can provide a great level of detail in understanding what users may do and what the corresponding information and system requirements should be. Here, we illustrate this approach through a simplified list of common questions that arise in our conversations with early VR users, to better understand the users, task characteristics and potential system requirements


(Table 6.2). These include whether it is important to view the virtual environment in stereo. We have observed that users of collaborative, room-like VR tend to feel more comfortable viewing the information without 3D glasses, because the glasses can be distracting, unnecessary or physically uncomfortable. However, if the task relies on depth perception – such as understanding size or the spatial coordination of HVAC – then stereoscopy can be valuable. If the information involves primarily looking at systems laid out in the floor or ceiling, such as pipes, ducts or cables, then choosing a system that allows the user simply to look up or down, rather than rotating the model with a controller, makes the task easier to perform. This can be achieved through integrated user-tracking, the provision of projected floors and ceilings for larger groups, or the use of an HMD for single users or smaller groups. Hence, understanding who the primary and potential users are, and what they might do in the virtual environment, should lead to tailoring the model to the task and users; for example, the appropriate balance of abstraction and realism, the best viewing perspective and any additional information to help perform given tasks should be considered.

Table 6.2 Example questions to understand user task and activity characteristics and corresponding technology requirements and considerations

• What types of tasks and activities are anticipated around the technology (e.g. short/long; standing/seated; reviewing/training; small/large user groups)? – Room-based VR system; open-footprint curved/angled/flat displays, or HMDs. Consider user comfort with space and technology.
• Does the task rely on depth perception? – Stereoscopy.
• Is the task relying on egocentric viewing of the virtual space? Are the virtual spaces typically confined? – Immersion (HMD or CAVE).
• Does the task require alternating between exocentric and egocentric views (e.g. plan view vs. first-person views)? – Interactivity, controllers, display.
• Is the viewing scale of the virtual model relevant for the task? – Large screen/HMD.
• Is one person the primary viewer of the displayed content? – Tracking (HMD).
• Are the people viewing the VR model in the same space? – Collaborative multi-user VR (CAVE).
• What is the typical group size expected to be in the same VR viewing space? – Large field of view, multiple-screen layout and larger screen size(s).
• Do the model and task involve viewing overhead or floor-level content (e.g. mechanical systems, floor systems, road surfaces, etc.)? – Projection onto ceiling and/or floor; HMD.
• What are the existing physical space features/dimensions? What is the available budget for equipment and support? – Scalable systems; VR as an (upgradable) kit-of-parts.

Some tasks may require the problem domain to be understood in different ways; as the medium is flexible, the user can move between different views of a model to facilitate their thinking. Novice and expert users vary in their ability to use different forms of representation, with novices requiring more support when using virtual reality. Navigation in virtual reality can be aided by making landmark, route and survey knowledge available. By tailoring the model to the task and users, organizations can leverage greater advantage from their use of virtual reality.

In the first edition of this book, organizations had started to use virtual reality in demonstrating technical competence, design review, simulating dynamic operation, coordinating detail design, scheduling construction and marketing. As this updated edition shows, there is now much wider use of the technologies and a broader range of application areas and tasks (Table 6.3).

Table 6.3 Examples of tasks in which a value proposition has been defined in operations, design and construction

Virtual reality, therefore, should be viewed as a scalable solution: a kit-of-parts that can be flexibly configured and gradually upgraded to support a range of users performing custom tasks, and integrated with other domain-specific tools and techniques. VR should not be seen as a replacement for standard software applications, but as a complementary visualization approach for exploring design options and scrutinizing design alternatives. Mature VR users often combine it with other forms of representation to view problems in different ways and to involve all in design discussions.
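The guideline questions of Table 6.2 can be read as a simple rule base mapping task characteristics to candidate system features. As an illustrative sketch only – the characteristic names and rules below are our own simplification for this book's table, not a published selection method – such a checklist might be encoded as:

```python
# Illustrative sketch: encoding Table 6.2-style guideline questions as a
# rule base that suggests candidate VR system features for a task profile.
# The characteristic names and rules are hypothetical simplifications.

RULES = [
    ("relies_on_depth_perception", {"stereoscopy"}),
    ("egocentric_confined_spaces", {"immersion (HMD or CAVE)"}),
    ("alternates_exo_egocentric",  {"interactivity", "controllers"}),
    ("single_primary_viewer",      {"tracking (HMD)"}),
    ("co_located_viewers",         {"collaborative multi-user VR (CAVE)"}),
    ("large_group",                {"large field of view", "multiple screens"}),
    ("overhead_or_floor_content",  {"projected ceiling/floor", "HMD"}),
]

def suggest_features(task_profile):
    """Collect candidate features for every characteristic the task has."""
    features = set()
    for characteristic, candidates in RULES:
        if task_profile.get(characteristic):
            features |= candidates
    return features

# Example: a co-located design review of overhead HVAC layout.
profile = {
    "relies_on_depth_perception": True,
    "co_located_viewers": True,
    "overhead_or_floor_content": True,
}
print(sorted(suggest_features(profile)))
```

A checklist of this kind only narrows the options; as the chapter argues, the final choice still depends on user comfort, space and budget.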

6.3 VR strategy: growing and developing capabilities

As growing opportunities to use advanced visualization provide arenas for competition and commercial benefits, the vision, strategy and leadership for VR use should extend beyond the initial value proposition for investing in


hardware and software. In other words, the long-term vision should account for how VR use may grow over time within a wider set of technologies, and how it will integrate into an engineering and engagement strategy. While we advocate for early adopters to start simple and experiment with currently available tools, it is also useful to budget for and anticipate technology upgrades and developments. Given the rapid nature of technological change, a challenge in defining the value proposition and developing the associated technology strategy is not to be locked in to particular vendors or tools, but to have ownership of the digital asset information and the ability to view it in a variety of ways that are useful to different applications. The VR strategy needs to consider the skills required to set up, maintain and upgrade VR systems; the potential to leverage efforts in model building; the technology maturity of the organization; and technological change and innovation in associated technologies, and how these affect the business benefits that the organization can obtain using VR systems. The following sub-sections consider the innovation strategy and capabilities associated with using VR, the locus of responsibility within the organization, and the digital transformation of the construction sector as a changing external landscape within which organizations using VR operate.

6.3.1 Innovation strategy and capabilities

Innovation capabilities applied to a technology context can be outlined through the steps of searching, selecting, configuring and deploying (Dodgson et al., 2008; Helfat et al., 2009). In relation to an innovation strategy for VR, searching involves technology road-mapping, scenario planning and foresight; selecting involves analysing the portfolio of technologies within the organization and comparing it with peers; configuring involves the coordination and integration of innovation efforts; and deploying involves implementing new and improved technology and obtaining value from such innovation. A challenge for innovation strategy in the context of VR and the built environment is that the pace of change is relatively high in the associated hardware and software, with new technologies coming out every few months, whereas the rate of change in the built environment is relatively slow – buildings and infrastructure may take years or decades to build and be in use over decades or even centuries. The exponential growth in computing power means that VR headsets now have a market of millions of units rather than a few prototypes. Project-based firms may also face challenges in using digital capabilities such as VR on particular projects, because some projects have their own digital strategies. The firm may seek to influence the project's digital strategy to align it better with its own. Where this is not completely possible, the firm may have to reconcile its capabilities, systems and strategies with the project during its work on delivery (Lobo and Whyte, 2017).


Both projects and firms may have to look for mechanisms to evaluate and experiment with novel VR applications and use-cases, while protecting day-to-day delivery activities. To address the challenge of rapid technological change and capture newly created opportunities, major projects have begun to implement innovation programmes that use small projects as proofs of concept for new technologies. Such a mechanism provides an opportunity to plan and foster experimentation with new VR systems, accelerate their adoption and prepare for growing and developing new activities.

6.3.2 Responsibility within the organization

The approach to developing VR vision, strategy and leadership within the organization, and to allocating responsibility for deployment and model development, depends in part on the nature of the organization – for example, whether it is an owner-operator or a project-based firm. Owners and operators, whether governmental organizations, public-private partnerships or private firms, will have a portfolio of maintenance, retrofit and new projects, and will be able to consider the implementation of VR across these over a relatively long time-frame, utilising their supply chain to develop models. Major projects, such as Crossrail in London, have led some recent developments in the use of VR and AR, as they had embedded innovation programmes and strategies to ensure that they take up and utilize new technology. For project-based firms involved in the delivery of new and improved built environments – architectural designers, engineering consultancies, construction contractors, sub-contractors or temporary works providers – the value proposition for investment in VR may be made either by a central service, which then provides VR as a differentiator and offering to projects and clients, or in relation to work on a particular project or with a particular client. Three main scenarios for deployment and model building are: within the central technical department, as part of the work on client projects, or developed through a technology consultancy.

1 VR as a service provided by a central technical department: Some organizations have a specialist visualization group that champions the use of VR on all projects. This allows for staff with specialist skill sets, including programming and scripting, and a more strategic corporate approach, although there can be challenges in getting new technology taken up by those working on projects.

2 VR as a project-based activity: Some organizations have introduced VR at the project level, either through the passion of their own staff working on the project or because of a project-wide initiative. The organizational challenge of such innovation on projects is in making the innovation known across the organization and reusable on other projects.

3 VR as a purchased service: Some organizations use VR consultancies to create models, work with them to improve the experience of viewing models or provide the hardware and software. Outsourcing allows organizations access to expertise and may reduce their risks, whilst leveraging benefits from the technologies, particularly where it increases flexibility.

Within most organizations that use virtual reality in-house, a few individuals act as visualization specialists. These visualization specialists may have different competencies and backgrounds from other professionals within the organization, particularly when using models to communicate with non-professionals. Model creation is, however, increasingly a generic skill, with workflows that can be used by a wider range of professionals to participate in the development process. Where VR is a purchased service, the organization still requires the skills to collaborate effectively with VR consultancies, to use the outputs they produce within operation, design and construction processes, and to have sufficient understanding of the underlying technologies to use the models well and to explore and exploit new opportunities in their use.

6.3.3 Transformation of the built environment and associated industries

Operations, design and construction are being transformed through the use of digital information (Institution of Civil Engineers, 2017), which enables new value-driven outcomes rather than output-focused approaches. VR, as an interface to digital information, helps professionals to understand the vast datasets that are now available to them. During operations, digital information can be captured and visualized to inform preventative, rather than typically reactive, maintenance and retrofit. In project delivery, collaborative sharing of digital information is altering relationships in the supply chain between design, manufacturing and assembly, driving a more cyclical and iterative, rather than linear, approach to design development and production. Digital information can be authored and collated in BIM to enable a range of new approaches to production processes, including off-site and near-site production, greater use of 3D-printing and robotics, and data analytics with greater information on material supply. Major infrastructure projects face significant data integration challenges as digital information, as well as physical assets, becomes a deliverable. Work is now tracked through a digital representation of the project – a federated dataset 'project model' that includes 3D geometrical information and associated attribute data, accompanied by its operations-driven digital asset information model – rather than through documents. There are also new implications for how the sector conceives of its supply chain. Growth of integrated delivery models, such as alliancing and integrated project delivery, is part of the digital transformation


of the sector, where the use of digital information has far-reaching consequences for the supply chain. The procurement of digital technologies to support the flow of digital information has typically been considered apart from the normal processes of project management. While on early projects budgets for IT were not well tracked or controlled, with the increasing digital maturity of the sector project managers are making more strategic choices about their technology partnerships and better understand the consequences of decisions about digital technologies. In the delivery of infrastructure, the value of digital technologies in improving sustainability has for many years been assumed, since digital technology use may reduce material waste. This may be the case, but it discounts the unknown environmental costs associated with the digital information itself – the digital technologies and devices used to manipulate it, and the servers and Internet used to store and communicate it.

6.4 The future of VR in the built environment

What is the future of VR in the built environment? In this book, we have presented the trajectory of VR development and applications in the operations, design and construction of the built environment, which reveals notable improvements in available technologies over the recent decade. We draw on the notion that "The future is already here – it's just not evenly distributed" (Gibson, 1999) to extrapolate these trajectories and highlight some potential future scenarios. As we saw in Chapter 2, the exponential growth in computing power over recent decades, as predicted by Moore's Law (1965), has enabled VR systems to generate real-time graphics using progressively smaller, cheaper computational devices: from dedicated room-sized supercomputers to desktop personal computers, laptops and tablets, and mobile and wearable devices. Through this review of the current use of VR systems and technology convergence, we can anticipate technological developments such as:

• More integration with BIM: The lack of bidirectional flow of model data between BIM tools and VR still often constrains the use of VR to primarily reviewing – rather than generating or modifying – model data, as changes made to the VR model are not updated in its BIM-native counterpart. This is slowly changing as we begin to see a range of plug-ins and translators that integrate certain BIM tools with VR systems, with capabilities not only to see changes made to the BIM model in VR in real time, but also to update any changes made to the VR model back in BIM. As this integration is expected to continue, we can anticipate richer interactions to accompany these translators, allowing users to query, modify or filter information, or otherwise test model options within VR to visualize environmental performance and sustainability targets.

• More wearable and auto-stereoscopic devices: VR systems such as HMDs are gradually becoming more lightweight, but truly untethered systems with gesture-based interactions will allow users unencumbered experiences of free movement around virtual spaces. Recent experimentation with backpack and in-device computers, along with improvements in screen technology, is making the use of VR or AR more comfortable for extended periods. Holographic and auto-stereoscopic displays are also enabling more intuitive interfaces with virtual models.

• More data capture and video: Initiatives such as smart cities, smart infrastructure and smart buildings can benefit from integrating VR models with (big) data and agent-based simulations to understand operations and thus inform their design. Innovative developments in automotive engineering, such as autonomous vehicles, currently drive advances in real-time data capture and the 3D street map developments needed for navigation. A growing number of technology start-ups behind these developments, such as CARMERA⁴, have begun to offer digital data services such as on-demand site analytics catering to built environment professions. We anticipate that future VR systems will be enabled by advances in data capture and processing, computer vision, machine learning, storage and communication technologies that make it easier to retain and interpret footage from the site and to generate spatial data from photographs and videos. This in turn will increase the possibilities for off-site and remote site monitoring and collaborative work.

• More sensory-rich applications: Although the visual aspect continues to dominate the VR experience, additional sensory cues such as sound or touch can play a great role in how we experience and interact with built environments. Projected imagery on physical objects or haptic feedback devices can increase the sense of presence and aid in manual work-related tasks in training and operations. Examples such as train operators embracing VR systems to improve safety and productivity long before they go on site suggest a path towards potentially widespread use of the technology.

• New platforms: Since the first edition of this book, applications such as SecondLife and Google Street View have been game-changers in areas of VR in the built environment. We see new platforms for the distribution of VR content, including YouTube and SteamVR, as well as emerging technology start-ups that offer digital content and services, and can anticipate further growth in new platforms for VR that will support a range of uses and applications. We continue to witness how technology developments outpace our ability to fully understand how to maximize the benefits from using them. The decreasing cost of VR systems provides new opportunities for an ecosystem of innovation to develop, with many professionals experimenting with these technologies in practice.

• Distributed virtual environments and teleoperations: We anticipate a growth in the use of distributed virtual environments using portable devices to stream video between professionals at the construction site and in the design office, or between design offices. The development of multi-user virtual environments includes the move beyond cartoon representations of other people to superimposing 3D images of collaborators within virtual environments in order to achieve social presence in remote collaborative settings. We also anticipate a rapid uptake of teleoperations as construction sites become more automated, with remote operators having an augmented view of the drone, crane or other robot that they are operating.

Yet there is a range of alternative scenarios for the future use of VR and AR in the built environment, and there is plenty of scope for experimentation and innovation. In this book, we review the wide range of current applications, both in relation to the work of professional teams and to the users of the built environment. Particular areas of application have been the focus of significant recent developments in professional use, such as the use of AR with remote experts on construction sites and the use of VR for design exploration and safety training. As part of this growth and development, we move away from an 'all or nothing' discussion of reality and virtual reality to explore how VR techniques are now used across a range of augmented, mixed and virtual reality applications to help projects deliver and firms compete, and ultimately to enable us all to benefit from a better built environment.

The growth in computing power has led to new opportunities to visualize and understand built environments. We hope this provides the starting point for our readers to take forward the use of these technologies to improve productivity and enhance the quality of life of those who use the built environment.
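The bidirectional BIM–VR data flow anticipated earlier in this section can be illustrated as a change-propagation pattern: both tools attach to a shared model, and an edit made in either environment is replayed into the other. The sketch below is purely illustrative – the class and method names are our own, not the API of any actual BIM tool or VR plug-in:

```python
# Illustrative sketch of bidirectional BIM <-> VR model synchronization.
# All names here are hypothetical; no real plug-in API is implied.

class SharedModel:
    def __init__(self):
        self.elements = {}   # element id -> merged properties
        self.tools = []

    def attach(self, tool):
        self.tools.append(tool)

    def apply_change(self, source, element_id, properties):
        # Merge the change, then replay it into every tool except its originator.
        merged = {**self.elements.get(element_id, {}), **properties}
        self.elements[element_id] = merged
        for tool in self.tools:
            if tool is not source:
                tool.on_change(element_id, merged)

class Tool:
    """Stands in for either the BIM authoring tool or the VR viewer."""
    def __init__(self, name, model):
        self.name, self.model = name, model
        self.view = {}
        model.attach(self)

    def edit(self, element_id, **properties):
        # A local edit is recorded, then pushed through the shared model.
        self.view.setdefault(element_id, {}).update(properties)
        self.model.apply_change(self, element_id, properties)

    def on_change(self, element_id, properties):
        self.view[element_id] = dict(properties)

model = SharedModel()
bim, vr = Tool("BIM", model), Tool("VR", model)

bim.edit("duct-01", diameter_mm=300)   # authored in BIM, mirrored into VR
vr.edit("duct-01", diameter_mm=350)    # adjusted in VR, pushed back to BIM
```

After the second edit, both tools hold the same element state; in practice, the hard problems this sketch omits – identifier mapping, geometry translation and conflict resolution – are exactly what the plug-ins and translators discussed above address.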

Notes

1 www.convr.org/
2 www.kalloctech.com/
3 https://enscape3d.com/
4 www.carmera.com/#products

References

Brooks, F.P., 1986. Walkthrough: A dynamic graphics system for simulating virtual buildings. Presented at the Workshop on Interactive 3D Graphics, Chapel Hill, NC.
BS 1192:2007+A2:2016, 2016. Collaborative production of architectural, engineering and construction information. Code of practice. British Standards Institution (BSI), London, UK.
Carroll, J.M., 2000. Making use: Scenario-based design of human-computer interactions. MIT Press, Cambridge, MA.
Carroll, J.M., Rosson, M.B., 2003. Scenario-based design, in: Sears, A., Jacko, J.A. (Eds.), Human-computer interaction: Development process, participatory design: The third space in HCI. Taylor & Francis Group, Boca Raton, FL, pp. 145–165.
Dodgson, M., Gann, D.M., Salter, A., 2008. The management of technological innovation: Strategy and practice. Oxford University Press on Demand, Oxford.
Gibson, W., 1999. The science in science fiction, on Talk of the Nation. National Public Radio (NPR) broadcast, 30 November.
Helfat, C.E., Finkelstein, S., Mitchell, W., Peteraf, M., Singh, H., Teece, D., Winter, S.G., 2009. Dynamic capabilities: Understanding strategic change in organizations. John Wiley & Sons, Chichester, UK.
Institution of Civil Engineers, 2017. State of the nation 2017: Digital transformation. Institution of Civil Engineers, London.
Kane, G.C., 2017. Digital maturity, not digital transformation. MIT Sloan Management Review, 4 April. http://sloanreview.mit.edu/article/digital-maturity-not-digital-transformation/
Levitt, R.E., 2011. Towards project management 2.0. Engineering Project Organization Journal 1, 197–210.
Lobo, S., Whyte, J., 2017. Aligning and reconciling: Building project capabilities for digital delivery. Research Policy 46, 93–107.
Moore, G.E., 1965. Cramming more components onto integrated circuits. Electronics 38(8), April, 114–117.
PAS 1192-5:2015, 2015. Specification for security-minded building information modelling, digital built environments and smart asset management. British Standards Institution (BSI), London, UK.


Index

Page numbers in italic indicate a figure and page numbers in bold indicate a table on the corresponding page. 2D (two-dimensional) representations 17–18 3D (three-dimensional) models 18 3D movies 36–37 3Drepo 97 3D scanning 38 3D-sketching tools 73 4D-CAD models 114, 116–118, 117–118 4D-Mapper 61 Aberdeen, Scotland 62 Advanced Forming Research Centre (AFRC) 119–120 Advanced Manufacturing Research Centre (AMRC) 119–120 aerospace industry 103, 104 aesthetic practices 22–23 AFRC see Advanced Forming Research Centre airports 52, 82, 82–84, 84, 87–88, 110 airspace engineering 104 AMRC see Advanced Manufacturing Research Centre animation 8 Antwerp, Belgium 62 application programming interface (API) 38 AR see augmented reality Arcus Software 55 ARKit 38


artificial intelligence 126 ARToolkit 36 Arup 95–97 assembly methods prototyping 124 Atari ST 34 Atkins Rail 59–60 auditory interfaces 5 augmented reality (AR) 2, 61; construction and 114, 119–121, 123, 125; development of 33–38; devices 1; equipment operation training 115–116; new developments in, on site 119–121 augmented virtuality 2 aural interfaces 5 automated collision detection 110 auto-stereoscopic devices 144 Avatar 37 avatars 27, 57, 98 BAS see building asset security Basingstoke Festival Place 108, 109–110, 109 BBC Micro 34 Bechtel 52, 52, 53, 81, 84, 110, 119 Bechtel London Visual Technology Group 110, 111 Belfast: Go Explore VR 360 app 63 big data 38, 133, 144 BIM see building information modelling


BIM CAVE 23, 23 blockchain technology 68–69 Boeing 119 Bond Station 64 Bristol, England 62 Brunelleschi, Filippo 8, 18 building asset security (BAS) 133 building estates 68 building information modelling (BIM) 6, 10–11, 18, 38, 91, 132, 143 built environment 27; changing experiences in 7–8; construction of 103–128; cyber-physical interactions 60–68; design of 72–102; digital twin 8–10, 9; future of VR in 143–145; information about 135–137; management and maintenance 65, 67–68; operation of 51–52, 58–60, 65, 67–68; professionals 11, 11; transformation of 142–143; users 11, 11; virtual reality and 1–2, 7–8 CAD see computer-aided design CALVIN 19, 19 CAM see computer-aided manufacturing Cannon Street Station 114, 115 Carbon Dynamic 119 car manufacturing 104 CARMERA 144 CATIA 77 cave automatic virtual environment (CAVE) 19, 26, 26, 35, 81, 90 Centre for the Protection of National Infrastructure (CPNI) 133 CH2M 97 circular economy 65 cities: birds’-eye view 44; cyber-physical interactions 60–68; early models, 1990-1999 46–52; exploring models 54–56, 54; management, maintenance, and operation of 65, 67–68; model development 32–33, 32; multi-use models, 2000-2009 52–60; next-generation models 61; planning and stakeholder engagement 47, 49, 51, 57; planning models 44; smart 61, 62–64, 68–69, 133, 144; street-level view 44; visualization 43–71 CityGML format 61


Clarke, James 35 climate change 67 cloud computing 62 cloud servers 132 cognition 20, 61 collaborative design reviews 81–85 collaborative product design 93–94 Commodore 64 34 complex product industries 103, 104–106 complex projects 89–95 computational capabilities 129 computer-aided design (CAD) 46–47, 76–77, 107 computer-aided manufacturing (CAM) 77 computer numerically controlled (CNC) 77 computing power 1, 33, 52 construction 9, 27, 38, 60, 72–73; accident rates in 116; assembly methods prototyping 124; checking safety-critical elements 111–113; design and 107–110, 108; equipment operation training 115–116; fault detection automation 116; future of VR in 124–126; new developments in augmented reality on site 119–121; planning and monitoring 114; simulations 111–116; site operation simulations 116–119; training construction and maintenance operators 121–123; visualization in 103–128 context-aware artificial intelligence 126 ‘continuous surveys’ 131 control devices 4–5 crowd-sourced design feedback 97 Cullinan Studio 91 cultural heritage 64–65 cyber-physical environments 2, 8–10, 60–68 cybersecurity 68–69, 132–133 Daqri 37 data analysis and synthesis 11, 132, 136 data analytics 62, 68–69 data capture 10–11, 38, 60, 64, 131, 135–136, 136, 144 DataGlove 34 Davis, Charlotte 6 Denis, Jean-Marc 7 design: abstractions 79–80; CAD 76–77; choosing options from standardized


libraries 85; construction and 107–110, 108; coordination and review in complex projects 89–95; future of VR in 98–99; healthcare 89–90; marketing 86–88; modelling 11; parametric 77; participatory 75–76, 85; process 72–73, 77–80; project 124; reviews 78, 80, 81–85, 91–93, 98; role of VR in 77–80; through digital media 76–80; transforming practices 88–97; for universal access 84–85; visualization 72–102; of wayfinding and spatial optimization 95–97 Design for Manufacture and Assembly (DFMA) 103 digital adolescence 130–134 digital data 10–11, 10 digital information 130–131, 136, 143; access and monitoring of 132–133; perceiving 15–23; use of 133–134 digitally mediated visualization 8 digital maturity 129–145 digital media 76–80 digital models 18 digital prototypes 72 digital technology: integration of 38; procurement of 143; use of 7–8 digital twin 8–10, 9, 131, 133 disaster relief 68 disembodied experience 21 Disney Imagineering Research and Development 114 distance, perception of 21 distortion 20 distributed virtual environments 144–145 drones 125 Dubai International Airport 82, 82–84, 84 dyadic relationships 16 egocentric perspective 28, 28–29, 78 Elite 34 embedded sensors 60–61 emergency response management applications 44, 68 EMIMSAR 120 emotional experience 20 enabling technologies 33–38, 33 encryption 132 Enlighten software 88–89

Index

Environmental Simulation Center 44, 45, 49–50 Evans, David 34 exocentric perspective 28–30, 30 experiential learning 23 Fakespace 35 Farringdon station 111 fault detection automation 116 film 8, 36–37 flight simulators 115 Forcados Crude Loading Platform 104–105, 105 frame rates 6–7 Freeform3D 117 freeform surfaces 77 fully immersive systems 24–25, 24, 91–93 game engines 64, 66, 88–89, 134–135 Gammon Construction 121–122 Gehry, Frank 77 Geisinger Health System 90, 90 geographic information systems (GIS) 53 Geoweb 3D 61 Gibson, William 39 GitHub 36 glass windows 8 global positioning systems (GPS) 38, 119 Google Cardboard 27, 37 Google Earth 45, 52–53, 55 Google Glass 37 Google Sketch-up 36 Google Street View 45, 52–53, 53, 55, 144 graphical user interfaces (GUIs) 32 Gravity Sketch 73 guides 31–33 haptic interfaces 6 hardware 6–7, 7 head-mounted displays (HMDs) 1, 10, 26–27, 34, 91, 95, 123 healthcare design 89–90 Helsinki Jätkäsaari 64, 64 Helsinki model 56 Heriot Watt University 64 highway design 58, 58 historical models 55, 56 HMDs see head-mounted displays

149

HoloLens 37, 119–120, 120 housing developers 86 HTC Vive 26, 30–31, 37, 74 human-computer interactions 137–138 human perception see perception hybrid environments 60–68

lighting 88–89, 88–89 linear perspective 8 LinkNode 64 London 62, 62, 64 London Luton Airport (LLA) 84, 110 Lynn, Greg 77

ICI/Fluour Daniel 104, 105 iconic representations 16–17 IGLOO 97 IKEA 85, 136 ImmersaDesk 35 immersion: for design reviews 91–93; extent of 24–27 immersive workbench 19 information technology (IT) infrastructure 126 infrastructure management 68 innovation strategy 140–141 input devices 3, 3, 4–5, 5 integrated project delivery models 142–143 interactive experiences 20–21 interior design applications 78, 85 Internet 57, 63 Internet of Things 38, 60–61, 62 iPhones 36

machine learning 10, 126 manufacturing 103 ‘Map of the Empire’ 57 marketing 63, 86–88 mining 103 mirrors 8 mixed reality environments 2 mobile devices 38 Mobile Immersive Collaborative Environment (MICE) 26 mobile phones 36–37 mobile technology 114 model development 132, 136 models 18; 3D 18; city 32–33, 44, 46–61; historical 55, 56 Moorgate Station Upgrade Capacity (MSCU) 97 Mortenson Construction 60, 122–123 motion sickness 6, 20 Mott MacDonald 51 mouse 34 MTR Admiralty Underground Station 95–97, 95–96 MultiGen-Paradigm 47, 58 multi-sensory virtual reality 65, 66–67 multi-user project-based systems 26 multi-user virtual environments 144–145 multi-user worlds 98 multi-use urban models 52–60 Munich Airport Center 87–88, 87

Jaguar Racing 104 Khan, Ricardo 123 kinesthetic interfaces 6 King, Rodney 22 Kizu model 47, 48 knowledge construction 16 Kodan 47 Krueger, Myron 34 Laboratory of Architectural Experimentation (LEA) 75–76 lag times 7 Laing Construction 107 Laing O’Rourke 114 Lanier, Jaron 34 laser scanning 10 Latour, 17 lenses 8 LiDAR 64 life-cycle integration 133–134

150

navigation aids 31–33, 31 navigation modes 30–31 NavisWorks 107–108, 109 New York 62 non-immersive systems 24, 25 NURBS 77 object-centered perspective 28–30, 30 objects 16 Oculus Rift 5, 25, 37, 74, 91 oil and gas sector 103, 110


olfactory interfaces 6 online virtual reality 36–37, 37 Open Graphics Library 35 Øresund Region harbour area 63, 63 organizational responsibility 141–142 “Osmose” 6 output devices 3–5, 3, 5 Paradise Pier project 114 parametric design 77 participatory design 75–76, 85 Pennsylvania State University 90 perception 6, 8; cognition and 20; of digital information 15–23; in virtual environments 20–23 peripheral applications 35 Persimmon Homes 86 personal-professional boundaries 7–8 petrochemical industry 104 photogrammetry 10, 62, 64 Playstation VR headset 37 Pokémon Go 4, 11, 62 Porta Susa 57 ports 63 position tracking system 4–5 Powerwall 93 presence 14, 24 Princeton Junction 49 proactive maintenance 68 product design 124 project delivery 133–134 Project SAGE 34 project size 136 Proof House Junction 59–60, 59 radio-frequency identification (RFID) 38 rail sector 111–112, 111, 112, 113 Railtrack 112 real-time visualizations 129 remote design reviews 98 rendering 6 representations: 2D 17–18; 3D 18; combining different 18–19; iconic 16–17; reality and 57; symbolic 16–17; in virtual reality 15–19 Revit models 91, 92, 95 RFID see radio-frequency identification robotics 10 robots 125

Rolls-Royce 104
Rome Reborn project 65
safety checking 111–113
safety systems 116
safety training 11, 27
Samsung Gear VR 37
Santa Fe model 50
scale models 18
scanning technologies 93–94
Schools of Cartography 57
Scott Brownrigg 88–89
Sculptor 78
SecondLife 98, 144
security 68–69, 132–133
semi-immersive systems 24, 25, 91–93
semiotics 16
Sensations of Roman Life 66–67, 67
senses, interaction of 20
sensors 10
sensory-rich applications 144
Shell Petroleum Development Company of Nigeria (SPDC) 105–106
Silicon Graphics International (SGI) 34–35
Singapore 62
single-user head-mounted displays 26–27
site operation simulations 116–119
Situation Engine 121–122, 121–122
sketches 17–18
Sketchpad 34
SketchUp 66
Skyscraper Digital 54, 54, 86
smart cities 61, 62–64, 68–69, 133, 144
smartphones 7, 36–37, 39
smell, sense of 6
social experiences 22–23
social media 7, 62
software 6–7, 7
space element 78
spatial experience 21–22
spatial optimization 95–97
splines 77
stakeholder engagement 47, 49, 51, 57
standardized components 136
SteamVR 144
STEPS (Simulation of Transient Evacuation and Pedestrian movements) 51–52, 51
Strathclyde University 84–85, 119
sustainability 65, 68, 143

Sutherland, Ivan 34
symbolic representations 16–17
tasks 137–139, 138, 139
technological change 135, 140
technological integration 60–61
technology choices 10–11
technology convergence 131–132
teleoperations 144–145
Thameslink 2000 111, 112, 112, 113
Theatron Project 55
Tilt Brush 73, 74
touch, sense of 6
tracking and sensing devices 38
training applications 115–116, 121–123
transportation simulations 52, 58–60
Trent 800 Engine 104, 104
triadic relationships 16
U-Data Solutions 47, 48
Unity game engine 36, 64, 66, 88–89, 135
universal access 84–85
University of California, Los Angeles (UCLA) 46–47
unmanned vehicles 125
urban planning 47, 49, 51, 57
Urban Simulation Team 46
user aids 31–33
user authentication 132
user experience 14–42, 135, 136; customization of visual interface 27–33; disembodied 21; multi-user 26–27; perceiving digital information 15–23; shaping 23–33; single 26–27; social 22–23; spatial 21–22
user requirements 81
value engineering 85
video 144
video games 35
viewer-centered perspective 28, 28–29
viewing perspectives 28–30, 28
virtual assembly 124
virtual displays 5
virtual environments: distributed 144–145; interaction in 20–21; navigating in 31–33; perception in 20–23
virtual heritage 64–65

virtuality continuum 2, 24
virtual objects 121, 136
virtual reality (VR): built environment and 1–2, 7–8, 143–145; commercialization of 34–35; in construction 103–128; definition of 3–4; design process and 72–99; digital maturity 129–145; future of 39, 143–145; in heritage 64–65; as interface to city information 62–64; online 36–37, 37; representation in 15–19
virtual reality (VR) systems 3–4, 3, 5, 75–76; characteristics of 4–7; development of 33–38, 33; extent of immersion 24–27; growing and developing capabilities 139–143; guides and user aids 31–33; hardware 6–7, 7; input and output devices 4–5, 5; multi-user project-based 26; navigation modes 30–31; next generation 37–38; single-user head-mounted displays 26–27; software 6–7, 7; types of 24–27; user experience in 14–42; value proposition for 134–139
Virtual Reality Modelling Language (VRML) 35
virtual scalpels 5
visual interfaces 132, 136; customization of 27–33
visualization: cities 43–71; construction 103–128; design 72–102; digitally mediated 8; real-time 129; use of 10–11
visual literacy 27
visual practices 22
VPL Research 34
VR see virtual reality
VRail 58–60
VTT 64
Wagstaffs 61, 64
Walkthrough 49
wearable devices 144
Whirlwind 34
Wi-Fi 38
W Industries 34
Wotton, Sir Henry 18
WS Atkins 58, 59
YouTube 144
