
ONSITE BUILDING INFORMATION RETRIEVAL BY USING PROJECTION-BASED AUGMENTED REALITY

Kai-Chen Yeh, Meng-Han Tsai, Shih-Chung Kang

Affiliations

Kai-Chen Yeh
Graduate Student, Dept. of Civil Engineering, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei, 10617 Taiwan. Email: bluemint@caece.net

Meng-Han Tsai
PhD Candidate, Dept. of Civil Engineering, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei, 10617 Taiwan. Email: menghan@caece.net

Shih-Chung Jessy Kang (Corresponding Author)
Associate Professor, Dept. of Civil Engineering, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei, 10617 Taiwan. Email: sckang@ntu.edu.tw Tel: +886-2-3366-4346 Fax: +886-2-2368-8213


ABSTRACT

This research focuses on a long-standing problem at construction sites: onsite information retrieval. We have therefore developed a wearable device that can project construction drawings and related information based on the needs of the users. This device is envisaged to help engineers avoid carrying bulky construction drawings to the site, and to reduce the effort required to find the correct drawings for the information they need. The device includes four modules: the information integration module, the display module, the positioning module, and the manipulation module. The information integration module transfers the information in the building information model (BIM) into images to enable onsite retrieval from the device we developed. The positioning module enables users to input their locations and automatically searches for the images that the users might need. The manipulation module analyzes the gestures of the users from the touch screen and accelerometer in the device, and then crops the images to eliminate unneeded information. The display module, which directly links to the projector, continually recalculates the images processed by the previous three modules and scales them accordingly, ensuring that the projection appears at the correct scale. We also developed a hardware device, coined the iHelmet, to implement the four modules. It consists of a construction helmet (weight: 460g), an iPod Touch (weight: 115g) and an Optoma LED projector (weight: 114g). To validate the usability of the iHelmet onsite, we conducted a user test with 34 participants, comparing the efficiency and effectiveness of retrieving building information using the iHelmet with using the traditional 2D drawings approach. The results showed that the mean completion times were significantly shorter for participants using the iHelmet (iHelmet: 44 seconds, traditional approach: 99 seconds). The mean success rates of participants arriving at the correct answers were also significantly improved for those using the iHelmet (iHelmet: 91.6%, traditional approach: 64.3%).

Keywords: building information model, augmented reality, mobile device, projector, wearable computing

CHALLENGES IN EXPLORING BUILDING INFORMATION

The main challenge in exploring building information is in retrieving onsite information using 2D drawings as references to interpret construction information (Azhar et al. 2008). At present, construction engineers are frequently inconvenienced by the need to explore detailed information from the working drawings at the site, and find it cumbersome to obtain building information related to the construction drawings in this manner. Current construction practice has three main drawbacks in using 2D drawings to explore building information:

Poor portability and improper handling of the drawings: 2D drawings usually contain large volumes of information from various construction disciplines, so the sheets need to be large enough to accommodate this information. This makes the drawings inconvenient to carry around in complex and risky construction environments. It is also difficult for users to find the information they need from a large number of references.

Display problems: Information in 2D drawings is presented two-dimensionally (on flat paper), which makes it particularly difficult to present the spatial relationships between building elements in a real-world 3D environment. People are therefore required to transfer the information from a two-dimensional representation to an imaginary three-dimensional representation by themselves in order to match the actual environment.

Browsing and readability problems: 2D drawings usually include many predefined symbols for different construction components, which hinders users' ability to clearly understand the meanings behind the drawings. Users need to study 2D drawings multiple times in order to understand the meanings behind the various symbols, which is usually inconvenient and inefficient.

The technology of exploiting 3D models for exploring building information has recently gained popularity, as desktop computers are increasingly able to support more sophisticated graphics. However, onsite use of desktop computers is not particularly feasible due to their poor portability.

RELATED WORKS

Many studies have attempted to solve the browsing and display problems with a variety of approaches. The following presents related work on information integration, information display, and mobile devices.

Information integration

Previous researchers have attempted to improve the efficiency of browsing building information by developing better ways of integrating the information. They used model-centric methods such as 4-dimensional modeling (4D modeling), n-dimensional modeling (nD modeling), and building information modeling (BIM) to integrate the information, providing the information based on a 3D model.

4D modeling and nD modeling

Both four-dimensional technology (a 3D model plus a schedule) and n-dimensional technology are viewed as successful means of integrating building information. McKinney and Fischer (1998) indicated that 4D models could remove model abstraction by linking 3D building models and schedules. Koo and Fischer (2000) analyzed how 4D models enable more people to understand schedules quickly and to identify potential problems. Korman et al. (2003) proposed that project teams using these 4D models to check for conflicts could improve the coordination of mechanical, electrical and plumbing/process piping (MEP) systems. Chau et al. (2004) found that 4D visualization assists in cognitive, reflective, and analytical activities. Lee et al. (2002) reported on integrating vision environments to allow nD-enabled construction and management to be undertaken. Tanyer et al. (2005) used the n-dimensional modeling approach to support collaborative working. Aouad et al. (2006) incorporated all the design information required at each stage of the life cycle of a building facility into an n-dimensional model. Many other researchers, such as Chau et al. (2005), Dawood and Sikka (2006), and Staub-French and Khanzode (2007), have also reported successful applications of 4D technologies for coordinating subcontractors in real projects. Kang et al. (2007) designed and implemented a user study (N=42) and concluded that 4D models are able to assist construction teams in detecting logical errors in the construction sequence more efficiently. Using an experimental exercise, Dawood and Sikka (2008) provided quantitative evidence that using 4D models could increase the efficiency of communication and the interpretive ability of a construction project team.

Building information modeling (BIM)

Building information modeling (BIM) has recently attained widespread attention in the architectural, engineering and construction (AEC) industries (Bouchlaghem et al. 2005; Koo and Fischer 2000). A building information model is a digital representation of the physical and functional characteristics of a facility (NIBS 2006; Eastman et al. 2008; Jernigan 2008). Compared to 4D and nD modeling, BIM focuses on the integration of building information. Numerous scholars have discussed the opportunities and potential benefits of using BIM (Goedert and Meadati 2008; Ku et al. 2008; Manning and Messner 2008). Eastman et al. (2008) attempted to use BIM to facilitate coordination in building projects. Dossick and Neff (2010) found that BIM can integrate information from architecture, structural engineering and MEP systems into a single model. Goedert and Meadati (2008) also indicated that using BIM in projects can lead to greater efficiencies through increased collaboration, resulting in improvements in project team communication, cooperation, and the coordination of construction projects. BIM is also an excellent tool for data management, being capable of efficient information retrieval and display (Davis 2007). Many case studies have also provided anecdotal evidence to support the view that the use of BIM makes the building process more efficient and effective (Howard and Björk 2008; Kam et al. 2003). Khanzode et al. (2008) discussed the benefits and lessons learned from implementing building virtual design and construction (VDC) technologies for the coordination of MEP systems on a large healthcare project. Kaner et al. (2008) reported that using BIM in structural engineering firms can improve labor productivity. Jordani (2008) described using BIM to manage costs in operating and maintaining a building over its entire life cycle. Howard and Björk (2008) found that using BIM in projects can extend these benefits to all members of the project team across the entire construction process. The General Services Administration (GSA) requires all AEC firms dealing with them to include a BIM as part of all work proposals, commencing in the 2006 fiscal year (Goedert and Meadati 2008).

Information display

Some researchers have attempted to improve the display of information in order to provide more readable and better-visualized information for engineers. Notably, they have taken advantage of both virtual reality (VR) and augmented reality (AR) to display the information. VR is commonly applied to a computer-simulated environment that enables users to interact with a virtual environment or a virtual artifact. AR can be viewed as an extension of VR that inserts virtual objects into a predominantly real-world scene, enhancing an individual's perception of his or her environment.

Virtual reality (VR)

Exploiting VR technology for construction planning has gained popularity as desktop computers have become able to support more sophisticated graphics. A typical application follows the virtual construction way of thinking: visualizing planned construction using desktop virtual environments to create graphic simulations of construction processes, perhaps even including equipment operations (Leinonen and Kähkönen 2000). Adjei-Kumi and Retik (1997) presented a framework based on VR technology for the realistic visualization of construction projects, simulated at the activity and component levels. Murray et al. (2000) described a virtual environment to support the construction process of buildings. Maruyama et al. (2000) proposed a concept of virtual and real-field construction management systems (VR-Coms) to evaluate productivity and safety in virtually simulated and real-field construction. Hadikusumo and Rowlinson (2002) developed a design-for-safety-process (DFSP) tool to identify safety hazards.

Augmented reality (AR)

Augmented reality (AR) technology superimposes objects from a virtual world onto the view of a physical, real-world environment. In the past few years, AR technologies have developed rapidly, and many investigators have used them to display construction information. Dunston et al. (2002) conducted an experimental study of an AR system that they had developed to support design activities for mechanical contracting. Wang and Dunston (2006) provided theoretical validation that the AR technique, which combines the real world with computer-generated data, can reduce the mental workload required of engineers for AEC tasks. They indicated that AR can facilitate communication and information sharing among architectural design team members, provide better spatial cognition than traditional design tools, and improve design creativity and problem solving in the architectural design process (Wang et al. 2008). They also compared the AR system with paper drawings and found that it better facilitates design collaboration tasks (Wang and Dunston 2008). Kamat and El-Tawil (2007) discussed the feasibility of using AR to evaluate earthquake-induced building damage. Behzadan and Kamat (2007) addressed the registration problem during interactive visualization of construction graphics in outdoor AR environments. Golparvar-Fard et al. (2009) discussed a methodology for generating, analyzing and visualizing progress with D4AR (4-dimensional augmented reality) models. The examples mentioned above demonstrate a rational interest in developing AR systems to serve the computer interfacing needs of the AEC industry.

Wearable and mobile devices

Some investigators have taken advantage of the rapid development of wearable and mobile devices and have employed them at construction sites. Mobile devices are now powerful tools for storing detailed information about large volumes and types of construction materials, as well as data retrieved from sites and equipment. Therefore, in a short space of time, there has been a growing trend towards the use of wearable and mobile computing tools to support construction work (Williams 1994). Also, in order to manage information in complex construction environments, as well as the changing locations of workers, smaller and more portable operating equipment is required (Williams 2003). Starner et al. (2000) developed the Gesture Pendant, which enables its wearers to use palm and finger motions to control household equipment. Rekimoto (2001) developed the GestureWrist, which uses wrist-type acceleration sensors to capture simple arm actions. Tsukada and Yasumura (2002) developed a wearable interface, the Ubi-finger, which detects hand movements by sensing touch and acceleration changes. Research has also investigated the use of mobile devices in construction. Reinhardt et al. (2005) described a navigational model framework to create and manage different views of information contained in product and process models more effectively and efficiently. Lipman (2004) successfully used the Virtual Reality Modeling Language (VRML) to model steel structures on PDAs. Saidi et al. (2002) provided quantitative data showing that

handheld mobile devices are able to improve the progress and quality of construction projects. Some investigators have worked on using the helmet to collect multimedia information. The Digital Hardhat System was developed to help users collect and share information, enabling them to respond quickly to problems at remote sites (Stumpf et al. 1998). Reynolds and Teizer designed the SmartHat (Eisenberg 2010), a helmet that sounds a warning when the wearer inadvertently gets too close to potentially dangerous equipment (Teizer et al. 2009; Teizer and Vela 2008). Although current technology enables dispersed users to capture and communicate multimedia field data to solve problems, users who work at construction sites need to be able to understand the information in real time.

RESEARCH GOALS

To reduce the difficulties in onsite information retrieval, we aimed to develop a lightweight device that could project construction drawings and related information based on the location of the user. We took advantage of BIM and AR to develop a system that would not only provide better integration of this information, but also display it in a more readable and better-visualized manner. We proposed to integrate this system into a lightweight device suitable for users at construction sites. In this study, we implemented such a device, called the iHelmet, to realize location-based presentation of building information. The iHelmet is designed to improve the efficiency of browsing building information and to provide a more intuitive way to display and view it.

We designed and conducted a user test to validate the usability of the iHelmet by comparing the completion times and the success rates of information retrieval tasks, to determine whether the iHelmet improves the browsing efficiency of building information by providing better-visualized and more readable information to users.

SYSTEM ARCHITECTURE

The iHelmet is a system that we designed and implemented to integrate a building information model (BIM) with augmented reality (AR). The system architecture of the iHelmet is shown in Fig. 1. It is divided into three layers: the user interface layer, the data process layer, and the data storage layer. Each layer is composed of major units or functions, shown as blocks in Fig. 1; the arrows connecting the blocks represent the direction of data or message processing.

The user interface layer is composed of three modules: the display module, the positioning module, and the manipulation module. This layer sends the user's manipulation and interaction commands to the image-based model, and sends visual feedback to the user through the display module.

The data process layer is composed of an image-based model, which is a component of the information integration module. According to the manipulation commands sent from the user, the image-based model makes the corresponding changes; as the image-based model changes, the projection changes simultaneously.

The data storage layer is composed of a BIM model, another component of the information integration module. The BIM model stores all the components of the room and the information about those components. In this system framework, the user can directly manipulate the image-based model and receive immediate visual feedback.
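To make the layer structure concrete, the following is a minimal Objective-C sketch of this message flow, written under our own assumptions rather than taken from the iHelmet source; all class and method names are hypothetical. A manipulation command from the user interface layer changes the image-based model (data process layer), which draws on the BIM-derived store (data storage layer) and hands an image back for display.

#import <Foundation/Foundation.h>

@interface BIMStore : NSObject
- (NSString *)imageForFloorNumber:(NSInteger)floorNumber;
@end

@implementation BIMStore
- (NSString *)imageForFloorNumber:(NSInteger)floorNumber {
    // Data storage layer: look up the BIM-derived image for a floor.
    return [NSString stringWithFormat:@"floor_%ld.png", (long)floorNumber];
}
@end

@interface ImageBasedModel : NSObject
@property (nonatomic, strong) BIMStore *store;
@property (nonatomic, copy) NSString *currentImage;
- (void)selectFloorNumber:(NSInteger)floorNumber;
@end

@implementation ImageBasedModel
- (void)selectFloorNumber:(NSInteger)floorNumber {
    // Data process layer: the model changes in response to a user command.
    self.currentImage = [self.store imageForFloorNumber:floorNumber];
}
@end

int main(void) {
    @autoreleasepool {
        ImageBasedModel *model = [ImageBasedModel new];
        model.store = [BIMStore new];
        [model selectFloorNumber:3];   // user interface layer: manipulation command
        NSLog(@"display module projects %@", model.currentImage); // visual feedback
    }
    return 0;
}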

DEVELOPMENT ENVIRONMENT

In this research, we selected the iPod Touch[1] and the Optoma PK101[2] as the hardware, and used iPhone SDK 3.0[3] with Objective-C[4] as the programming environment, through which we built the visual effects and interactive behavior of the iHelmet. The iPod Touch was originally designed as a portable media player, a personal digital assistant, and a mobile Wi-Fi platform. It was selected for its compact size and light weight (W × H × D: 61.8mm × 8.5mm × 110mm, 115g), its multi-touch capabilities and its accelerometer. Through the iPod Touch, users can easily employ touch or gesture controls to issue manipulation commands. The Optoma PK101 was chosen as the projector in the iHelmet, also for its compact size and light weight (W × H × D: 50mm × 15mm × 103mm, 114g), and for its rechargeable, long-lasting battery (averaging 3 hours of usage time). With this thin projector, users can project information from mobile devices more conveniently. The software development environment used iPhone SDK 3.0 and Objective-C running on the Mac operating system. The iPhone SDK allows developers to develop applications for the iPod Touch and provides interactive functions for development.

[1] The iPod Touch is a portable media player, personal digital assistant, and Wi-Fi mobile platform designed and marketed by Apple Inc.
[2] The Optoma PK101 is a pico pocket projector produced by Optoma Technology, Inc.
[3] The iPhone SDK is a software development kit developed by Apple Inc.
[4] Objective-C is the primary language used for Apple's Cocoa API; it was originally the main language on NeXT's NeXTSTEP OS.


Objective-C is a reflective, object-oriented programming language used primarily on Apple's Mac OS X and iPhone OS.

Design and Implementation of iHelmet

Safety helmets are obligatory personal protective equipment at all construction sites. Therefore, we designed our device as a safety helmet on which we could install both the iPod Touch and the projector, as shown in Fig. 2. We cut a small aperture at the front of the helmet to allow the projector to display the building information, and set up a small holder frame on the right side of the helmet to hold the mobile device securely, as shown in Fig. 2 (a). We fixed a platform inside the helmet to hold the projector without disturbing the user, and secured the projector with an acrylic sheet to prevent it from sliding, as shown in Fig. 2 (b). We used an electric wire to connect the iPod Touch with the projector, as shown in Fig. 2 (c).

METHODS AND IMPLEMENTATIONS

To realize the projective AR device, the iHelmet, we implemented four modules: the information integration module, the positioning module, the manipulation module and the display module. The following are the detailed methods and implementation of each module.

The Information Integration Module

The information integration module retrieves building information from a BIM model and integrates it into an image-based model. The main advantage of an image-based model is its efficiency in loading and displaying the model. With a full 3D model, this efficiency depends on the total number of building elements: the more building elements there are, the more random access memory (RAM) is required. The memory problem is more apparent on a mobile device, where memory capacity is usually smaller than that of a desktop computer. The image-based model displays the building information as images, only one image at a time; the efficiency of loading and displaying therefore depends solely on the resolution and compression format of the image instead of the total number of building elements. This image-based technique can greatly improve the efficiency of browsing information. Although some building information is eliminated when the model is transferred to images, we added tags to the images so that they can be categorized and the associated building information retrieved easily.

In this research, we used Autodesk Revit to build our BIM model. This BIM model includes floor plans, cross-sections, and MEP details about various building elements, such as pipe and rebar dimensions or information on socket inserts. We retrieved information from this BIM model in image format and integrated these images into an image-based model. Fig. 3 shows the procedure of information retrieval. First, we divided the BIM model into floors and retrieved floor plans from the BIM model in image format. Second, we divided each floor plan into several view units by area, including rooms and common spaces; the public area here includes the corridor and the lavatories. For each view unit, we enlarged the floor plan to fit the area in order to present it more clearly. Third, we retrieved detailed information about the view units from the BIM model, including information on doors, windows, floors, walls, and other building elements. Users in a view unit need only choose a building element as a target to receive all the details about it. In this research, we used a wall as an example and retrieved all the detailed information about this wall, including rebar, piping and electrical information. If users choose this wall as the target, they can browse the rebar plans, piping plans, and electrical plans of this wall. Furthermore, they can browse the dimensions of the rebar and the pipes, and information on the sockets.
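The sketch below illustrates one possible form such tags could take, assuming a simple key-value layout of our own devising (file names and keys are hypothetical, not the authors' actual format): each exported image records its floor, view unit, element, and discipline, and a query filters the records for the wall the user selected.

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Each exported image carries tags naming where it belongs.
        NSArray *images = @[
            @{ @"file": @"f3_room301_wall2_rebar.png",  @"floor": @3,
               @"viewUnit": @"Room 301", @"element": @"Wall 2",
               @"discipline": @"rebar" },
            @{ @"file": @"f3_room301_wall2_piping.png", @"floor": @3,
               @"viewUnit": @"Room 301", @"element": @"Wall 2",
               @"discipline": @"piping" },
            @{ @"file": @"f3_room301_plan.png",         @"floor": @3,
               @"viewUnit": @"Room 301", @"element": @"-",
               @"discipline": @"architectural" },
        ];
        // Retrieve every drawing for the wall the user selected as target.
        NSPredicate *wanted = [NSPredicate predicateWithFormat:
                               @"floor == 3 AND element == %@", @"Wall 2"];
        for (NSDictionary *tag in [images filteredArrayUsingPredicate:wanted]) {
            NSLog(@"candidate image: %@ (%@)", tag[@"file"], tag[@"discipline"]);
        }
    }
    return 0;
}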


The Positioning Module

The positioning module specifies the location of a user, which is an essential step in providing location-based information. Fig. 4 shows the procedure for positioning. First, users select a floor in order to obtain the floor plan. Second, users select a view unit, such as a room, in the floor plan; the view unit represents the room or public area in which the user stands. Third, users specify their location within the view unit. We provide two instructions (distance value and projection range) to help users specify their position more precisely, as shown in Fig. 5. Fourth, users specify the direction of their view. Finally, users obtain the related detailed information based on their location. The following provides detailed descriptions of the two instructions.

Distance value

As shown in Fig. 5, the format of the distance value is a direction sign plus a distance value. The direction sign changes automatically according to the user's position. For the vertical distance, if the user's position is closer to the north side of the view unit the sign will be N, whereas the sign will be S when the user is closer to the south side. For the horizontal distance, if the user's position is closer to the east side the sign will be E, whereas the sign will be W when closer to the west side. To calculate the distance, we first calculate the proportion P_S of the screen's dimension in pixels to its physical dimension, so that the dimensions of the area plan can be expressed in real-world (millimeter) units, as shown in equation (1):

\[ P_S = \frac{S_p}{S_i} \tag{1} \]

where S_p stands for the screen's dimension in pixels and S_i stands for the screen's dimension in inches. Second, we need to convert the distance between the touch point (the point on the screen that the user's finger is touching) and the two edges of the view unit displayed on the screen. From the proportion P_S calculated in equation (1), we can convert the distance between the touch point and the two edges of the view unit into a distance in the real environment, as shown in equation (2):

\[ D_r = \frac{D_v}{P_S} \tag{2} \]

where D_r stands for the distance in the real environment and D_v stands for the distance between the touch point and the two edges of the view unit. Fig. 6 shows the relationship between equation (1) and equation (2).
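As a worked example of equations (1) and (2), the sketch below converts a touch point into a site distance. The screen dimensions, the touch offset, and the plan scale that maps on-screen distance to site millimeters are assumed example values of our own, not figures from the iHelmet.

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Equation (1): proportion of the screen's pixel dimension to its
        // physical dimension (pixels per inch).
        double Sp = 320.0;            // screen dimension in pixels (assumed)
        double Si = 2.0;              // same dimension in inches (assumed)
        double Ps = Sp / Si;

        // Equation (2): convert the touch-point-to-edge distance from
        // pixels to a physical on-screen distance.
        double Dv = 96.0;             // touch point to west edge, in pixels
        double Dr = Dv / Ps;          // on-screen distance in inches

        // Assumed plan scale: one on-screen inch of the enlarged view unit
        // corresponds to 400 mm on site (not a figure from the paper).
        double planScaleMmPerInch = 400.0;
        NSLog(@"W %.0f mm from the west edge", Dr * planScaleMmPerInch);
    }
    return 0;
}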

Projection range

As indicated in Fig. 5, the small red rectangle represents the projection range. The dimensions of the rectangle change according to the position of the user: we use equation (2) and the throw ratio of the projector to compute them, so the size of the projection range increases with distance and vice versa. Fig. 7 shows three screenshots of the displayed images at different distances. Though this rectangle is small, users are usually able to find a suitable place to project the information.

The Manipulation Module

The manipulation module simplifies the process of browsing information so that users can browse the information and diagnose problems in it efficiently. This research therefore proposed four control types for different purposes, as shown in Fig. 8. The following are detailed descriptions of these four control types.

Single tap

We used a single tap to select a floor, select a view unit, and lock or unlock the view unit. Tapping the upper right portion of the screen selects upper floors, and tapping the bottom right portion selects lower floors. A view unit is selected by tapping and dragging across the screen. After a view unit is selected, tapping the upper side of the screen locks the view unit to avoid accidental taps. If users want to select another view unit, tapping the lower side of the screen unlocks it, allowing another selection.

Slide

We used a slide to enter the next view or return to the previous view. Once users have selected a floor or a view unit, they can enter the next view by sliding a finger across the screen from left to right; sliding from right to left returns to the previous view to select another floor or view unit.

Double tap

Users can obtain detailed information by double tapping the screen. For example, when viewing the rebar plans, double tapping shows information about the rebar, such as its dimensions (diameter, cut-off length, bend, etc.) or its arrangement. When viewing the piping plans, double tapping shows the dimensions and positions of the pipes or conduits. When viewing the electrical plans, double tapping shows detailed information on sockets and other inserts.

Gesture control

We developed an intuitive gesture control to regulate the vertical range of the information being projected. According to the elevation angle of the user's head, the displayed information changes automatically to match the user's view range: as the user looks up, the display changes to the information on the upper side, and as the user looks down, the display changes to the information on the lower side.
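A minimal sketch of how this mapping might be computed is shown below, assuming axis conventions and a pitch range of our own choosing; on the device the gravity components would come from the accelerometer rather than being hardcoded.

#import <Foundation/Foundation.h>
#include <math.h>

int main(void) {
    @autoreleasepool {
        // Assumed gravity components (in g) for a head tilted slightly
        // upward; on the device these would be read from the accelerometer.
        double ay = -0.94, az = -0.34;

        // Elevation (pitch) angle of the line of sight, in degrees.
        double pitchDeg = atan2(-az, -ay) * 180.0 / M_PI;

        // Map an assumed -30..+30 degree head-pitch range onto the vertical
        // span of the wall information: looking up shows the upper band.
        double t = fmin(fmax((pitchDeg + 30.0) / 60.0, 0.0), 1.0);
        NSLog(@"pitch %.1f deg -> show band at %.0f%% of wall height",
              pitchDeg, t * 100.0);
    }
    return 0;
}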

The Display Module

In order to provide users with better-visualized and more realistic information, we use full-scale projections to display the information. Since different positions produce different scale ratios, to generate a full-scale projection we first have to specify the user's position, as discussed in The Positioning Module. Second, we have to calculate the scale ratio of the projection, R_P. The projector's throw ratio R_T (distance/width) is given in the projector's manual; at a projection distance D it yields the projected width of the screen, W_P = D / R_T. From the throw ratio we can therefore calculate the scale ratio R_P, as shown in equation (3):

\[ R_P = \frac{W_P}{W_O} = \frac{D}{R_T \, W_O} \tag{3} \]

where W_O stands for the original dimensions of the screen and W_P stands for the dimensions of the projection of the screen. We can then calculate the dimensions of an image after projection, as shown in equation (4):

\[ I_P = R_P \, I_O \tag{4} \]

where I_P stands for the dimensions of the projection of an image and I_O stands for the original dimensions of the image. Third, we have to modify the dimensions of the projection to match the original dimensions of the contents of the image. We calculate the scale ratio R_S as the proportion of the dimensions of the projection to the original dimensions of the image contents, as shown in equation (5):

\[ R_S = \frac{I_P}{C_O} \tag{5} \]

where C_O stands for the original dimensions of the contents of the image; rescaling the displayed image by 1/R_S makes the projection full scale.
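As a worked example of equations (3) through (5), the sketch below computes the full-scale correction for assumed values of the throw ratio, projection distance, screen width, and content size; none of these numbers are taken from the paper.

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Equation (3): projection scale ratio from the throw ratio.
        double Rt = 2.0;        // throw ratio (distance / width), assumed
        double D  = 2400.0;     // projection distance in mm, from positioning
        double Wp = D / Rt;     // projected width of the screen, mm
        double Wo = 61.8;       // original (physical) screen width, mm
        double Rp = Wp / Wo;

        // Equation (4): projected size of the image shown on screen.
        double Io = 61.8;       // image width as displayed on screen, mm
        double Ip = Rp * Io;

        // Equation (5): ratio of projected size to the content's real size.
        double Co = 1000.0;     // real-world width of the drawn content, mm
        double Rs = Ip / Co;
        NSLog(@"rescale the image by 1/Rs = %.3f for a full-scale projection",
              1.0 / Rs);
    }
    return 0;
}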

VALIDATION

To validate the usability of the iHelmet, we conducted a user test. The following sections introduce the details of the user test, including the demo case, the test plan, the test participants, the test environment, the test results, and the findings from the user test.

Demo Case

We selected a building project as a demo case to validate the system, building a virtual model of the research building with 4163 elements in the 3D model. Here, we intended to evaluate how effectively the iHelmet could help users explore and understand relevant construction information, and investigated, via a user test, whether users could effectively obtain specific building information by exploring information from the iHelmet. The details are described as follows. For this demo case, we used an actual research building, the Civil Engineering building at National Taiwan University, as the reference for our BIM model, as shown in Fig. 7. The demo case involves ten floor levels and one basement floor.

Test plan

As shown in Fig. 8, the test plan followed a 2×4 arrangement (2 reference types and 4 levels of questions). All users were required to use both reference types, i.e., the iHelmet and 2D drawings, and to answer four levels of questions. The following sections explain the test plan in more detail.

Information reference in the two reference types

The iHelmet: We used an iPod Touch (2nd generation, 32GB) as the mobile device to store building information and provide the interaction interface, and a helmet with a small battery-powered projector installed inside to project the information.

2D drawings: We printed out five series of 2D drawings on A3-sized paper, including architectural floor plans, structural floor plans, structural reinforcement plans, pipe plans and electrical plans. Each plan had its own legend to help users understand the meanings of the symbols.

Four levels of questions

Construction practice usually follows some regular steps to find specific information about a building: (1) find the specific drawing, (2) find the specific area, (3) look up the legend and (4) transfer the spatial information. Four levels of queries, designed in relation to these steps, were given to all participants to answer. All participants used the same questionnaire to answer the four levels of questions, which are briefly described as follows:

Exploration: Explore the drawings needed in order to obtain the building information related to the drawings.

Location: Find the location, based on the user's location or a specific building element's location, in the drawing.

Legend: Look up the legend to get more information about the equipment, such as the dimensions of pipes or sockets.

Spatial information: Transfer spatial information in order to understand spatial relationships.

Test participants

There were 34 participants in the user test, 27 male and 7 female, with ages ranging from twenty-two to thirty-four years. The participants included 11 civil engineers and 23 graduate students from a civil engineering background. All participants had studied construction-management-related courses.


Test environment

The user test was performed in a controlled environment limited to the room shown in Fig. 9. Each participant was asked to sit at the south side of the north table in the room. A researcher, sitting at the east side of the table, conducted and facilitated the test procedure and guided the participants through it. Using the iHelmet to answer question 4 was a special case that required users to find spatial information and project it with the projector; each participant was therefore seated at a pre-positioned seat to ensure that everyone answered this question from the same place.

Test results

The probability that the test statistic takes a value as extreme as, or more extreme than, the observed value is called the p-value of the test. If the p-value for a certain factor is less than or equal to 0.05, the factor can be considered significant; the same level of significance was used in the studies conducted by Wang and Dunston (2008). An α level of 0.05 was used for all statistical tests and analyses. The test results assessed how quickly and accurately participants performed the tasks when using the iHelmet, so we measured completion times and success rates. They are summarized as follows:

Completion times: The completion time is the amount of time each user took to answer the questions during the test. The statistical results, including the mean value and standard deviation for the five dimensions, are shown in Fig. 10. As shown in Table 1, the results for all four questions were positive and statistically significant (p < 0.05) in the t-test: users of the iHelmet spent less time completing the tests than users of the 2D drawings. In addition, we found that when 2D drawings were used as references, experienced users (civil engineers) tended to perform better, whereas when the iHelmet was used as the reference, there was no difference in performance due to experience.

Success rates: The success rate indicates whether the user found the correct information and gave the right answers. The success rates are shown in Fig. 11. From these results, we concluded that use of the iHelmet produced a higher average success rate in three out of four test questions. We also used statistical methods, the t-test and the chi-square test, to check for differences between the groups, and found that the difference in the total success rate for the test questions was statistically significant.

Findings from the user test

Overall, the use of the iHelmet as a tool for interactive information presentation received positive results. We drew findings from four aspects.

First, we combined a mobile device and a small projector with a helmet. Most users deemed this quite practical for construction use, as it avoids the need to carry additional equipment and provides hands-free observation. A few users (2 out of 34) said that they felt uncomfortable wearing an electronic device on their heads.

Secondly, some users pointed out that their lines of vision were hindered by part of the helmet, which made it difficult to view the projected information. Because of the helmet's original design, this is inevitable unless a portion of the helmet is modified.

Next, all users agreed that using a full-scale projection to present spatial information allowed them to visualize the information well and was more intuitive. Full-scale projections present the information in a form that users are likely to find easy to understand.

Finally, all users agreed that the interaction mechanism for browsing building information was better than the traditional 2D approach of sifting through a large stack of drawings. They felt that the iHelmet helped them capture and understand building-related information more precisely and promptly.

ADVANTAGES OF THIS TOOL

The advantages of this technology over traditional 2D drawings are manifold. First, the full set of devices is very small, with each device fitting easily into one's palm, making it handy and very portable. 2D drawings, on the other hand, are bigger, and are sometimes too big to carry or even to open on a standard table on site.

Secondly, the iPod Touch is less vulnerable to sudden rain or moisture because it can be quickly and easily protected with a small waterproof bag or cover; 2D drawings are much more vulnerable and inconvenient in these common scenarios. Given these wider usage scenarios, we expect that the iHelmet can be more easily deployed onsite.

Thirdly, it does not require the user to carry anything extra, as the complete equipment array is accommodated within the user's helmet. Because 2D drawings require additional means of transport, extra attention is needed; within the complex and dangerous construction environment, there is a risk that users may trip or fall while carrying bulky drawings and climbing heights for construction purposes.

Fourthly, the device has lower vulnerability to physical damage and deterioration because it is protected by its factory-made cover shield. 2D drawings do not possess such inbuilt protection and tend to tear, discolor, or fade.

Next, the system is capable of instantly sharing and displaying information with corresponding 3D rendered images, and can even use animation to highlight the important parts of the images. 2D drawings cannot provide this kind of advantage and require many associated supporting drawings to interpret the information.

Finally, with the innovative approaches made possible by technological advancements, this tool may incorporate voice recognition or handheld remote systems to deliver information, a kind of prompt delivery that 2D drawings cannot offer.

LIMITATIONS AND SUGGESTED FUTURE STUDIES

Despite its many advantages over 2D drawings, the iHelmet has some limitations that could be overcome in future studies.

More complete hardware: The available hardware imposes several limitations, including projector brightness and resolution, the extra weight and size burden on the head and neck, and safety issues. In addition, the projector inside the helmet could affect the user's comfort and attentiveness during prolonged use; due to the projector's continuous close contact with the head, users may also worry about negative effects from the electronic device's magnetic field. As the helmet becomes much heavier (40% more than its original weight) when the whole arrangement is affixed, users may not feel as comfortable as usual when constantly wearing this heavier helmet on site. Another concern is that, because of the hole drilled into the helmet, the integrity and safety certification of the helmet may no longer be valid. Therefore, more complete hardware is required.

A more sophisticated input method: From the user test, we found that users sometimes need to take the iPod Touch out of the helmet at the initial file-opening stage, which undermines the real value of the iHelmet. Controlling the view without seeing the screen of the mobile device is one of the major advantages of using the iPod Touch, a gesture-controlled device, in this system: it allows users to control the views with larger gestures instead of precise motions to hit or drag a button. Being able to operate the system entirely while wearing the iHelmet is an important consideration for the future design of the tools, which require frequent manipulations to change the view unit. In the future, a more sophisticated input method could be introduced through further research and development.

Usability on construction sites: While projecting the information, a display surface (such as a screen or wall) is always needed to view the information, and such a surface may sometimes be difficult to find on site. Moreover, enlarging the projected image blurs the displayed figures and decreases the visibility and clarity of the information.

POTENTIAL APPLICATIONS OF IHELMET

If properly developed from its prototype stage, the iHelmet may ultimately benefit the AEC industry through its improved efficiency in browsing information and its better-visualized presentation of information to the user. The following are possible applications of the iHelmet:

Group communication: The iHelmet displays information through a projection, which offers the opportunity for all members of a group to browse the same information at the same time.

Project review: The iHelmet offers better-visualized information based on the location of the user. When reviewing a project, users can find the required information more efficiently. Moreover, they can easily understand the information even if they do not possess the specific professional domain knowledge embodied in the different construction drawing schemes.

Construction lofting: The iHelmet provides a full-scale projection, which users can directly exploit to perform construction lofting more conveniently.

Maintenance: When conducting maintenance, a maintenance worker can easily find detailed information about specific equipment using the iHelmet, as it integrates all the information about the building, offers better-visualized information based on the user's location, and provides more intuitive manipulation.

CONCLUSIONS

The main contribution of this research is the development of a wearable device, the iHelmet, that works towards solving some of the common problems of information retrieval at construction sites. Four modules, the information integration module, the display module, the positioning module and the manipulation module, were implemented within the iHelmet to allow engineers to search for and obtain the required information in a more efficient and intuitive way. The iHelmet allows engineers to input their current location at the site and automatically retrieves the related information in image format; it processes the images in real time and then projects them at the construction site at the correct scale. Because the iHelmet is equipped with a multi-touch screen and an accelerometer, it allows engineers to control the system with gestures. In the user test (N=34) conducted as part of this research, we compared users' behavior in retrieving onsite information using the iHelmet against their behavior using traditional 2D drawings, and found that users obtained the information more efficiently and accurately thanks to the augmented information from the iHelmet. This research provides evidence that the gesture control and augmented reality in the iHelmet can significantly reduce the difficulties of retrieving information on actual jobsites. We have leveraged the construction helmet, which is compulsory safety equipment at construction sites, as an ideal platform for storing and displaying onsite building information.

REFERENCES

Adjei-Kumi, T., and Retik, A. (1997). "Library-based 4D visualization of construction processes." Proceedings of the 1997 International Conference on Information Visualization, London, England.

Aouad, G., Wu, S., and Lee, A. (2006). "N dimensional modeling technology: Past, present, and future." Journal of Computing in Civil Engineering, 20(3), 151-153.

Azhar, S., Hein, M., and Sketo, B. (2008). "Building Information Modeling (BIM): Benefits, risks and challenges." Proceedings of the 44th ASC Annual Conference (on CD ROM), Auburn, Alabama, USA.

Behzadan, A. H., and Kamat, V. R. (2007). "Georeferenced registration of construction graphics in mobile outdoor augmented reality." Journal of Computing in Civil Engineering, 21(4), 247-258.

Bouchlaghem, D., Shang, H., Whyte, J., and Ganah, A. (2005). "Visualisation in architecture, engineering and construction (AEC)." Automation in Construction, 14(3), 287-295.

Chau, K. W., Anson, M., and De Saram, D. D. (2005). "4D dynamic construction management and visualization software: 2. Site trial." Automation in Construction, 14(4), 525-536.

Chau, K. W., Anson, M., and Zhang, J. P. (2004). "Four-dimensional visualization of construction scheduling and site utilization." Journal of Construction Engineering and Management, 130(4), 598-606.

Davis, D. (2007). "LEAN, green and seen." Journal of Building Information Modeling, Fall 2007, 16-18.

Dawood, N., and Sikka, S. (2006). The Value of Visual 4D Planning in the UK Construction Industry, Springer Berlin/Heidelberg, Ascona, Switzerland.

Dawood, N., and Sikka, S. (2008). "Measuring the effectiveness of 4D planning as a valuable communication tool." Electronic Journal of Information Technology in Construction, 13, 620-636.

Dossick, C. S., and Neff, G. (2010). "Organizational divisions in BIM-enabled commercial construction." Journal of Construction Engineering and Management, 136(4), 459-467.

Dunston, P. S., Wang, X., Billinghurst, M., and Hampson, B. (2002). "Mixed reality benefits for design perception." Proceedings of the 19th International Symposium on Automation and Robotics in Construction, Gaithersburg, Maryland, USA.

Eastman, C., Teicholz, P., Sacks, R., and Liston, K. (2008). BIM Handbook: A Guide to Building Information Modeling for Owners, Managers, Designers, Engineers and Contractors, John Wiley and Sons, Hoboken, NJ, USA.

Eisenberg, A. (2010). "Bye-bye batteries: Radio waves as a low-power source." The New York Times, July 17. http://www.nytimes.com/2010/07/18/business/18novel.html

Goedert, J. D., and Meadati, P. (2008). "Integrating construction process documentation into building information modeling." Journal of Construction Engineering and Management, 134(7), 509-516.

Golparvar-Fard, M., Peña-Mora, F., Arboleda, C. A., and Lee, S. (2009). "Visualization of construction progress monitoring with 4D simulation model overlaid on time-lapsed photographs." Journal of Computing in Civil Engineering, 23(6), 391-404.

Hadikusumo, B. H. W., and Rowlinson, S. (2002). "Integration of virtually real construction model and design-for-safety-process database." Automation in Construction, 11(5), 501-509.

Howard, R., and Björk, B.-C. (2008). "Building information modelling - Experts' views on standardisation and industry deployment." Advanced Engineering Informatics, 22(2), 271-280.

Jernigan, F. (2008). BIG BIM little bim, 2nd ed., 4Site Press, USA.

Jordani, D. (2008). "BIM: A healthy disruption to a fragmented and broken process." Journal of Building Information Modeling, Spring 2008, 24-26.

Kam, C., Fischer, M., Hänninen, R., Karjalainen, A., and Laitinen, J. (2003). "The product model and fourth dimension project." Electronic Journal of Information Technology in Construction, 8, 137-166.

Kamat, V. R., and El-Tawil, S. (2007). "Evaluation of augmented reality for rapid assessment of earthquake-induced building damage." Journal of Computing in Civil Engineering, 21(5), 303-310.

Kaner, I., Sacks, R., Kassian, W., and Quitt, T. (2008). "Case studies of BIM adoption for precast concrete design by mid-sized structural engineering firms." Electronic Journal of Information Technology in Construction, 13, 303-323.

Kang, J. H., Anderson, S. D., and Clayton, M. J. (2007). "Empirical study on the merit of web-based 4D visualization in collaborative construction planning and scheduling." Journal of Construction Engineering and Management, 133(6), 447-461.

Khanzode, A., Fischer, M., and Reed, D. (2008). "Benefits and lessons learned of implementing building virtual design and construction (VDC) technologies for coordination of mechanical, electrical, and plumbing (MEP) systems on a large healthcare project." Electronic Journal of Information Technology in Construction, 13, 324-342.


Koo, B., and Fischer, M. (2000). "Feasibility study of 4D CAD in commercial construction." Journal of Construction Engineering and Management, 126(4), 251-260.

Korman, T. M., Fischer, M. A., and Tatum, C. B. (2003). "Knowledge and reasoning for MEP coordination." Journal of Construction Engineering and Management, 129(6), 627-634.

Ku, K., Pollalis, S. N., Fischer, M. A., and Shelden, D. R. (2008). "3D model-based collaboration in design development and construction of complex shaped buildings." Electronic Journal of Information Technology in Construction, 13, 458-485.

Lee, A., Betts, M., Aouad, G., Cooper, R., Wu, S., and Underwood, J. (2002). "Developing a vision for an nD modelling tool." Proceedings of CIB W78 - Distributing Knowledge in Building Conference, Aarhus, Denmark.

Leinonen, J., and Kähkönen, K. (2000). "New construction management practice based on the virtual reality technology." Proceedings of Construction Congress VI: Building Together for a Better Tomorrow in an Increasingly Complex World, Orlando, Florida.

Lipman, R. R. (2004). "Mobile 3D visualization for steel structures." Automation in Construction, 13(1), 119-125.

Manning, R., and Messner, J. I. (2008). "Case studies in BIM implementation for programming of healthcare facilities." Electronic Journal of Information Technology in Construction, 13, 446-457.

Maruyama, Y., Iwase, Y., Koga, K., Yagi, J., Takada, H., Sunaga, N., Nishigaki, S., Ito, T., and Tamaki, K. (2000). "Development of virtual and real-field construction management systems in innovative, intelligent field factory." Automation in Construction, 9(5), 503-514.

McKinney, K., and Fischer, M. (1998). "Generating, evaluating and visualizing construction schedules with CAD tools." Automation in Construction, 7(6), 433-447.

Murray, N., Fernando, T., and Aouad, G. (2000). "A virtual environment for building construction." Proceedings of the 17th International Symposium on Automation and Robotics in Construction, Taipei, Taiwan.



NIBS National BIM Standard Project Committee. (2006). "National BIM Standard." National Institute of Building Sciences, buildingSMART Alliance, Research Presentation, Nov.
Reinhardt, J., Garrett Jr., J. H., and Akinci, B. (2005). "Framework for providing customized data representations for effective and efficient interaction with mobile computing solutions on construction sites." Journal of Computing in Civil Engineering, 19(2), 109-118.
Rekimoto, J. (2001). "GestureWrist and GesturePad: Unobtrusive wearable interaction devices." Proceedings of the 5th International Symposium on Wearable Computers (ISWC 2001), ETH Zurich.
Saidi, K., Haas, C., and Balli, N. (2002). "The value of handheld computers in construction." Proceedings of the 19th International Symposium on Automation and Robotics in Construction, Gaithersburg, Maryland, USA.
Starner, T., Auxier, J., Ashbrook, D., and Gandy, M. (2000). "The Gesture Pendant: A self-illuminating, wearable, infrared computer vision system for home automation control and medical monitoring." International Symposium on Wearable Computers, Digest of Papers, 87-94.
Staub-French, S., and Khanzode, A. (2007). "3D and 4D modeling for design and construction coordination: Issues and lessons learned." Electronic Journal of Information Technology in Construction, 12, 381-407.
Stumpf, A., Liu, L. Y., Kim, C.-S., and Chin, S. (1998). "Delivery and test of the Digital Hardhat System at U.S. Army Corps of Engineers Fort Worth District Office." USACERL ADP Report 99/16, 18-66.
Tanyer, A. M., Tah, J. H. M., and Aouad, G. "An integrated database to support collaborative urban planning: The n-dimensional modeling approach." Proceedings of the ASCE International Conference on Computing in Civil Engineering, Cancun, Mexico.
Teizer, J., and Vela, P. (2008). "Workforce detection and tracking on construction sites using video cameras." European Group for Intelligent Computing in Engineering 2008 Conference, Plymouth, Great Britain.
Teizer, J., Mantripagada, U., and Venugopal, M. (2009). "Automated obstacle detection and safe path planning using ultra wideband." Proceedings of the Construction Research Congress, Seattle, Washington.



Tsukada, K., and Yasumura, M. (2002). "Ubi-finger: Gesture input device for mobile use." Proceedings of the 5th Asia Pacific Conference on Computer Human Interaction, Beijing, China.
Wang, X., and Dunston, P. S. (2006). "Compatibility issues in augmented reality systems for AEC: An experimental prototype study." Automation in Construction, 15(3), 314-326.
Wang, X., and Dunston, P. S. (2008). "User perspectives on mixed reality tabletop visualization for face-to-face collaborative design review." Automation in Construction, 17(4), 399-412.
Wang, X., Gu, N., and Marchant, D. (2008). "An empirical study on designers' perceptions of augmented reality within an architectural firm." Electronic Journal of Information Technology in Construction, 13, 536-552.
Williams, T. P. (1994). "Applying portable computing and hypermedia to construction." Journal of Management in Engineering, 10(3), 41-45.
Williams, T. P. (2003). "Applying handheld computers in the construction industry." Practice Periodical on Structural Design and Construction, 8(4), 226-232.



LIST OF FIGURES

Fig. 1. System architecture of the iHelmet
Fig. 2. Hardware settings of the iHelmet: (a) A small holder frame and aperture on the outside of the helmet. (b) A platform and an acrylic sheet inside the helmet. (c) A data cable to connect the mobile device to the projector.
Fig. 3. Information retrieval procedure
Fig. 4. Procedure for positioning
Fig. 5. Two instructions: distance value and projection range
Fig. 6. The relationship of the parameters Dv, Dr, Si, and Sp in equations (1) and (2)
Fig. 7. Test of image quality from varied projected distances: (a) testing field; (b) images projected from 1.5 m, 2.0 m, and 2.5 m
Fig. 8. Four control types for the manipulation module
Fig. 9. Demo case: (a) A photo of the research building. (b) The BIM model of the research building.
Fig. 10. User test procedure
Fig. 11. The test environment
Fig. 12. Test completion times: the iHelmet versus 2D-drawings
Fig. 13. Test success rates: the iHelmet versus 2D-drawings




LIST OF TABLES

Table 1. Statistical analysis of the t-test



Table 1. Statistical analysis of the t-test

Question   Device          Mean (sec)   Std. Deviation   t       Sig.
Q1         iHelmet           6.89          2.33          5.254   0.000*
           2D-drawings      23.68         18.39
Q2         iHelmet          14.03          8.83          3.309   0.002*
           2D-drawings      22.81         13.55
Q3         iHelmet          38.85         13.32          5.621   0.000*
           2D-drawings     117.25         78.55
Q4         iHelmet         114.25         44.36          8.234   0.000*
           2D-drawings     230.86         80.13

(* means significant)

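Note (illustrative only, not part of the original analysis): a t-test of this kind can be reproduced from the summary statistics in Table 1. The sketch below uses SciPy's test-from-statistics helper on the Q1 row; the even 17/17 split of the 34 participants and the choice of Welch's unequal-variance test are assumptions, so the output need not match the tabled t value, which depends on the paper's actual test design.

    # Sketch: independent-samples t-test from the Q1 summary
    # statistics in Table 1.
    # ASSUMPTIONS: two independent groups of 17 participants each
    # and Welch's unequal-variance test; the paper's design may
    # differ, so t will not necessarily reproduce the tabled value.
    from scipy.stats import ttest_ind_from_stats

    t, p = ttest_ind_from_stats(
        mean1=23.68, std1=18.39, nobs1=17,   # 2D-drawings, Q1
        mean2=6.89,  std2=2.33,  nobs2=17,   # iHelmet, Q1
        equal_var=False)                     # Welch's t-test
    print("t = %.3f, p = %.4f" % (t, p))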

Fig. 1. System architecture of the iHelmet (block diagram: the display, information integration, positioning, and manipulation modules spanning the user interface, data process, and data storage layers)


Fig. 2. Hardware settings of the iHelmet: (a) A small holder frame and aperture on the outside of the helmet. (b) A platform and an acrylic sheet inside the helmet. (c) A data cable to connect the mobile device to the projector.


Fig. 3. Information retrieval procedure (from the BIM model through the floor plan and view unit to the detailed information)


Fig. 4. Procedure for positioning (Step 1: select a floor; Step 2: select a view unit; Step 3: specify the user's location; Step 4: obtain detailed information)

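Note (illustrative only): the drill-down in Figs. 3 and 4 can be pictured as a small lookup hierarchy. The sketch below models it as nested dictionaries; the structure, file names, and helper function are assumptions for illustration, not the iHelmet's actual data format.

    # Sketch of the floor -> view unit -> detailed-information
    # drill-down from Figs. 3 and 4. The nested-dict layout and
    # all names are illustrative assumptions.
    model = {
        "3F": {                               # step 1: select a floor
            "unit-301": {                     # step 2: select a view unit
                "plan": "3F_unit301_plan.png",
                "details": ["rebar_301.png", "mep_301.png"],
            },
        },
    }

    def retrieve(floor, unit, want_details=False):
        """Return the image(s) for the user's specified position."""
        view = model[floor][unit]             # steps 1-3: position the user
        return view["details"] if want_details else view["plan"]

    print(retrieve("3F", "unit-301"))         # floor plan image
    print(retrieve("3F", "unit-301", True))   # step 4: detailed information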

Fig. 5. Two instructions: distance value and projection range


Fig. 6. The relationship of the parameters Dv, Dr, Si, and Sp in equations (1) and (2)

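Note (illustrative only): equations (1) and (2) themselves appear in the body of the paper. As a hedged sketch of the underlying geometry, the snippet below assumes the projected width grows linearly with throw distance (a fixed throw ratio) and computes how much of the drawing to crop so that it is projected at full scale; the throw ratio and pixel density are invented example values, not the iHelmet's calibration.

    # Sketch of full-scale projection, assuming a pinhole-style
    # projector: projected width = distance / throw_ratio.
    # THROW_RATIO and PX_PER_M are illustrative values; the paper's
    # equations (1) and (2) define the actual relationship between
    # Dv, Dr, Si, and Sp.
    THROW_RATIO = 1.5      # distance / projected width (assumed)
    PX_PER_M = 200.0       # drawing resolution: pixels per metre (assumed)

    def crop_width_px(distance_m):
        """How many drawing pixels fit in the projected area at 1:1 scale."""
        projected_width_m = distance_m / THROW_RATIO   # wall coverage
        return int(projected_width_m * PX_PER_M)       # pixels to crop

    for d in (1.5, 2.0, 2.5):                          # distances from Fig. 7
        print("distance %.1f m -> crop %d px wide" % (d, crop_width_px(d)))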

Fig. 7. Test of image quality from varied projected distances: (a) testing field; (b) images projected from 1.5 m, 2.0 m, and 2.5 m


Fig. 8. Four control types for the manipulation module (single tap: select a floor, select a view unit, lock/unlock the view unit; slide left or right: enter the next view or return to the previous view; double tap: obtain detailed information; gesture control: look up or down to change the vertical view range)

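Note (illustrative only): to make the gesture-control row of Fig. 8 concrete, the sketch below maps an accelerometer pitch reading to a vertical shift of the projected view, as in the "look up / look down" control; the threshold and scroll step are assumed values, not the iHelmet's actual parameters.

    # Sketch of the "look up / look down" gesture from Fig. 8:
    # map an accelerometer pitch angle to a vertical scroll of the
    # projected view. Threshold and step size are assumptions.
    LOOK_THRESHOLD_DEG = 20.0   # ignore small head movements (assumed)
    SCROLL_STEP_PX = 40         # pixels to shift per update (assumed)

    def vertical_offset(pitch_deg, current_offset_px):
        """Shift the view window up or down based on head pitch."""
        if pitch_deg > LOOK_THRESHOLD_DEG:        # looking up
            return current_offset_px - SCROLL_STEP_PX
        if pitch_deg < -LOOK_THRESHOLD_DEG:       # looking down
            return current_offset_px + SCROLL_STEP_PX
        return current_offset_px                  # within the dead zone

    print(vertical_offset(30.0, 0))    # -> -40 (view moves up)
    print(vertical_offset(-25.0, 0))   # -> 40 (view moves down)
    print(vertical_offset(5.0, 0))     # -> 0 (no change)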

Fig. 9. Demo case: (a) A photo of the research building. (b) The BIM model of the research building.


Fig. 10. User test procedure (four levels of questions: location, legend, spatial information, and information reference/exploration; two conditions: iHelmet and 2D-drawings)


Fig. 11. The test environment (researcher, participant, and pre-positioning seat)


Fig. 12. Test completion times: the iHelmet (BIM-AR) versus 2D-drawings (box plots of completion time in seconds for questions Q1-Q4, showing minimum, first quartile, median, third quartile, and maximum)


Fig. 13. Test success rates: the iHelmet (BIM-AR) versus 2D-drawings (success rates in percent for questions Q1-Q4)

