Strojniški vestnik – Journal of Mechanical Engineering
Strojniški vestnik – Journal of Mechanical Engineering (SV-JME) Aim and Scope The international journal publishes original and (mini)review articles covering the concepts of materials science, mechanics, kinematics, thermodynamics, energy and environment, mechatronics and robotics, fluid mechanics, tribology, cybernetics, industrial engineering and structural analysis. The journal follows new trends and proven practice in mechanical engineering and in closely related sciences such as electrical, civil and process engineering, medicine, microbiology, ecology, agriculture, transport systems, aviation, and others, thus creating a unique forum for interdisciplinary or multidisciplinary dialogue. Selected papers from international conferences are welcome for publication as special issues of SV-JME with invited co-editor(s).
Editor in Chief Vincenc Butala University of Ljubljana Faculty of Mechanical Engineering, Slovenia Co-Editor Borut Buchmeister University of Maribor Faculty of Mechanical Engineering, Slovenia Technical Editor Pika Škraba University of Ljubljana Faculty of Mechanical Engineering, Slovenia
Editorial Office University of Ljubljana (UL) Faculty of Mechanical Engineering SV-JME Aškerčeva 6, SI-1000 Ljubljana, Slovenia Phone: 386-(0)1-4771 137 Fax: 386-(0)1-2518 567 E-mail: info@sv-jme.eu http://www.sv-jme.eu Founders and Publishers University of Ljubljana (UL) Faculty of Mechanical Engineering, Slovenia University of Maribor (UM) Faculty of Mechanical Engineering, Slovenia Association of Mechanical Engineers of Slovenia Chamber of Commerce and Industry of Slovenia Metal Processing Industry Association
55 years, volume 56, no. 11, year 2010
Cover: Measurement of the turbulent flow at the nozzle exit; PIV image with velocity vectors at the nozzle exit (above) and PIV image of the nozzle flow (below). Image courtesy: Institute for Power, Process and Environmental Engineering, Faculty of Mechanical Engineering, University of Maribor
ISSN 0039-2480 © 2010 Strojniški vestnik - Journal of Mechanical Engineering. All rights reserved. SV-JME is indexed / abstracted in: SCI-Expanded, Compendex, Inspec, ProQuest-CSA, SCOPUS, TEMA. The list of further databases in which SV-JME is indexed is available on the website. The journal is subsidized by the Slovenian Book Agency.
President of Publishing Council Jože Duhovnik UL, Faculty of Mechanical Engineering, Slovenia International Editorial Board Koshi Adachi, Graduate School of Engineering,Tohoku University, Japan Bikramjit Basu, Indian Institute of Technology, Kanpur, India Anton Bergant, Litostroj Power, Slovenia Franci Čuš, UM, Faculty of Mech. Engineering, Slovenia Narendra B. Dahotre, University of Tennessee, Knoxville, USA Matija Fajdiga, UL, Faculty of Mech. Engineering, Slovenia Imre Felde, Bay Zoltan Inst. for Mater. Sci. and Techn., Hungary Jože Flašker, UM, Faculty of Mech. Engineering, Slovenia Bernard Franković, Faculty of Engineering Rijeka, Croatia Janez Grum, UL, Faculty of Mech. Engineering, Slovenia Imre Horvath, Delft University of Technology, Netherlands Julius Kaplunov, Brunel University, West London, UK Milan Kljajin, J.J. Strossmayer University of Osijek, Croatia Janez Kopač, UL, Faculty of Mech. Engineering, Slovenia Franc Kosel, UL, Faculty of Mech. Engineering, Slovenia Thomas Lübben, University of Bremen, Germany Janez Možina, UL, Faculty of Mech. Engineering, Slovenia Miroslav Plančak, University of Novi Sad, Serbia Brian Prasad, California Institute of Technology, Pasadena, USA Bernd Sauer, University of Kaiserlautern, Germany Brane Širok, UL, Faculty of Mech. Engineering, Slovenia Leopold Škerget, UM, Faculty of Mech. Engineering, Slovenia George E. Totten, Portland State University, USA Nikos C. Tsourveloudis, Technical University of Crete, Greece Toma Udiljak, University of Zagreb, Croatia Arkady Voloshin, Lehigh University, Bethlehem, USA Print LITTERA PICTA d.o.o., Barletova 4, 1215 Medvode, Slovenia General information Strojniški vestnik – The Journal of Mechanical Engineering is published in 11 issues per year (July and August is a double issue). Institutional prices include print & online access: institutional subscription price €100,00, general public subscription €25,00, student subscription €10,00, foreign subscription €100,00 per year. The price of a single issue is €5,00. Prices are exclusive of tax. Delivery is included in the price. The recipient is responsible for paying any import duties or taxes. Legal title passes to the customer on dispatch by our distributor. Single issues from current and recent volumes are available at the current single-issue price. To order the journal, please complete the form on our website. For submissions, subscriptions and all other information please visit: http://en.sv-jme.eu/ You can advertise on the inner and outer side of the back cover of the magazine. We would like to thank the reviewers who have taken part in the peer-review process.
Strojniški vestnik - Journal of Mechanical Engineering is also available at http://www.sv-jme.eu, where you can also access supplements to the papers, such as simulations, etc.
Contents
Strojniški vestnik - Journal of Mechanical Engineering, volume 56, (2010), number 11
Ljubljana, November 2010
ISSN 0039-2480, Published monthly

Editorial  681

Papers
Dag Raudberget: Practical Applications of Set-Based Concurrent Engineering in Industry  685
Michael Abramovici, Fahmi Bellalouna, Jens Christian Göbel: Adaptive Change Management for Industrial Product-Service Systems  696
Roberto Raffaeli, Maura Mengoni, Michele Germani: A Software System for “Design for X” Impact Evaluations in Redesign Processes  707
Tristan Barnett, Elizabeth Ehlers: Cloud Computing for Synergized Emotional Model Evolution in Multi-Agent Learning Systems  718
Okba Hamri, Jean Claude Léon, Franca Giannini, Bianca Falcidieno: Computer Aided Design and Finite Element Simulation Consistency  728
Ronald Poelman, Zoltán Rusák, Alexander Verbraeck, Leire Sorasu Alcubilla: The Effect of Visual Feedback on Learnability and Usability of Design Methods  744
Christophe Merlo, Nadine Couture: A User-Centered Approach for a Tabletop-Based Collaborative Design Environment  754
Bart Gerritsen, Imre Horváth: The Upcoming and Proliferation of Ubiquitous Technologies in Products and Processes  765

Instructions for Authors  784
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, 681-684
Editorial
Special Issue: Placing Enablers in Context in Industrial Product Development
It has been recognized that the creativity, capabilities and efficiency of the work of designers and engineers can be increased if the methods and tools they use are tailored to the context of application and the tasks at hand. Nevertheless, the overwhelming majority of current computer-orientated enablers (methodologies, systems, tools, methods, rules) have been developed as neutral support means, without paying attention to the features of the users and use environments. Towards optimal utilization of design and engineering enablers, the dialectic relationship (interaction) between their functional manifestation and the supported product creation process should receive sufficient attention. From the perspective of developing enablers, this means that their conceptualization and implementation should be based on a broad consideration of the needs of the users and the applications. In addition, some level of context sensitivity and adaptability should be assumed. On the other hand, the application processes should be rationalized and re-engineered in order to allow optimal utilization of the affordances of enablers. This is especially challenging when multiple, functionally and outwardly heterogeneous enablers are to be used in practical processes. The currently experienced gradual shift from artifact-focused product development through artifact-service combinations to service-focused product innovation also challenges the development of computer-based design and engineering tools. Several papers submitted to the Eighth International Tools and Methods of Competitive Engineering Symposium, 12 to 16 April 2010, Ancona, Italy, not only recognized the above issues, but also addressed them from various aspects. One of the most frequently visited issues was the adaptation of enablers to process and environmental changes. Other popular topics of interest were maintaining consistency among the various agents of product development processes and consideration of the practical influence of current advanced and possible future tools.
The goal of this special issue is to demonstrate these efforts through a selected set of papers, and to show alternative approaches for researchers and developers who intend to be active in these domains of interest in the coming years. Obviously, the set of papers included in this special issue cannot offer an exhaustive overview of all related areas. Nevertheless, they cast light on a wide spectrum of strategies that can be applied to tailor or adapt computer-based design and engineering enablers to contexts. Thematically, the first four papers are related to the consideration of process features and adaptation to process changes, while the second four papers address the relationships between users, tasks, applications and environments, respectively. The paper submitted by Dag Raudberget, entitled ‘Practical applications of set-based concurrent engineering in industry’, deals with the evergreen topic of efficient concurrent engineering. The discussed set-based approach means the simultaneous and systematic development of solution alternatives and the examination of their trade-offs. It has a project management perspective, in which the author relied on three pragmatic principles that are important for achieving solutions with the least overheads: (1) a wide search for possible solutions without taking other departments' needs or opinions into account, (2) integration of the different solutions by eliminating those that are not compatible with the main body of solutions, and (3) commitment to develop solutions that both match the other sets and fulfil the current specifications. Elimination of unpromising solutions is done by repeated development and tightening of specifications, and by application of the second principle. Four case studies have been conducted to see the effects on the incurred costs, the use of resources and the product characteristics, and to forecast the estimated effects on the development process. The opportunities and barriers of set-based concurrent engineering are objectively discussed. The second paper, entitled ‘Adaptive change management for industrial product-service systems’, contributed by Michael Abramovici,
Fahmi Bellalouna and Jens Christian Göbel, starts from the observation that industrial product-service systems are characterized by a very high degree of dynamic change, not only during their planning, but also throughout their entire life cycle. Current change management methods and standards can only weakly support the management of these changes and cannot fulfill the requirements of dynamic engineering change management. Therefore, the authors propose adaptive change management, a concept that is based on a goal-oriented process modeling and management method. This defines business processes in terms of loosely coupled intelligent agents. Within the process, the tasks or activities are defined as intelligent agents. They represent an adequate road to the (sub)goal appropriately, independently and subject to the rules. They also provide the persons involved in the process with recommendations for decision-making in view of the subsequent process steps. The agents described above are modeled as modular, intelligent services according to the Belief-Desire-Intention principle. The paper presents the concept in a comprehensive manner, but a widely-based validation is still to come. The research presented in the paper of Roberto Raffaeli, Maura Mengoni and Michele Germani, ‘A software system for “Design for X” impact evaluations in redesign processes’, focuses on supporting designers and engineers in rapidly configuring alternative solutions under varying requirements. They propose both a methodology and a software tool supporting its application. The essential concept is a multi-level product representation framework, which covers both the properties and the relations. When some properties need to be modified, the proposed change propagation mechanism gives a list of the components or units that need to be modified. The introduction of “Design for X” principles as rules allows the designers to consider all aspects that have an influence on the product design and to evaluate the impact of changes at three levels. The approach and the proposed platform have been applied in the field of refrigerators with regard to the company's formalized requirements. This research is very promising, but it needs further investigation, development and tests to arrive at a fully fledged solution. In the paper entitled ‘Cloud computing for synergized emotional model evolution in multi-agent learning systems’,
Tristan Barnett and Elizabeth Ehlers address the most potent form of adaptation, that rooted in machine learning. This technology is considered to be paramount for enhancing the adaptability of agent-based systems. Their concrete proposal involves an architecture for multi-agent learning through distributed artificial consciousness (MALDAC). It offers a scalable approach to developing adaptable systems in complex, believable environments. They adapted the artificial consciousness theory to cope with complex environments. Additionally, they employed the paradigm of cloud computing in the design of the architecture to enhance system scalability. The goal of MALDAC is to provide a scalable implementation for highly adaptive agents, particularly in partially observable, stochastic and dynamic environments. To enhance scalability, MALDAC uses a cloud-based service agent. To cope with adaptability, the architecture uses emotional models. It is a remarkable value of this research work that it intends to establish a cognitive architecture by integrating the robustness of multi-agent learning and the adaptability of emotional learning. Computer aided design and computer aided engineering systems are typically built around different shape model representations. If the geometric model is proper, finite element models can be generated by a semi- or fully automatic procedure called meshing. Sometimes, however, this is not possible due to completeness and representation conversion issues. Therefore, Okba Hamri, Jean Claude Léon, Franca Giannini and Bianca Falcidieno propose a mixed shape representation in their paper entitled ‘Computer aided design and finite element simulation consistency’. Their essential construct is a high-level topology, which connects the specific geometric models in context. Representation conversion issues, combined with the issue of maintaining the meaning of high-level modeling entities, come along with feature-based models. The paper proposes an innovative approach to finite element simulation model preparation based on a mixed representation, which helps reduce the time needed by the model preparation process and maintain the consistency between the CAD and simulation models. The new methodology enables the analyst to selectively choose and extract desired geometric entities from several sources of
input shapes (CAD models, form feature models, and pre-existing meshes) for creating a FEA model. The authors claim that the proposed approach can be implemented in any commercial package. Moving into the era of 3D airborne design and visual representations raises new research questions and new research problems for scientists, developers and users. In the paper entitled ‘The effect of visual feedback on learnability and usability of design methods’, Ronald Poelman, Zoltán Rusák, Alexander Verbraeck and Leire Sorasu Alcubilla investigate the cognitive aspects of designing free-form shapes using different visualization devices and techniques. The study found a strong correlation between personal expertise and the learning effort needed. Interestingly, the authors found that the participants did not expect the studied experimental technologies to show up in industrial design environments within 5 years. Another noteworthy observation is that the participants of the experiments believed that head-mounted displays have more potential in 3D design than holographic displays. Finally, it is concluded that further quality improvements are needed in order not to overload the perception and cognition of 3D images with biases originating in not yet fully matured technologies. The paper of Christophe Merlo and Nadine Couture, entitled ‘A user-centered approach for a tabletop-based collaborative design environment’, deals with the issue of direct interaction between team members. The major research question addressed is how the collaboration of a co-located engineering design team can be supported by digital technologies. Despite the academic efforts, collaborative work is not yet supported sufficiently in most companies. The goal of the authors is to propose new tools that allow handling 2D/3D objects in three dimensions and both physical and virtual interactions. The
proposed solution was tested in user studies based on various scenarios. Though the proposed tabletop technology does not stretch the current knowledge and technology boundaries, and suffers from multiple limitations according to the observations of the authors, the new way of interacting has led to performance improvements. The last paper, entitled ‘The upcoming and proliferation of ubiquitous technologies in products and processes’ and co-authored by Bart Gerritsen and Imre Horváth, discusses the paradigm and essentials of ubiquitous computing, as well as the latest generation of these technologies. The main assumption of the authors is that ubiquitous technologies can revolutionize not only user products, but also the support tools for design and engineering. The objective is to provide optimal support for designers and engineers in their daily creative and analytic activities in a cooperative, proactive, adaptive, context-sensitive and personalized manner. The paper gives an almost exhaustive survey of the current state of the art in terms of using ubiquitous technologies in smart products, ambient environments and creative processes. The authors argue that wireless sensor networks are, and will most probably remain, the strongest enabler, together with bio-engineering, multifunctional materials, smart agent technologies, and semantic information elicitors. They propose that further research is needed into truly cognitive environments, which reduce the information overload and intellectual complexities imposed on human beings. Finally, we would like to express our gratitude to all authors and reviewers who contributed to this special issue with original papers or constructive peer reviews, respectively. Their cooperation during the review and revision process is very much appreciated.
Regine W. Vroom1, Imre Horváth1, Ferruccio Mandorli2
1 Faculty of Industrial Design Engineering, Delft University of Technology, the Netherlands
2 Faculty of Engineering, Università Politecnica delle Marche, Italy
Regine W. Vroom received her M.Sc. diploma in industrial design engineering (1986) from the Delft University of Technology in the Netherlands. She worked for Volvo Car between 1985 and 1987 as a computer-aided design analyst. In 1987 she became an assistant professor at the Delft University of Technology, and she earned a Ph.D. title (2001) from the same university. She has more than 25 years of experience in teaching industrial design engineering to bachelor and master students and was the first to be awarded the “Best Design Teacher Award” of the faculty (1994). In 1996 she became a member of the Faculty Board of the Faculty of Industrial Design Engineering, a position she held for several years; thereafter she was the Educational Quality Manager. Alongside her employment at the university, she has also worked as a consultant in information and knowledge handling within product development for SMEs. She was a research reviewer for the European Union and is experienced in chairing visitation committees for bachelor and master design programmes. Her primary research interests are in conceptual design engineering, information and knowledge capturing, organisation, handling and communication within product design, methodological and managerial aspects of product development, design tools, and ubiquitous design engineering.
Imre Horváth (1954) received his M.Sc. diploma in mechanical engineering (1978) and in engineering education (1980) from the Technical University of Budapest, Hungary. He worked for the Hungarian Shipyards and Crane Factory between 1978 and 1984 and held various faculty positions at the Technical University of Budapest between 1985 and 1997. He earned a dr.univ. title (1987) and a Ph.D. title (1994) from the TU Budapest, and a C.D.Sc. title from the Hungarian Academy of Sciences (1993). Since 1997 he has been a full professor of Computer Aided Design and Engineering at the Faculty of Industrial Design Engineering of the Delft University of Technology. From 1 January 2005 until May 2007 he was the director of research of the faculty. He initiated the Tools and Methods of Competitive Engineering (TMCE) International Symposia and has been their general chairman for 14 years. He has served in various positions on the Executive Committee of the CIE Division of the ASME. He obtained a Doctor Honoris Causa title from the Budapest University of Technology and Economics, Hungary, in 2009, and a Professor Honoris Causa title from the University of Miskolc, Hungary, in 2010. His primary research interests are in the philosophical and theoretical aspects of design research, computer support of experiential design, design support tools based on ubiquitous technologies, formalisation and structuring of design knowledge, and product innovation based on technological affordances in social contexts. As an educator, he is currently interested in computer application in conceptual design, integrating research into design education, and heterogeneous platform-based learning environments.
Ferruccio Mandorli received his M.Sc. diploma in Computer Science (1990) from the University of Milan and a Ph.D. title in Engineering of Industrial Production (1995) from the University of Parma. In 1992 he was a guest researcher at the University of Tokyo (Kimura Laboratory, Department of Precision Machinery Engineering, Faculty of Engineering). Between 1990 and 1998 he worked as a consultant in the field of Information Technology applied to Engineering for several Italian companies.
In 1998 he became a researcher at the Faculty of Engineering of the Università Politecnica delle Marche, and in 2006 he obtained a full professor position at the same institution. He teaches Mechanical Drawing, Computer Aided Design and Virtual Prototyping. He is a member of the board of the Ph.D. school on Engineering Science of the Università Politecnica delle Marche. His primary research interests are related to tools and methods supporting engineering design, with particular reference to design automation applications, lifecycle assessment and product lifecycle management.
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, 685-695 UDC 658.5:303.433.2
Paper received: 27.04.2010 Paper accepted: 19.10.2010
Practical Applications of Set-Based Concurrent Engineering in Industry
Dag Raudberget*
Department of Mechanical Engineering, School of Engineering, Jönköping University, Sweden
* Corr. Author's Address: School of Engineering, Jönköping University, Box 1026, 551 11 Jönköping, Sweden, dag.raudberget@jth.hj.se
Set-Based Concurrent Engineering is sometimes seen as a means to dramatic improvements in product design processes. In spite of its popularity in the literature, the number of reported applications has so far been limited. This paper adds new information by describing implementations of Set-Based Concurrent Engineering in four product-developing companies. The research took a case study approach, with the objective of investigating whether the principles of Set-Based Concurrent Engineering can improve the efficiency and the effectiveness of the development process. The study shows that set-based projects can be driven within an existing organization, if given proper support. The participants claim that a set-based approach has positive effects on development performance, especially on the level of innovation, product cost and performance. The improvements were achieved at the expense of slightly higher development costs and longer lead times. However, the positive effects dominate, and the companies involved intend to use Set-Based Concurrent Engineering in future projects when appropriate. ©2010 Journal of Mechanical Engineering. All rights reserved.
Keywords: set-based concurrent engineering, set-based design, design method verification, lean product development, case study, industrial collaboration.
0 INTRODUCTION
Set-Based Concurrent Engineering (SBCE) is a product development methodology that has been the subject of considerable interest [1] to [9]. Some authors claim SBCE to be four times more efficient than traditional phase-gate processes [4] to [6]. In spite of the vast body of related research, the published applications of SBCE have so far been limited to the primary study at Toyota Motor Corporation, where the methodology was discovered and developed [1]. According to the literature [1] to [11], SBCE has many organisational implications, requiring most processes and working methods to be changed. It is considered a highly integrated part of the “Lean Product Development System” [4] to [6]. The lean development system uses different means for the management of staff and projects and for decision making. In the lean development context, “set-based” is synonymous with working with multiple solutions simultaneously, systematically exploring trade-offs between different alternatives and using visual knowledge. From the view of traditional development, a Set-based approach can be considered inefficient [7], which requires a shift in the view of design: SBCE starts with multiple design alternatives, but as opposed to traditional design, it allows more than
one design to proceed concurrently. Decisions are made by eliminating the weakest designs, allowing the process to narrow in slowly on a solution. Since SBCE is usually considered incompatible with traditional models [4] to [6], practical applications of SBCE in companies using a phase-gate model could be challenging.
1 RESEARCH APPROACH AND RELATED LITERATURE
The framework for the research was based on six pilot cases in four firms. An active research strategy was used, with workshops at the participating companies, studying the development costs and use of resources, the characteristics of the resulting products and development process metrics. The purpose was to improve the product development processes of the companies. The objective was to investigate whether the Set-Based Concurrent Engineering principles could be implemented successfully in different environments, and to observe the effects on the development process.
1.1 Research Approach
The research approach was inspired by DRM, Blessing's framework for Design Research
Methodology [12]. In the study, the first three steps of DRM had already been carried out through the discovery and formulation of SBCE. One difficulty was that the original success criteria were not known. However, in DRM most criteria are formulated in order to support the initial research process: the descriptive study was already published by Ward [1] and [8], and the prescriptive study resulted in the formulation of the principles of SBCE [7]. The remaining criteria are formulated to verify the findings of the prescriptive study. In this case, this meant finding criteria that show to what extent the three principles lead to more successful products and/or a better development process, and at what expense.
1.2 Related Applications of Set-Based Concurrent Engineering
The research process started with a survey of academic and other literature, concluding that the SBCE principles are not widely used. Apart from the original case [1] to [3], only the use of single principles has been recorded in industry. The first theoretical description of Set-based design was made by Ward [13]. Through studies of Toyota's product development processes, the term SBCE was established [1]. Later, the research diverged in different directions, and the term “set-based” now has a different meaning for different authors. A common application is the mathematical modelling or optimisation of some aspects of product development [14], sometimes in an industrial collaboration [10]. Another application is the management of product development. According to Morgan [5], SBCE is an important component of the lean framework. Ward [4] and Kennedy [6] characterise SBCE as one of four components of Lean Product Development. Different branches of “Lean development” have evolved, based mainly on results from the primary SBCE research and on the work of Womack et al. [15] and [16]. There are also prior scientific studies of related industrial implementations in the “Lean Aerospace Initiative” [17], but it does not address the same questions as the SBCE principles. The focus in these projects has been to re-engineer the development process by identifying waste and maximizing value-adding activities, or improving
the information flow and the reduction of engineering cycle times. So far, we have not found any scientific studies of SBCE implementations in companies apart from the original case [1] to [3]. However, one article [10] uses input from industry to model and optimise a product and thereby applies parts of SBCE in the form of multiple solutions and broad specifications. A comprehensive survey of Lean engineering in industry was written by Baines et al. [18].
1.3 Project Management of Set-Based Concurrent Engineering
Project management in SBCE is different from the practices of Concurrent Engineering, using a strict functional organisation [1] to [6]. The author would like to emphasize that this is a contradiction to the Concurrent Engineering approach [2] and [19]. Here, constraints from all departments are considered at the beginning of the process, and projects take over responsibilities traditionally owned by the functional units. The Set-based development process converges step-wise towards a solution acceptable to all stakeholders through a series of “integration events”. These are decision points equivalent to traditional gate reviews; however, the purpose is not to report and act on project status, but rather to trade off and eliminate solutions using available data and knowledge of the product. If there is not enough information available to exclude a solution confidently, it will remain in the set to be further investigated. Contrary to gate reviews, integration events can be held at different times or locations for different subsystems.
1.4 The Three Principles
SBCE relies on three principles [7], which implies that SBCE needs to be adapted to each individual firm. Even though the principles are simple, they provided a useful guideline for the practical adaptation of the firms' design processes:
1. Map the design space:
• define feasible regions,
• explore trade-offs by designing multiple alternatives,
• communicate sets of possibilities.
2. Integrate by intersection:
• look for intersections of feasible sets,
• impose minimum constraint,
• seek conceptual robustness.
3. Establish feasibility before commitment:
• narrow sets gradually while increasing detail,
• stay within sets once committed,
• control by managing uncertainty at process gates.
The first principle implies a wide search for possible solutions without taking the needs or opinions of other departments into account. The second principle integrates the different solutions by eliminating those that are not compatible with the main body of solutions.
Fig. 1. The principles of Set-Based Concurrent Engineering (adapted from [9])
The last principle is a commitment to develop solutions that both match the other sets and fulfil the current specifications. Elimination of the remaining solutions is done by repeated development, tightening of specifications and application of the second principle. In Fig. 1, a “set” represents a palette of different solutions to a specific function or problem and can be seen as a family of design proposals. This is opposite to the widely used traditional “Point-based” [1] development methodology, where the selection and approval of one “best” specific product solution is done early, when the knowledge of the product is shallow.
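To make this narrowing logic concrete, the following minimal Python sketch (not taken from the paper; the component names, masses and compatibility data are hypothetical) keeps two design sets, tightens a mass specification step by step, and eliminates an alternative only when available facts, such as a value outside the current range or incompatibility with every member of the other set, exclude it:

```python
# Illustrative sketch of set narrowing, with hypothetical data.
from itertools import product

housings = {"cast": 12.0, "welded": 16.5, "extruded": 19.0}  # alternative -> mass [kg]
cooling = {"air", "liquid"}
incompatible = {("extruded", "liquid")}      # knowledge gained from tests/trade-offs


def narrow(housings, cooling, mass_limit):
    """Eliminate only what the facts exclude; everything else stays in the sets."""
    # Tighten the specification: reject housings proven to exceed the current limit.
    housings = {h: m for h, m in housings.items() if m <= mass_limit}
    # Integrate by intersection: keep alternatives appearing in a feasible combination.
    feasible = [(h, c) for h, c in product(housings, cooling)
                if (h, c) not in incompatible]
    return ({h: housings[h] for h, _ in feasible}, {c for _, c in feasible}, feasible)


for mass_limit in (20.0, 17.0, 13.0):        # specification tightened at each integration event
    housings, cooling, combos = narrow(housings, cooling, mass_limit)
    print(mass_limit, sorted(combos))
```

Run as written, the number of open combinations shrinks from five to four to two, mirroring the gradual convergence towards a final solution sketched in Fig. 1.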
1.5 The Decision and Specification Management Process of Set-Based Concurrent Engineering
SBCE uses a different approach than traditional point-based selection: instead of using different methods for ranking and selecting one or a few concepts for further development, the SBCE decision process is based on the rejection of the least suitable solutions. Rather than making an educated guess of the performance of a future design, SBCE carries forward all implementations that cannot yet be eliminated. This is a robust process, since the consequences of an incorrect choice are fairly small. Rejecting the third-worst solution instead of the worst is less critical compared to the magnitude of failure if the third-best alternative is picked for development instead of the best. An industrial case study [11] described the SBCE decision process and concluded that the method gives different results compared to a traditional Pugh method of controlled convergence. Another aspect is the efficiency of the SBCE decision process. Contrary to the selection of alternatives, the elimination of alternatives can be done confidently from incomplete information, as long as it is based on facts about what is not possible. The management of specifications is also an important distinction from traditional development [2]. This approach aims at an optimal system design, rather than an optimization of components under fixed constraints. In SBCE, the individual requirements are not fixed numbers but rather ranges with upper and lower limits representing the design space. This extra degree of freedom allows designers to compromise on different aspects. The requirements are gradually narrowed down to a final value, but are flexible during the process.
2 CASE STUDY SETUP
The study was a three-year joint venture between Swedish industry, the School of Engineering in Jönköping and the Swerea IVF research institute. Information was collected from semi-structured interviews with managers and design engineers, through studies of documents, and by a
questionnaire at the end of the projects. Other data sources were the working meetings at individual companies between project members and researchers. 2.1 Company Characteristics The companies have different sizes and represent different types of businesses, as seen in Table 1. One common characteristic is that all of them have a large proportion of engineers compared to other types of employees. The companies also have a small share of outsourcing of design and none of them was using Lean Product Development tools to a significant extent. Company A designs, sells and produces electronic equipment for traffic monitoring and control. The systems consist of mass produced units placed in vehicles plus an infrastructure that collects data for invoicing. The customers are cities or governments and the systems are tailor made for each application. All manufacturing is outsourced, but most of the design and software development is done within the company. Company B designs, sells and produces equipment for paper mills and graphic industry on
the international market. The products are customized and manufactured in low volumes, where local suppliers make components and modules. The final assembly, programming and testing are done in-house prior to shipping. Installation at the customer sites is usually done by the same people who assembled the product at the home site. Company C is a first-tier automotive supplier with in-house production and design capabilities. The products are built to manufacturer specifications around a core technology. The customers select suppliers through quotations, and usually the lowest bidder wins the deal. Most of the production is highly automated, with manual final assembly according to Lean Production principles. Company D is a manufacturer of heavy trucks and engines, and the majority of the design work is done at one central site. The company is refining its product development methods and has continuously invested resources in different improvement projects. Manufacturing is carried out with Lean Production practices in plants around the world.
Table 1. Company characteristics and project goals (values listed in the order: Company A / Company B / Company C / Company D)
Business: Electronic systems / Graphic industry products / Automotive supplier / Heavy trucks
Business size: Small / Small / Medium / Large
Type: Orig. Equipm. manuf. / Orig. Equipm. manuf. / First tier supplier / Orig. Equipm. manuf.
Customer adaption: Tailor-made / Modular design / Tailor-made / Modular design
Manufacturing volume: Mass produced, individually programmed / Low / Large / Large
Outsourced mfg vol.: High / Low / Low / Low
Business tech. speed: High / Low / High / High
End user adaption: Low / Modular adaption / No / No
Number of pilot projects: 1 / 2 / 1 / 2
Pilot project purpose: Develop product / Develop product / Pre-study / Develop subsystem
Project 1 results: Product on the market / Product on the market / Product knowledge / Under development
Project 2 results: - / Product knowledge / - / Under development
2.2 Project Setup
For each participating company, a strategy for implementing SBCE was developed based on the current process. The projects were supported by upper-level management, and the teams were allowed to bypass the ordinary development processes whenever appropriate for the project. A core team of managers and engineers from across the organization was given an introduction to SBCE. This created a broad acceptance for the methodology and served as a platform for finding appropriate pilot cases. Based on each company's current development practice, an outline for the implementation of SBCE was suggested by the researchers. In cooperation with the firms, the outline was further refined and adjusted to the individual cases. The researchers also introduced the companies to tools and methods for different tasks in the projects, and information on practices from the parallel cases was spread between the firms. The current development practices were already documented in design manuals or in the project models. It was found that the participating companies used phase-gate development models, freezing the concept or product structure at early stages of development. This indicated a commitment to an early design which did not fit the SBCE principles, and changes to the project models were made.
2.3 Development Metrics
To be able to evaluate the effects of SBCE with a reasonable effort, it was decided to explore readily available information. In our case, the companies were interested in two questions:
1. Is SBCE improving the chances of creating successful products compared to the current development model?
2. Is SBCE improving the efficiency of the design process compared to the current development model?
The intention was that the metrics should cover different aspects. Some metrics were already used in current development, such as project cost, product cost and project lead time. Other familiar aspects were the risk of project failure and the performance of the product in relation to requirements.
Two metrics were found in the literature [20]: the number of unwanted engineering changes and warranty costs. The last two metrics were suggested by the researchers, evaluating the robustness to specification changes and the level of innovation. The future expectations of the methodology were also investigated. The purpose was to understand why the participating firms believe that SBCE will affect product development performance in subsequent projects.
3 IMPLEMENTATIONS AND RESULTS
In order to apply the principles of SBCE to the pilot cases, changes were made to the product development processes. The purpose was to match the intentions of the principles to the different models of development in each company. The implementation also included changes of practical working methods and decision-making.
3.1 Adapting Current Design Processes to the Set-Based Concurrent Engineering Principles
Map the design space: The first two steps, “Define feasible regions” and “Explore trade-offs by designing multiple alternatives”, were not new to the designers. Innovative exploration of the design space was seen as the natural start for any design work. However, in SBCE this extends to the design space of all technical disciplines. Exploring trade-offs is not a specific SBCE tool, but it is used systematically. Nevertheless, it was not common practice to systematize project-specific results of tests and simulations into general graphs. Communicating the sets of possibilities by sharing unfinished designs was somewhat awkward in the sense that designers are used to presenting one well-founded technical solution.
Integrate by intersection: Looking for intersections of feasible sets was a straightforward task in most cases. In some pilot projects the sets were developed by different organizational functions such as electronics, software or mechanics. When the sets of the different functions were brought together, it was possible to identify solutions incompatible with the main core. Imposing minimum constraint meant starting by setting a broad target for the most important
specifications of each set and leaving the rest unconstrained. Conceptual robustness was achieved by promoting sub-system solutions that were insensitive to changes in other sub-systems.
Establish feasibility before commitment: Narrowing the sets is a key feature of SBCE, which can be carried out in different ways depending on the maturity of the evolving product. This corresponds to the “screening” and “scoring” events found in textbooks on product development. In the projects, narrowing was typically done on the basis of test results and by the adjustment of specifications. When the number of solutions in the sets decreases, more effort is put into each solution to increase the level of detail. When many solutions can satisfy the specifications, more constraints are added and alternatives are eliminated. Staying within the committed sets ensures that all solutions match the sets developed by other departments, and that no expansion outside the specifications will occur. Control of the development process was done by a gradual review of specifications, continuously or at the process gates. Since requirements are tightly integrated with each other and evolve with the product, there is no way of knowing all the final values in advance. In one pilot project, there were three final designs, and the choice was made after producing three different series of products and evaluating their performance.
3.2 Data Acquisition and Processing
Information gathered from the interviews and the questionnaire was compared to the companies' standard development. The respondents were senior designers or managers with extensive experience of product development. On each parameter, the results were given 1 point if SBCE created a better result than current development practice, 0 points if the output was equal and -1 point for an inferior output. The average and individual values were plotted in diagrams; the average value is one point only if all companies experience improvements on a parameter. In some cases, parameters had to be estimated, since not all projects have yet reached the market. The estimates were made by
experienced engineers and managers based on the project results thus far, assuming that the remaining part of the project would follow the same path as the first.
3.3 Effects on Costs and the Use of Resources
The effects on project lead time, development costs and product cost are displayed in Fig. 2.
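As a small, purely illustrative sketch of the scoring scheme described in Section 3.2 (the parameter names follow the paper, but the scores below are hypothetical and not the study's data), the per-parameter averages plotted in Figs. 2 to 4 can be computed as:

```python
# Hypothetical +1/0/-1 scores given by four companies for three parameters.
scores = {
    "Lead time":        [-1, 0, -1, 0],
    "Development cost": [-1, -1, 0, 0],
    "Product cost":     [1, 1, 1, 0],
}

for parameter, values in scores.items():
    average = sum(values) / len(values)   # reaches +1.0 only if every company improved
    print(f"{parameter}: {average:+.2f}")
```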
Fig. 2. Costs and use of resources; a positive result indicates an improvement compared to the current development practice
Lead time: The pilot projects had slightly longer lead times than comparable projects. There could be many reasons for this, but one reason mentioned by the engineers is that they were not used to this way of working. The extensive documentation also took more time than usual.
Development costs: In all projects, extra resources were allocated to enable a thorough exploration of parallel solutions in the early phases of development. This caused the pilots to have slightly higher average development costs compared to standard projects. One manager commented that the budget increase was surprisingly small compared to the increased knowledge of the product.
Product cost: In all but one project, the product costs were reduced. One of the companies reported a 40% decrease in product cost compared to the initially calculated cost. This was achieved by having a high-risk product architecture with inexpensive components in parallel to safer alternatives. The thorough
evaluation of multiple combinations proved that the low-cost solution was good enough.
3.4 Effects on the Product Characteristics
The effects on product performance, the robustness to changes in requirements and the level of innovation are displayed in Fig. 3.
Fig. 3. Product characteristics
Product performance: In all but one case, the performance was improved compared to the output of the companies' regular methods. In the case where performance was not improved, the company responded that once the specifications were met, the focus was on decreasing the cost of the product.
Robustness to changes in specifications: In the study, the robustness was improved by the SBCE decision principle, and by the knowledge of key characteristics gained through the evaluation of different solutions.
Level of innovation: All companies responded that the innovation level of their projects was improved. Again, the main reason mentioned was the parallel solutions: carrying a safe option in the set makes it easier to also develop an innovative version.
3.5 Estimated Effects on the Development Process
Based on the project results thus far, the effects of SBCE on the development process were evaluated (Fig. 4). These parameters are a result of the product development process, but are not available until the product has been produced and used. Therefore, estimations had to be made in the cases where the products were not yet manufactured. The firms estimated the project risk, warranty costs and the number of engineering changes in ongoing production.
Fig. 4. Estimated development process metrics
Project risk: The estimated risk of project failure was considered to be improved in all but one case. In that case, the manager argued that SBCE does not improve the level of risk in cases where previous knowledge cannot be used, so the project risk is equal to the risk of the current practice.
Warranty costs: Based on the confidence in the technical solutions created in the projects, the participants answered that their warranty costs would improve moderately in the future.
Number of engineering changes: The number of unwanted design changes to ongoing production was also estimated to be lower than with current practice. One comment was that all the solutions were evaluated from different points of view, rather than selecting one alternative for review.
3.6 Future Expectations
Another way of investigating the usefulness of Set-based principles is to see whether the companies intend to use SBCE in future projects. The view is optimistic (Fig. 5), and all participants will use the methodology in upcoming projects where appropriate.
Lead time: Most companies expect that SBCE will give them shorter lead times in the future. One reason for this is the improvement in failure rate seen in the pilot projects, and the fact that the organization will learn the different practices and, therefore, work faster. Most
companies also argued that the increased reuse of experiences from prior projects would speed up the projects, but none of the pilot companies had implemented new routines for capturing design knowledge.
Fig. 5. Future expectations on SBCE
Development costs: Future development costs are also considered to be lower than today. One company commented on how SBCE gave a stability to the product development process that helped them to avoid expensive mistakes.
Competitiveness: All companies expected their competitiveness to increase in future projects.
Proficient personnel: The companies believe that SBCE will in the future create more proficient engineers. This optimistic view is based on the assumption that the engineers will somehow capture more useful knowledge from the exploration of parallel solutions.
3.7 Comparison to Earlier Studies
A study introducing Concurrent Engineering in industry [19] arrived at the conclusion that a change in development practices must be backed up by the management. This statement is also found in other literature [4] to [6], [9] and [18], and our projects arrived at the same conclusion. No firms had Set-based development processes before the pilot projects. This observation is well aligned with earlier surveys [9], concluding that Set-based practices are not well established in industry. Another feature of SBCE is the reuse of design knowledge [1] to [6]. In our study, the documenting process was ad hoc, depending on individual incentives. There is, therefore, no support for a higher degree of knowledge reuse or for more knowledgeable employees. A Set-based strategy is recommended for complex systems [2], and a Point-based strategy for stable, well-understood environments. In the study, the designers state that a Set-based approach is preferable most of the time, not just for complex problems. This gap in opinion needs to be resolved in order to find reliable criteria for choosing between these different development strategies.
4 RECOMMENDATIONS FOR THE INTRODUCTION OF SET-BASED CONCURRENT ENGINEERING
At the beginning of the study, there were six participating companies. Two of them were not successful. The reason could be that the implementation of SBCE did not give the expected results, but in the opinion of the author, the main reason was that neither of these companies had ensured commitment from the management. Hence, to introduce SBCE, a prerequisite is appropriate support from within the organization. A full-scale introduction in a firm is more complicated than a pilot study: it requires fundamental changes in development and business processes as well as training for all personnel. A summary of the recommendations for introducing SBCE is found in Table 2.
4.1 When Should Set-Based Concurrent Engineering Be Used?
In the opinion of the designers, a Set-based approach could, and should, be used most of the time. The exceptions are very small projects with an obvious solution and tight schedules. The participants mentioned applications where SBCE would be extra valuable compared to the processes used today. Projects containing unproven or new technology were seen to have a large potential for SBCE, for two main reasons: the parallel development of members of the different sets reduces the statistical rate of failure, and the robust method of selection increases the chances of developing the right alternative. Another suitable application is the case of unstable market requirements, or when the
customer provides unclear specifications. Situations where there is a potential for innovation were also mentioned: SBCE enables the introduction of innovations in products by eliminating the trade-off between risk and innovation.
4.2 Barriers for the Implementation of Set-Based Concurrent Engineering
One of the barriers found in the study is the tight schedules of current parallel projects, resulting in a focus on necessary short-term activities. Allocating time and resources to implementing new methods is not sufficient to achieve success; priority from management must also exist. The current design processes can also hinder the implementation. The first implementation of SBCE in Company C, by a re-engineering of the design process, was not successful. The approach was to add checklists and tasks derived from the SBCE principles to the standard development process. In the next attempt, the designers only used the methods that seemed to suit the tasks at hand, with satisfactory results. Another barrier is the attention to the waste occurring in the design of many parallel alternatives. Designing one fair solution is
enough for most applications; however, this is based on the assumption that it is possible to find and select the correct solution before actual development. The case of not controlling the specifications is a hard barrier to the introduction of SBCE. For Company C, the value of the SBCE project was to identify the knowledge gaps in their core technology. Knowing the limits of their technology will make it possible to have an appropriate set of solutions ready for future offers. Human barriers were also identified: some people may not be interested in changing the way they work, for example in communicating unfinished designs.
5 CONCLUSIONS
This study shows that Set-based projects can be implemented within an organization practicing traditional product development with phase gates. The participants claim that SBCE has positive effects on product development performance and on the resulting products. The improvements are especially dominant on the level of innovation, product cost and product performance.
Table 2. Recommendations for the introduction of Set-Based Concurrent Engineering in pilot projects
Sidestep current dev. practices: Allow teams to bypass the standard development processes whenever appropriate. Avoid freezing concepts or product structures at early stages of development.
Train engineers and managers: Create a broad acceptance for the methodology by training a core team of managers and engineers. Only select individuals that are willing to participate.
Adapt and use the three principles: Match the intentions of the principles to the tasks at hand, without taking any short-cuts.
Allow flexibility in specifications: Set broad targets initially for the most important specifications and leave the rest unconstrained. Use the loosest possible constraints to create flexibility.
Narrow sets step-wise: Gradually reduce the size of the sets as soon as information is available.
Decisions by elimination: Reject solutions on tangible reasons only. Base decisions on results of tests, simulations, technical data, trade-off curves or other knowledge.
Include a low-risk member in each set: Use back-up solutions for innovative or low-cost members of a set.
Avoid process design: Postpone the formulation of a new development process until the experiences of SBCE are clarified.
The improvements were achieved at the expense of a slightly lower efficiency, measured in terms of development costs and lead time. However, for future projects, the firms are also anticipating SBCE to create more proficient engineers. This opinion cannot be supported by this study: no firm has implemented new means for capturing and reusing knowledge and experiences.
At one point the project results differ substantially from the results in the literature: some authors claim that SBCE and other practices from the Toyota product development system can quadruple the productivity of the design process [4] to [6]. Even though improvements of this magnitude were not seen, the companies will continue to use SBCE in the future. At present, one of the firms has started to implement SBCE with the goal of putting it into practice in the whole organization. Future studies will show if SBCE can live up to the high expectations for projects to come.

6 ACKNOWLEDGMENTS

The author would like to express his gratitude to Professor Staffan Sunnersjö and the participating companies for sharing their experiences, especially Dr. Peter Andersson and Tech. Lic. Ingvar Rask of the SWEREA IVF Research Institute. The funding was shared between the participating industry and the Swedish Governmental Agency for Innovation Systems (VINNOVA) under the program “Innovativ Produktutveckling”, grant number 2005-02880.

7 REFERENCES
[1] Ward, A.C., Liker, J.K., Sobek, D.K., Cristiano, J.J. (1994). Set-based concurrent engineering and Toyota. ASME Design Theory and Methodology, p. 79-90.
[2] Sobek, D.K. (1997). Principles that shape product development systems - a Toyota-Chrysler comparison. PhD thesis, University of Michigan, Ann Arbor.
[3] Morgan, J.M. (2002). High performance product development: A systems approach to a lean product development process. PhD thesis, University of Michigan, Ann Arbor.
[4] Ward, A.C. (2007). Lean product and process development. Lean Enterprise Institute, Cambridge.
[5] Morgan, J.M., Liker, J.K. (2006). The Toyota product development system: integrating people, process, and technology. Productivity Press, New York.
[6] Kennedy, M., Harmon, K., Minnock, E. (2008). Ready, set, dominate: implement Toyota's set-based learning for developing products and nobody can catch you. Oaklea Press, Richmond.
[7] Sobek, D.K., Ward, A., Liker, J. (1999). Toyota's principles of set-based concurrent engineering. Sloan Management Review, vol. 40, no. 2, p. 67-83.
[8] Ward, A., Liker, J.K., Cristiano, J.J., Sobek, D.K. (1995). The second Toyota paradox: How delaying decisions can make better cars faster. Sloan Management Review, vol. 36, p. 43-61.
[9] Madhavan, K., Shahan, D., Seepersad, C.C., Hlavinka, D.A., Benson, W., Center, S.L.P. (2008). An industrial trial of a set-based approach to collaborative design. Proceedings of the ASME IDETC, paper 49953.
[10] Bernstein, J.I. (1998). Design methods in the aerospace industry: looking for evidence of set-based practices. Massachusetts Institute of Technology, Cambridge.
[11] Raudberget, D. (2010). The decision process in set-based concurrent engineering - an industrial case study. Proceedings of the 11th Design Conference, p. 937-946.
[12] Blessing, L.T.M. (2002). What is this thing called design research? Annals of International CIRP Design Seminar, vol. 52, no. 1.
[13] Ward, A.C. (1989). A theory of quantitative inference for artifact sets, applied to a mechanical design compiler. PhD thesis, Massachusetts Institute of Technology, Cambridge.
[14] Rao, S.S., Lingtao, C. (2002). Optimum design of mechanical systems involving interval parameters. Journal of Mechanical Design, vol. 124, no. 3, p. 465-472.
[15] Womack, J.P., Jones, D.T., Roos, D. (1990). The machine that changed the world. Rawson Associates, New York.
[16] Womack, J.P., Jones, D.T. (1996). Lean thinking: banish waste and create wealth in your corporation. Simon & Schuster, New York.
[17] McManus, H.L., Haggerty, A., Murman, E. (2007). Lean engineering: a framework for doing the right thing right. Aeronautical Journal, vol. 111, no. 1116, p. 105-114.
[18] Baines, T., Lightfoot, H., Williams, G.M., Greenough, R. (2006). State-of-the-art in lean design engineering: a literature review on white collar lean. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, vol. 220, no. 9, p. 1539-1547.
[19] Kušar, J., Bradeško, L., Duhovnik, J., Starbek, M. (2008). Project Management of Product Development. Strojniški vestnik - Journal of Mechanical Engineering, vol. 54, no. 9, p. 588-606.
[20] Haque, B., Moore, M.J. (2004). Measures of performance for lean product introduction in the aerospace industry. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, vol. 218, no. 10, p. 1387-1398.
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, 696-706 UDC 658.5:338.4
Paper received: 27.04.2010 Paper accepted: 21.10.2010
Adaptive Change Management for Industrial Product-Service Systems
Michael Abramovici - Fahmi Bellalouna - Jens Christian Göbel*
Information Technology in Mechanical Engineering (ITM), Ruhr-University Bochum, Germany

Compared to single physical products or services, Industrial Product-Service Systems (IPS²) are characterized by a very high degree of dynamic changes, not only during their planning, but throughout their entire life cycle. These changes have to be managed, tracked and documented by a change management process and supported by a change management system. As IPS² changes and change processes are very difficult to plan during the development phase of an IPS², existing static and deterministic change management solutions are not appropriate for IPS². This paper describes a new concept of an adaptive change management for IPS². The concept described allows for the appropriate redesign, adaptation and execution of change processes of an IPS² throughout its entire life cycle in order to carry out an IPS² change most efficiently with regard to time and costs.
©2010 Journal of Mechanical Engineering. All rights reserved.
Keywords: adaptability, adaptive processes, goal-oriented process modeling, intelligent agents, engineering change management (ECM), industrial product service systems (IPS²)

0 INTRODUCTION

An Industrial Product-Service System is defined as “an integrated industrial product and service offering that delivers value in use” [1]. This paper describes the concept of an adaptive change management for IPS². The development of this concept is based on a goal-oriented process modeling method that defines a business process by using loosely coupled intelligent agents. These agents are coupled during the process execution, in real time and depending on the main process issues. The developed concept has been prototypically implemented and validated by means of a case study. The concept presented in this paper has been developed in the research project Transregio 29 “Industrial Product-Service Systems – Dynamic Interdependency of Product and Service in the Production Area”.

1 CURRENT SITUATION

Competitive IPS² providers must be able to adapt their share of products and services within an integrated IPS² to quickly respond to unforeseeable changes in their environment throughout its lifecycle. Various factors, the so-called change drivers, can cause such changes. They can be technological (e.g. emergence of
new technologies), environmental (e.g. increasing shortage of resources), political (e.g. legislation amendments), social (e.g. new customer demands), or economic (e.g. decrease in customer demand due to the current economic downturn). Hence the prompt reaction to these unforeseeable changes along the overall IPS² lifecycle has a significant impact on the economic success of the companies involved in the IPS² network. This challenge can only be met by an adaptive engineering change management. In the last decades, several methods and standards such as part 4 of DIN 199 (Technical Product Documentation), ISO 10007:2003 (Guidelines for configuration management and Release Management) and Recommendation VDA 4965 (Engineering Change Management) have been developed for the management of technical changes. The focus of these methods and standards is the management of technical changes of physical products (not of services), which is why they are strongly geared towards the life cycle processes of these products. However, the life cycle processes of IPS² are much more complex than those of technical products. For instance, IPS² producers are as a rule also responsible for the delivery of IPS² and have to optimize and dynamically and promptly adapt them to customer needs during the use phase. Therefore, current change management methods and standards can only consider specific
*Corr. Author's Address: Mechanical Engineering (ITM), Ruhr-University Bochum, Universitätsstraße 150, 44801 Bochum, Germany, jenschristian.goebel@itm.rub.de
characteristics of IPS² to a small extent and can only support an efficient carrying out of IPS² change management processes in a limited way. The following points serve as examples for the most important weaknesses of current change management methods and standards with regard to IPS² change management:
• Existing change management methods focus exclusively on the development and manufacturing phases and neglect the delivery and use phases of the product lifecycle.
• Existing change management methods cannot sufficiently consider the complexity of IPS², which arises from the networking and mutual influence of technical products and services as well as from change dynamics during the delivery and use phases.
• Previous change management methods do not provide a fast reaction to changes that occur within the delivery and use phases of IPS².
• Existing change management methods only support or provide static and deterministic change processes.
• Existing change management methods do not provide an appropriate adaptation during the process runtime.
• Existing change management methods do not allow an integrated view of the product and service share of an IPS².
• Existing change management methods limit corporate innovation skills and the responsiveness to unforeseeable changes.
In contrast to today’s solutions, the required new change management approaches for IPS² should be applicable to the delivery and use phases as well. Furthermore, these new approaches should not prevent but facilitate the adaptability and changeability of IPS² throughout the entire life cycle.

2 REQUIREMENTS

Current obstacles to IPS² adaptability are deterministic and fix-planned static change management processes, which serve as a basis for the implementation of all activities within Engineering Change Management (ECM). Such processes limit corporate innovation skills as well as the responsiveness to unforeseeable changes. Within a given scope, adaptive change processes can make an important contribution to IPS² adaptability. They
automatically respond and immediately adjust to new conditions. Thus ECM process build-up and implementation priorities of the process activities must be determined automatically in real time and must be adequate to and in accordance with the conditions that apply at that time. Adaptive change processes should enhance IPS² adaptability via:
• an integrated consideration and analysis of the technical products and services as a hybrid performance package (IPS²) during ECM process implementation,
• a continuous and prompt optimization of the various interacting IPS² modules (i.e. technical product and service) to grant the best possible customer benefit,
• enhancement of the real-time responsiveness of IPS² providers (fast and adequate to the situation) to the unpredictable and permanently changing customer requirements during the IPS² delivery and use phases,
• real-time definition of executable ECM process activities and their execution priorities depending on ECM contents, context, objectives and the current conditions (i.e. adaptive process design and management),
• prompt configuration and immediate startup (e.g. continuous real-time plan-and-execute rather than static plan-the-execution) [2],
• the management of changes during IPS² development and delivery phases,
• taking into account all aspects of the complexity of IPS² during the change management, which arise through strong networking and the interdependency of IPS² modules and which are thus not directly visible to IPS² developers,
• taking into consideration the great uncertainties which arise in the IPS² development and delivery phases during the ECM process execution,
• ascertaining the effects and determining the spread of the IPS² module change on the whole IPS² and its environment throughout its lifecycle,
• auto-ascertaining ECM process variations in the implementation of IPS² changes and adaptations,
• integration and close interaction of all IPS² network partners during IPS² change (product manufacturers, service providers, IPS² providers, IPS² customers, etc.).
3 RELATED WORKS AND METHODS

3.1 Related Works

In recent years, numerous pieces of work have been carried out with a focus on change management. In the following, the outcomes of the most relevant research activities related to this work are listed and briefly discussed.
The work of Burmeister et al. [3] is one of the important works that deal with the issue of developing agile process modeling methods. For this, they combined agent technology and the goal-oriented modeling method to model and implement agile business processes. The developed approach was applied and validated in line with a case study in the domain of engineering change management of technical products. The demands arising due to the high complexity and the permanent changeability of IPS² were not taken into account in this work.
In several research projects, Eckert et al. [4] to [6] addressed the question of how designers can be made aware of the impact of a proposed change before they implement it. The main result of these projects is the implementation of a tool to evaluate change proposals during ongoing design processes in which the state of the development of parts is taken into account. This presents an extension of the Cambridge Change Prediction Method, which assesses the risk of changes propagating between two parts. These research works only concentrated on the changes of technical products arising during the design phase and did not consider those that arise during the delivery and use phases of technical products in relation to the added services.
Conrad et al. [7] also propose an approach to support the process of analyzing and assessing the effects of changes in the product development process. This approach is based on the CPM/PDD theory (Characteristics-Properties Modeling / Property-Driven Development) and the FMEA method (Failure Modes and Effects Analysis). The proposed approach only deals with the effect of changes in the CAD models during the design phase of technical products.
Amaral et al. [8] suggest an NPD model (New Product Development) named PDP Net, the singular characteristic of which is the integration between a business process reference, a maturity
model and a change management model in order to support the full product development change cycle. This work also focuses on the development phase of technical products and does not consider the special requirements of IPS² change management.
In order to investigate the behavior of change management processes in practical work and to develop a close-to-practice change management approach, several case studies [9] to [11] have been conducted in various industrial sectors. However, the aspects of an adaptive change management for IPS² were not in the focus of these approaches.
The research works presented on the topic of Engineering Change Management only focus on technical product design and do not consider in particular the needs of IPS² in the delivery and use phases. This paper aims at addressing these issues.

3.2 Current Process Management Methods

Since the 1990s, several process management approaches and methods (e.g. ARIS, SA, OMEGA, SADT) have been developed to design, administer, execute and control ECM processes and related data. They are commonly known as BPR (Business Process Reengineering), BPM (Business Process Management) or CPI (Continuous Process Improvement). These methods have been integrated into various IT tools (e.g. ARIS Design Platform, PAVONE Process Modeler, Bonapart) to support companies with the carrying out of company-specific ECM processes. In view of ECM process adaptability, and thus also of company adaptability, current process management methods, however, show the following weaknesses among others:
• The processes can only be mapped as fixed and static sequences or as a concatenation of activities.
• Existing Continuous Process Improvement (CPI) methods aim at improving the adaptation of process models to changing boundary conditions in order to optimize business processes [12]. Still, these methods also only allow the presentation of fixed sequences of activities. The adaptation to changed boundary conditions must thus be made a priori. This is a very time-consuming
process, as the process manager does not receive systematic support in their solution determination.
• The automation of processes by workflow management systems allows in principle for an adaptation of these processes. However, this adaptation is limited with more complex process models (both run and design time). This leads to processes that can be executed efficiently, but possess limited reactivity towards changes. This makes it even more difficult for a company to respond quickly to new standards and market situations [13].
• Objectives and goals of a business process are not defined and modeled explicitly and are thus not always visible. Hence, adaptations and changes to processes evoke the necessity to ascertain that, in the end, the original aim of the process is indeed reached.
• Process flexibility can only be achieved by defining additional process variants. That way processes become ever more extensive, unclear and complex.
• In most cases the executed processes do not correspond to those planned a priori, as the implementation of change processes is rather elaborate and expensive (“shadow processes” may arise).
• For the most part, defined and mapped processes only serve as information material (process wallpapers) and not as a working basis, because these processes only represent the ideal state of business activities. They are not designed to illustrate the real state.
Today, Work Flow Management Systems (WFMS) are used to execute modeled ECM processes. They are made available as additional modules to process management tools. The usual WFMS cannot support the adaptability of ECM processes. Their weaknesses include:
• WFMS can only apply process activities that have a fixed, predefined flow.
• Ad-hoc workflows and unspecified case-related workflows must be fully defined prior to process initiation. The Work Flow user must be aware of all alternatives for each individual process step. They must be able to interpret them independently and without IT support to define further process steps [14].
• ECM processes that are modeled by standard BPM tools cannot be directly adopted and executed by WFMS. As a rule,
implementation or programming work is required. Therefore, the ECM process startup is very time-consuming.
• ECM process changes and adaptations cannot be implemented promptly during runtime.
Adaptive ECM processes are important prerequisites for IPS² adaptability. To design ECM processes adaptively, new process management methods are required. These methods shall warrant a systematic process design and control. On the other hand, they shall leave enough space for creativity and permanent changeability. ECM processes shall thus be designed and executed in a way that ensures continuous IPS² adaptability to new and unpredictable situations. The new process management methods shall be goal-oriented, not activity-oriented. Thus, ECM processes are to be defined and modeled in real time, i.e. during process runtime, whilst taking into account newly occurring conditions and requirements. They should no longer be defined a priori as fixed processes. This will also render the entire process clearer and more intelligible, which is a prerequisite for a fast adaptation, change and transformation of ECM processes.

4 GOAL-ORIENTED PROCESS MANAGEMENT METHOD

This section introduces a new goal-oriented process management method for modeling adaptive processes. The method is based on the Business Process Management approach of [3], which was defined by Daimler AG and Whitestein Technologies. The aim of this new goal-oriented management method is to replace processes that are planned fixed, sequentially and a priori with dynamic, adaptive processes. When executed, the latter allow for near-independent and real-time responses in specified situations. In order to reach these goals, the processes are defined and modeled by the new method according to the following principles (Fig. 1):
• First and foremost, the processes shall capture and characterize the defined business goal independently of the solution. Goals can contain further sub-goals.
Fig. 1. Goal-oriented process management method
• Each goal is assigned a generic implementation plan, which is merely made up of independent tasks or activities without any predefined execution sequence or priorities.
• The specifications of the tasks or activities and the order in which they are carried out are determined during process execution, in real time and depending on the main process issues and the current situation (rules) of the process.
Within the process, the tasks or activities are defined as intelligent agents. They represent the adequate road to the (sub-)goal appropriately, independently and subject to the rules. They also provide the persons involved in the process with recommendations for decision-making in view of the occurrence of further process steps.
The agents described above are modeled as modular, intelligent services according to the BDI principle (Belief-Desire-Intention) [15]. The services are fitted with assumptions about their environment (Belief), knowledge of the target issue (Desire), and the purposes of how that issue can be reached (Intentions). To possess the required knowledge (Belief, Desire and Intentions) during process runtime, the intelligent services shall have access to a process ontology. The process ontology is to serve as a common basis for process data management and as a source of knowledge generation for real-time process control (Fig. 2).
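The following minimal Python sketch illustrates the idea of such an agent-like activity; the class name, the rule and the activity names are assumptions made for illustration only and are not taken from the paper or from a specific agent framework.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Activity:
    """One process activity modelled as an intelligent agent (BDI principle)."""
    name: str
    desire: str                                   # the (sub-)goal the activity serves
    belief: Dict[str, str] = field(default_factory=dict)   # assumptions about the environment
    intentions: List[Callable[[Dict[str, str]], Optional[str]]] = field(default_factory=list)
    # each intention inspects the current beliefs and either proposes the
    # next activity or returns None

    def update_belief(self, ontology_facts: Dict[str, str]) -> None:
        # at runtime the required knowledge is drawn from the process ontology
        self.belief.update(ontology_facts)

    def recommend_next(self) -> Optional[str]:
        # the first rule that fires yields a recommendation for the person
        # responsible for the process (or an automatic decision)
        for rule in self.intentions:
            proposal = rule(self.belief)
            if proposal is not None:
                return proposal
        return None

# usage sketch: an analysis activity recommending the review step once results exist
analyse = Activity(
    name="analyse change",
    desire="change analysed",
    intentions=[lambda b: "review analysis results"
                if b.get("analysis_results_available") == "yes" else None],
)
analyse.update_belief({"analysis_results_available": "yes"})
print(analyse.recommend_next())   # -> review analysis results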
Fig. 2. Intelligent agents (process activity) according to the BDI principle

The main differences between current process management methods and the new goal-oriented method are summarized in Table 1. Based on the newly defined process management method, process execution and control resemble a GPS navigation system: once the goals and sub-goals as well as any further boundary conditions have been entered, the route is calculated dynamically and in real time, taking into account all possible disturbances. Partial routes are either chosen autonomously or presented to the driver as suggestions.

5 ADAPTIVE CHANGE MANAGEMENT APPROACH FOR IPS²

This section introduces an approach to adaptive change management that supports and enhances IPS² changeability and adaptability. At the core of this approach stands an adaptive ECM
process that maps the activities of an IPS² change order based on the goal-oriented principle (described in Section 4). In addition, this paper defines a top-level IPS² ontology as a knowledge source for real-time ECM process execution and control.

Table 1. Comparison of current process management methods and the goal-oriented method (the characteristics are grouped in the original into an implementation level and an operational level)
Process modelling: current methods – fixed sequence of activities; new method – goals, activities, rules
Process optimization: current methods – in the design phase; new method – in the execution phase (real time)
Process control: current methods – central; new method – decentralized / autonomous
Separation of process definition and execution: current methods – yes; new method – no
IT technology: current methods – rigid Workflow Management System; new method – adaptive Service-Oriented Architecture
Sequence of process events: current methods – fixed (in the design phase: fixed process chain); new method – adaptive (in the execution phase: real-time sequence definition)

5.1 Adaptive ECM Process for IPS²

The adaptive ECM process for IPS² has been developed by means of the defined goal-oriented management method. The basis for the ECM process definition is VDA recommendation VDA 4965 Part 1 ECM. This recommendation supplies a standard and generic description of change processes of products along the entire supply chain within the automotive industry [16]. In this work the ECM process of VDA 4965 has been extended by further IPS²-specific aspects in order to enable integrated change management of a whole IPS².
The main goal of the adaptive ECM process, “IPS²_ECM_Managed”, is the management, i.e. the execution and control, of an IPS² change order. The accomplishment of this main goal presupposes the accomplishment of several sub-goals (Fig. 3). First of all, the requirements or wishes for an IPS² change are registered and their relevance and priority are checked: “IPS²_ECM_Inquired”. Based on the results of the preceding goal, a Change Request is made which specifies all the details of the change: “IPS²_ECM_Created”. The next ECM process goal is a comprehensive and profound technical, logistical and economic analysis of the IPS² change: “IPS²_ECM_Analysed”. Subsequently, the IPS² change is evaluated and commented on by various experts (e.g. development, production, logistics, service): “IPS²_ECM_Commented”. Finally, based on the evaluation and comments, a decision regarding the execution of the IPS² change is made: “IPS²_ECM_Decided”.
Fig. 3. Overview of the adaptive Engineering Change Management (ECM) for IPS²

Table 2. Excerpts from the activities to achieve the goal “IPS²_ECM_Inquired”
Activity – Description
change registration – register requirements and wishes for IPS² change
change classification and prioritization – classify changes to be performed (e.g. product change, service change, technical change) and prioritize changes to be performed (e.g. high, medium, low)
condition for ECM creation checking – check requirements for the initiation of an ECM process (e.g. customer relevance, manufacturer relevance, security relevance, competition relevance)
concerned parties determination – determine areas or parties concerned by the changes (e.g. development, service, manufacturer, customer, logistics service provider)
change responsible determination – determine areas or parties responsible for the execution of the changes (e.g. development, service, manufacturer, customer, logistics service provider)
next goal determination – based on the results of the individual activities, the next ECM process goal is determined
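As a small illustration of this goal/activity decomposition, the main goal, its sub-goals and the activity pool of Table 2 can be kept as plain data from which the process control picks executable activities at runtime. The sketch below uses assumed helper names and is not the authors' implementation; IPS² is written IPS2 in the code.

MAIN_GOAL = "IPS2_ECM_Managed"

SUB_GOALS = ["IPS2_ECM_Inquired", "IPS2_ECM_Created", "IPS2_ECM_Analysed",
             "IPS2_ECM_Commented", "IPS2_ECM_Decided"]

ACTIVITIES = {
    "IPS2_ECM_Inquired": [
        "change registration",
        "change classification and prioritization",
        "condition for ECM creation checking",
        "concerned parties determination",
        "change responsible determination",
        "next goal determination",
    ],
}

def executable_activities(goal, done):
    """Activities of a goal not carried out yet; no fixed order is imposed,
    the actual sequence is decided at runtime by the process rules."""
    return [a for a in ACTIVITIES.get(goal, []) if a not in done]

print(executable_activities("IPS2_ECM_Inquired", {"change registration"}))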
At the beginning of an ECM process, organizational restrictions, e.g. the maximal processing time, can be defined: “schedule complied”. To accomplish this organizational goal, ECM process control automatically determines the optimal process path. The above goals are accomplished by invoking intelligent, modular services, which are mapped as modular, isolated tasks or activities in this ECM process. Tables 2 and 3 show excerpts of the activities to achieve the goals “IPS²_ECM_Inquired” and “IPS²_ECM_Analysed”, respectively.

Table 3. Excerpts from the activities to achieve the goal “IPS²_ECM_Analysed”
Activity – Description
new IPS² requirements identification – identify change-related new IPS² requirements
new IPS² function identification – identify change-related new IPS² functions
concerned IPS² function identification – identify change-related concerned IPS² functions
concerned IPS² modules identification – identify change-related concerned IPS² modules (products and services)
next goal determination – based on the results of the individual activities, the next ECM process goal is determined
Several process parameters have been defined for the real-time execution and control of the ECM process. These can be set at the beginning of or during process execution, on the (sub-)goal and activity level. Operators (e.g. AND, OR, XOR, If, Then) define a priori and real-time rules. By use of these rules, the required (sub-)goals and activities, as well as their execution sequence and process runtime, can be determined to ascertain an optimal ECM process flow. Table 4 shows excerpts from the process parameters defined in this work, including their possible parameter values.
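The sketch below is only a hedged illustration of how such operator-based rules might combine process parameters like those of Table 4 (shown next) to determine the next sub-goal; the concrete conditions are invented for this example and are not the rules implemented in the prototype (IPS² is written IPS2 in the code).

def next_goal(params: dict) -> str:
    """Assumed example rule: pick the next ECM sub-goal from parameter values."""
    # a priori rule (If ... AND ... Then ...): simple, non-critical changes may
    # skip the detailed analysis and go straight to expert commenting
    if (params.get("Change_Complexity") == "Low"
            and params.get("Safety_Relevance") == "No"
            and params.get("Quality_Relevance") == "No"):
        return "IPS2_ECM_Commented"
    # otherwise a comprehensive technical, logistical and economic analysis follows
    return "IPS2_ECM_Analysed"

example = {"Change_Complexity": "High", "Safety_Relevance": "No",
           "Quality_Relevance": "IPS2_Quality"}
print(next_goal(example))   # -> IPS2_ECM_Analysed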
Table 4. Excerpt from the ECM process parameters
Parameter – Parameter value
ECM_Activator – IPS²_Provider, Product_Manufacturer, IPS²_Customer, Service_Provider
ECM_Reason – IPS²_Enhancement, IPS²_Optimization, Product_Optimization, Service_Optimization, Customer_Wish, Quality_Problems, Legislation_Amendment
Change_Complexity – High, Medium, Low
Design_Relevance – Product_Design, Service_Design, IPS²_Design, No
Cost_Relevance – Product_Change_Cost, Service_Change_Cost, IPS²_Change_Cost, No
Safety_Relevance – Product_Safety, Service_Safety, IPS²_Safety, No
Quality_Relevance – Product_Quality, Service_Quality, IPS²_Quality, No
IPS²_Use_Model – Result_Oriented, Function_Oriented, Availability_Oriented
Necessity – High, Medium, Low
Technical_Risk – High, Medium, Low
Financial_Risk – High, Medium, Low
Schedule_Risk – High, Medium, Low

5.2 Top Level IPS² Ontology

In order to manage the entire IPS² life cycle, a top level ontology has been developed within the research project Transregio 29. It is based on the STEP reference model “AP 214” and consists of several classes and relations mapping and describing the various IPS² modules (e.g. technical product, service, function, requirement) and the entire IPS² structure as well as the IPS² life cycle management processes (e.g. change management, release management). In addition, the top level IPS² ontology has been augmented by axioms that automatically
generate knowledge, conclusions, and relations based on IPS² data. This ontology has been used throughout this work as a source of knowledge to provide all the necessary information regarding the process parameters and rules for real-time ECM process execution and control. A comprehensive description of all IPS² ontology elements and the respective opportunities for generating knowledge, conclusions and relations has already been presented in a previous paper [17].

6 IMPLEMENTATION OF THE CHANGE MANAGEMENT APPROACH FOR IPS²

The standard process description language WS-BPEL4People (Web Services – Business Process Execution Language for People) has been used for the modeling and prototypical implementation of the developed change management approach for IPS². WS-BPEL4People is an XML-based language that describes business processes whose individual activities are implemented by modular, isolated web services [18]. Today, WS-BPEL4People is implemented in many IT tools, the so-called BPEL editors. By use of these BPEL editors, a process and its activities can be described and mapped graphically. However, this can also be done using different workflow modeling techniques. Unlike
other techniques, it can generate an executable XML process code from the graphically modeled business process directly and in real time.
Fig. 4 shows the prototypical realization of the goal “IPS²_ECM_Inquired” and the related activities, which have been implemented as modular web services. This realization was created using the BPEL editor and engine “ActiveVOS”, which permits a graphic modeling of goal-oriented processes and an automatic generation of executable process code. Fig. 5 shows the list of implemented process parameters for real-time ECM process execution and control in “ActiveVOS”.

7 CASE STUDY

A case study has been carried out to validate the approach developed in this work. The IPS² treated in this case study is an Electrical Discharge Machine (EDM) [19]. These machines are mostly used in the manufacturing of micro-structured work pieces by means of electro-erosion techniques. The customer (IPS²_Customer) who owns the EDM machine gives its supplier (IPS²_Provider) the task of upgrading the existing machine so that it can also manufacture rotationally symmetric µm work pieces such as, for instance, clock spindles [20] and [21].
Fig. 4. Prototypical realization of the goal “IPS²_ECM_Inquired” and the related activities: modelling of the goal with WS-BPEL4People and the generated process code
Fig. 5. The list of implemented process parameters (ECR input parameters) for real-time ECM process execution and control
While executing these changes, the following boundary conditions must be adhered to:
• Additional training of customer employees is necessary to produce µm parts.
• Both customer and IPS² provider employees must be deployed to produce µm work pieces.
• The change estimate must not exceed €100,000.
• The upgraded machine must possess a minimal technical availability of 90%.
• The change must be implemented within a maximum of 6 months.
• Annual maintenance by the IPS² provider must not exceed €10,000 and must not take more than 4 working days.
• The entire ECM process must be carried out within a maximum of 4 weeks.
In view of the description of the change and the boundary conditions, the ECM process parameters (see Table 5) have been defined. These parameters were partly derived from the IPS² ontology and partly input (acquired from the change context or from experience) by the process user. In this case study, the execution of the ECM process has led to the following main decisions [20] and [21] (Fig. 6):
• enhancement of the EDM machine by an additional portable rotary spindle,
• this rotary spindle is mounted on the machine table of the EDM system by means of an adaptive clamping system,
• this rotary spindle is incorporated into the production process by an integrated IT control system,
• the entire maintenance concept of the EDM machine is adjusted,
• the entire training concept of the EDM machine is adjusted.
Table 5. Excerpt from the ECM process parameters for the EDM machine enhancement
Parameter – Parameter value
ECM_Activator – IPS²_Customer
ECM_Reason – IPS²_Enhancement
Change_Complexity – High (basic IPS² structure is changed)
Design_Relevance – Product_Design, Service_Design
Cost_Relevance – IPS²_Change_Cost ≤ 100,000 €
Safety_Relevance – No
Quality_Relevance – IPS²_Quality
IPS²_Use_Model – Availability_Oriented ≥ 90%
Necessity – High
Technical_Risk – High
Financial_Risk – High
Schedule_Risk – High
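As a usage note, the Table 5 values and the boundary conditions of the case study can be expressed as input data for rules like those sketched in Section 5.1; the snippet below is an assumption-laden sketch (names and helper invented, thresholds taken from the listed boundary conditions) and not part of the described prototype.

ecm_parameters = {
    "ECM_Activator": "IPS2_Customer",
    "ECM_Reason": "IPS2_Enhancement",
    "Change_Complexity": "High",
    "Design_Relevance": ["Product_Design", "Service_Design"],
    "Schedule_Risk": "High",
}

boundary_conditions = {          # from the case-study description above
    "max_change_cost_eur": 100_000,
    "min_availability": 0.90,
    "max_change_duration_months": 6,
    "max_ecm_process_weeks": 4,
}

def change_acceptable(estimated_cost, predicted_availability):
    """Check the cost and availability restrictions before the change is decided."""
    return (estimated_cost <= boundary_conditions["max_change_cost_eur"]
            and predicted_availability >= boundary_conditions["min_availability"])

print(change_acceptable(estimated_cost=85_000, predicted_availability=0.92))  # True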
8 CONCLUSION AND OUTLOOK

IPS² has been developed to permanently meet the demands of the customer through the synergy of technical products and industrial services. Its prerequisites are the integrated planning, development, delivery and use of both the product and service shares, as well as their dynamic adaptability throughout the entire IPS² life cycle.
Fig. 6. Illustration of the enhancement of the EDM machine

To support this, the paper has dealt with the development of a new goal-oriented process management method. Contrary to the classical process management methods, it allows for an adaptive process design. By applying this new method, an adaptive change management approach for IPS² has been developed. This approach enables adaptive responses in the ECM process. This means that during their execution, ECM processes can, to a certain degree, respond and adapt to specific situations autonomously and in real time.
In the course of this paper, the developed adaptive change management approach for IPS² has been prototypically implemented by means of the standard process description language WS-BPEL4People. A case study of an IPS² (EDM machine) change management has validated the approach. In the future, further case studies will be conducted in various other business sectors. Their aims are to enhance the ECM process parameters, rules and the axioms of the IPS² ontology in order to increase the self-adaptivity of ECM processes and the transferability of the solutions to other branches.

9 ACKNOWLEDGEMENTS

We express our sincere thanks to the Deutsche Forschungsgemeinschaft (DFG) for financing this research within the Collaborative Research Project SFB/TR29 on Industrial
Product-Service Systems – Dynamic Interdependency of Products and Services in the Production Area.

10 REFERENCES

[1] Datta, P.P., Roy, R. (2009). Cost modelling techniques for availability type service support contracts: a literature review and empirical study. Proceedings of the 1st CIRP IPS² Conference, Cranfield, p. 216-223.
[2] Kernland, M., Hoeffleur, O., Felber, M. (2008). The agility challenge in business process management. Product Data Journal, vol. 1, p. 40-42.
[3] Burmeister, B., Arnold, M., Copaciu, F., Rimmassa, G. (2008). BDI-agents for agile goal-oriented business processes. Proceedings of the 7th International Conference on Autonomous Agents and Multiagent Systems AAMAS, Estoril, p. 37-44.
[4] Ariyo, O., Keller, R., Eckert, C.M., Clarkson, P.J. (2007). Predicting change propagation on different levels of granularity: an algorithmic view. Proceedings of the International Conference on Engineering Design ICED, Paris, p. 655-656.
[5] Jarratt, T., Eckert, C., Clarkson, P.J. (2004). The benefits of predicting change in complex products: application areas of a DSM-based prediction tool. Proceedings of the International Design Conference, Dubrovnik, p. 303-308.
[6] Eger, T., Eckert, C.M., Clarkson, P.J. (2007). Engineering change analysis during ongoing product development. Proceedings of the 16th International Conference on Engineering Design ICED, Paris, p. 28-31.
[7] Conrad, J., Deubel, T., Köhler, C., Wanke, S., Weber, C. (2007). Combining the CPM/PDD theory and FMEA-methodology for an improved engineering change management. Proceedings of the 16th International Conference on Engineering Design ICED, Paris, paper 549.
[8] Amaral, D.C., Rozenfeld, H. (2007). Integrating new product development process references with maturity and change management models. Proceedings of the International Conference on Engineering Design ICED, Paris, paper 127.
[9] El Hani, M.A., Rivest, L., Fortin, C. (2007). Propagating engineering changes to manufacturing process planning: does PLM meet the need? Proceedings of the International Conference on Product Lifecycle Management (PLM), Milan.
[10] Pulkkinen, A., Riitahuhta, A. (2009). On the relation of business processes and engineering change management. Proceedings of the International Conference on Product Lifecycle Management (PLM), Bath.
[11] Tavčar, J., Duhovnik, J. (2004). Product life cycle management in a serial production. Proceedings of the International Design Conference (DESIGN), Dubrovnik, p. 925-930.
[12] Scheer, A.-W. (2002). Vom Geschäftsprozess zum Anwendungssystem, 4. Auflage, Springer Verlag, Berlin. (in German)
[13] Kramberg, V. (2008). Zielorientierte Geschäftsprozesse mit WS-BPEL, Diplomarbeit, University of Stuttgart, Stuttgart. (in German)
[14] Daniel, K. (2008). Workflowmanagement: methodische Grundlagen und Anforderungen des Collaborative Engineering, Prozessmanagement: Praktische Anwendung und weiterführende Ideen, Schriftreihe des bdvb, Band 5, Berlin. (in German)
[15] Rao, A.S., Georgeff, M.P. (1995). BDI agents: from theory to practice. Proceedings of the 1st International Conference on Multi-Agent Systems, San Francisco, p. 312-319.
[16] Verband der Automobilindustrie (2006). VDA 4965: Engineering Change Management (ECM) – Part 1 Engineering Change Request (ECM), Frankfurt.
[17] Abramovici, M., Neubach, M., Schulze, M., Spura, C. (2009). Metadata reference model for IPS² lifecycle management. Proceedings of the 1st CIRP IPS² Conference, Cranfield, p. 268-272.
[18] Abramovici, M., Bellalouna, F. (2008). Service oriented architecture for the integration of domain-specific PLM systems within the mechatronic product development. Proceedings of the 7th International Symposium on Tools and Methods of Competitive Engineering (TMCE), vol. 2, Izmir, p. 941-953.
[19] Abramovici, M., Neubach, M., Fathi, M., Holland, A. (2009). Knowledge-based feedback of product use information into product development. Proceedings of the 17th International Conference on Engineering Design ICED, Stanford, p. 227-238.
[20] Richter, A., Sadek, T., Steven, M., Welp, E.G. (2009). Use-oriented business models and flexibility in industrial product-service systems. Proceedings of the 1st CIRP Industrial Product-Service Systems (IPS²) Conference, Cranfield, p. 186-192.
[21] Sadek, T. (2008). Ein modellorientierter Ansatz zur Konzeptentwicklung industrieller Produkt-Service Systeme, PhD thesis, Ruhr-University Bochum, Bochum. (in German)
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, 707-717 UDC 658.512.2:004.8
Paper received: 27.04.2010 Paper accepted: 21.10.2010
A Software System for “Design for X” Impact Evaluations in Redesign Processes
Roberto Raffaeli* – Maura Mengoni – Michele Germani
Mechanical Department, Polytechnic University of Marche, Italy

The market constantly asks for new products in ever shorter times. This requires the introduction of new tools and methods for managing continuous product changes while evaluating, in real time, their impact in terms of design effort, manufacturing costs, time to market, etc. The present research work aims at developing a design platform to support the creation, visualization and navigation of a multilevel product representation where functions, modules, assemblies and components are strictly interrelated. The introduction of “Design for X” principles as rules relating all the aspects contributing to product design allows the impact of changes to be evaluated at three levels. The approach and the proposed platform have been applied in the field of refrigerators in order to support both designers and engineers in rapidly configuring the optimal design solution with respect to the company’s requirements formalized by the implemented “Design for X” rules.
©2010 Journal of Mechanical Engineering. All rights reserved.
Keywords: functional analysis and modularity, “Design for X”, change management, impact of design changes

0 INTRODUCTION

Competition in the world markets forces manufacturing companies towards “leanness” of all product development processes, from ideation to manufacturing. An effective lean product design process requires the development of new suitable methods and tools able to guarantee rapidity in evaluating alternative solutions, high flexibility in making product changes, and effectiveness in estimating the impact in terms of cost, manufacturability, etc.
Generally, the product development process of mass-production goods, like cars and household appliances, can be classified into two main typologies: the ideation of a new product or the evolution of an existing one. Hence, the design process can be divided into original design and redesign activities. The latter weighs heavily on the time spent in the company design department.
The redesign process of complex products is based on several difficult decision-making activities. Introducing new product features as well as modifying existing ones can deeply impact not only functional aspects, but also many life cycle aspects that a designer could easily neglect. This refers to production resources, assemblability, product disposal, material reuse, life cycle consumptions, costs, aesthetics,
marketing, etc. From the designer’s point of view, these aspects are often not so clearly linked to the desired product change. In order to avoid unnecessary iterations, costs and long, unacceptable times to market, it is necessary to enable product developers to understand the influence of changes on all product-lifecycle-related aspects. By including “Design for X” principles already at the beginning of the process, the designer is enabled to take into account the different “X contexts” in which his/her actions can indirectly have an impact.
In this context, an important problem is the definition of suitable methodologies and related tools able to support the rapid evaluation of design alternatives and product changes by taking into consideration the impact of choices over the whole product life cycle [1]. This requires a complete product structure representation where the heterogeneous product definitions are properly interrelated at different levels of detail. In order to support change management according to several “Design for X” perspectives, it is necessary to capture the product in a knowledge-intensive way. That means functions, implementation principles, specific components and the like should be collected in a unique framework where flows of information and data correlate the different levels. Moreover, the product architecture representation should include a
* Corr. Author's Address: Mechanical Department, Polytechnic University of Marche, Brecce Bianche, Ancona, Italy, r.raffaeli@univpm.it
sufficiently wide database of knowledge that enables broad evaluations of design change impacts to be performed.
The aim of the present work is to illustrate the cited framework by focusing on product representation and on the evaluation of the impact of changes from several design perspectives. After a review of the state of the art, the approach is reported in section 2. Afterwards, the preliminary software solution that operatively implements the approach is described. An experimental test case illustrates the tool application in section 4.

1 STATE OF THE ART

The term “Design for X” (DfX) stands for all the methods, strategies and tools which enable product developers to consider different important aspects and influences from different product life cycle phases at the same time. DfX is understood as a knowledge system in which realizations about attaining individual characteristics of technical systems are collected and arranged [2]. The task of product developers exceeds the fulfilment and conversion of function. For the development of technically high-quality and innovative products it is indispensable to consider the parameters of the “X” systems in early phases of product development (e.g. manufacturing, assembly, recycling, etc.). By considering aspects regarding DfX at the beginning of the design process, the analysis of different X-systems can be made available to product developers. In order to implement DfX aspects, a methodological support as well as adequate software tools to store and manage information [3] are necessary.
DfX can be successfully adopted during redesign if a product has been conceived according to specific criteria: modular, configurable, adaptable and flexible. The need to improve the efficiency of design processes has led researchers to the definition of formal approaches and methodologies to apply such criteria when modifying existing products. For example, the definition of proper modular product architectures can increase design productivity in terms of product variety management [4]. Product architecture is the information about how many components the product consists of, how these
components work together, how they are built and assembled, how they are used, and how they could be disassembled [5] and [6]. Product flexibility, which is conceived as the degree of adaptability to any future change in product design, is strictly correlated to the product’s architecture. In their work, Palani Rajan et al. [7] present an empirical study aiming at developing a method to evaluate the flexibility of a product design and derive a set of guidelines to guide the product architecture towards a desired state of flexibility.
The product functional structure represents the first step to generate the right product architecture ([8] to [11]); the link between the functional domain and the architecture domain is represented by functional modules, drawn from the clustering of functions on the basis of energy, material and signal flows [12] and [13]. Since the levels of product representation (functions, modules, and components) depend on each other [14], it is necessary to maintain the link among them in order to enable designers to analyse the whole design rationale and the change impact. If, on the one hand, product architecture is fundamental in order to easily realise the modifications, it is, on the other hand, important to estimate the effects of such modifications early.
The literature reports many methods to assess the consequences of engineering changes, as reviewed in [15], especially for software-intensive systems in the aerospace industry. Basically, they can be classified into three groups: traceability, dependency and experiential. Traceability impact analysis uses the mapping of product requirements to their respective detailed specifications and designs. Dependency impact analysis is based on the linkages between the parts composing a system, mainly focusing on specific low-level changes. In several research works [16] and [17], examples of this class of methods, called CPM, are found for the mechanical design field on the basis of probabilistic models derived from risk analysis approaches. However, these approaches to the product change process and the redesign phase have been focusing on the final product structure without considering the functional aspects. Such an approach has proved to provide acceptable results for decision-making activities, but it lacks a robust product representation in terms of information and design knowledge, which is crucial for DfX evaluations.
Finally, experiential impact analysis refers to methods that use individual or collective understanding and experience to estimate the consequences of changes. It is often applied by freely discussing a modification within design teams. In real industrial practice, experiential impact analysis is the most widespread. However, by facilitating the use of traceability or dependency impact analyses, a company’s capability to manage modifications can be largely improved, while unexpected change propagation can be decreased.
With regard to the systems currently available, commercial Product Lifecycle Management software packages allow only circumscribed impact analyses, since they do not take into account functional aspects and are unable to predict the extended effects of changes and their implications. In fact, PLM modules neglect the product’s functional structure and the way functions are arranged into the physical architecture. In such a context, there are no concrete software tools for the management of engineering changes across several life cycle aspects.
The aim of this research work is to support the redesign process by focusing on the availability of knowledge to the designer when making his/her choices. Rather than assessing the change impact on the basis of probabilistic component relations, the focus is set on the integration of a dependency change management engine in a multilevel product structure, including a more extended, knowledge-intensive definition of the linkages between product parts.

2 APPROACH

The proposed methodology and the related software tool enable designers to get an estimation of the engineering change impact taking into account different aspects of the product life cycle. The implications of the redesign activities are assessed for each design context through a quantitative evaluation of change impact indices and design activities.
In this section, the research approach is outlined, describing the phases of the work that have led to the definition, implementation and testing of a supporting system. As the first step, a review of the state of the art on product functional modelling, functional basis, modularization and product
architecture representation led to the selection and elaboration of the best suited methodologies. In the previous section, some of them are described.
Fig. 1. Stages and outcomes of the research approach

Secondly, data structures to represent functions, flows, modules, and the product’s physical components and assemblies have been conceived and reworked in order to permit both graphical representations and elaboration through standard and dedicated algorithms. Three levels, namely the functional structure, the modular structure and the product architecture, represent the main layers of a multilevel framework in which graph networks are used as a means through which changes can spread [18]. The functional structure domain firstly generates a representation according to the ontology of functional concepts. Subsequently, the modular analysis generates the product modular configuration starting from the most detailed level of the functional structure. Then, the product architecture domain proposes how these modules and their functions can be implemented.
The multilevel product representation framework can be read and modified either top-down or bottom-up, depending on which analysis the designer needs to carry out. Traversed downwards, the structure highlights how functions and sub-functions are implemented by physical
components, while moving upwards the design intent is elicited in terms of part functions and product requirements. The link between the first level and the second one, i.e. the modular configuration, is basically a content-container relationship. On the contrary, the step from the second level to the product architecture is more complex. The linkages have been found in the solution principles of Pahl and Beitz [19], which give concrete form to a function. Parts, in particular components or assemblies, are mapped to modules on the basis of the functions they implement. In an ideal modular structure, functional modules perfectly overlap with physical modules. Normally, as Fig. 2 shows, this is not completely true even for structures believed to be modular: parts that refer to the same functional modules may belong to different product assemblies.
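A minimal sketch of this multilevel linkage, with invented refrigerator-style names rather than the tool's actual data model, is given below: modules contain functions, parts declare the functions they implement, and the structure can be traversed downwards (module to parts) and upwards (part to modules).

functions_in_module = {        # modular structure: module -> contained functions
    "cooling module": ["generate cold", "distribute cold air"],
    "control module": ["regulate temperature"],
}

functions_of_part = {          # product architecture: part -> implemented functions
    "compressor": ["generate cold"],
    "fan": ["distribute cold air"],
    "thermostat": ["regulate temperature"],
}

def parts_implementing(module):
    """Traverse downwards: which physical parts realise a functional module."""
    wanted = set(functions_in_module.get(module, []))
    return [p for p, funcs in functions_of_part.items() if wanted & set(funcs)]

def modules_of_part(part):
    """Traverse upwards: elicit the design intent behind a component."""
    funcs = set(functions_of_part.get(part, []))
    return [m for m, fs in functions_in_module.items() if funcs & set(fs)]

print(parts_implementing("cooling module"))   # ['compressor', 'fan']
print(modules_of_part("thermostat"))          # ['control module']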
Fig. 2. Linkage between the modular structure and the architecture; the physical structure arrangement is mapped to the functional structure because the two do not show perfect correspondence even for supposedly modular products

The central part of the research focuses on the definition of a way to represent the product structure in terms of properties and relations in order to capture design knowledge. Several product design guidelines have been selected and indices defined to capture the influence of components on each of them, as described below. The list of "X" contexts is not fixed and can be arranged by an industrial user on the basis of the specific product under development. An example of possible "X" processes can be found in [1]. A change propagation algorithm has been developed to traverse the properties graph, giving an estimation of the impact of a modification on
several defined design contexts. The work continued with the definition of a suitable software graphical user interface and the implementation of the system. Efforts have been focused on usability in order to minimise data input complexity and knowledge maintenance. The validation of the method and the system was carried out on a fridge test case that was studied and represented in the software system. Modification scenarios were used to assess the quality of the result in terms of correspondence to the actual design activities, correctness of decisions and time savings.

2.1 Approach to Product Architecture Representation
Products are represented by a data structure based on trees and graphs. A component tree, whose nodes are the assemblies and sub-assemblies down to all significant components, represents the product architecture. Screws, bolts, fixtures and wiring can be neglected and their functions represented by a sub-set of the relations between parts. The choice is left to the user on the basis of his/her experience. Each block, i.e. each tree node, is defined by a name, some properties and an image; it implements some functions from the functional level and links to some documentation files. Block properties store geometrical and, mainly, non-geometrical characteristics of the part. They form the nodes of an oriented graph structure, which is superimposed on the block tree structure. Relations connect these nodes and are grouped in bundles between couples of blocks (see Fig. 3). Each property basically represents a geometrical parameter or a material, a specification or a characteristic of a block. Property data comprise a name, a value, units of measurement and a data type, and express the content of a certain parameter. In addition, fields such as category, notes and suggestions are provided for the designer's convenience and as a way to store information and knowledge about the product. In particular, the category is used as a way to filter properties for specific design domains such as mechanics, electronics, materials and the like. Finally, a list of design context/impact couples is provided to store the impact of changing such a property for different life cycle aspects such as
environment, cost, production, noise, vibrations and the like.
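For illustration only, such a block/property/relation structure could be encoded as follows; the class and field names (Block, Property, Relation) and the use of Python are assumptions made here, not details given in the paper:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Property:
    """Geometrical or non-geometrical characteristic of a block."""
    name: str
    value: float
    unit: str = ""
    category: str = ""            # e.g. "mechanics", "electronics", "material"
    notes: str = ""
    # design context -> impact index of changing this property (e.g. {"consumption": 3})
    impacts: dict = field(default_factory=dict)

@dataclass
class Relation:
    """Directed or bi-directional link from one property to one or more others."""
    source: Property
    targets: List[Property]       # one-to-many, e.g. a dimensioning formula
    bidirectional: bool = False
    category: str = ""

@dataclass
class Block:
    """Node of the component tree: an assembly, sub-assembly or component."""
    name: str
    properties: List[Property] = field(default_factory=list)
    functions: List[str] = field(default_factory=list)   # functions implemented at the functional level
    children: List["Block"] = field(default_factory=list)
    documents: List[str] = field(default_factory=list)   # linked CAD models, spreadsheets, ...
```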
Fig. 3. Product structure representation in terms of properties and relations

A relation is the connection between a property on one hand and one or more properties on the other. In this way, it is possible to manage one-to-many connections such as dimensioning formulas. Relations capture both design choices and constraints between product parts. Each block can be included in the structure with multiple variant definitions. This supports the common situation of different alternatives for parts such as motors, actuators, sensors and modules, which are selected on the basis of the needed requirements.

2.2 Assessing the Change Impact
Product change processes can be of various kinds. For instance, a component parameter can be changed as a consequence of a new product specification. In this case, the modification is bound to the specific part and its interfaces are kept constant. On the other hand, other changes can originate from the product functional structure [20] and [21]: for example, some sub-functions are removed, or new requirements lead to additional new modules. The proposed change propagation analysis starts from a property which is to be updated. The aim is to browse the structure in order to find all the properties which are connected to the modified property. The propagation algorithm produces the list of the components and assemblies to be redesigned and the properties affected. Relation directionality has been introduced in the data structure; a relation can be either mono-directional or bi-directional. In the first case, a variation of a certain property will cause a change to the connected property. For instance, the motor rotational speed depends on the frequency of the electricity supplied and not vice versa.
Fig. 4. Change propagation mechanisms: modification spreads through properties graph considering relations directionality
In the second case, properties are symmetrically linked. For instance, an increase of flow rate in a pipe leads to a bigger diameter at constant fluid speed, and an increase of the pipe diameter will allow more fluid to be transported. The property and relations graph is traversed according to the following rules:
- the modification is iteratively spread from one property to another through relations;
- only relations of selected categories may be considered;
- the direction of propagation is evaluated against the relation directionality.
When a relation is traversed coherently with its own directionality, the next property needs to be updated as a consequence of the change in the previous one. In the opposite case this does not hold: the changing property needs to be assessed against the next property in order to verify the change compatibility (see relation 3 in Fig. 4), which means that the relation constrains the modification. If the modification is compatible with the constraining property, the propagation does not spread further. Otherwise, the algorithm also needs to go through the modification of the constraining property. In these cases, the designer can make some considerations and neglect unnecessary branches of propagation.
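A compact sketch of how such a rule-based traversal might be implemented, treating properties as plain identifiers and relations as (source, target, bidirectional, category) tuples; the breadth-first strategy, the function name and the conservative compatibility default are illustrative assumptions rather than the authors' algorithm:

```python
from collections import deque

def propagate(start, relations, categories=None, is_compatible=lambda prop, other: False):
    """Breadth-first spread of a modification through the property/relation graph.

    relations     : iterable of (source, target, bidirectional, category) tuples
    categories    : optional set of relation categories to consider
    is_compatible : callback deciding whether a constraining property absorbs the
                    change (conservatively False here, so the change propagates)
    Returns the set of property identifiers that have to be updated or checked.
    """
    affected, queue = {start}, deque([start])
    while queue:
        prop = queue.popleft()
        for src, dst, bidir, cat in relations:
            if categories and cat not in categories:
                continue                                  # only selected categories
            if src == prop or (bidir and dst == prop):
                nxt = dst if src == prop else src         # coherent with directionality
            elif dst == prop and not is_compatible(prop, src):
                nxt = src                                 # constraining relation violated
            else:
                continue
            if nxt not in affected:
                affected.add(nxt)
                queue.append(nxt)
    return affected

# Hypothetical impact indices per property for one design context ("consumption")
impacts = {"fan_diameter": 2, "duct_section": 1, "evaporator_area": 3}
relations = [("fan_diameter", "duct_section", False, "geometry"),
             ("duct_section", "evaporator_area", True, "geometry")]
total = sum(impacts.get(p, 0) for p in propagate("fan_diameter", relations))
```

The final lines only illustrate how the per-property impact indices of the affected set could be summed for one design context, in the spirit of Eq. (1) below.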
The above-described method requires the definition of suitable impact indices in order to compute an estimation of the consequences of the modifications for each product life cycle context. Design guidelines are listed in a database and can easily be added or removed by the user. As mentioned above, separate impact indices are input for each design context. Formally, the total impact for each context can be assessed as:

I = Σp Σb Ip,b ,  (1)

where:
• I is the total impact of the modification; for instance, it can express the environmental cost or the assemblability after the modification;
• Ip,b is the impact of the property p of the block b;
• b is the index of the block and p the index of the property.
Some examples of the computation of the Ip,b indices are presented in the test case section.

3 "DESIGN FOR X" SUPPORTING SYSTEM

The approach described in the previous section is currently being implemented in a software system. The system's structure is quite complex and is organized in software modules, as shown in Fig. 5. Four main parts can be recognized in the system:
• the core data structures and related algorithms that store the information on the product functional structure, the modular structure and the product architecture;
• a user interface to input and visualize the product views for each level of abstraction;
Fig. 5. Software system architecture to support DfX change management
• data import modules from the company's other IT systems, such as CAD, ERP (Enterprise Resource Planning), PLM, etc.;
• an output module to graphically visualize the outputs of the propagation analysis and compute the impact indices.
The graph engine is the system's core, since it stores the product data. The implementation has followed the approach outlined in the second section. The first level, the product functional decomposition, is represented by multiple sub-levels of detail, starting from a "black box" model. For each sub-level of detail – usually three, based on the common practice of not anticipating implementation principles – sub-functions are connected by flows of material, energy and signal. The well-known functional basis vocabulary [10] is employed.
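Purely as an illustration of such a functional level (the paper does not prescribe an encoding), sub-functions could be linked by flows typed as material, energy or signal; all names below are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class FlowType(Enum):
    MATERIAL = "material"
    ENERGY = "energy"
    SIGNAL = "signal"

@dataclass
class Flow:
    source: str        # sub-function emitting the flow
    target: str        # sub-function receiving the flow
    kind: FlowType

# Fragment of a functional structure using functional-basis style verbs [10]
sub_functions = ["import electrical energy", "convert energy to rotation", "guide air"]
flows = [
    Flow("import electrical energy", "convert energy to rotation", FlowType.ENERGY),
    Flow("convert energy to rotation", "guide air", FlowType.MATERIAL),
]
```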
Fig. 6. An example of the form used to input data for a block

The second main level is the modular structure, which is drawn from the last sub-level of the functional analysis. Finally, the third level represents the physical structure through a block tree. Fig. 6 shows the graphical form used to input data for a block. First of all, a name and an image are associated with the block. Then some lists are defined: design properties, implemented functions and linked files such as CAD models, spreadsheets or text data. Fig. 7 shows what a property definition window looks like. On the right-hand side, the available design contexts are listed and the user can associate an impact index with a change of the considered property. The list of DfX processes is defined beforehand by the user on the basis of the company's interests. Finally, properties are connected by relations.

Fig. 7. Form used to input data for a property; DfX impact indices are listed on the right
The system's main graphical user interface is shown in Fig. 8. The main part of the interface is a tabbed area on the right-hand side in which the user can switch between the three different views of the product. In this area, nodes and linkages are represented by rectangles and lines, respectively. Product architecture definition requires inputting a large quantity of information into the system. As the knowledge of the product increases, the results become more precise and useful. Filling in this information can be quite time consuming. On the other hand, part of it is already stored in the company's IT systems. It is therefore useful to import as much data as possible in order to save time and make the system efficient in its use. Three software modules and related import rules are under development to interface the software tool with the company's other systems, such as CAD, PLM and ERP. The initial investment in building the model can be fruitful if products are kept updated with the incoming design changes.
Fig. 8. Software main user interface: the functional level; material, energy and signal flows are respectively represented by black, blue and dotted green arrows

3.1 Propagation Output Module

The propagation output module aims to give the designer feedback on the impact of the introduction of a modification into the product. The algorithm outlined in Section 2 has been implemented. Standard graph analysis techniques have been used to traverse the data structures, computing impact indices and node degrees and counting elements and path lengths. In Fig. 10, an example of the possible output is presented for the impact of a single modified
block property. Basically, the system produces a composite output made of:
• A tree representing how the modification spreads into the architecture properties. The root node is the property originating the change. Then, each node of the tree represents a relation that is traversed. A coloured arrow illustrates the crossing directionality: red for a constraining relation and blue for a relation propagating the modification. The user can exclude branches of the tree through a check box in order to eliminate meaningless propagation paths.
Fig. 9. Assignment of weights to the identified product life cycle aspects for each functional module
• A table reporting the list of the blocks and the properties to be updated. For each block, the impact index for a selected design context is computed. Moreover, suggestions and links to documentation files are retrieved from the input data.
• Impact indices computed and shown in a separate section in order to have a global measurement of the redesign work. They aim to provide a global impact measurement with which different design solutions can be assessed and compared. In particular, the total impact, expressed as the sum of the modified properties' indices, can be computed for each aspect of the product life cycle.
• A graphical output of the impact of the modification in the product structure. The portion of the relation graph and the blocks involved are highlighted in order to show the propagation path.

4 TEST CASE
The proposed approach has been applied in the study of a family of refrigerators in order to
derive new product variants. The research has been carried out in collaboration with Indesit Company Spa, an Italian leader in household appliances. In particular, the method was applied in the redesign activity of a combined model made of separate fridge and freezer compartments. The aim of the study is to assess the impact of the introduction of additional specifications or of new parts. The work started from a functional and modular analysis of the product. Stone's heuristics [13] were used to gather the functions into 25 modules [20]. The fridge architecture, in terms of components and assemblies, was retrieved from the production Bill of Materials (BOM). Such a BOM is oriented to the production and assembly process, and the product is arranged in 11 physical modules that were mapped to the functional modules. The fridge was analysed in collaboration with the company's designers and 15 life cycle aspects emerged. Weights were provided for each of the 25 functional modules identified for the specific product (see Fig. 9). Indices were then automatically attributed by the system to the blocks and, in turn, to the properties.
Fig. 10. Example of DfX analysis output: fan change is assessed for impact on consumption
Fig. 9 shows the results of the analysis. Weights were assigned choosing from four values: 0 – no importance, 1 – minimum importance, 2 – medium importance, 3 – maximum importance. The effectiveness of the system was tested on a limited portion of the product, the one concerned with the generation and distribution of cold air to the fridge and freezer compartments. Air is pushed by a fan into the evaporator and then to the storage compartments through some ducts. The fan performance, in terms of pressure head and flow rate, depends on its blade geometry and its diameter. A change of the fan model and of its fluid dynamic properties can make it necessary to review many aspects concerned with heat removal and with the design of the parts managing the air circulation. Fig. 10 shows how a modification to the fan propagates to adjacent parts, such as the air ducts and the evaporator. It is important to understand the potential impacts on important fridge requirements such as the "climatic class", the "capacity" or the "consumption". The propagation paths are used by the system to compute the impacts on these aspects. For example, the impact on consumption is shown in the table in the lower part of Fig. 10. Changes to the evaporator can have a big impact on consumption, since it highly influences the refrigeration cycle. Changes to the motor and the fan are also sources of significant consumption variations, due to their intrinsic efficiency.

5 DISCUSSION OF RESULTS

At the moment, the proposed system is still under development and has been tested only on household appliances (fridges, washing machines, etc.). The tests have shown positive results for this category of products of medium complexity. The generalization of the approach to other fields is under investigation. The preliminary results have shown that the tool can be employed by both product designers and project managers. The former find it useful as a means of storing information, knowledge and design resources. Moreover, the system provides a checklist of the design activities to be accomplished once a certain change has been chosen. On the other hand, project managers regard the approach as a
decision-making support system that helps to select the best engineering change without having to dive into deep technical details. As product complexity increases, it becomes difficult to input the necessary product knowledge into the system and, above all, to maintain it. In this case, the activities are carried out by specialists in specific design fields and the proposed tool is more conveniently used by managers. Conversely, in small design departments the need to manage product knowledge is felt more strongly.

6 CONCLUSIONS

This paper has described an approach to support the change management process across different design aspects of the product life cycle. In particular, the main effort was put into the development of a method and a concrete tool dedicated to the evaluation of design alternatives when carrying out an engineering change. The ongoing research work has revealed many points that need further investigation. First of all, a more detailed definition of the product change factors, obtained through verification on an extended number of case studies, is necessary. A generalization of the approach needs to be further investigated. Then, the software usability needs to be improved in order to facilitate data input. Software modules to retrieve information from the company's other IT systems and a database of the most recurrent components can be useful for this purpose. Finally, to extend the approach to more significant product innovation activities, it is necessary to identify approaches to cope with modifications originating at the functional and modular levels. Here, functionalities to support functional analysis and modularization are desirable.

7 REFERENCES
[1] Lindemann, U. (2007). A vision to overcome "chaotic" design for X processes in early phases. Proceedings of the 16th International Conference on Engineering Design, Paris, vol. 1, p. 231-232.
[2] Kuo, T.C., Huang, S.H., Zhang, H.C. (2002). Design for manufacture and design for 'X': concepts, applications, and perspectives. Computers & Industrial Engineering, vol. 41, no. 3, p. 241-260.
[3] Krehmer, H., Eckstein, R., Lauer, W., Roelofsen, J., Stöber, C., Troll, A., Weber, N., Zapf, J. (2009). Coping with multidisciplinary product development – A process model approach. Proceedings of the 17th International Conference on Engineering Design, Stanford, vol. 1, p. 241-252.
[4] Eppinger, S.D., Chitkara, A.R. (2006). The new practice of global product development. MIT Sloan Management Review, vol. 47, no. 4, p. 22-30.
[5] Fixson, S.K. (2005). Product architecture assessment: a tool to link product, process, and supply chain design decisions. Journal of Operations Management, vol. 23, p. 345-369.
[6] Van Wie, M.J., Rajan, P., Campbell, M.I., Stone, R.B., Wood, K.L. (2003). Representing product architecture. Proceedings of ASME Design Engineering Technical Conferences, Chicago, p. 731-746.
[7] Palani Rajan, P.K., Van Wie, M., Campbell, M., Otto, K., Wood, K. (2003). Design for flexibility – measures and guidelines. Proceedings of the 16th International Conference on Engineering Design, ICED, Stockholm, p. 19-21.
[8] Chakrabarti, A., Bligh, T.P. (2001). A scheme for functional reasoning in conceptual design. Design Studies, vol. 22, p. 493-517.
[9] Otto, K., Wood, K. (2005). Product Design: Techniques in Reverse Engineering, Systematic Design and New Product Development. Prentice-Hall, New York.
[10] Hirtz, J., Stone, R., McAdams, D., Szykman, S., Wood, K. (2002). A functional basis for engineering design: reconciling and evolving previous efforts. Research in Engineering Design, vol. 13, no. 2, p. 65-82.
[11] Szykman, S., Racz, J., Sriram, R. (1999). The representation of function in computer-based design. Proceedings of the ASME Design Theory and Methodology Conference, Las Vegas.
[12] Höltta, K.M.M., Salonen, M.P. (2003). Comparing three different modularity methods. Proceedings of the DETC ASME Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Chicago, p. 533-541.
[13] Stone, R., Wood, K., Crawford, R. (1998). A heuristic method to identify modules from a functional description of a product. Proceedings of ASME Design Engineering Technical Conference, Atlanta, paper 5642.
[14] Zadnik, Ž., Karakašić, M., Kljajin, M., Duhovnik, J. (2009). Function and functionality in the conceptual design process. Strojniški vestnik - Journal of Mechanical Engineering, vol. 55, no. 7-8, p. 455-471.
[15] Kilpinen, M., Eckert, C., Clarkson, P.J. (2009). Assessing impact analysis practice to improve change management capability. Proceedings of the 17th International Conference on Engineering Design, Stanford, vol. 1, p. 205-216.
[16] Eger, T., Eckert, C.M., Clarkson, P.J. (2007). Engineering change analysis during ongoing product development. Proceedings of the 16th International Conference on Engineering Design, ICED, Paris.
[17] Koh, E.C.Y., Clarkson, P.J. (2009). A modelling method to manage change propagation. Proceedings of the 17th International Conference on Engineering Design, Stanford, vol. 1, p. 253-264.
[18] Germani, M., Mengoni, M., Raffaeli, R. (2007). Multi-level representation for supporting the conceptual design phase of modular products. Krause, F.L. (ed.), The Future of Product Development, Berlin, p. 209-224.
[19] Pahl, G., Beitz, W. (1988). Engineering Design. A Systematic Approach. The Design Council, Springer-Verlag, London.
[20] Germani, M., Graziosi, S., Mengoni, M., Raffaeli, R. (2009). Approach for managing lean product design. Proceedings of the 17th International Conference on Engineering Design, Stanford, vol. 1, p. 73-84.
[21] Wyatt, D.F., Eckert, C.M., Clarkson, P.J. (2009). Design of product architectures in incrementally developed complex products. Proceedings of the 17th International Conference on Engineering Design, Stanford, vol. 1, p. 167-178.
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, 718-727 UDC 519.6:004.85
Paper received: 27.04.2010 Paper accepted: 22.10.2010
Cloud Computing for Synergized Emotional Model Evolution in Multi-Agent Learning Systems

Tristan Barnett* – Elizabeth Ehlers
Academy for Information Technology, University of Johannesburg, South Africa
* Corr. Author's Address: University of Johannesburg, PO Box 524, Auckland Park, 2006, Johannesburg, South Africa, tdd.barnett@gmail.com

Machine learning is a technology paramount to enhancing the adaptability of agent-based systems. Learning is a desirable aspect of synthetic characters, or 'believable' agents, as it offers a degree of realism to their interactions. However, the advantage of collaborative efforts in multi-agent learning systems can be overshadowed by concerns over system scalability and adaptive dynamics. The Multi-Agent Learning through Distributed Artificial Consciousness (MALDAC) Architecture is proposed as a scalable approach to developing adaptable systems in complex, believable environments. To support MALDAC, a cognitive architecture is proposed which applies emotional models and artificial consciousness theory to cope with complex environments. Furthermore, the cloud computing paradigm is employed in the architecture's design to enhance system scalability. A virtual environment implementing MALDAC is shown to enhance scalability in multi-agent learning systems, particularly in stochastic and dynamic environments.
©2010 Journal of Mechanical Engineering. All rights reserved.
Keywords: multi-agent learning, cognitive architecture, emotional models, scalability, intelligent agent, cloud computing

0 INTRODUCTION

Multi-agent systems (MAS) have the innate advantage of using the collaborative efforts of multiple, interacting agents to satisfy goals and solve problems. The application of multi-agent learning (MAL) to complex environments is important for application domains such as embodied agents in the form of real-world robotics, human-computer interaction, social simulation, synthetic characters and agents situated within virtual environments. However, the dynamics of MAL systems pose several challenges. Such dynamics are exacerbated in complex environments that are analogous to the real world, since agents may be imbued with features such as reinforcement learning, affective guidance and cognition. Efficient cooperation mechanisms are therefore critical for MAL systems to cope with the dynamics of these environments. As a consequence of having multiple learning and interacting agents, MAL presents major problems, including adaptive dynamics, scalability and problem decomposition [1]. These problems must be considered in MAL design in order to realize its functional advantage over single-agent learning systems, particularly in complex task environments where the agent program is more computationally demanding.
Adaptive dynamics refers to the fact that multiple agents adapt not only according to their own agent state but also with reference to the state of other agents. As a result, agents may select goals greedily or inappropriately, hindering the entire team of agents. Scalability is another concern in a MAS, as computational and storage requirements increase exponentially with the growing burden of interactions with other agents. Problem decomposition refers to the process of solving a complex problem by dividing it into a set of smaller problems. An important feature of dynamic environments is that multiple goals often have to be satisfied, and goals must be selected with other agents taken into consideration. Hence, problems must be divided efficiently between agents. Cognitive architectures can provide a means of coping with complex environments which demand arbitrary, rather than domain-specific, task completion [2]. However, currently available cognitive architectures are not typically well oriented toward multi-agent learning, where adaptive dynamics, scalability and problem decomposition are a concern. The Multi-Agent Learning through Distributed Artificial Consciousness (MALDAC) Architecture is proposed to cope with the critical aspects of adaptability and cooperation in large
teams of learning agents situated in complex environments which exhibit properties of the real world. MALDAC proposes the Context-based Adaptive Emotions Driven Agent (CAEDA), which uses two processing algorithms, Adaptive Consciousness Layering and Adaptive Impulse Modeling (ACLAIM), to contextualize perceptual knowledge and learn cooperatively with other cognitive agents.

1 SURVEY

Section 1 analyses the current state of the art in MAL and its relation to reinforcement learning, affective computing and cognition. Section 1.1 discusses multi-agent learning and the relationship between communication and cooperation. The emotional models and sociability discussed in Section 1.2 can be employed to improve goal selection and learning. Section 1.3 identifies some novel approaches devised to increase the adaptability and realism of agent learning. Finally, Section 1.4 discusses appropriate computing architectures for agent-based systems.

1.1 Multi-Agent Learning

Multi-agent learning (MAL) consists of several agents attempting to solve a machine learning problem through cooperative or competitive interaction [1]. Learning is a desirable aspect for agents deployed in virtual environments, as it offers better adaptability and a degree of realism paramount to the simulation of agent sociability. A number of developments seek to improve the learning capacity of agents. Features such as heterogeneity and scalability are increasingly becoming a concern in modern MAS applications. The distributed nature of multi-agent systems introduces several design issues into their development, including problem decomposition, cooperation and communication [1], [3] and [4]. MAL issues can be strongly interrelated and hence can be handled or counteracted in unison. Cooperation and communication in particular have a large impact on the success of the system [1] and are also strongly connected, since cooperation can be handled by appropriate communication [3].
RL techniques typically applied to single-agent systems may be invalid in the multi-agent domain, since individual agents do not consider the actions of other agents [1] and [5]. Effective cooperation is hence necessary for successful MAL implementations. Communication is a viable means of attaining cooperation. Synthetic characters also require some degree of interaction for realistic simulations, thus communication is unavoidably necessary and important in most MAL systems. Minimizing communication overhead, however, is an important consideration for enhancing MAL computational efficiency.

1.2 Emotional Modeling and Socialization

Emotional models are used to cope with goal priority by providing motivations for actions. Emotional agents automatically adapt goal priority based on their goals. Optimal agent behavior can be enhanced by emotional models in scenarios where multiple goals must be satisfied [6]. Currently, few cognitive systems take into account the effects of emotion and its role in reasoning [2]. However, appraisal theory has been used by Marinier and Laird to enhance traditional reinforcement learning algorithms, indicating that emotion offers a more flexible goal selection routine in learning applications [7]. Although that research was restricted to a single-agent domain and a simplified task environment, it indicates that cognitive learning approaches may benefit from the use of emotion. Emotional models in multi-agent systems require special attention [6] and [7]. Motivation-based approaches typically imbue agent state data with an emotional association to guide action selection. This approach can be extended to multi-agent systems by requiring agents to satisfy a social component of their emotional model. The social component ensures that communication takes place so that agents can cooperate with one another. Breazeal uses temporally bound, goal-related drive processes to influence emotion [8]. To cope with interaction, however, a "social drive" was used, in which the agent becomes obliged either to interact or to cease interaction. A better approach is to allow the agent to be directly motivated by the intrinsic advantage of exchanging information, as
communication offers the opportunity to refine current knowledge. Communicating emotional states enhances the realism of interactions between agents, particularly within synthetic character simulations. Tomlinson and Blumberg present a framework for emotional human-agent socialization which influences agents to form relationships with other agents [9]. The framework is highly appropriate for synthetic character development, as the interactions are more realistic to the user. Emotion can be modeled using either the categorical approach, which uses fixed categories of emotion, or the dimensional approach. The dimensional approach represents emotion as a vector in a dimensioned space, which offers flexible permutations of emotion and is easier to model mathematically. Tomlinson and Blumberg use the dimensional approach to enhance learned state-action pairs, effectively enhancing agents' interactions with other synthetic characters [10]. Although this research supported social learning, emotion was not considered an integral cognitive component of learning and reasoning.

1.3 Cognition and Multi-Agent Learning

Motivational and cognition-based approaches are often applied to synthetic characters and robotics due to the demand of choosing between changing goals in the real world or a similarly dynamic virtual environment. Cognitive architectures offer increased realism and adaptability in situated agents and enhance the generality of their application. A cognitive architecture can be defined as a model with a structural definition of an intelligent agent's behaviour based on the mental processing of humans or animals [2]. Cognition refers to the synergised effects of faculties such as learning, motivation, emotion and reasoning. Hence, the representation, organisation, utilization and acquisition of knowledge are the focal points in cognitive architecture design and are typically based on a physiological model. However, these approaches can suffer from adaptive dynamics, scalability and problem decomposition problems when applied to multi-agent scenarios. Furthermore, emotion, as
discussed in Section 1.2, is often omitted from cognitive architectures. CAEDA proposes the use of artificial consciousness (AC), which attempts to transform percepts into subjective and contextualized components of information. It is assumed that this will give rise to metaphorical semantics and causal associations. One approach to AC is to augment sensory data as it traverses various layers of perceptual preprocessing [11]. An important distinction of successful models of consciousness is that they comprise interacting but specialized functional modules from which consciousness can emerge [12]. MALDAC, discussed in Section 3, autonomously integrates the distributed cognitive modules of different agents. Hence the cognitive architecture is essentially distributed across multiple, interacting agents for enhanced robustness.

1.4 Web Services and Agents

To provide a scalable communication infrastructure, the computational burdens of a MAS can be distributed over the Internet. Agents can both utilize web services and act as providers of web services to enhance the scalability of intelligent systems [13] to [15]. Through web services and Internet-wide infrastructures it may be possible to handle communication in MAS environments in a scalable manner [13] to [16]. Agent-based technologies also lend themselves well to grid technologies due to common goals such as robustness, service-orientation and automation [17]. Cloud computing is a novel paradigm which improves on grid computing and service-orientation by providing a scalable and virtualized communication infrastructure with dynamic resource allocation [18].

2 PURPOSE

The purpose of MALDAC is to address the cooperation of agents in MAL implementations through a structured, cognition-based communication approach. The CAEDA cognitive architecture is proposed to enable the deployment of MALDAC in complex, arbitrary task environments. Furthermore, the cloud computing paradigm is proposed to enhance the system's scalability. The motivational state data of agents are communicated to allow agents to interpret the
goals and intentions of other agents. MALDAC thereby supports cooperative learning behaviour by ensuring that goal selection is aligned across agents. The focus application domain for MALDAC is that of a synthetic character simulation, chosen to illustrate the functional advantage of emotional models for coping with the dynamics of MAL systems. The model also provides an enhanced understanding of cloud computing through the diversification of its applications.

2.1 Cognitive Models of Consciousness

MALDAC utilizes the adaptability of cognition to devise a scalable collaborative learning system with a flexible application domain. The flexibility of MALDAC is achieved by accommodating the adjustment of the number of simultaneous processes used by each agent while learning. The success of an artificial consciousness system can be evaluated through the continuous replacement of functional modules [12]. This measure of success makes the development of cognitive MAL systems well suited to the cloud computing paradigm, where system components can be imported from web services over the Internet. MALDAC exploits the opportunity of using on-demand functional modules, massively improving system scalability and adaptability.

2.2 Web Service and Communication

Bargelis et al. propose an intelligent interfacing module of process capability (IIMPC) to improve activity integration and mitigate the necessity of introducing new solution processes [19]. Knowledge integration provided improved support for modeling in virtual environments in the manufacturing domain. However, the system was constrained in terms of its application domain. There is hence a need to direct future research at generality of application. MALDAC is a multi-agent system, which is distributed by nature and hence must consider knowledge integration. To ensure generality of application, knowledge integration is supported by emotion and cognitive processes. As previously mentioned, agents should be directly motivated by the knowledge exchange that takes
place during interaction. To mitigate the cost of communication, agents should only communicate because they require information, not because they feel obliged to. MALDAC agents hence only send requests for interaction when (1) the current homeostasis functions remain unsatisfied and no action plan has yet been discovered and learned by the agent, or (2) the MALDAC web service agent was previously unable to provide adequate assistance to the agent. Communication with the web service will hence only take place when the various modules of cognition are inadequate for a given problem or when the agent believes that the web service agent would benefit from information the agent has discovered. The amount of functional attention the server gives to an agent depends on the relative urgency of that agent's needs or on the relevance of the data in relation to a currently unresolved problem.

3 DEVELOPMENT AND APPLICATION

The goal of MALDAC is to provide a scalable implementation for highly adaptive agents, particularly in partially observable, stochastic and dynamic environments. Section 3 discusses the architecture in detail and explains its experimental application to synthetic agents behaving rationally in a simulated 3D environment. The CAEDA Architecture is proposed as a cognitive architecture that allows for the flexible deployment of multi-agent learning services. CAEDA maintains three core assumptions. Firstly, that there is a relationship between motivation, emotion and learning. Secondly, that there is a functional advantage in a duality of conscious (deliberate) and unconscious (automatic) behavior and that this differentiation is continuous. Finally, that the interaction of long-term explicit (deliberate), long-term implicit (reflexive) and short-term (working) memory is vital to effective general-purpose learning.

3.1 Adaptive Emotional Models and States

To cope with adaptability, the architecture uses emotional models. Section 3.1 discusses the system's emotional model implementation. The dimensional approach to emotional models uses a vector in n-dimensional space to represent a multitude of emotional combinations. The well-
known Pleasure-Arousal-Dominance (PAD) model, used by Tomlinson and Blumberg in "AlphaWolf" [10], consists of a vector defined in a three-dimensional space with the axes 'pleasure', 'arousal' and 'dominance'. With these axes one can program approach-avoidance, fear-confidence and positive-negative valence responses to environmental stimuli. Goal selection is based on the selective balancing of emotional indicators, hence keeping the overall emotional state of the agent in homeostasis. The CAEDA agent adopts the PAD dimensional approach by using multiple regulatory emotional state vectors (see Fig. 1) and associating vectors with the CAEDA agent's knowledge elements. In Fig. 1, circles indicate emotional states of interest, whilst their size represents their relative intensity. The vector V will cause the strongest drive in the agent, since it is furthest from the homeostasis points and, in particular, from the dominant homeostasis point U.
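A minimal sketch of a PAD state representation and of selecting the state that produces the strongest drive; the Euclidean distance measure and all numerical values are assumptions for illustration, not taken from the paper:

```python
import math

def distance(a, b):
    """Euclidean distance between two PAD vectors (pleasure, arousal, dominance)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

homeostasis_points = {
    "U": (0.2, 0.0, 0.1),    # dominant homeostasis point (illustrative values)
    "W": (0.0, 0.3, 0.0),
}
emotional_states = {
    "V": (-0.8, 0.7, -0.5),  # far from U -> strongest drive
    "S": (0.1, 0.1, 0.1),    # close to homeostasis -> weak drive
}

# The state furthest from the dominant homeostasis point produces the strongest drive
strongest = max(emotional_states,
                key=lambda k: distance(emotional_states[k], homeostasis_points["U"]))
```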
Fig. 1. Conceptualized view of the PAD emotional model used by CAEDA

CAEDA adapts the concept of "drive functions" used by Breazeal [8]. Drive functions are regulatory controls that govern agent behavior. They represent low-level needs (goals) of the agent, such as the need for sustaining energy and completing its designed purpose. The CAEDA agent uses a drive function to regulate emotion, with drive values derived from a power-law function of the homeostatic deviation. The intensity with which the agent chooses to satisfy emotions that are out of homeostasis is based on the homeostasis drive functions. The value of homeostasis is determined by continuous analysis of the related drive functions and adjusted automatically. In a synthetic character, for example, the discomfort (pain) emotion is triggered by "hunger" percepts. The sight of food is associated with "hunger satisfying" percepts and therefore triggers a pleasure-anticipation (hopeful) emotion. Finally, an opponent near the food item may trigger a submission (hesitation) or dominance (aggression) emotion, depending on whether the opponent is perceived as stronger or weaker. A change in a drive function results in a mapped effect on the agent's emotions. This mapping is initially based on default settings by the designer, such as the emotion of discomfort being experienced during low energy levels or as the result of a potentially damaging collision with the agent's body. These mappings adapt as the result of learned emotional associations during the ACL processing discussed in Section 3.2. A submissive emotion may become associated with collisions as the result of experiencing typical pain, and an anticipation emotion when a source of energy is found. In this scenario, submissive behavior causes a fear of moving fast in difficult-to-navigate areas – not because speed is related to the pain caused by a collision itself, but because of the learned association that collisions at high speed yield greater pain. Therefore the walls are not avoided altogether, as they would be with ordinary pain association, but simply treated more cautiously as the result of the submission emotion. The activity of CAEDA agents is motivated by rectifying deviations from the homeostasis drive functions (see Fig. 2). The function that evaluates to the greatest drive value will take precedence when presented with associated stimuli. Drive is evaluated as:

f(x) = d · |d|^s, where d = x – h .  (1)
In Eq. (1), d is the deviation from homeostasis, and its absolute value |d| is raised to the exponent constant s, the stability factor, so that larger deviations increase the overall drive more than linearly. The stability factor determines the strictness with which the drive function is maintained. Lower values yield less drastic variations of drive at small homeostasis deviations, as seen in Fig. 3. Emotional stability can hence be maintained using restrictive homeostasis functions. This is
important as it prevents extreme emotional responses that are undesirable for the application domain. For example, a synthetic character that is only slightly hungry should only opportunistically pursue food. Only when very hungry should it aggressively seek to satisfy this need.
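A small sketch of the drive function of Eq. (1), exercised with the homeostasis and stability values quoted in Fig. 3; the function and variable names are illustrative:

```python
def drive(x, h, s):
    """Drive value for the current level x of a regulated quantity.

    h : homeostasis set point
    s : stability factor (larger s -> stricter maintenance of homeostasis)
    Implements Eq. (1): f(x) = d * |d|**s with d = x - h.
    """
    d = x - h
    return d * abs(d) ** s

# Values from Fig. 3: homeostasis h = -0.4, stability s = 1.5
print(drive(-0.3, h=-0.4, s=1.5))   # small deviation -> weak drive
print(drive(0.6, h=-0.4, s=1.5))    # large deviation -> strong drive

# Among several drive functions, the one with the greatest drive magnitude
# takes precedence when the associated stimuli are present.
needs = {"energy": drive(0.1, -0.4, 1.5), "temperature": drive(0.9, 0.0, 1.5)}
dominant = max(needs, key=lambda k: abs(needs[k]))
```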
Fig. 2. Homeostasis drive function of the CAEDA agent

Drive functions are defined to specify the agent's functional needs, which in turn affect the emotion and hence the goal selection of the agent. Agents develop dispositions to environmental cues based on the configuration of the homeostasis vectors, hence obtaining rational goal selection.
Fig. 3. CAEDA drive function with homeostasis h = -0.4 and stability s = 1.5

3.2 Cognition and Knowledge Representation

Section 3.2 discusses how percepts in CAEDA are filtered and transformed into learned knowledge with contextualized semantics, and how this knowledge is retrieved from the agent's memory. The percept filtering and transformation process is important, as it determines which percept sequences to transmit to other agents and why such data will contribute to effective cooperation in learning efforts.
Gielingh proposes that knowledge can be continually refined in an iterative cognitive cycle [20]. In the cycle, knowledge is developed through an impression of the environment, and thereafter a hierarchical action selection structure is developed as a solution. The system confirmed efficiency gains in continual task improvement; however, it was supported by human input. CAEDA requires that learning methods need no human input or parameterization. Hence, CAEDA has to independently determine which information is relevant. Most perceptual information sensed from the environment is irrelevant to the agent's current goals. Determining which sequences of percepts are of interest to the agent requires placing percept data in context with what the agent already knows – a process which will be called contextualization. This allows noise to be ignored and associations between novel percepts and recognized percepts to be built. The approach to knowledge representation used by the CAEDA agent is inspired by the principle of "reconstructive evidence" used by forensic investigators. The intention is to relate data to the point of being able to identify causality. The types of reconstructive evidence the system should consider include:
- temporal data, to limit the life-span of percepts and place events in sequence relative to one another;
- relational data, to associate percepts and hence build percept sequences;
- functional data, to identify the percepts' relation to the agent's goals and intentions.
Contextualization takes place by traversing percepts through various layers of cognitive processing, adding and modifying temporal, relational and functional data (see Fig. 4). Nodes are classified based on their priority and are terminated, acted upon or undergo AIM processing. AIM involves (a) adding temporal data, (b) determining relevance to drive functions and (c) creating emotional associations. Drive function values and emotional state vectors are hence updated in AIM. AIM also updates node priority values. Nodes of intermediate priority that neither result in action nor in termination are subject to communication. Artificial consciousness approaches often use a layered approach to perceptual processing [11], where different layers deal with either
reflexive or more deliberate perceptual tasks. The shortfall of the layering technique is the vagueness in determining which percepts are addressed by which functional module. Langley et al. propose that new research in cognitive architectures should allow knowledge utilization to change dynamically between deliberate and reflexive modes based on the situation [2]. The ACL model hence follows this suggestion, as it may better support learning in dynamic environments. In the ACL approach, percepts traverse between levels of consciousness continuously, based on the subjective experiences of the agent (see Fig. 5). Percepts are contextualized based on consciousness rating and priority. Nodes such as D(t) are terminated due to low priority relative to time. Learning can occur on varying levels to address learned reflexive responses or deliberate behavior.
Fig. 4. Model of the CAEDA agent with the ACL and AIM processing methods

In ACL, input percepts result in the creation of perceptual elements, called nodes. Nodes consist of a priority and a consciousness rating, an emotional tag and links to other nodes. Links between nodes have a strength value and an elapsed-time value. Priority increases when emotional imbalances are triggered. Nodes with high priority are more likely to be acted on and are committed to memory. Low-priority nodes
decay over time and are eventually terminated. Consciousness rating is increased over time and determines the amount of contextualization a node is subjected to, which in turn allows longer action sequences to be developed.
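A minimal sketch of what an ACL perceptual node and its decay/contextualization step might look like; the class, fields and decay constants are assumptions for illustration, not the authors' data structures:

```python
from dataclasses import dataclass, field

@dataclass(eq=False)
class Node:
    """Perceptual element created by ACL from an input percept."""
    percept: str
    priority: float = 0.5          # increased when emotional imbalances are triggered
    consciousness: float = 0.0     # grows over time; governs amount of contextualization
    emotion_tag: tuple = (0.0, 0.0, 0.0)   # PAD vector associated with the percept
    links: list = field(default_factory=list)   # (linked node, strength, elapsed time)

    def tick(self, dt, decay=0.05, growth=0.02):
        """One processing step: priority decays, consciousness rating grows.

        Returns False once the node's priority has decayed away, i.e. the node
        is terminated rather than committed to memory.
        """
        self.priority = max(0.0, self.priority - decay * dt)
        self.consciousness += growth * dt
        return self.priority > 0.0
```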
Fig. 5. Percepts subjected to the ACL model

The consciousness rating hence affects the level (reflexive or deliberate) of cognitive attention. Unconscious perceptual nodes result from sensory input and form low-level habituated responses which guide reflexive behavior. Reflexive actions are triggered first because reflexive behavior is subjected to the earliest processing. Nodes with emotional values that are strongly out of homeostasis with respect to the agent's current emotional state are shifted closer to the conscious layer. Actions associated with nodes are initially random but are refined when action sequences are determined to have higher utility in solving the imbalance of the homeostasis functions. With more deliberate actions, sequences of reflexive behaviors can be chained. The chaining process allows action sequences to be learned. ACL also maintains a separation of long-term memory from short-term memory. The continued contextualization of percepts increases their lifespan and thereby commits them to long-term memory. The consciousness rating determines whether long-term memories are explicit or implicit, allowing for the differentiation of reflexive or deliberate learned action sequences.

3.3 Motive for Communication

As previously stated, emotional models are typically applied to agent-based systems to enhance goal selection. Communication acts are also viable goals that allow agents to cooperate with others [3]. However, in order to minimize the communication burden on the system, the MAS
system designer must determine what constitutes an appropriate communication act. It is possible to use a social drive by which agents are obliged to communicate periodically [8]. It is also possible to implement procedures that force agents to communicate when knowledge is relevant to themselves or others [3]. In the case of learning systems, however, knowledge exchange itself should be a significant goal of the agent. Learning is a more difficult goal to realize because the reward is intangible. To realize this objective, emotional state data are directly involved in communication acts in MALDAC. Contextualized percepts with high consciousness and high priority ratings are transferred to other agents (indicated as "feedback" in Fig. 4). They contain motivational data through their emotional tags and are sent as groups of linked percepts, which allows causal relations to be built by a receiving agent. The received percepts are processed by the recipient using the ACLAIM method and are therefore prioritized and memorized in the usual way. Following processing, the receiving agent will "empathize" with the other agent, understanding its emotions and intentions and hence acting with the other agents' motives in mind. Agents manage conflicting goals through an evaluation of relative purpose. Furthermore, evidence for need satisfiers can be shared between agents. For example, assuming that two agents need to cross the same single-lane bridge, the agent with the lowest emotional imbalance will yield.

3.4 Cloud-Based Agent

To enhance scalability, MALDAC uses a cloud-based service agent through which CAEDA agents can leave behind information for others to access, with the web service agent acting as a middleman. This indirect communication is less of a burden to multi-agent systems and promotes scalability [1]. Furthermore, web services allow agents to access and integrate diverse aspects of cognitive function, allowing the agents themselves to adapt to specialized tasks. Agents with differing functional requirements can be augmented by subscribing them to web services with the appropriate cognitive modules. Figs. 6 and 7 illustrate MALDAC with a web service agent communicating with a CAEDA agent. The web service agent searches its
knowledge repository and returns nodes pertaining to the problem. The response nodes may contain a solution action sequence or simply additional contextualization data that the querying agent might be able to utilize to develop a solution.
Fig. 6. Client-side MALDAC architecture
Fig. 7. Server-side MALDAC architecture

An interesting feature of MALDAC is the communication of the motivational states of agents in order to align agent goal selection, since goal selection is based on the agent's own emotional state as well as being influenced by other agents' emotional states. Any agents connected to MALDAC will inadvertently synchronize their goal selection to support team members and take action when expected to by the team. Communication is hence mitigated, with only data relevant to agent goal selection being transmitted. This optimization is a result of ACLAIM, whereby reflexive knowledge is left to individual agents while deliberate knowledge is a candidate for communication. The MALDAC-based web service agent stores the knowledge gathered by other agents to build a shared repository of knowledge. This repository serves to collect and provide successful, learned behaviors in an on-demand manner to CAEDA agents and allows continuous improvement of the learned patterns of behavior.
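A sketch of the client-side decision of when to contact the cloud-hosted web service agent, following the two conditions stated in Section 2.2; the function name, threshold and request format are hypothetical:

```python
def should_query_service(drives, has_action_plan, last_service_reply_adequate):
    """Contact the web service agent only when local cognition is insufficient.

    drives: mapping of drive name -> current drive value (0 ~ in homeostasis).
    Condition (1): some homeostasis function is unsatisfied and no plan is known.
    Condition (2): the service agent previously failed to provide adequate assistance.
    """
    unsatisfied = any(abs(v) > 0.1 for v in drives.values())   # illustrative threshold
    return (unsatisfied and not has_action_plan) or not last_service_reply_adequate

# Hypothetical exchange: send high-priority, high-consciousness nodes and receive
# candidate solution nodes from the shared repository.
if should_query_service({"energy": 0.7}, has_action_plan=False,
                        last_service_reply_adequate=True):
    request = {"problem_nodes": ["low energy", "charger location unknown"]}
    # response = cloud_agent.query(request)   # placeholder for the actual web service call
```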
4 RESULTS AND DISCUSSION

The HIVE simulator has been developed as a complex virtual environment for CAEDA agents and as a test bed for MALDAC. Robotic entities are presented with a maze, randomly generated using a recursive backtracking algorithm (see Fig. 8). Goals are placed throughout the maze. Agents must collaboratively learn the locations and times at which goals are accessible and learn a compromise on the allocation of the limited resources. The simulator is designed to recreate the ever-changing needs and daily demands of synthetic characters.
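For reference, a compact maze generator of the recursive-backtracking kind mentioned above (implemented here with an explicit stack); the grid representation is an assumption, since the paper does not detail the HIVE implementation:

```python
import random

def generate_maze(width, height, seed=None):
    """Recursive-backtracking (depth-first) maze generation on a cell grid.

    Returns a dict mapping each cell (x, y) to the set of neighbouring cells
    it is connected to (i.e. the walls between them have been carved away).
    """
    rng = random.Random(seed)
    passages = {(x, y): set() for x in range(width) for y in range(height)}
    visited, stack = {(0, 0)}, [(0, 0)]
    while stack:
        x, y = stack[-1]
        neighbours = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if (x + dx, y + dy) in passages and (x + dx, y + dy) not in visited]
        if neighbours:
            nxt = rng.choice(neighbours)
            passages[(x, y)].add(nxt)      # carve a passage both ways
            passages[nxt].add((x, y))
            visited.add(nxt)
            stack.append(nxt)              # go deeper
        else:
            stack.pop()                    # dead end: backtrack
    return passages

maze = generate_maze(10, 10, seed=42)
```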
Fig. 8. Exploration robots in the HIVE simulator

To simulate extreme dynamics in the environment, a weather system that presents threats to the agents has been developed. The input of a human controller contributes additional dynamics to the scenario. Agents must achieve their own defined purpose by keeping their subsystems in homeostasis. The multiple and dynamic goals include collision avoidance; exploration and sample gathering tasks; stored energy conservation and solar recharging; and internal temperature regulation. The CAEDA API has been developed and controls the robotic entities. The simulator has been integrated with a MALDAC instance hosted on a private computing cloud, with a specialized CAEDA agent acting as the web service agent. Enhanced agent activity is achieved via interactions with the web service agent using ACLAIM. Heterogeneity is supported, as agents need only maintain the learned knowledge that directly pertains to their own behavior. The system implementation is shown to be robust, as
agents maintain their own copy of the learned knowledge. A modification of the web service agent immediately affects and benefits the client implementation, without the client software needing changes, thanks to the standardized communication mechanisms. MALDAC improves on alternative cognitive architectures by integrating support for cooperative multi-agent learning. Furthermore, the scalability often lacking in cognitive architectures has been carefully considered in MALDAC. Finally, MALDAC employs emotion as an integral mechanism of rationality and learning, which is often neglected in cognitive architectures [2]. However, human interaction in MALDAC currently extends only to environmental manipulation. Human interaction could be introduced to support the training of agents. A fundamental part of the client agent is choosing when to access the web service. Should multiple web services that support this architecture be available to the agent, it would be advantageous for the agent to be capable of learning the reliability of the various web services for solving the specific types of problems that it encounters. A more standardized communication scheme would also benefit the deployability of MALDAC. A further investigation of the agent's cognitive module selection process is also needed.

5 CONCLUSIONS

MALDAC provides a scalable solution for a cognitive multi-agent learning architecture, suitable for partially observable and dynamic environments. MALDAC offers promise for a scalable cognitive MAL system, in which trained modules of cognitive processing could be distributed across geographically separate systems. Furthermore, MALDAC advances research on cognitive architectures by integrating the robustness of multi-agent learning and the adaptability of emotional learning.

6 ACKNOWLEDGMENTS

The financial assistance of the National Research Foundation (NRF) of South Africa towards this research is hereby acknowledged. Opinions expressed and conclusions arrived at,
are those of the authors and are not necessarily to be attributed to the NRF.

7 REFERENCES

[1] Panait, L., Luke, S. (2005). Cooperative multi-agent learning: The state of the art. Autonomous Agents and Multi-Agent Systems, vol. 11, p. 387-434.
[2] Langley, P., Laird, J.E., Rogers, S. (2009). Cognitive architectures: Research issues and challenges. Cognitive Systems Research, vol. 10, no. 2, p. 141-160.
[3] Vlassis, N. (2007). A concise introduction to multiagent systems and distributed artificial intelligence. Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 1, p. 1-71.
[4] Sycara, K.P. (1998). Multiagent systems. AI Magazine, vol. 19, p. 79-92.
[5] Shoham, Y., Powers, R., Grenager, T. (2004). Multi-agent reinforcement learning: a critical survey. AAAI Fall Symposium on Artificial Multi-Agent Learning Conference Proceedings, p. 89-95.
[6] Maria, K.A., Zitar, A.Z. (2007). Emotional agents: A modeling and an application. Information and Software Technology, vol. 49, p. 695-716.
[7] Marinier, R.P., Laird, J.E. (2008). Emotion-driven reinforcement learning. Proceedings of the 30th Annual Conference of the Cognitive Science Society, Cognitive Science Society, Austin, p. 115-120.
[8] Breazeal, C. (2003). Emotion and sociable humanoid robots. International Journal of Human-Computer Studies, vol. 59, p. 119-155.
[9] Tomlinson, B., Blumberg, B. (2002). Social synthetic characters. Focus: Education, vol. 36, p. 5.
[10] Tomlinson, B., Blumberg, B. (2003). Alpha Wolf: Social learning, emotion and development in autonomous virtual agents. Lecture Notes in Computer Science, p. 35-45.
[11] Arrabales, R., Ledezma, A., Sanchis, A. (2009). CERA-CRANIUM: A test bed for machine consciousness research. International Workshop on Machine Consciousness, Hong Kong, p. 1-20.
[12] Arrabales Moreno, R., Sanchis de Miguel, A. (2008). Applying machine consciousness models in autonomous situated agents. Pattern Recognition Letters, vol. 29, p. 1033-1038.
[13] Richards, D., Sabou, M., van Splunter, S., Brazier, F.M.T. (2003). Artificial intelligence: A promised land for web services. Proceedings of the 8th Australian and New Zealand Intelligent Information Systems Conference (ANZIIS), p. 10-12.
[14] Wijngaards, N.J.E., Overeinder, B.J., Van Steen, M., Brazier, F.M.T. (2002). Supporting internet-scale multi-agent systems. Data & Knowledge Engineering, vol. 41, p. 229-245.
[15] Gaspari, M., Dragoni, N., Guidi, D. (2004). Intelligent web servers as agents, from: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1.1642&rep=rep1&type=pdf, accessed on 2009-08-08.
[16] Ermolayev, V., Keberle, N., Plaksin, S., Kononenko, O. (2004). Towards a framework for agent-enabled semantic web service composition. International Journal of Web Services Research, vol. 1, p. 63-87.
[17] Foster, I., Jennings, N.R., Kesselman, C. (2004). Brain meets brawn: Why grid and agents need each other. Proceedings of the 3rd International Joint Conference on Autonomous Agents and Multi-Agent Systems, p. 8-15.
[18] Vaquero, L.M., Rodero-Merino, L., Caceres, J., Lindner, M. (2009). A break in the clouds: towards a cloud definition. SIGCOMM Comput. Commun. Rev., vol. 39, p. 50-55.
[19] Bargelis, A., Kuosmanen, P., Stasiskis, A. (2009). Intelligent interfacing module of process capability among product and process development systems in virtual environment. Strojniški vestnik - Journal of Mechanical Engineering, vol. 55, no. 9, p. 561-569.
[20] Gielingh, W. (2008). Cognitive product development: a method for continuous improvement of products and processes. Strojniški vestnik - Journal of Mechanical Engineering, vol. 54, no. 6, p. 385-397.
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, 728-743 UDC 658.512.2:004.896:004.94
Paper received: 27.04.2010 Paper accepted: 22.10.2010
Computer Aided Design and Finite Element Simulation Consistency
Okba Hamri 1,* - Jean Claude Léon 2 - Franca Giannini 3 - Bianca Falcidieno 3
1 Department of Mechanical Engineering, University of Bejaia, Algeria
2 Laboratoire G-Scop, INP-Grenoble, France
3 Institute for Applied Mathematics and Information Technologies, Italy

Computer Aided Design (CAD) and Computer Aided Engineering (CAE) are two significantly different disciplines, and hence they require different shape model representations. As a result, models generated by CAD systems are often unsuitable for Finite Element Analysis (FEA) needs. In this paper, a new approach is proposed to reduce the gaps between CAD and CAE software. It is based on a new shape representation called the mixed shape representation. The latter simultaneously supports the B-Rep (manifold and non-manifold) and polyhedral representations, and creates a robust link between the CAD model (B-Rep NURBS) and the polyhedral model. Both representations are maintained on the same topology support, called the High Level Topology (HLT), which represents a common requirement for simulation model preparation. An innovative approach for finite element simulation model preparation based on the mixed representation is presented in this paper, and a set of necessary tools is associated to the mixed shape representation. They help to reduce the time of the model preparation process as much as possible and to maintain the consistency between the CAD and simulation models. ©2010 Journal of Mechanical Engineering. All rights reserved.
Keywords: CAD-CAE consistency, FE simulation model preparation, mixed shape representation, simplification feature, High Level Topology

0 INTRODUCTION

One of the main problems found in the passage from CAD to CAE is the lack of intersecting application space between these two categories of applications [1] to [3]. CAD models are typically generated to create a product shape satisfying functional requirements, without prior knowledge of their effects on downstream CAE applications like FE mesh generation. This situation originates from the fact that, most frequently, simulation software is not integrated with the CAD software environment. Hence, a model exchange is necessary, and the engineers in this process have different skills: design and engineering if they use CAD, and FE simulation otherwise. To generate a FE model, the CAD geometry has to be adapted and often simplified to suit the hypotheses of the needed mechanical model. This task cannot be performed solely on the basis of geometric data, but also requires engineering expertise to supply the necessary additional information, such as Boundary Conditions (BCs). Therefore, a direct automatic transition from a CAD model to a FE model is not feasible [4].
Moreover, due to time pressure and newly available technologies, the various activities are no longer carried out sequentially; instead, the so-called concurrent engineering approach is more and more widely adopted. The sequential approach is subjected to back and forth cycles between CAD and FEA, thus requiring a longer time for product development, whereas the concurrent engineering paradigm assumes that, when possible, the activities are carried out in parallel to provide results evaluation as early as possible. Therefore, a simulation analysis might be carried out at different design stages, bringing new consistency issues as described below. At a given stage of the design process (time T0), some preliminary analysis can be executed (see Fig. 1). At this time, mechanical hypotheses and simulation objectives are inserted to generate a domain of study compatible with the simulation requirements (see Fig. 1). As a result, the model M, called the case of study, is obtained and enriched with BCs.
Corr. Author's Address: Department of Mechanical Engineering, University of Bejaia, Route de Targua Ouzemour, 06000 Bejaia Algeria, okba_enp@yahoo.fr
From M, a FE mesh M' is derived to form the basis of the structural analysis. T1 reflects the duration of the overall analysis process. Finally, at time (T0 + T1), the analysis results AM are obtained from mesh M' enriched with the proper mechanical parameters. Such a workflow illustrates the standard operations required to perform an analysis. The results AM are mandatory to validate this analysis process. As a consequence, a new analysis at T2 can be initiated only after this process has been validated (T2 > (T0 + T1)). Now, if a modification of the initial CAD model has taken place to fit new product requirements or to derive a new version of the component, the case of study has to be updated (M'', at T2). Here, the question is raised whether mesh M', derived from the model M, is still acceptable to model the behavior of the structure of M'', or if a new mesh M''' needs to be produced to perform a new analysis. If mesh M' can be derived from M'', then the modification performed on the initial CAD model M has no impact on the analysis results. Otherwise, a new mesh M''' is required to perform a new analysis. Hence, the consistency between design and simulation views is achieved through models M, M', M'', M''' if their coherence is preserved during the product development process, i.e. M' is attached to M'' for the first simulation objectives or M''' is attached to M'' for the second analysis objectives. Such a configuration of consistency is
called a one-way consistency between simulation and CAD models. As depicted through the above configuration, the consistency between CAD and simulation models strongly depends on the adaptation process, i.e. the process which ranges from the generation of the case of study (models M and M'') to the FE mesh (models M' and M'''). In structural analyses, several mechanical models may be necessary to evaluate the behavior of components. These models are based on different meshes corresponding to different datasets according to the analysis objectives, e.g. structural analysis and thermal analysis. Currently, the models involved cannot be maintained coherent because there is no strong model link between the case of study models and the FE meshes. Establishing such a link also triggers the consistency between the design and simulation views [5]. This paper attempts:
• To provide flexibility in combining different shapes with the same model representation, called High Level Topology (HLT), for FEA model preparation purposes (see section 2.2).
• To develop a new methodology and tools which enable the analyst to selectively choose and extract the desired geometric entities from several sources of input shapes (CAD models, form feature models, pre-existing meshes, ...) for the purpose of creating the FEA model (see section 2.3).
Fig. 1. A consistency configuration: shape modification of a CAD model
• To address the above problems and to bridge the gap between CAD and CAE models, but also to maintain consistency between all the input models (see section 2.1).
• To reduce the complexity of FEA model preparation; to this end, the concept of simplification features will be proposed as a relevant concept to reduce the time of the adaptation process (see section 2.3).

1 RELATED WORKS
Among the various research areas covering CAD-CAE consistency, essentially two address some of the aspects related to product views:
1. The form feature approaches are based mainly on the feature information identified on the B-Rep model or attached to it during its generation. Using this type of information, the resulting object description is composed of a set of form features, which can be simplified directly on the CAD model, according to the simulation objectives. Authors assume that the initial B-Rep model is consistent [6] and [7], which is true when considering an integrated feature-based software environment. One such simple configuration is exemplified with industrial CAD software using a construction tree with feature-like primitives. However, this hypothesis is not valid in the industrial context when the form feature model needs to be transferred from a CAD system to a FEA environment; the graph of the form features is lost when a standard file format is used. In addition, identifying or attaching form features incorporating free-form surfaces and exchanging feature information among different software through a standard format is still a strong limitation of these approaches. Form feature approaches also suffer from limitations because they rely solely on geometric information, whereas simulation information is mandatory to characterize details.
2. The polyhedral approach [8] and [9] can be applied to tessellated models, digitized models or even pre-existing FE meshes, since a triangulation can always be obtained from these models. Thus, the polyhedral model can be considered as an intermediate model between the CAD model and FE mesh generation. When this type of approach focuses on the direct use of the triangulation obtained from the digitizing phase,
it avoids the construction of a CAD (NURBS) model from the digitized points and can provide directly either the FE model required for the analysis or the adapted geometry required to generate the FE mesh. This type of approach requires some healing or conformity set up processes [10], prior to the simplification process. These processes are complex and difficult to perform robustly because the variety of defects encountered is not a finite number of patterns and the target topology of the model after healing, i.e. the topology of the initial NURBS CAD model if the input triangulation comes from such a model, is not known explicitly, because no direct link exists between the initial CAD (NURBS) model and its tessellated model. Obviously, such topology information could be very useful to make the conformity set up process robust and to monitor the shape changes during the simplification process, and also to check the consistency between the initial CAD model and its polyhedral representation. In this approach the link between the CAD (B-Rep NURBS) model and its polyhedral representation is not maintained. Only the geometric information obtained from a face-by-face tessellated B-Rep model is available and the initial topology of the CAD model is not explicitly defined on the resulting polyhedral model. In our application scenario, we aim at dealing with various types of input data, such as CAD models, scanned objects or pre-existing FE meshes, therefore the polyhedral model seems to be the common representation for the preparation phase of FE models, as opposed to the use of a CAD model [7]. In order to take advantage of all the available information, especially from CAD geometry and form feature models, it is necessary to bring new solutions enabling the management of the various possible input models to increase the efficiency of an analysis model preparation process for structural analysis. As depicted through the previous analysis, the data exchange standard plays an important role in the process of FEA model preparation in terms of model conformity. The above analysis also demonstrates that the extraction of some geometric parameters and other CAD model attributes is critical in obtaining an efficient model transfer for FEA preparation. The objective of the proposed approach is to preserve the advantages of the existing approaches while extending the
efficiency of polyhedral models to exploit the richness of CAD models where it can be useful for the model preparation. From the CAE point of view, few works have addressed the problem of maintaining the information between a FE model and the CAD model representing the shape of a simulation model [11] and [12], mainly by using attribute structures.

2 OVERVIEW OF THE PROPOSED APPROACH

2.1 Mixed Shape Representation for CAD-CAE Consistency

For the above stated reasons, an automatic conversion of the CAD model into an analysis one is not possible in general. Most of the time, such a conversion first requires the selection of a sub-domain of the object on which the analysis can provide results comparable with those obtained on the whole object, which is normally designated as the case of study definition. To avoid the editing of complex CAD models and to ease the integration of the preparation process into a wide variety of design configurations and input data, the approach proposed here is based on two simultaneous representations. The first one is the B-Rep NURBS model imported from STEP files, and the second is the polyhedral model generated by the proposed tessellation process. The main objective of such a mixed representation is to reduce the gap between the CAD and simulation fields, and to give our approach more robustness by exploiting the advantages of both representations. Our proposed approach considers the CAD (B-Rep NURBS) model as the reference model (or master model) in the CAD environment. This means that the B-Rep NURBS model is the most faithful representation of the component "as manufactured" (see Fig. 2). In the CAE software environment, this representation becomes a slave of the mixed representation defined by the HLT and polyhedral representations. This change is justified by the fact that all the simplification operators are performed on the polyhedral model, which provides a robust and flexible representation because the faces are exactly adjacent to each other and general shape changes may be performed.
However, this polyhedral representation is not sufficient to address high level operations. Therefore, it requires a HLT and geometry support to reflect the initial topology of the B-Rep NURBS model and to ensure the efficiency of the detail removal operators. Our idea is to maintain both representations simultaneously in order to take advantage of each one at each step of the FE model preparation process. For these reasons, it is mandatory to maintain the link between both representations during the FE simulation model preparation, in order to evaluate the impact of any modification performed on the CAD (B-Rep) model on the FE simulation model. Indeed, it is during the FE model preparation process, where shape transformations are operated, that there is a need to refer simultaneously to both representations in order to be able to propagate information across these models and hence preserve the "link" across them.
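As an illustration only, the following Python sketch shows one possible way to hold the two representations and the HLT-level link between them, including the master/slave switch described above. All class and attribute names are assumptions made for the example, not the authors' data structures.

```python
from dataclasses import dataclass, field

@dataclass
class BRepNurbsModel:
    """Placeholder for the CAD (B-Rep NURBS) model imported from STEP."""
    faces: list = field(default_factory=list)
    edges: list = field(default_factory=list)

@dataclass
class PolyhedralModel:
    """Placeholder for the conform polyhedron produced by the tessellation."""
    vertices: list = field(default_factory=list)
    triangles: list = field(default_factory=list)

@dataclass
class MixedRepresentation:
    """Both shape descriptions plus the HLT-level link that ties them together."""
    brep: BRepNurbsModel
    polyhedron: PolyhedralModel
    hlt_links: dict = field(default_factory=dict)   # HLT entity id -> linked entities
    master: str = "brep"                            # the B-Rep is the master in the CAD environment

    def enter_cae_environment(self):
        # once preparation starts, the mixed representation becomes the reference (master)
        self.master = "mixed"

    def link(self, hlt_id, brep_entities, poly_entities):
        # record which B-Rep entities and which polyhedron entities realize one HLT entity
        self.hlt_links[hlt_id] = {"brep": list(brep_entities), "poly": list(poly_entities)}

rep = MixedRepresentation(BRepNurbsModel(), PolyhedralModel())
rep.link(hlt_id=1, brep_entities=["face_12"], poly_entities=[(0, 1, 2), (0, 2, 3)])
rep.enter_cae_environment()
print(rep.master, rep.hlt_links)
```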
Fig. 2. Change of reference model during the FE simulation model preparation (CAD environment: B-Rep NURBS reference model as master, polyhedron as visualisation slave; CAE environment: mixed representation, i.e. HLT plus conform polyhedron, as master, B-Rep NURBS as slave)

2.2 The High Level Topology

The concept of High Level Topology (HLT) aims at efficiently supporting all the processes and the models involved in the FE simulation model preparation. The HLT concept
can be applied to handle either manifold or non-manifold objects. Therefore, it can be used to represent all the required shapes, i.e. the product shape (described as a B-Rep or a feature-based model), the simplified one for simulation purposes (described as a polyhedron, possibly with idealized areas), and the shape of the BCs as well as other key concepts for the FE preparation process. The overall structure of the HLT data structures and NURBS geometry is also described to highlight their relations. The new topological representation should satisfy the following requirements [13]:
• to support non-manifold representations for the adaptation and the idealization processes,
• to describe the object's topology at the level required by the user, to provide him or her with the concepts needed to apply some FE preparation operators,
• to support representations of mechanical attributes (BCs, materials, ...) providing an explicit description that is intrinsic to the corresponding concepts,
• to contribute to the polyhedron conformity set up process,
• to describe the geometry and topology required to specify the FE meshing constraints,
• to be able to describe the topology and geometry of the initial component, in case of an input model coming from CAD systems,
• to be independent of any geometric modeler and to be able to be linked to any CAD/CAE software without modifying the existing tools for shape representations.
Non-manifold models are constructed using the same basic topological elements as a traditional B-Rep, i.e. faces, edges and vertices, whereas the connection elements expressing the adjacencies among them have their data structures modified to deal with more generalized configurations. An example of a non-manifold B-Rep is given in [14], where the entity-use has been added to indicate the occurrence of an entity in a higher dimensional element. Thus, a “face-use” element denotes the appearance of a face in an object. Similarly, an “edge-use” denotes the appearance of an edge in a loop of edges around a face. Therefore, an edge may have any number of “edge-uses”. Fig. 3 summarizes these main constitutive elements of the mixed representation. The
purpose here is not to describe in detail the corresponding data structures needed to achieve an explicit topological description of the information attached to a shape, for lack of space [15]. Directly linking the topological entities of a polyhedral model to HLT entities is not possible because, in general, there is no match between them. To achieve the desired link, ‘polyedges’ and ‘partitions’ have been introduced. Polyedges are defined as a set of connected edges of a polyhedron forming a manifold of dimension 1, i.e. the geometric description of a polyedge is a polygon discretizing a curve. Similarly, partitions are defined as a set of connected faces of a polyhedron forming a manifold of dimension 2, i.e. the geometric description of a partition is a polyhedron, either closed or open, discretizing a surface. Fig. 4b illustrates these concepts on a simple component. This structure is one of the main advantages of the mixed representation for propagating and transforming the shape and the ‘semantic’ information attached to it, as depicted in Fig. 5.
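A minimal sketch of the two grouping entities just defined, assuming a triangulated polyhedron indexed by vertex numbers; the class names mirror the terminology of the paper, but the fields and the small example are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Edge = Tuple[int, int]            # pair of polyhedron vertex indices
Triangle = Tuple[int, int, int]   # polyhedron face as a vertex-index triple

@dataclass
class Polyedge:
    """Connected chain of polyhedron edges forming a 1-manifold,
    i.e. the discretization of one HLT-Edge (a curve)."""
    hlt_edge_id: int
    edges: List[Edge] = field(default_factory=list)

    def vertices(self) -> List[int]:
        # ordered vertex indices along the polyline (assumes edges are stored in order)
        return [self.edges[0][0]] + [b for _, b in self.edges]

@dataclass
class Partition:
    """Connected set of polyhedron faces forming a 2-manifold,
    i.e. the discretization of one HLT-Face (a surface)."""
    hlt_face_id: int
    triangles: List[Triangle] = field(default_factory=list)
    boundary: List[Polyedge] = field(default_factory=list)

# a partition bounded by two polyedges, in the spirit of Fig. 4b
pe1 = Polyedge(hlt_edge_id=1, edges=[(0, 1), (1, 2)])
pe2 = Polyedge(hlt_edge_id=2, edges=[(3, 4), (4, 5)])
part = Partition(hlt_face_id=10,
                 triangles=[(0, 1, 4), (0, 4, 3), (1, 2, 5), (1, 5, 4)],
                 boundary=[pe1, pe2])
print(pe1.vertices(), len(part.triangles))
```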
Fig. 3. Main constitutive elements of the mixed representation and corresponding relations: the polyhedron topology is linked through polyedges and partitions to the HLT entities, which are themselves linked to the B-Rep NURBS topology (courtesy EADS CRC)

As illustrated below, with the operators acting on this data structure, the links between all these topological entities are dynamically updated during the shape transformation processes. The concept of mixed representation implies that B-Rep NURBS input models must be converted to a facetted representation. At the level of the tessellation process, both types of models must have the same topological invariants, i.e. the same Euler characteristic. Details about this phase can be found in [16]. The key feature of this tessellation operation is its ability to
preserve the largest possible amount of information available in a B-Rep NURBS model, considering that shape exchange among product views is an important aspect of the product development process where different software is used. Based on tests with major industrial software, i.e. CATIA from Dassault Systèmes, Pro/E from PTC, IDEAS from SDRC, etc., the STEP standard appeared to be the most robust and efficient standard for the transfer of shapes. As highlighted in previous works [16], this standard enables transferring the topology of a B-Rep NURBS model. Its geometry is based on NURBS curves and patches, together with the geometric parameters of analytic surfaces for the NURBS patches describing this category of surfaces. This property contributes actively to the propagation of key information about shapes throughout the product development process.
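As noted above, the tessellation must keep the topological invariants of the B-Rep model it comes from. A minimal check of the Euler characteristic for a single closed, orientable triangulated shell could look like the sketch below; the functions are illustrative and not part of the described tool chain.

```python
def euler_characteristic(triangles):
    """V - E + F for a triangulated surface; triangles are vertex-index triples."""
    vertices = {v for tri in triangles for v in tri}
    edges = set()
    for a, b, c in triangles:
        edges.update({frozenset(p) for p in ((a, b), (b, c), (c, a))})
    return len(vertices) - len(edges) + len(triangles)

def same_topology(brep_euler, triangles):
    """Crude consistency check: the tessellation must keep the Euler characteristic
    of the B-Rep shell (for a closed orientable surface, chi = 2 - 2 * genus)."""
    return euler_characteristic(triangles) == brep_euler

# closed tetrahedron: chi = 4 - 6 + 4 = 2 (genus 0), as for the B-Rep of a simple solid
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(euler_characteristic(tetra), same_topology(2, tetra))
```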
As a complementary point, the STEP standard incorporates another important feature from the robustness point of view when exchanging shapes between CAD and other software used in downstream product views. Since the STEP standard describes trimmed NURBS patches with the 3D intersection curves between adjacent patches, in addition to the corresponding trimming curves in the parametric space of each patch, this 3D description can be used to robustly generate a conform polyhedron. Indeed, these curves can be discretized and then effectively serve as common boundary between the polyhedrons describing the NURBS patches. Processing patches that way is, in fact, modeller tolerance free, because this discretization scheme does not take any modeller tolerance into account as it is necessary during intersection computations or other geometry processing taking place during NURBS model generation.

Fig. 4. Illustration of polyedges, partitions, B-Rep NURBS edges and faces attached to HLT entities; a) B-Rep NURBS representation of the component with edges in green and faces in light blue, b) polyhedral representation of the component with polyedges (dotted red/black polylines) and partitions (a different color per partition)

Fig. 5. Advantages of the mixed representation: HLT data structure plus NURBS and polyhedron representations (form feature identification, e.g. holes, fillets and corners, is based on the HLT representation, while form feature removal, e.g. hole removal, is based on the polyhedral representation)
The concept of HLT has been introduced here to meet the requirements identified in the previous sections for explicitly representing the semantic information attached to a shape. In addition, the objective of the HLT is also to enable the intrinsic representation of some semantic information attached to a shape. To illustrate these two complementary concepts, the following example can be considered.
Fig. 6. Illustration of HLT entities on a component; a) initial component represented as a polyhedron, b) polyedges and partitions reflecting the B-Rep NURBS decomposition produced by a CAD modeller

Fig. 6 depicts a component after the generation of its facetted representation from the
STEP file input. Using this file content, the topology of this component can be inserted in the HLT data structures. The concept attached to this decomposition is that it reflects the trace of the component modelling process in the industrial CAD software used to generate it. This decomposition may be of interest if interactions are needed between the ‘current PV’ where the HLT is used and the product view where the CAD modeller has been used. Fig. 7 describes another HLT decomposition of the component, where the concept represented is the geometric shape features attached to the component. Here, partitions are associated with a surface type meaning, i.e. planes, cylinders, cones, etc., based on the information available in the STEP file input. This HLT instance is the intrinsic representation of the shape features belonging to the component surface, and it is an explicit topological representation of the component surface according to this feature concept. This HLT instance, like the HLT instance in Fig. 5, relies on vehicular information only.
Fig. 7. Illustration of a HLT instance representing shape features; a) partitions decomposing the component according to shape features, b) the same partitions coloured according to the surface type (pink for planar areas, blue gray for cylindrical areas)

Fig. 8 represents an HLT instance of the component surface decomposition into sub domains as they are needed for a FE mesh generation process. Dotted white/red polyedges indicate that they are no longer the effective boundary of HLT partitions. Here, it has been considered that the sub domain generation could take into account the shape of the object as well as the size of the FE desired and the local minimization of the discretization error. As a result, this decomposition results both from vehicular information and from vernacular information specific to a product view focusing on FE simulation model preparation. Such a HLT instance can be generated either on a semi-automatic basis, if the actor's know-how cannot be formalized enough, or automatically, if the criteria needed are entirely formalized. Here, the concept explicitly and intrinsically represented is the component decomposition according to FE mesh discretization constraints.

Fig. 8. Illustration of a HLT instance representing sub domains for FE mesh generation; the component surface decomposition reflects the sub domains needed for the domain decomposition into FE

Fig. 9 gives another example of a HLT instance, describing Boundary Conditions (BCs) applied to the component for a given FE simulation. According to the corresponding semantics, partitions could be further subdivided to describe more precise concepts, e.g. pressure areas and clamped areas. Again, the corresponding component decomposition is an explicit topological description intrinsically dedicated to the concept of BCs. In line with the HLT instance described in Fig. 8, this HLT instance also relies both on vehicular and vernacular information.

Fig. 9. Synthesis of HLT instances attached to a single component, describing explicitly the topology of semantic information represented intrinsically (B-Rep CAD, form features, FE mesh constraints and BCs attached to the initial component)
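To picture the synthesis of Fig. 9, the following toy Python sketch attaches several HLT instances (shape features, FE mesh constraints, BCs) to one component; names and attribute values are invented for the example.

```python
from collections import defaultdict

class Component:
    """One component carrying several HLT instances, one per semantic concept."""
    def __init__(self, name):
        self.name = name
        # concept name -> list of (partition ids, attributes) pairs
        self.hlt_instances = defaultdict(list)

    def attach(self, concept, partition_ids, **attributes):
        self.hlt_instances[concept].append((tuple(partition_ids), attributes))

bracket = Component("bracket")
bracket.attach("shape_features", [3, 4], kind="through_hole", diameter=8.0)
bracket.attach("fe_mesh_constraints", [1, 2, 3], target_size=2.5)
bracket.attach("boundary_conditions", [7], kind="pressure", value=0.2e6)   # Pa, illustrative
bracket.attach("boundary_conditions", [9], kind="clamped")

for concept, instances in bracket.hlt_instances.items():
    print(concept, "->", instances)
```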
The examples described above have illustrated the concept of HLT instances attached to a component. It should be pointed out that each of these instances can be of non-manifold type to preserve the intrinsic representation of the corresponding concepts. This further generalizes the contribution of Bronsvoort [17] and [18], which focuses on cellular-type decomposition of objects. A further point considering the HLT instances is that they can all be attached to the same component instance. This can be graphically synthesized with Fig. 9, where the boundary partitionings reflecting the concepts in Figs. 6 to 9 are all attached to the initial component [19].

2.3 Process Flow of the Proposed Approach for CAD-FE Simulation Consistency

Fig. 10 illustrates the structure of the proposed FE model preparation process, starting from a CAD model (possibly feature-based) enriched with additional mechanical information (A) and finishing with standard FE mesh generators and solvers (C). This structure clearly shows that the preparation process can be inserted in any CAD-CAE software environment without modifying the existing tools. Standard CAD modelers are distinguished from feature-based ones to clearly cover all the possible industrial configurations, i.e. CAD modelers that can use a variable range of form feature primitives. In this paper, we restrict ourselves to B-Rep models since they are always available through standard data exchange formats, e.g. STEP. The preparation process (B) can be applied to different input polyhedrons, i.e. evaluated from CAD models, digitized models or pre-existing FE meshes. Even if this capability is not under focus here, it is shown to reveal the overall model preparation scheme and to demonstrate how this scheme behaves depending on the amount of information existing in the input model. The architecture of the proposed approach is composed of three blocks.

1) Block (A) Represents the Level of Geometric Modelers

At this level (see Fig. 10), standard solid modelers can be associated with a process, namely the insertion of BCs, to produce the case of study. The case of study designates the component sub domain that is required for the
targeted FE analysis, in accordance with the mechanical hypotheses as well as the location of prescribed forces and/or displacements defining the loading conditions over the component. However, this process is not mandatory and, if not performed at that stage or not entirely performed at that stage, should be possible later in the FE model preparation process. The generation of a case of study is often based on some of the BCs required to set up or to simplify the analysis model. Symmetry planes, pressure areas and force locations are the most recurrent BCs. Most of them are inserted through the use (and possibly creation) of adequate geometric elements, thus motivating at CAD level some of the BC insertion process that should be specified and then transferred to the HLT and polyhedral representation. Geometric operators of standard CAD modelers, as well as specific ones, are used to create the geometric model of the case of study, which is often non-manifold. Whilst the geometric model can be exported at model preparation level through a standard format such as STEP, the BC parameters that coincide with the mechanical parameters attached to the geometric model of BCs, i.e. pressure values, force components, are not incorporated in the STEP application protocols used by most CAD systems, i.e. AP 203 and AP 214. Thus, BC parameters currently need to be transferred through specific file formats to input them into the model preparation environment if they are defined outside this environment. Feature-based modelers are distinct from standard solid modelers because there are still limitations in handling feature information in standard formats. Similarly, not all these modelers treat the same set of feature primitives. As a consequence, three categories of data are returned from these modelers:
1. A geometric model of the component that can be exchanged through a STEP file. B-Rep models in STEP files (AP 203) are defined as "a geometric representation through several layers of topological representation items, reference curves, surfaces, and points" and incorporate various levels of geometrical and topological information [20];
2. A set of feature parameters and data to describe the form features present in the component. This requires a specific file format since STEP files cannot incorporate such a type of information yet;
3. A set of BC parameters to describe the possibly defined mechanical parameters.

Fig. 10. Process flow of the proposed approach

It should be noticed that the geometric semantic attached to some primitive curves or surfaces, i.e. the nature of circles, line segments,
cylinders, etc., can be further exploited during the preparation phase to enrich the polyhedral description with the primitive parameters and to help the extraction of the simplification features, as stated later in section 2.3.4. In addition, AP 203 contains other types of information, like the assembly information, which is not supported by AP 214. This information is useful for the proposed approach to specify the BCs on a component during the shape
transformations taking place in the FE model preparation. The contact area between components of the same assembly can be seen as the geometric domain where BCs can be applied. This type of information is neither explicitly available in STEP nor in a CAD software environment. Only the spatial relationships between the components of an assembly are stored as a graph, which reduces them to logical links only. In our context, the exploitation of such information represents the first step for the geometric domain identification of BCs. This type of information should be maintained on the HLT and transferred to the polyhedral model in order to define the geometric domain of BCs. Indeed, this contact area identification needs to be performed on the initial B-Rep NURBS representation of the assembly, either as a specific function inside a CAD modeler, i.e. the block currently described, or as a function in block (B) (see Fig. 10). Since the data exchange standard plays an important role in the process of FEA model preparation in terms of model conformity, the above analysis demonstrates that the extraction of some geometric parameters and other CAD model attributes is critical to obtain an efficient model transfer for FEA preparation. The objective of the proposed approach is to exploit simultaneously the advantages of both representations, i.e. B-Rep NURBS (through the HLT model) and polyhedral models, in order to reduce the tedious work of the adaptation process as much as possible. The first representation, the HLT one with the NURBS model, contributes to identifying the form features which can be details, and is used to perform the conformity set up automatically. The second representation, the polyhedral one, contributes to performing the detail removals robustly. Both representations are mandatory for the robustness of our approach.

2) Block (B) Represents the Processes Related to the FE Simulation Model Generation in Accordance with the FE Model Preparation Requirements

A common requirement is the capability of explicitly representing the topology related to BCs and materials, which led us to propose the concept of HLT discussed in the previous section. In this
section, we describe the processes related to the use of the HLT. These can be applied either to a HLT-Body (left set of images in Fig. 10) or to a given HLT-Component (right set of images in Fig. 10) with several HLT-Bodies, i.e. an assembly. Such configurations can be distinguished according to the STEP protocol used. The resulting model from a STEP AP 214 pre-processor can only be a HLT-Component with several HLT-Bodies without logical links between them, whereas the resulting model from an AP 203 pre-processor can be either a HLT-Component with a set of disconnected HLT-Bodies and logical links between them, or a set of HLT-Components with logical links between them, each of them containing only one HLT-Body. In order to perform efficient shape simplifications during the FE simulation model preparation, four steps are proposed before the preparation process itself.

2.3.1 Model Conversion from STEP to HLT and Polyhedral Representations

To reflect the topology and geometry of the initial object to be analyzed, the HLT is created directly from the object B-Rep NURBS. Firstly, the STEP entities are loaded into the classical B-Rep CAD data structures of a hosting CAD modeler [15]; in the implementation described in this paper, we used Open Cascade as the geometric modeling hosting system. At this level, non-manifold solid models were considered, as they can be available in CAD systems through STEP files. The topological entities are then mapped between the STEP file and the HLT data structure. Even though all of the major modeling kernels provide a boundary representation of the model, they all have differences in how they represent that topology. To expose all of these differences to the
rest of the geometry-based environment would greatly complicate the software architecture. By providing a consistent interface, the FEA model preparation data structures are isolated from these differences, which are all encapsulated in the model interface classes, i.e. the shape kernel interface.
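The isolation described above can be pictured as an abstract shape-kernel interface. The sketch below is an assumption about its general shape (the paper names Open Cascade as the hosting kernel but does not publish this interface); the methods and the toy backend are illustrative only.

```python
from abc import ABC, abstractmethod

class ShapeKernelInterface(ABC):
    """Uniform view of a B-Rep model, whatever the hosting modeling kernel.

    The FE preparation code only ever talks to this interface, so kernel
    differences in topology representation stay encapsulated here."""

    @abstractmethod
    def faces(self): ...
    @abstractmethod
    def edges_of_face(self, face_id): ...
    @abstractmethod
    def surface_type(self, face_id): ...      # e.g. 'plane', 'cylinder', 'nurbs'

class DictBackedKernel(ShapeKernelInterface):
    """Toy backend standing in for a real kernel adapter (e.g. one built on Open Cascade)."""
    def __init__(self, model):
        self._model = model                   # {face_id: {'edges': [...], 'type': ...}}
    def faces(self):
        return list(self._model)
    def edges_of_face(self, face_id):
        return list(self._model[face_id]["edges"])
    def surface_type(self, face_id):
        return self._model[face_id]["type"]

kernel = DictBackedKernel({1: {"edges": [10, 11], "type": "plane"},
                           2: {"edges": [11, 12], "type": "cylinder"}})
print([(f, kernel.surface_type(f)) for f in kernel.faces()])
```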
2.3.2 Tessellation Process

A tessellation process converts the HLT of a B-Rep NURBS model into a polyhedral one whose discretization must be compatible with a given FE map of sizes defined a priori. The model preparation environment for the above range of data starts with the tessellation of CAD data to produce the polyhedral representation required for the detail removal process [5]. Rather than using a tessellation process integrated into CAD modelers, where the criteria used can vary widely from one modeler to another and the control parameters may not be suited to the FE simulation preparation process, i.e. the lack of control parameters may produce very sharp triangles that are not compatible with the range and accuracy of the simplification operators applied later on, a specific tessellation process has been developed. This process is independent of any CAD software; it uses Ruppert's algorithm [21] and adopts an edge length criterion while avoiding degenerated triangles. Incorporating the tessellation process into the model preparation phase also offers the possibility to relate its control parameters to the detail identification taking place later on. However, it should be mentioned that there is not yet any clear specification of operators to bind the tessellation efficiently to the simplification processes. Thus, the tessellation process itself needs to be controlled by the mechanical engineer in charge of the simulation to ensure that the discretization of the CAD model is compatible with the FE size required in the FE mesh. The polyhedral model generation forms the basic step in the overall adaptation procedure because the efficiency of the adaptation operators strongly depends on the tessellator characteristics, i.e. the density of elements and the conformity of the polyhedron generated by the tessellation algorithm. The faces of the output polyhedron are constrained by form (no very small angles, to avoid numerical instabilities) and size (edge length lower than a given size) requirements in order to produce a satisfactory shape adaptation.

2.3.3 The Conformity Set Up Process

To produce a conform polyhedron as needed by the simplification process, a conformity set up process is mandatory for the tessellated CAD models as well as for the digitized ones. Even pre-existing FE triangulations may require such a treatment when the objective is to set up a model from several parts and to merge them into only one HLT-Body. In the case of CAD model input, generating a conform model means that the topology of the polyhedral model must be identical to that of the initial B-Rep NURBS model as it is available in the STEP file. In other words, for a two-manifold closed surface representing the boundary of a volume, each edge of the corresponding polyhedron must be connected to exactly two faces, and the genus of the polyhedral model must be identical to that of the volume in the STEP file. The configuration of non-manifold models is not specifically addressed here since STEP does not incorporate capabilities to describe the topology of such objects. In addition, as previously discussed, industrial modelers do not offer specific and efficient operators to generate such models. In [22], the authors present a connection between the mesh and the parametric generation with a CAD kernel for a selected structure generation. Such configurations only occur for simple shapes handled in a so-called integrated environment. However, it is no longer efficient when the CAD model has an inconsistent topology due to data exchange between different software or to the difference of shape requirements between FE meshes and CAD models. Indeed, certain topological details may severely complicate the mesh generation, and the quality of the resulting mesh is not always suited to the simulation objectives. In general, the conformity set up process based solely on the polyhedral model input cannot be performed robustly because it requires
a specific set of tolerances related to the model source [13]. In order to achieve the conformity set up process robustly, the topological information available in the HLT data structure can be used to apply some conformity set up operators on the tessellated model and achieve its conformity. To this end, the following classification of the HLT entities helps define the process principle (see Fig. 11).
Fig. 11. Illustration of HLT entities classifications

A HLT-Edge which does not take part in the description of any HLT-Face is classified as an isolated HLT-Edge (see Fig. 11d). From the above list of edge statuses (see Fig. 10), the concept of homologous edges is the key status required to characterize the polyhedron conformity needs between polyedges. Therefore, it is the only status that needs to be implemented for the conformity set up operators. Again, the vertex classification shows that the status of homologous vertex is the key status for the operators performing the conformity set up process. There, it is the only status that needs to be considered as a basis for the basic operators to be implemented. As a result of the above analysis, the automatic conformity set up consists of the following steps:
• Merge the list of homologous vertices: first of all, the homologous vertices belonging to the homologous polyedges are collected (see Fig. 12b) and then merged two by two. The results are updated in the list of homologous polyedges (see Fig. 12c);
• Merge the list of vertices associated to the boundary of HLT-Faces belonging to the corresponding homologous polyedges: this step consists in merging the polyedges two by two according to their orientation, which is the same orientation as that of their HLT-CoEdges. Finally, only one polyedge results from two adjacent boundary partitions. This polyedge represents the discretization of the initial HLT-Edge and has a topology identical to the corresponding edge defined in the STEP file (see Fig. 11d).

Fig. 12. Automatic conformity set up process; a) a set of geometrically disconnected HLT-Faces as produced by the STEP pre-processor, but topologically connected at the homologous vertices and along homologous edges, b) corresponding polyhedral model produced by the tessellation process, c) merging of the homologous vertices belonging to the polyedges, d) merging of the boundary vertices belonging to the polyedges

The resulting polyhedron is conform, and its topology is identical to the topology of the object initially described in the input STEP file. In conclusion, the conformity set up can be performed automatically and robustly since it is based on the topological information provided by the link between the initial B-Rep topology and the polyhedron. As a result of this approach, the conform polyhedron is bound to the B-Rep topology of the CAD model and, if available (see Fig. 13), to the form features.
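The first merging step can be sketched as a simple vertex remapping, under the assumption that the two homologous polyedges carry the same number of vertices in the same orientation; the function below is an illustration, not the operator of the paper.

```python
def merge_homologous_vertices(triangles, polyedge_a, polyedge_b):
    """Merge each vertex of polyedge_b into its homologue in polyedge_a.

    polyedge_a / polyedge_b: ordered lists of vertex indices discretizing the
    same HLT-Edge on two adjacent, still disconnected partitions.
    Returns the triangles rewritten so that a single shared polyedge remains."""
    assert len(polyedge_a) == len(polyedge_b), "homologous polyedges must match"
    remap = dict(zip(polyedge_b, polyedge_a))          # b-vertex -> a-vertex
    return [tuple(remap.get(v, v) for v in tri) for tri in triangles]

# two quads tessellating adjacent HLT-Faces, duplicated along the common edge
tris = [(0, 1, 2), (0, 2, 3),        # partition A, boundary vertices 1-2
        (4, 6, 7), (4, 5, 6)]        # partition B, homologous boundary vertices 5-6
conform = merge_homologous_vertices(tris, polyedge_a=[1, 2], polyedge_b=[5, 6])
print(conform)   # partition B now reuses vertices 1 and 2: the polyhedron is conform there
```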
Fig. 13. Example of the automatic conformity set up process performed on models imported from STEP files; a) HLT-Component representing an assembly (each color represents a type of HLT-Body), b) HLT description of the initial component and its geometric semantic (each color represents a type of HLT-Face geometry, i.e. plane)

2.3.4 Form Features Identification

In the previous section, the link between B-Rep NURBS CAD models and polyhedral models was successfully set up through the mixed representation, i.e. the HLT data structures plus the NURBS and polyhedron representations. This new representation allows algorithms to exploit simultaneously the advantages of the B-Rep NURBS and polyhedral representations (see Fig. 5), as follows. The first representation, which is the HLT data structure plus NURBS geometry, allows algorithms to identify certain form features (holes, fillets, corners), which can be considered as detail features. New concepts are required to classify these identified form features as detail features. Such concepts lead to the notion of "simplification features", whose objectives are [13]:
• Reducing the complexity of the detail identification and strengthening the tasks related to simplification by enabling reasoning directly on a set of geometric elements belonging to a specific form feature instead of the low level elements only, i.e. vertices, edges, faces of a polyhedral model [23]. Form feature information brings high level information either to complement or supersede polyhedral model data structures;
• Avoiding inconsistencies of information between the CAD models and the corresponding simplified polyhedrons, in such a way that form features which are not
relevant for structural analysis do not appear in the simplified models. This observation is highly significant because an adequate preservation of information during the simplification process contributes to an evaluation of the impact of CAD geometry changes on simulation models: if a form feature added to the CAD model is a simplification feature, then the initial FEA model does not require a re-evaluation. The second representation, which is the polyhedral one, allows algorithms to remove certain form features on this model, whose removal performed solely with the HLT data structures and NURBS model can be fairly complex and hence not robust. Some work has already been performed on this subject [24] and [25], which consists in identifying and removing certain form features on the B-Rep NURBS model. The majority of these algorithms modify the geometry and the topology of the initial model. Consequently, these modifications in many cases involve the computation of intersections between 3D curves or surfaces in order to ensure the consistency of the resulting B-Rep model. However, the computation of intersections between 3D curves is not exact and often requires some approximations. Therefore, these approaches cannot be robust since they incorporate tolerances that generate model consistency problems later on in the downstream processing of the simplified model. On the other hand, this problem does not arise on the polyhedron because of the very simple geometric support of each face, which is exactly adjacent to the others through a common edge which is a line segment. Additionally, most of the approaches addressing the feature removal aspect are based on geometric criteria only, whereas the concept of simplification detail strongly depends on the analysis to be performed, i.e. it relies both on geometric and mechanical criteria. In order to give more robustness to our approach, the form features are identified on the HLT-NURBS model and then removed on the polyhedral model, to create a progressive shape transformation and produce a model that can be robustly transferred to downstream processes.
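For illustration only, a much reduced removal operator working on the polyhedron is sketched below: the triangles of the detail's partition are dropped and each boundary loop left open is capped with a triangle fan. This assumes the loops are already extracted, nearly planar and convex, which is far simpler than the general simplification operators discussed in the paper.

```python
def remove_detail(triangles, detail_triangles, boundary_loops):
    """Remove a detail partition from a triangulated model and close the openings.

    triangles / detail_triangles: lists of vertex-index triples.
    boundary_loops: list of ordered vertex loops left open by the removal."""
    detail = set(detail_triangles)
    kept = [t for t in triangles if t not in detail]
    for loop in boundary_loops:
        # fan triangulation: only valid for a nearly planar, convex loop
        kept += [(loop[0], loop[i], loop[i + 1]) for i in range(1, len(loop) - 1)]
    return kept

# toy example: a small pyramidal boss over the square 10-11-12-13 is removed and capped flat
model = [(0, 1, 2),
         (10, 11, 14), (11, 12, 14), (12, 13, 14), (13, 10, 14)]
simplified = remove_detail(model,
                           detail_triangles=[(10, 11, 14), (11, 12, 14), (12, 13, 14), (13, 10, 14)],
                           boundary_loops=[[10, 11, 12, 13]])
print(simplified)
```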
Fig. 14. Examples of hole identification results using the B-Rep NURBS/analytic surfaces representation

The interest of the mixed representation is also to take advantage of the B-Rep NURBS/analytic surface representation to identify some shape feature characteristics. Fig. 14 illustrates the configuration of a hole identification process performed with this representation. Then, based on the partition entities, the description of these holes can be propagated on the associated polyhedron representation. Similarly, fillet features can be identified using the B-Rep NURBS/analytic surfaces description. This is currently applied to simple fillet configurations where basis surfaces are among the analytic surfaces, i.e. planes, cylinders and spheres. Fig. 15 gives some examples of the corresponding results displayed on the polyhedral model.
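A hedged sketch of how the analytic-surface semantics carried by the HLT could flag through-hole candidates: cylindrical HLT-Faces bounded by two circular curves and small enough with respect to a user threshold. The data model and the threshold are assumptions made for this example, not the identification rules of the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class HltFace:
    face_id: int
    surface_type: str                         # 'plane', 'cylinder', ...
    radius: float = 0.0                       # meaningful for cylinders
    boundary_curve_types: List[str] = None    # e.g. ['circle', 'circle']

def find_through_hole_candidates(faces, max_radius):
    """Return cylindrical faces closed by two circular boundaries and small enough
    (radius below max_radius) to be treated as candidate detail features."""
    candidates = []
    for f in faces:
        if (f.surface_type == "cylinder"
                and f.boundary_curve_types
                and f.boundary_curve_types.count("circle") == 2
                and f.radius <= max_radius):
            candidates.append(f.face_id)
    return candidates

faces = [HltFace(1, "plane", boundary_curve_types=["line"] * 4),
         HltFace(2, "cylinder", radius=3.0, boundary_curve_types=["circle", "circle"]),
         HltFace(3, "cylinder", radius=40.0, boundary_curve_types=["circle", "circle"])]
print(find_through_hole_candidates(faces, max_radius=5.0))   # -> [2]
```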
Fig. 15. Examples of fillet identification results using the B-Rep NURBS/analytic surfaces representation (courtesy EADS CCR)

The combination of distance criteria and shape features identified through the B-Rep NURBS/analytic surfaces information can provide new concepts for performing shape transformations. The concept of the simplification feature is one of those new concepts and is defined as follows: a form feature F is a simplification feature if F is fully contained in the discrete envelope defined by the map of sizes associated to it. An example of such a feature is given in Fig. 16. It combines the through hole feature identification with a local user-defined map of sizes defining the allowed distance between the initial and the simplified polyhedrons.
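Read literally, the definition amounts to a containment test: every point describing the feature must stay within the locally allowed size. Below is a minimal sketch under the assumptions of a sphere-zone map of sizes and a caller-supplied deviation function, both invented for the example.

```python
import math

def local_size(size_map, point):
    """Map of sizes given as (center, radius, size) spheres; outside them a default applies."""
    for center, radius, size in size_map["zones"]:
        if math.dist(point, center) <= radius:
            return size
    return size_map["default"]

def is_simplification_feature(feature_vertices, distance_to_base, size_map):
    """True if every vertex of the feature lies within the envelope allowed by the map
    of sizes, i.e. its deviation from the base shape stays below the local FE size."""
    return all(distance_to_base(v) <= local_size(size_map, v) for v in feature_vertices)

# through hole of radius 1.5, described by a few sampled vertices on its wall
hole_vertices = [(10 + 1.5 * math.cos(t), 5 + 1.5 * math.sin(t), 0.0)
                 for t in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)]
size_map = {"default": 1.0, "zones": [((10.0, 5.0, 0.0), 5.0, 4.0)]}  # coarse FE size near the hole
deviation = lambda v: 1.5   # deviation introduced by removing the hole (its radius), illustrative
print(is_simplification_feature(hole_vertices, deviation, size_map))   # -> True: the hole is a detail
```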
Fig. 16. Topological detail feature; a) initial component, b) through holes identification, c) local map of sizes defined by the user that contains some through holes, d) result of the simplification operator associated to the simplification feature thus defined

3 CONCLUSION

This paper proposes a method, models and tools for CAD-CAE consistency. The method consists in setting up a new approach for CAD-CAE consistency based on the concept of mixed representation (High Level Topology data structures, NURBS and polyhedral representations), which allows the software architecture to explicitly maintain the links between these models, to manipulate them and to take advantage of their richness. Exploiting the advantages provided by the information content and organization of each representation, the proposed approach improves the robustness of the various processes involved in FEA model preparation from CAD data and makes the overall conversion more efficient. The concept of HLT is presented in this paper to extend the shape description capabilities of current CAD modelers to satisfy the requirements of simulation models. The defined HLT representation can handle both manifold and non-manifold configurations, which frequently appear in the specification of the underlying geometric support of BCs and in FEA models having idealized subparts, and which are useful for the specification of the composing material. Through the use of multiple HLTs in the proposed method, the intrinsic shapes of all the above concepts (BCs, material, object) are explicitly represented and suitably combined through what is called the evaluated topology. Transforming the B-Rep NURBS into a polyhedral representation is one of the processes required for FE simulation model preparation in the proposed
approach. Such a process, called tessellation, takes advantage of the HLT concept and allows maintaining the link between the B-Rep NURBS information and its polyhedral approximation. During the tessellation, the geometric semantic attached to the HLT data structure of a B-Rep NURBS model is used and propagated onto the polyhedron model. This semantic serves to make the conformity set up process robust. To speed up the shape adaptation step, the concept of simplification features has been proposed. Such a concept combines geometric and mechanical data to reduce the complexity of the FE simulation model preparation task, and contributes to evaluating the consistency between the various versions of the CAD model of the same object and the already computed FEA models. The geometric data are characterized by specific form features, such as holes and fillets. The mechanical data are characterized by the map of FE sizes defined a priori or a posteriori on the polyhedron model. New categories of detail features can be derived from the simplification feature concept. Topological detail features and skin detail features are two new categories that have been proposed and that can be useful for FE simulation model preparation. Through hole, fillet and round features are identified on the object model, taking advantage of the HLT data structures and of the attached geometric surface description. The proposed approach can be implemented into any existing industrial software, without modifying a current preparation process flow if not desired by the users.
4 ACKNOWLEDGEMENTS

The work has been carried out within the AIM@SHAPE (Advanced and Innovative Models And Tools for the development of Semantic-based systems for Handling, Acquiring, and Processing knowledge Embedded in multidimensional digital objects) European network of excellence, contract no. 506766 (http://www.aimatshape.net/). This network also incorporates a Network Industrial Group in which EADS CCR is active, and work on the proposed topic has been carried out through a partnership between INPG, IMATI-CNR and EADS CCR.

5 REFERENCES

[1] Beall, M.W., Walsh, J., Shephard, M.S. (2003). Accessing CAD geometry for mesh generation. Proceedings of the International Meshing Roundtable, Sandia National Lab.
[2] Dabke, P., Prabhakar, V., Sheppard, S. (1994). Using features to support Finite Element idealization. International conference ASME, Minneapolis, vol. 1, p. 183-193.
[3] Fine, L., Rémondini, L., Léon, J.-C. (2000). Automated generation of FEA models through idealization operators. Int. J. for Num. Meth. in Eng., vol. 49, no. 1-2, p. 83-108.
[4] White, D.R., Leland, R.W., Saigal, S., Owen, S.J. (2001). The meshing complexity of a solid: an introduction. Proc. of the International Meshing Roundtable, Sandia National Lab., p. 373-384.
[5] Hamri, O., Léon, J.C. (2003). Interoperability between CAD and simulation models for cooperative design. CIRP Design Seminar, Grenoble.
[6] Lua, Y., Gadha, R., Tautges, T.J. (2001). Feature based hex meshing methodology: feature recognition and volume decomposition. CAD, vol. 33, p. 221-232.
[7] Shah, J.J., Mantyla, M. (1995). Parametric and Feature Based CAD/CAM: Concepts, Techniques, and Applications. John Wiley & Sons, Inc., New York.
[8] Fine, L. (2001). Processus et méthodes d'adaptation et d'idéalisation de modèles dédiées à l'analyse de structures mécaniques. PhD thesis, INPG, Grenoble. (in French)
[9] Véron, P., Léon, J.C. (2001). Using polyhedral models to automatically sketch idealized geometry for structural analysis. Engineering with Computers, vol. 17, p. 373-385.
[10] Barequet, G., Duncan, C.A., Kumar, S. (1998). RSVP: A geometric toolkit for controlled repair of solid models. IEEE Trans. on Visualization and Computer Graphics, vol. 4, no. 2, p. 162-177.
[11] Mobley, V., Carroll, M.P., Canann, S.A. (1998). An object oriented approach to geometry defeaturing for finite element meshing. Proceedings of the 7th International Meshing Roundtable, Sandia National Lab., p. 547-563.
[12] O'Bara, R.M., Beall, M.W., Shephard, M.S. (2002). Attribute management system for engineering analysis. Engineering with Computers, vol. 18, p. 339-351.
[13] Hamri, O., Léon, J.C., Giannini, F., Falcidieno, B. (2004). From CAD models to F.E. simulations through a feature-based approach. Int. ASME DETC Comp. in Eng. Conf., Salt Lake City, p. 377-386.
[14] Weiler, K.J. (1988). The radial-edge structure: A topological representation for non-manifold geometric boundary representations. Geometric Modeling for CAD Applications, North Holland, p. 3-36.
[15] Hamri, O. (2006). Method, Models and Tools for Finite Element Model Preparation Integrated into a Product Development Process. PhD thesis, INPG, Grenoble.
[16] Hamri, O., Léon, J.C., Giannini, F., Falcidieno, B. (2005). Using CAD models and their semantics to prepare FE simulations. ASME DETC Int. Conf. DAC, Long Beach, p. 129-138.
[17] Bronsvoort, W.F., Noort, A. (2004). Multiple-view feature modelling for integral product development. CAD, vol. 36, no. 10, p. 929-946.
[18] de Kraker, K.J., Dohmen, M., Bronsvoort, W.F. (1997). Multiple-way feature conversion to support concurrent engineering. In: Pratt, M., Siriram, R.D., Wozny, M.J. (eds.), Product Modelling for Computer Integrated Design and Manufacture, Chapman and Hall, London, p. 203-212.
[19] Hamri, O., Léon, J.C., Giannini, F., Falcidieno, B., Poulat, A., Fine, L. (2008). Interfacing product views through a mixed shape representation. Part 1: Data structures and operators. International Journal on Interactive Design and Manufacturing, vol. 2, no. 2, p. 69-85.
[20] Marler, B. (2001). STEP Tutorial: Effective Exchange of STEP Data. PDE-STEP Aerospace Workshop.
[21] Ruppert, J. (1995). A Delaunay refinement algorithm for quality 2-dimensional mesh generation. Journal of Algorithms, p. 548-585.
[22] Kos, L., Kulovec, S., Zaletelj, V., Duhovnik, J. (2009). Structure generation for free-form architectural design. Advanced Engineering, vol. 3, no. 2, p. 187-194.
[23] Belaziz, M., Bouras, A., Brun, J.M. (2000). Morphological analysis for product design. CAD, vol. 32, no. 5-6, p. 377-388.
[24] Lua, Y., Gadha, R., Tautges, T.J. (2001). Feature based hex meshing methodology: feature recognition and volume decomposition. CAD, vol. 33, p. 221-232.
[25] Deciu, E.R., Ostrosi, E., Ferney, M., Gheorghe, M. (2008). Product family modelling in conceptual design based on parallel configuration grammars. Strojniški vestnik - Journal of Mechanical Engineering, vol. 54, no. 6, p. 398-412.
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, 744-753 UDC 658.512.2:004.5
Paper received: 28.04.2010 Paper accepted: 20.10.2010
The Effect of Visual Feedback on Learnability and Usability of Design Methods

Ronald Poelman1,* - Zoltán Rusák2 - Alexander Verbraeck1 - Leire Sorasu Alcubilla1
1 Faculty of Technology, Policy and Management, Department of Systems Engineering, Delft University of Technology, the Netherlands
2 Faculty of Industrial Design Engineering, Computer Aided Design & Engineering, Delft University of Technology, the Netherlands
* Corr. Author's Address: Faculty of Technology, Policy and Management, Department of Systems Engineering, Delft University of Technology, Jaffalaan 5, 2628BX Delft, The Netherlands, R.Poelman@tudelft.nl

Currently, 2D displays are used for the majority of design tasks, but 3D successors are slowly being introduced as a promising alternative. Understanding how the dimensionality of displayed images influences the effectiveness and efficiency of designers working with virtual reality design environments is therefore of great interest. This paper presents the results of a comparative usability study, which focuses on assessing the influence of different types of displays on learnability and usability. In our study, we carried out an experiment to compare a 2D screen and 3D displays. Users were asked to design free-form shapes in each of the three types of environments using a prescribed human-computer interaction process. This research shows that there are significant differences between the preferences of different user groups. Gender and level of expertise in using advanced visualization technologies had a significant influence on the usability and learnability of the design methods for each of the three display types.
©2010 Journal of Mechanical Engineering. All rights reserved.
Keywords: virtual reality, dimensionality, display, design, usability, learnability

0 INTRODUCTION

The use of 3D virtual reality technologies for various design activities (e.g. product modeling, evaluation of production processes, user evaluations of new product concepts) has been proliferating in the aerospace and automotive industry. The technology has, however, been adopted on a much smaller scale in consumer product design. There are several explanations for this difference, such as: (a) the relatively high cost of hardware and software for virtual reality design environments, (b) a lack of dedicated design methods that support the use of 3D virtual reality tools for design activities, and (c) an undefined cost-benefit ratio of applying 3D virtual reality to consumer product design. On the other hand, with the introduction of desktop virtual reality, investments in hardware and software have been significantly reduced. Potentially, effectiveness and efficiency can be improved by using higher dimensionalities than the standard 2D when designing. Small design offices might weigh the investment costs of virtual reality solutions and the costs of training users for these new solutions against the expected improvement of the effectiveness and efficiency of the design process. The costs of hardware and user training can be clearly defined, but the benefits of using higher dimensionality environments on the efficiency and effectiveness of the design processes are more difficult to assess. Literature indicates that the effectiveness and efficiency of the user and his/her performance of a task can be assessed as the usability of an interface, product or system [1].

In this paper we investigate the effect of the dimensional degree of visual feedback on the learnability and usability of new virtual reality based design methods. The paper presents the results of a comparative study, which focuses on assessing the influence of different types of displays on learnability and usability. In our study, we compared the effectiveness of 2D and 3D output devices for human-computer interaction in designing free-form shapes. We define 2D as working with perspective and occlusion to indicate depth, whereas 3D is surface independent in creating 3D imagery. The goal of the study was to identify and understand relationships between the dimensionality degree of visual feedback and the learnability and usability of new virtual reality based design methods. In the experiment, the users were asked to perform a design task in three environments. In our research, the independent variable was the dimensionality of the visual feedback. The dependent research variables were the usability
and the learnability, which was measured according to the Technology Acceptance Model (TAM) [2], illustrated in Fig. 1. TAM defines usability or the ease of use as “the degree to which an individual believes that using a particular system or device would be free of physical and mental effort” [3]. Learnability is defined by Santos and Badre as “the effort required for a typical user to be able to perform a set of tasks using an interactive system with a predefined level of proficiency” [4]. The variables have been measured using a questionnaire.
Fig. 1. Technology acceptance model (TAM) [2]

In this questionnaire, more parameters than the learnability and the usability were measured, such as the helpfulness, the efficiency, the user's satisfaction, the desirability the user feels towards interacting with the device/system, and the frequency with which the user expects the device/system to be used in a design company in the near future.

The results of our study are presented according to the following structure. First, we discuss the state of the art in visualization techniques, followed by a discussion of the usability and learnability of design tools. In the second section, we present the setup of our experiments (i.e. the research apparatus, participants, and the procedure). The results and the data analysis are discussed in section three.
Interpretation of the data is presented in section four, and the paper ends with conclusions.

1 STATE OF THE ART

Although 3D design software solutions are common nowadays, the interface of software applications has remained the same over the past 20 years. This might be a result of the fact that 3D design software is still unable to support the user in the five required design tasks [5]: (i) 3D sketching and structuring, (ii) rapid experimentation within the design space, (iii) working with multiple design ideas in parallel, (iv) collaboration, and (v) reflection and "anywhere" refinement. These are likely some of the reasons why new types of technologies arise, which enable new ways of working with more parameters than the current design environments. In fact, major leaps have been made in tracking technology, display technology, motion sensors (MEMS) and 3D rendering, which enable new and more options for human-computer interaction. We are generally still using the keyboard and mouse to interact with our 3D designs, but new, much more intuitive interaction devices have been commercially launched for mass markets. For 3D navigation there are space navigators, console controllers and track cams. High-definition cameras with gesture interpretation can run in real time because of current parallel hardware. Furthermore, the display types and their visualization capabilities have also been improved. The next generation of game consoles can show photorealistic 3D content in real time because of the leisure game engines that are being developed. Currently, some movie blockbusters have both a 2D and a 3D version because of the new 3D glasses that were introduced to the public in 2007. Augmented reality is expected to reach the mass market because of the on-chip integration of GPS and the software applets that can run on mobile phones. It is quite likely that these technologies will influence our way of designing and address some of the gaps in the five required design tasks [6]. There are many examples in the information visualization literature with regard to design (e.g., [7] and [8]). Unfortunately, they have rarely been subjected to empirical evaluation. Especially usability and learnability in relation to the interface type have received little attention. There are several studies
where empirical evaluation has been carried out, but where just one or two immersion degrees of display types have been tested (e.g. [9] to [14]). In this paper the three different immersion levels of displays and additional variables such as usability and learnability are compared. As shown, human-computer interaction is commonly studied, but without the goals pursued in this research. By testing the usability we hope to acquire insight into the weakest and strongest parts of a particular technology. We interpret usability as the level of ease (effort) with which people can employ a particular tool in order to achieve a particular goal. In a broader sense, usability also refers to the methods used for measuring usability itself as well as to the study of the principles behind a product's perceived efficiency or elegance. We have also investigated learnability, which is the capability of a software product (also in combination with hardware) to enable the user to learn how to use it. Learnability may also be considered an aspect of usability, and it is of major concern in the design of complex software applications.

2 SETUP OF THE EXPERIMENT

2.1 Research

In order to be able to measure the effect of the dimensionality of the visual feedback on learnability and usability, the hardware/software should facilitate visual perception of 3D images on one 2D and two 3D displays as well as different grades of immersion. With this in mind the following hardware was chosen for the experiments: a) a 19" LCD monitor as 2D display, b) a holographic display (HoloVizio 128WLD), which has horizontal parallax, and c) a head mounted display (eMagin Z800 3D Visor), which provides full immersion of the user in a virtual environment. To perform the design tasks, the hand motion of the users has been captured by a Vicon optical motion tracking system with 6 Hawk Digital Cameras, the EagleHub, and EvaRT v.5.0 software. To process the captured data and visualize the data on the displays, three workstations with 4 graphics cards have been used. The setup of the experiment is illustrated in Fig. 2.
Fig. 2. System setup

The software application for designing free-form surfaces has been developed in house. It consists of: (i) a module for hand motion based modeling, (ii) a geometric modeling module, which supports the creation and manipulation of surfaces, and (iii) a module that renders the virtual scene for 2D or 3D devices.

2.2 Participants

24 participants took part in the experiment. They were mostly university students, aged between 19 and 26 years, with different backgrounds in design and engineering (e.g. industrial design, architecture, mechanical engineering). Their experience with computer devices, CAD software, and virtual reality technologies varied between novice, advanced and expert users. The users were asked to perform the same design task three times, with different visual feedback. To avoid effects caused by the first technique used, the participants were divided into three groups: the participants who belong to group A interacted first with the system which includes the holographic display; the ones who belong to group B interacted first with the HMD; and the ones who interacted first with the 2D screen belong to group C. With each group we tested the learnability of a design method with a different display. Learnability of the method has been tested in the first task, while usability of the method has been measured in all three experiments. Each display has been tested in two setups: with or without user interaction. The holographic display without (with) user interaction is denoted as system 2 (system 5), the 2D screen without (with)
user interaction is referred to as system 3 (system 6), and head mounted display without (with) user interaction is denoted as system 4 (system 7). Fig. 3 shows the different systems graphically.
Fig. 3. Organization of the different systems to be studied

2.3 Procedure

First, the setup and research apparatus were introduced to the participants, followed by an explanation of the goal of the research, the design tasks, and the sequence of interaction. In addition, the purpose of the pre-test questionnaire and of the three different post-test questionnaires (one for each interaction) was presented to the participants. During the experiments, each participant had to interact with all visualization devices in a different order. The order of interaction is shown in Fig. 4.

Fig. 4. Order of the interactions depending on the group the users belong to

We have taken into account that some of these devices are under development, and therefore only simple design tasks had to be performed by the users. These simple tasks were: (i) moving a red cube towards a blue square which is randomly situated in the 3D space (Fig. 5); (ii) drawing a planar circle with a diameter of 10 cm in the XZ plane; (iii) drawing a single surface with the palm. Before each test, the participants had to fill in the pre-test questionnaire, which contained information about their background. After each test, the participants had to fill in one of the three post-test questionnaires. The post-test questionnaires were answered using a 5-point Likert scale on which respondents could indicate to what extent they agreed with the statements.
The variables that were measured are shown in Fig. 1. In the questionnaires, the measured variables are only the independent ones, such as: helpfulness, efficiency, control dimension, learnability, satisfaction and frequency. Then, the arithmetic mean of the helpfulness and the efficiency represents the usefulness, whereas the mean of the control dimension and the learnability represents the ease of use or usability.
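As an illustration of this aggregation (not taken from the paper; the function name is hypothetical), the two derived constructs can be computed from the questionnaire means as follows:

```python
from statistics import mean

def derived_scores(helpfulness, efficiency, control_dimension, learnability):
    """Aggregate questionnaire variables (Likert means) into the two dependent constructs."""
    usefulness = mean([helpfulness, efficiency])            # usefulness = mean(helpfulness, efficiency)
    ease_of_use = mean([control_dimension, learnability])   # ease of use = mean(control dim., learnability)
    return usefulness, ease_of_use

# Example with the system 3 means of Table 6:
print(derived_scores(3.0, 3.0, 2.7, 3.5))  # -> (3.0, 3.1), matching its Useful. and Ease of use columns
```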
Fig. 5. Results of the different tasks by a participant

Last but not least, a brief interview was conducted after each first interaction, in which the researcher wrote down the impressions of the participants by asking them some questions, as well as the results of the Product Reaction Cards (PRC) tool, which aims to test the desirability of
the participants towards the devices/systems they have just interacted with. Furthermore, these questions were also used to rate variables such as the desirability and the frequency.

3 RESULTS AND DATA ANALYSIS

First, we analyzed the results of our study for each device or system separately. The following tables refer to the participants' results for each device or system. As explained in Section 2.2, a system is a combination of two complementary devices.

Table 1. Questions from the post-test questionnaire referring only to the display
Question  Independent variable   Dependent variable
1         Learnability           Ease of use
2         Helpfulness            Usefulness
3         Efficiency             Usefulness
4         Affect                 Satisfaction
5         Efficiency             Usefulness
6         Control dimension      Ease of use
7         Learnability           Ease of use
8         Affect                 Satisfaction
9         Learnability           Ease of use

Table 2. Questions from the post-test questionnaire
Question  Independent variable   Dependent variable
1         Affect                 Satisfaction
2         Affect                 Satisfaction
3         Control dimension      Ease of use
4         Helpfulness            Usefulness
5         Learnability           Ease of use
6         Control dimension      Ease of use
7         Efficiency             Usefulness
8         Affect                 Satisfaction
9         Affect                 Satisfaction

Table 3. Questions from the post-test questionnaire referring to the frequency of the whole system
Question  Independent variable
1         Frequency
2         Frequency

Each question from the questionnaires refers to a specific variable, as shown in Tables 1 to 3, where the colors indicate the parameters of interest. The results from the questionnaires are shown in Tables 4 to 9, where the mean value (X), the standard deviation (S), and the maximum and minimum values referring to each variable are presented.

Table 4. System 2: results referring to the holographic display
      Helpf.  Effic.  Contr. Dim.  Learn.  Satisf.  Useful.  Ease of use
X     2.7     2.8     2.5          3.3     3.4      2.7      3
S     1.1     0.6     1            0.6     0.9      0.6      0.7
Max.  5       4       4            4       5        4        4
Min.  1       2       1            2.3     1.5      2        2.3

Table 5. System 5: results referring to the whole system with the holographic display as output device
      Hel.  Effic.  Contr. Dim.  Learn.  Satis.  Usef.  Ease of use  Desir.  Freq.
X     2.8   2.5     2.3          2.9     3.1     2.7    2.6          2.7     2.4
S     1.1   1.0     0.9          1.1     0.7     0.8    0.6          0.9     0.6
Max.  5     4       4            5       4.3     4      3.5          3.6     4
Min.  1     1       1            2       1.5     1.5    1.5          1.2     1.5

Table 6. System 3: results referring to the HMD
      Helpf.  Effic.  Contr. Dim.  Learn.  Satisf.  Useful.  Ease of use
X     3.0     3.0     2.7          3.5     3.8      3.0      3.1
S     0.1     0.7     1            0.4     0.7      0.7      0.8
Max.  4       4.5     5            4       5        4.3      4
Min.  1       2       1            3       2.5      2        2.2

Table 7. System 6: results referring to the whole system with the HMD as output device
      Hel.  Effic.  Contr. Dim.  Learn.  Satis.  Usef.  Ease of use  Desir.  Freq.
X     3.1   2.7     2.8          3.3     3.4     2.9    3.1          3.3     2.8
S     0.9   1       1            1.0     0.8     0.6    0.9          0.5     0.9
Max.  5     4       4            4       4.5     4.5    4            4.1     4.5
Min.  2     1       1            1       2       1.5    1.5          2.6     1

Table 8. System 4: results referring to the 2D screen
      Helpf.  Effic.  Contr. Dim.  Learn.  Satisf.  Useful.  Ease of use
X     3       3.0     2.8          4       3.5      3.0      3.1
S     1.0     0.7     1.1          0.5     0.7      0.6      0.4
Max.  5       4.5     5            4.7     4.5      4.8      3.5
Min.  2       1.5     1            3       1.5      2.3      2.5

Table 9. System 7: results referring to the whole system with the 2D screen as output device
      Hel.  Effic.  Contr. Dim.  Learn.  Satis.  Usef.  Ease of use  Desir.  Freq.
X     2.9   2.8     2.8          3.5     3.4     2.8    2.9          3.6     2.7
S     0.8   0.9     1.1          1.1     0.7     0.6    0.6          0.7     0.9
Max.  4     4       4            5       4.5     4      4            4.8     4
Min.  2     1       1            2       1.8     1.5    2            2.8     1

Although each system is studied individually, as shown in the tables, there are also comparisons between different systems. However, not all the systems can be compared, due to incompatibility; only systems 2 to 4 can be compared, and only the displays are taken into account, whereas the comparisons among systems 5 to 7 refer to the systems as a whole.

Secondly, we studied the influence of the background of the participants on the dependent variables (i.e. usefulness, ease of use, satisfaction, desirability and frequency). For the analysis, the participants were divided into two groups: 1) participants with a "design and architecture" background and 2) participants with a non-design background. Another aspect for evaluation was their experience with modeling and CAD tools. There were 7 participants who had extensive experience with 3D modeling programs, and 6 out of the 24 users considered themselves expert users of 3D displays. The data of these categorizations are shown in Tables 10 to 13. For system 5 the whole system is taken into account, with the holographic display as output device. The rest of the systems are evaluated in the same manner. The comparisons have also been done for participants with common characteristics for different systems.

Table 10. Results of the participants with a "design and architecture" background in system 5 (the whole system with the holographic display as output device)
      Hel.  Effic.  Contr. Dim.  Learn.  Satis.  Usef.  Ease of use  Desir.  Freq.
X     3.2   2.8     2.3          3       3.0     2.7    2            3.5     2.1
S     1.0   0.8     1.1          -       0.7     0.7    -            -       0.5
Max.  5     4       4            3       4       4      2            3.5     4
Min.  2     1       1            3       1.5     1.5    2            3.5     1

Table 11. Results of the participants who do not have a "design and architecture" background in system 5 (the whole system with the holographic display as output device)
      Hel.  Effic.  Contr. Dim.  Learn.  Satis.  Usef.  Ease of use  Desir.  Freq.
X     2.9   3.3     2.4          2.6     3.5     3.1    2.6          2.5     2.8
S     1.1   1.1     1            0.6     0.7     0.9    0.6          0.9     0.7
Max.  4     4       3            3.5     4.3     4      3.5          3.6     4
Min.  2     1       1            1.5     2.3     1.5    1.5          1.2     2

Table 12. Results of the participants who have had extensive experience with 3D modeling programs on system 5 (the whole system with the holographic display as output device)
      Hel.  Effic.  Contr. Dim.  Learn.  Satis.  Usef.  Ease of use  Desir.  Freq.
X     3     2       2.2          2       2.9     2.5    1.5          1.2     2.2
S     1.6   0.6     1            -       0.7     1      -            -       0.4
Max.  5     3       4            2       4       4      1.5          1.2     2.5
Min.  1     1       1            2       2.3     1.5    1.5          1.2     1.5

Table 13. Results of the participants who considered themselves expert users of 2D, 2.5D and 3D displays on system 5 (the whole system with the holographic display as output device)
      Hel.  Effic.  Contr. Dim.  Learn.  Satis.  Usef.  Ease of use  Desir.  Freq.
X     3.4   2.3     2.4          2       3.1     2.9    2            2.4     2.1
S     1.1   1       1            0       0.8     1      0.7          1.7     0.4
Max.  5     4       4            2       4.3     4      2.5          3.6     2.5
Min.  2     1       1            2       2.3     1.5    1.5          1.2     1.5

Each participant starts with a different system first. The interaction with that system is evaluated separately, as it provides a good indication of how "first users" react to that system. Fig. 6 shows the comparisons made among the variables for system 5 (where system 5 refers to the whole system with the holographic display as output device). The figure clearly shows that for the participants who belong to group A (starting with system 5) more variables are measured in this first interaction. The lines which contain the number 1 are comparisons made among the same participants with different variables, whereas the lines with the number 3 are comparisons made for different people but about the same system.

Fig. 6. Comparisons made in each system individually; the participants who interact first with system 5 belong to group A, whereas group B interacted first with the HMD and group C with the 2D screen
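The per-variable statistics reported in Tables 4 to 13 (mean X, standard deviation S, maximum and minimum) can be reproduced from the raw Likert ratings with a few lines of code; the sketch below is illustrative only, and the ratings shown are made up rather than the study data:

```python
import statistics

def describe(ratings):
    """Return (X, S, Max, Min) for one variable, as tabulated in Tables 4 to 13."""
    return (round(statistics.mean(ratings), 1),
            # sample standard deviation; the paper does not state which estimator was used
            round(statistics.stdev(ratings), 1) if len(ratings) > 1 else None,
            max(ratings), min(ratings))

# Hypothetical per-participant scores for one variable of one system:
helpfulness = [2, 3, 4, 1, 3, 2, 4, 3]
print(describe(helpfulness))   # -> (2.8, 1.0, 4, 1)
```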
The differences in the answers of participants with different characteristics are also studied (always grouped by the same device/system, otherwise it is difficult to draw conclusions).

4 DISCUSSION

The most important results obtained from the data analysis are: (i) the holographic displays are considered to be much more utilizable (easy to use) than useful (p = 0.017), which means that they are easier to work with than they are efficient or helpful; (ii) according to the users, the three systems as a whole are too primitive for a current implementation in a design company; furthermore, graduates are more reluctant to accept these new devices than students: in fact, the students rank the learnability of the holographic display significantly higher than the graduates (p = 0.055); (iii) in addition to this, students found the 2D screen significantly more satisfying than graduates (p = 0.004); (iv) people who use drawing/modeling programs regularly, or who are familiar with different input/output devices, believe that HMDs are a logical future step; as an example, the participants who use many drawing or modeling programs give a higher usefulness score to the HMD than to the other devices (p = 0.075); (v) furthermore, participants who use drawing or modeling programs rate the 2D screen as significantly easier to learn than the HMD or the holographic display (p = 0.071); (vi) there is almost no difference among people who have a basic, intermediate or expert level in the programs they use; (vii) one of the surprises of the research is that men and women rate the displays differently: men found the holographic display significantly easier to use than women did (p = 0.044), whereas according to women, the device which is easier to use (p = 0.150) and more useful (p = 0.054) is the HMD; (viii) last but not least, the ratings referring to the HMD are more extreme than those for the other displays, which is caused by good or bad head tracking (input device).

Table 14. Results of the learnability and the satisfaction variables of students and graduates referring to the holographic display and the 2D screen display, respectively
      Holographic display                     2D screen display
      Learnab. students  Learnab. graduates   Satisf. students  Satisf. graduates
X     3.4                2                    3.6               2.75
S     1.14               -                    0.59              0.66
Max   5                  2                    4.5               3.5
Min   2                  2                    2.25              1.75
Table 15. Results of the ease of use variable of men and women referring to the holographic display
      Ease of use, men   Ease of use, women
X     3.17               2.33
S     0.62               -
Max   4                  2.33
Min   2.5                2.33
Table 16. Results for women when interacting with the three devices, referring to the utility (ease of use) and the usefulness
      Utility Hologr.  Utility HMD  Utility 2D screen  Useful. Hologr.  Useful. HMD  Useful. 2D screen
X     2.3              4            3.2                2.5              3.4          2.7
S     -                -            0.2                0.6              0.6          0.3
Max   2.3              4            3.3                3.5              4            3
Min   2.3              4            3                  2                2.5          2.3
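The p-values reported in the discussion above are given without naming the statistical test that was used. Purely as an illustration (the test choice, the scores and the variable names below are our own assumptions, not the authors'), such a comparison of two independent groups of Likert-based scores could be run with a nonparametric test:

```python
from scipy.stats import mannwhitneyu

# Hypothetical ease-of-use scores for the holographic display (not the study data)
men   = [3.5, 2.5, 3.0, 4.0, 3.0, 3.5, 2.5, 3.5]
women = [2.5, 2.0, 2.5, 2.0, 2.5, 2.5, 2.0]

# Two-sided Mann-Whitney U test: suitable for small, ordinal (Likert) samples
u_stat, p_value = mannwhitneyu(men, women, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
```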
5 CONCLUSIONS

One of the most significant conclusions is that the participants felt generally satisfied and indicated a high desirability towards the three systems. However, they all agreed that the three displays need more development, because currently they are neither useful nor easy to use. Furthermore, the participants provided feedback that clearly shows that they do not see an implementation of these devices in a design company in the near future (5 years). As [14] explains, from a practical point of view, these kinds of devices or displays can currently be regarded as "eye candy". Another interesting conclusion that can be obtained from this study is that most experts in this field (people who have more familiarity with these kinds of devices or who use many drawing/modeling programs) believe that the HMD has more potential than the holographic display and the 2D screen. However, they do not seem to be very satisfied with what the HMD currently offers. "Experts" in this field might be the only ones able to see the potential of the HMD. An interesting observation is that the 3D effect might only have appeared with experienced users who did not have the initial learning curve and were able to see its utility [14]; in other words, novice users are unable to see the added value of the HMD. The third conclusion is connected to the holographic display. It is the device that the participants in general felt least satisfied with. The reason for that can be found in the intended purpose of use for which these devices have been designed. The display is designed for watching 3D models from a distance of 2 to 7 m. In our experiment all users were standing in front of the screen within a distance of 70 cm. At this range the observed image was a little blurry,
which has influenced the experience of the users. This suggests that it needs more development and that the depth sensation should be more significant. Novices should feel and understand the difference in dimensions in visual perception so that they do not fail to learn [15]. According to this study, the user's level of knowledge of this type of solution does not seem to make a significant difference among the studied variables. This conclusion is significant because it shows that this type of HCI is not influenced by the skill level of the participant, since the scores of the variables are similar for all the participants. This research has shown that there is a difference between men and women with regard to which display they prefer. In fact, men seem to prefer the holographic display whereas women opt for the HMD. This can of course be a statistical anomaly, because the percentage of "expert" women on these technologies is higher than that of the male "experts". However, as [15] states, women suffer at first from a lack of confidence, which makes them need more time to understand the technology. Therefore, if they prefer the HMD, it could mean that it is a more intuitive device for which they do not need extra time to understand it in order to feel confident with it. The research shows that the hypothesis that a higher dimension of visual perception results in a better perceived ease of use or an easier learnability is not entirely true, although the experts seem to prefer the HMD to the holographic display or the 2D screen. Moreover, the holographic display was generally ranked worst, whereas according to the hypothesis it should have been ranked better than the 2D screen. There are indications that the analyzed HMD and holographic display are on the more intuitive side of HCI compared to the current 2D screen. Yet, the devices which favor a more intuitive HCI should be developed much further if they are to replace the current devices. Looking at the research that has been done, it would be of great importance to repeat the experiments as part of a larger case study for validation and reliability. It would be highly recommendable to carry out an experiment with a bigger sample of at least 100 participants, because in this way more powerful statistical techniques
can be used [16], which means that the hypotheses can be tested with more certainty. Important aspects that should be improved regarding the holographic display are: (i) the perception of depth when the user is standing in close proximity to the display; (ii) not only the horizontal but also the vertical parallax should be implemented [14]; (iii) the blurriness of the image should be unnoticeable from a distance of 70 cm; and (iv) shape distortion and its dependency on the location of virtual objects [17]. With regard to the HMD, the following improvements should be made: (i) better visualization, (ii) being able to visualize the participant's whole body and not just the hand, to give a more realistic feeling to the user, (iii) better interfaces with a more realistic rendering of the hand/body, (iv) a more ergonomic and more comfortable device, (v) a more robust display, and (vi) an unseeing technology, among other things. Finally, possible improvement options for a future 2D screen display are: (i) a better interface and (ii) a more complete immersion in the VR world.
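The recommended sample size of at least 100 participants can be put into perspective with a standard power calculation. This is only an illustrative sketch under our own assumptions (a two-sample t-test comparison, a medium effect size and conventional alpha and power); the paper itself does not report such a computation:

```python
from statsmodels.stats.power import TTestIndPower

# Effect size (Cohen's d), alpha and power are illustrative assumptions only
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                   alternative="two-sided")
print(f"~{n_per_group:.0f} participants per group")   # about 64 per group for a medium effect
```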
6 REFERENCES

[1] Dillon, A. (2001). Usability evaluation. In: Karwowski, W. (ed.), Encyclopedia of Human Factors and Ergonomics, Taylor and Francis, London.
[2] Davis, F.D. (1993). User acceptance of information technology. International Journal of Man-Machine Studies, vol. 38, p. 475-487.
[3] Davis, F.D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, vol. 13, no. 3, p. 319-340.
[4] Santos, P.J., Badre, A.N. (1995). Discount learnability evaluation. GVU Technical Report GIT-GVU-95-30, Georgia Institute of Technology.
[5] Cook, D.J., Metcalf, H.E., Bailey, B.P. (2005). SCWID: A tool for supporting creative work in design. UIST - Adjunct Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology, Seattle, poster 7.
[6] Dillon, A. (2001). Usability evaluation. In: Karwowski, W. (ed.), Encyclopedia of Human Factors and Ergonomics, Taylor and Francis, London.
[7] Zheng, J.M., Chan, K.W., Gibson, I. (2001). Desktop virtual reality interface for computer aided conceptual design using geometric techniques. Journal of Engineering Design, vol. 12, no. 4, p. 309-329.
[8] Balogh, T., Kovács, P.T., Dobrányi, Z., Barsi, A., Megyesi, Z.G.Z., Balogh, G. (2008). The HoloVizio system – new opportunity offered by 3D displays. Proceedings of the TMCE, p. 79-89.
[9] Demiralp, C., Jackson, C.D., Karelitz, D.B., Zhang, S., Laidlaw, D.H. (2006). CAVE and fishtank virtual-reality displays: A qualitative and quantitative comparison. IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 3, p. 323-330.
[10] Dinh, H.Q., Walker, N., Song, C., Kobayashi, A., Hodges, L.F. (1999). Evaluating the importance of multi-sensory input on memory and the sense of presence in virtual environments. Proceedings of the IEEE Virtual Reality, p. 222-228.
[11] Kaufmann, H., Dünser, A. (2005). Summary of usability evaluations of an educational augmented reality application. Virtual Reality, vol. 4563, p. 660-669.
[12] Santos, B.S., Dias, P., Silva, S., Capucho, L., Salgado, N., Lino, F., Carvalho, V., Ferreira, C. (2008). Usability evaluation in virtual reality: a user study comparing three different setups. Proceedings of the EGVE Symposium, p. 21-24.
[13] Tory, M., Kirkpatrick, A.E., Atkins, M.S., Möller, T. (2006). Visualization task performance with 2D, 3D, and combination displays. IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 1, p. 2-13.
[14] Risden, K., Czerwinski, M.P., Munzner, T., Cook, D.B. (2000). An initial examination of ease of use for 2D and 3D information visualizations of web content. International Journal of Human-Computer Studies, vol. 53, no. 5, p. 695-714.
[15] Teague, D., Roe, P. (2008). Collaborative learning – towards a solution for novice programmers. 10th Australasian Computing Education Conference (ACE), p. 147-153.
[16] Heijnen, P.W. (2008). Research Methods and Data Analysis. Technische Universiteit Delft, Delft.
[17] Horváth, I., Opiyo, E.Z. (2007). Qualitative analysis of the affordances of the three-dimensional imaging systems with a view to conceptual shape design. Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, p. 163-176.
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, 754-764 UDC 659.122:681.337
Paper received: 27.04.2010 Paper accepted: 21.10.2010
A User-Centered Approach for a Tabletop-Based Collaborative Design Environment

Christophe Merlo1,2,* - Nadine Couture1,3
1 ESTIA, Biarritz, France
2 IMS/LAPS, UMR 5218, Bordeaux 1 University, France
3 LaBRI, Bordeaux 1 University, France
* Corr. Author's Address: ESTIA, Technopole Izarbel, Bidart, France, c.merlo@estia.fr

In a global product development market, collaboration between team members has become a key factor for the success of product design projects and innovation. Most of the time, such collaborative situations are supported by traditional tools such as paper-based methods or single-user IT tools. Our aim is to enhance direct interactions between users through the IT tools by proposing physical devices dedicated to the users' business tasks. The proposed collaborative environment is based on the use of a tabletop technology as an output device and physical devices as input devices. An electronic pen and a Wiimote device have been implemented and combined with design tools and tabletop technology to allow such direct interactions and enhance collaborative design situations. A specific analysis of designers' activities has been carried out to help define the input devices and test scenarios. The results of these tests are presented and validate the feasibility of such a collaborative system.
©2010 Journal of Mechanical Engineering. All rights reserved.
Keywords: collaborative design activities, user interactions, tabletop, physical devices

0 FROM COLLABORATIVE DESIGN TO 3D USER INTERACTION

In a worldwide context, companies must develop increasingly complex and innovative products in order to remain competitive. Several approaches, methods and tools have been developed over many years, for example concurrent engineering [1], multi-disciplinary teams and collaborative Information Technology (IT) systems [2]. In such a context, collaboration between team members has become a key factor for the success of product design projects and innovation. Furthermore, collaboration is seen as an effective and concrete articulation between designers involved in a collective action with the same design objectives [3]. We focus on collocated collaborative situations involving designers. Designers must take technical decisions and make choices [4] that constrain the product for its whole lifecycle, after collective processes and many interactions between all stakeholders involving a repetitive cycle of perception, decision and action [5]. At first, we study creativity sessions occurring in the early design phases of a design project, as an example of an innovative and collaborative situation; then we study project reviews occurring later in the design process, as an example of designers interacting to propose solutions or to control their work. In both situations, multidisciplinary stakeholders interact to exchange viewpoints through adequate intermediate objects [6]. Collaborative tools have been proposed to support such design interactions and intermediate objects in association with CAD systems: Maher [7] proposes CSCW (Computer-Supported Collaborative Work) tools, and Rosenman [8] proposes multiple views of product functionalities, added to the CAD ones. Virtual Reality techniques improve the DMU (Digital Mock-Up) to analyze and evaluate the product design at different steps of its development [9] and [10]. Despite such research works [11] and [12] and tools, such collaborative situations are not well supported in most companies. Moreover, creative tasks are often supported by sketching and are thus based on paper-supported work. IT support is generally limited to one vertical screen, which is not always wide enough for the stakeholders, and a unique input mode (one mouse and one keyboard), which makes it difficult for several designers to act upon the visualized object through a CAD system in an interactive mode. To solve this situation we consider that two fundamental aspects can be improved: the possible interactions between users and the IT
system are limited to a single user; moreover, the use of a mouse and a keyboard is not always the best way of interacting with the IT system to achieve a dedicated task. This study builds on the works recently synthesized by Johnson [13]. Nevertheless, companies are far from integrating these concepts and techniques. The aim of our research is to propose new tools to support these interactions between designers by combining physical devices, input and output devices, allowing multiple users' interactions, and by developing an IT framework based on a multidisciplinary model for design collaboration. In this paper only the physical devices dimension is studied. The first objective is to foster multidisciplinary collocated and synchronous collaboration among designers. The second objective is to develop a new way of interaction between them, corresponding to the fact that some specific design tasks may benefit from the use of a specific device rather than traditional mouse manipulation. Therefore, we implemented a collaborative environment that proposes direct interactions between all the designers and the 2D or 3D data [14], using dedicated interactive devices. The followed approach is based on the use of tabletop technology combined with physical interface devices in order to allow designers to behave in a paper-support-like mode. In the next section, we review works on shared interactive surfaces and tables with regard to the proposed collaborative environment.

1 SHARED INTERACTIVE SURFACES AND TABLES: STATE OF THE ART

The shared interface, which allows multiple users to interact simultaneously on the same device, is an old concept, already explored by the end of the 1980s at Xerox PARC in Palo Alto [15]. Wellner [16] proposed the DigitalDesk, the first tabletop that allows interacting with IT by way of an interactive table and by the use of physical devices. For fifteen years, such devices as interactive tables remained rare [17] and [18], but recently there have been marketed interactive multi-touch tables (the Microsoft Surface, the Mitsubishi MERL DiamondTouch, the IntuiFace from IntuiLab or the Ilight from Immersion). However, providing such devices is not sufficient to support the interactions between co-located users.
Concurrently, a lot of interaction styles have been developed using a wide variety of devices (mouse, stylus, keyboard, microphone, etc.) and a large variety of interaction techniques (drag and drop, pull-down menus, tabs, collapsible trees, etc.). Recent work on interactive tables and multi-touch tablets, like the iPhone from Apple, the Lemur from JazzMutant or Jeff Han's surfaces from his company Perceptive Pixel, really questions a foundation of HCI: interaction through a single pointer. The goal of our work is to explore user interaction on a large visualization surface and with devices offering more than a single pointer, in the context of collaborative design.

Firstly, for collocated collaboration, large shared displays such as walls or tables are especially useful. A large surface allows a group of users to work together while providing enough space for personal/private and public spaces. Several researchers in the tabletop community provide ad-hoc solutions for new ways of interaction. Mixed-presence drawing surfaces and tangible interfaces [19] have been experimented with: TIDL, RemoteDT, DiamondSpin, Buffer Framework and the very recent DigiTable and T3. Other works, such as Verlinden and Horvath [20], explore a different point of view which enables each designer involved in a collocated situation to manipulate virtual models with their own system combining an I/O pad and a projector.

Secondly, a person naturally acts by watching the space around her/him to gather information and by manipulating physical objects. Those physical objects can be grasped or moved. Similarly, some special physical objects can serve as a means to interact with a computer system in a natural way, as in everyday life. Let us look at one of the several types of sensing-based interaction: pen based interaction. Subrahmonia [21] underlines that pen based interaction is still the most convenient form of input in a large number of applications. For example, in the preparation of a first draft of a document, using a pen allows concentration on content creation. The pen is a socially acceptable form of capturing information in meetings that is quieter than typing and creates a minimal visual barrier. The pen is also well adapted to applications that need privacy and that need entering annotations/marking. Most of the time annotations remain informal and are considered as mere supports to a verbal exchange. It is important to consider annotations as complex and composite elements that can play a central role in design co-operation. Today, pen hardware technology has improved in user interfaces and handwriting recognition algorithms. There are still, however, a number of challenges that need to be addressed before pen computing can address the features we listed above from Subrahmonia's work [21] to an acceptable level of user satisfaction.

Thirdly, the context of design leads us to take into account the assembling of two elements such as CAD parts, as it is a very common task in this context. For this task, the user has to manipulate two sets of 6 degrees of freedom at the same time, and classical user interfaces such as the 2D mouse or the keyboard are impractical for this assembly task. From a more general point of view, interacting with 3D data is a challenge, particularly well described in [22]. Despite those difficulties, in this paper we explore the integration of three main concepts: the use of pen based interaction on large shared displays for 2D user interaction with sketches/drawings and for 3D user interaction with CAD parts. In the next section new types of handling devices (i.e. devices useable with the hand), with the aim of improving both business tasks and collaboration among designers, are studied.

2 DEVELOPMENTS OF PROTOTYPES IN ORDER TO EXPERIMENT WITH NEW INTERACTION TECHNIQUES

We developed an operational prototype to experiment with two tasks: handling a 2D object and handling a CAD part in the 3 dimensions.

2.1 Research Method

The followed method aims at supporting the implementation of prototypes that must answer users' needs. This method (Fig. 1) is a combined user-centered and technological approach composed of several activities: after an initial definition of the studied domain activity, two parallel but integrated phases are engaged - a user-centered one and a technological one - and finally these two phases merge through a combined evaluation activity and then the activity of defining improvements. The prototype is then ready for more industrial tests.

Fig. 1. Prototype development activities

During the user-centered phase, we apply different techniques inspired from ergonomics. Firstly, the designers' professional context is studied in order to identify, characterize and analyze their collaborative design activities. Secondly, scenarios are defined for experiments in order to evaluate the prototypes in a real-like environment. Finally, we proceed with the user tests. The technological phase is a traditional research phase with, firstly, a large state of the art to evaluate previous work and plan our solutions with the definition of comparison criteria. Secondly, technological choices lead to the design of a prototype. The choice of one technology is made by taking into account the type of interaction identified during the analysis activity of the user-centered phase. In our approach, the key element in the early stage of design of the 2D or 3D user interface consists of identifying the users' major needs, taking into account the users' skills and experience in doing the two targeted tasks. The design of the prototype is a multidisciplinary activity integrating the mechatronics designer, the software designer and the end user in order to design the right interaction technique and the most suitable device. We identified 16 different technologies/devices, which were compared using a set of criteria. At this preliminary stage, all criteria are very generic and based on technical and economical aspects as well as human aspects and the ease of possible customization of the device. Our final choice, pen and Wiimote-based devices, is explained in the next section. Finally, the design and the implementation of the prototype and technical tests validate that the prototype is operational. The implementation influences the definition of the users' scenarios.

2.2 Prototype Implementation

The implemented prototype (Fig. 2) is based on a platform that includes a video projector assembled on a moveable tripod in order to work on any horizontal flat surface like a table or a desk. This set-up is related to Bricks [23] and the IP Network Design Workbench [24] and uses the prototype designed and built in [25]. A graphical representation of data is displayed on the surface of a tabletop. Direct data manipulation is enabled by acting with physical tools on the projection space. Several persons can work together to accomplish a common task around the common space that is the surface of the table.

Fig. 2. Description of the prototype environment

As a physical input device a homemade electronic pen is used. It is composed of an infrared diode of 950 nm, which allows a Nintendo Wii Remote (Wiimote) device to identify the position of the cursor on the table, and a contact micro switch with a very small displacement (<1 mm) for activating the infrared diode (Fig. 3). Note that industrial solutions are available, for example the Anoto technology, whose pen enables digital capture, transfer and processing of handwritten text and drawings on a paper sheet.

Fig. 3. First 2D prototype: pen device version 2

In addition, two Nintendo Wii Remotes and a Nunchuk are used for achieving the whole set of functions. Firstly, one Wiimote device is used as a receiver and must be placed so that it looks at the screen projected onto the table. The angle formed between this device and the screen is 45 degrees for an optimal reception, and its distance must not exceed 4 to 5 meters. During its use the field between the Wiimote device and the pen must be kept empty. The second Wiimote device is an active device for transmitting the user's commands. For 3D interaction, the pen has a pointer role and must be used close to the table, under 1 centimeter, for good selection recognition. Finally, a LED rail (10 centimeters long) is used as a base for the Wiimote devices. To integrate the hardware and the trade software and allow interaction between them, three tools were used: the Microsoft .NET Framework 3.5 Service Pack; the BlueSoleil software, which allows Bluetooth communications between the PC and the Wiimotes; and Smoothboard 1.6, a Wiimote Whiteboard freeware that contains a customizable floating toolbar allowing effortless control of presentations. The built-in annotation feature allows writing and highlighting directly on top of any application or document. Finally, we configured the devices to emulate the mouse and some key combinations coupled to the functions of the collaborative DMU software (Product View).
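In this set-up, the pen position reported by the receiving Wiimote's IR camera has to be mapped onto the image projected on the table (in the described prototype this calibration is handled by Smoothboard). Purely as an illustration of the underlying principle, and not as the authors' code, such a four-point calibration can be expressed as a planar homography; the point values below are made up:

```python
import numpy as np

def homography(src_pts, dst_pts):
    """Compute the 3x3 homography mapping 4 source points to 4 destination points."""
    a, b = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(a, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def to_table(H, ir_point):
    """Map one IR camera reading to projected-table (pixel) coordinates."""
    p = H @ np.array([ir_point[0], ir_point[1], 1.0])
    return p[0] / p[2], p[1] / p[2]

# Four-point calibration: IR readings of the pen held at the projected corners (hypothetical)
ir_corners    = [(120, 90), (880, 110), (860, 680), (140, 700)]
table_corners = [(0, 0), (1024, 0), (1024, 768), (0, 768)]   # projector resolution
H = homography(ir_corners, table_corners)
print(to_table(H, (500, 400)))   # cursor position on the projected image
```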
3 USER STUDY

3.1 Aim of the User Study

The aim of the user-centered experiments is to compare the interaction between the users in a day-to-day activity and in the prototype environment. This evaluation is influenced either by the software and the way its functions are available, or by the implemented handling devices. Various criteria have been defined. We thus intend to identify the advantages and the disadvantages of each handling device for the following qualifications of effectiveness: handiness (number of actions, time of the actions, position of people); precision; intuitiveness (time needed to get to grips with the device); representativeness of the expected results; transparency of the object that one holds in one's hands, i.e. the ability to use the interface while doing other actions. The followed method consists in defining a campaign of experiments based on a set of scenarios. They are combined with two different configurations (working on the table and working on a wall) and for a panel of users. The evaluation will allow identifying improvements for the handling devices. We have chosen the table and the wall because they are the most common supports in collaborative situations of discussion and thinking. Table 1 shows the quantitative variables to be collected.

Table 1. Measured quantitative criteria
Criteria                           Measured parameter                                          Reference value
Velocity for achieving the task    Time                                                        Time obtained with a mouse
Precision                          Distance between 3D parts at the end of the task            Precision obtained with a mouse
Richness of marking possibilities  Quantity and readability of the text / symbols / sketches   Marking made by an expert
Level of collaboration             Number of interactions / Time                               Number of interactions between experts with a mouse

3.2 The Set-Up of the User Study

15 subjects participated in our user study. They are all researchers in different fields: mathematics, mechanical design and computer science. They were not paid. 7 of the volunteers were female and 8 were male, aged from 22 to 42 years. Their level of study was an MSc or a PhD in technical or scientific disciplines. Nearly half of the subjects knew the DMU software used (Product View) but were not experts in using it. Subjects were divided into design task experts, 2D/3D software users and naive users. None of them had used such handling devices previously. This preliminary study was undertaken in order to determine basic tasks that would be representative of collaborative situations in design. Consequently, people without any specific knowledge of design or of handling devices should be able to participate in the experiments. In order to efficiently collect and exploit the results of the user study, three persons managed the experiments. The first person explained the task and conducted the tests. The second one observed how the users were interacting with sketches, drawings and CAD parts. This observer recorded all the actions achieved by the users for further analysis. Controls were visual, but some parameters were also measured (time, for example). The third person guided the users through a questionnaire. This questionnaire was designed to get qualitative and subjective feedback on our user interfaces. Then, a scenario was defined by a grid for the observer evaluation, an instruction form for the users and the final questionnaire for the users. An objective of half an hour per user for achieving the scenarios was defined.

3.3 The First Targeted Task: Handling a 2D Object

The final aim of the first prototype was deduced from the analysis of the designers' activities. Our intent was to propose a new type of interaction between designers involved in a creativity session.
During such sessions, designers usually interacted with objects represented on sheets of paper: sketches, drawings, images, etc. Their activities consisted of drawing, writing or marking. Sometimes they redrew a section of the paper at a larger scale. During paper-based sessions, each designer has their own tools (pencil, pen, ruler, eraser, etc.) and may use them during or after another designer's interaction. During software-based sessions, each designer has to wait for another designer's interaction to finish before taking the devices and interacting. This is exactly the situation that we intend to improve in order to come close to paper-based situations. To simulate direct interactions, we selected the following functions because of their representativeness; they are the first ones to be used by designers as well as the most often used: visualize images; enlarge / reduce an image; draw hand-made sketches; write readable text. The 2D image editor software associated with the pen device offers four basic functions that will be used for testing: to open a 2D document, to modify the scale of the display, to create a separate layer, and to draw on this layer. The first two functions concern the visualization of the 2D image; the final two allow marking to be managed.
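A minimal sketch of these four basic functions, assuming a simple in-memory document model (the class and method names are ours, not those of the actual 2D editor used in the prototype):

```python
class AnnotationSession:
    """Open a 2D document, change its display scale, and manage marking layers."""

    def __init__(self):
        self.document = None
        self.scale = 1.0
        self.layers = []                  # marking kept separate from the image

    def open_document(self, path):        # function 1: open a 2D document
        self.document = path              # placeholder for real image loading

    def set_scale(self, factor):          # function 2: enlarge / reduce the display
        self.scale = factor

    def add_layer(self, name):            # function 3: create a separate layer
        self.layers.append({"name": name, "strokes": []})

    def draw(self, layer_index, points):  # function 4: draw on this layer
        self.layers[layer_index]["strokes"].append(list(points))


session = AnnotationSession()
session.open_document("assembly_screenshot.png")
session.set_scale(2.0)
session.add_layer("markup")
session.draw(0, [(10, 12), (14, 18), (20, 22)])
```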
3.4 The Second Targeted Task: Handling a CAD Part in 3 Dimensions

Design project reviews usually aim at checking large assemblies of 3D models by visualizing them on large vertical screens. Stakeholders interact orally among themselves and with one operator, who is the only one to manipulate the 3D models. During informal design reviews involving a few designers, these designers look at a traditional workstation screen, but only one of them is able to interact with the workstation at a time. The most common functions concern the visualization and visual analysis of the assemblies in order to obtain an overview of the parts made by other teams and to check the coherency of the definition and position of all the parts. As a consequence, the functions to be implemented are: visualize 3D parts and assemblies; make positive / negative zooms, translations and rotations of the 3D scene; translate and/or rotate selected parts with regard to the whole assembly; make dynamic sections of the parts in order to "look inside" the assemblies.

4 EVALUATION OF THE PROTOTYPES

We built the two prototypes and tested them in very basic situations. These first experiments should be considered as a cognitive-walkthrough-based user study [26]. They validate the technical approach, but they are not significant enough for evaluating the interest or limitations of our approach in real business situations.
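As an illustration of the translate/rotate functions listed above, the sketch below applies homogeneous transforms to the pose of one selected part expressed in the assembly frame. This is plain numpy and only models the geometry; it is not the Product View API.

```python
import numpy as np

def translation(dx, dy, dz):
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    return T

def rotation_z(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.eye(4)
    R[:2, :2] = [[c, -s], [s, c]]
    return R

# Pose of the selected part expressed in the assembly frame
part_pose = np.eye(4)
# Rotate the part by 30 degrees about z, then shift it 50 mm along y
part_pose = translation(0.0, 50.0, 0.0) @ rotation_z(np.pi / 6) @ part_pose
print(np.round(part_pose, 3))
```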
Fig. 4. Pen device for marking (left) and pen and Wiimote devices in a 3D situation (right)
In particular, the set of users is not heterogeneous enough, although they are all familiar with the tasks and none of them had ever used such a pen device. Moreover, the conditions were those of a laboratory. Before using the system, a quick calibration step is required: the corners of the useful screen zone on the table are identified with the pen so that the first Wiimote can identify the working zone. Then the pen and the second Wiimote can be used with both the 2D and the 3D software. Figure 4 illustrates the way the pen is used to add marking on a 2D image and how it is combined with the Wiimote for 3D tasks. Considering the initial needs that guided the design of the prototypes, three user-centered scenarios have been formalized: 1st scenario: handling of a 3D CAD assembly; 2nd scenario: marking on a 2D document; 3rd scenario: drawing in a creative context. In the first scenario, two users start the exercise with three CAD parts that are not assembled. They are expected to modify the parts' relative positioning in order to approach the ideal positioning. Each user has at least one part to move. The useful functions are zooming, rotating and translating. The exercise must be achieved within fifteen minutes; this period of time was defined in order to prevent users from learning the devices so well that the results of the tests would be modified.
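The four-corner calibration described above can be seen as estimating a plane-to-plane mapping between the Wiimote IR-camera coordinates and the projected screen coordinates. The following is a minimal sketch of such a mapping with a standard direct linear transform in numpy; the corner values are made up, and this is not necessarily how the Smoothboard software implements it.

```python
import numpy as np

def calibration_homography(camera_pts, screen_pts):
    """Estimate the 3x3 homography mapping IR-camera points onto screen points
    from the four corner correspondences (direct linear transform)."""
    A = []
    for (x, y), (u, v) in zip(camera_pts, screen_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def pen_to_screen(H, x, y):
    """Map a raw IR pen position to projected screen/table coordinates."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Corners touched with the pen during calibration (IR camera units, made up)
camera_corners = [(102, 88), (910, 95), (935, 690), (95, 700)]
# Corresponding corners of the projected image (pixels, assumed resolution)
screen_corners = [(0, 0), (1024, 0), (1024, 768), (0, 768)]
H = calibration_homography(camera_corners, screen_corners)
print(pen_to_screen(H, 500, 400))
```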
Fig. 5. The first scenario in the wall configuration

This situation (Fig. 5) allows evaluating the way the users collaborate and work with the two handling devices, the pen and the Wiimote, even if the situation is very close to that of mouse use. The pen is used to point at and select
objects or actions, and the Wiimote device is used to perform dynamic movements of the 3D part, such as translation, rotation, etc. The second scenario (Fig. 6) is dedicated to a single user. A screenshot representing a 3D assembly is shown. The assembly contains many positioning errors, and the user has to identify them and make markings on the screenshot describing them as precisely as possible. This experiment focuses on the way the pen device is used, especially for writing. It lasts ten minutes at most.
Fig. 6. The second scenario: resulting marking

The third scenario is supposed to take place during a creativity session where two users have to exchange ideas to define a new design. A sketch of a car design is projected onto the screen. The users have to modify the design of the car by erasing and drawing upon it. They have different objectives that should generate interactions, and even conflicts, between them: the first user must introduce sharp edges with hard angles, and the second user must maximize glass surfaces. This scenario also requires two users but focuses more on the way the pen device helps to formalize ideas through sketches. Only the pen device is used. It lasts five minutes. Two experts went through the scenarios in order to measure initial values and calibrate the future observations. Each user had a similar environment, but two main configurations were tested: horizontal or vertical screen.
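During these scenarios the observer records the quantities listed in Table 1 and compares them with the mouse/expert reference values. A small sketch of how such normalized scores could be computed; the function, its variable names and the sample numbers are ours, given only as an illustration of the comparison principle.

```python
def normalized_scores(task_time_s, n_interactions, end_distance_mm,
                      ref_time_s, ref_interactions, ref_distance_mm):
    """Compare measured values with the reference values of Table 1 (>1 is better)."""
    return {
        "velocity": ref_time_s / task_time_s,
        "precision": ref_distance_mm / max(end_distance_mm, 1e-9),
        "collaboration": (n_interactions / task_time_s)
                         / (ref_interactions / ref_time_s),
    }

print(normalized_scores(task_time_s=780, n_interactions=35, end_distance_mm=4.0,
                        ref_time_s=600, ref_interactions=40, ref_distance_mm=2.5))
```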
5 RESULTS AND DISCUSSION

The experiments generated a large amount of information resulting from the observations, the measurements and the answers of the user panel to the final questionnaire. Here, general results are presented, as the large quantity of data has not yet been entirely exploited. First of all, all the scenarios were achieved before the time limit. All users were able to manipulate the different handling devices after a very brief description of their use. Several statistics were then generated. Concerning the first scenario, dedicated to 3D manipulations, only about 1 user in 4 prefers the handling devices to the mouse. This corresponds to the fact that manipulating 3D objects requires high precision. Moreover, the users had to learn how to use the handling device, whereas they were more expert with the mouse. Analyzing these statistics in greater depth, we established that nearly 90% of the users found the handling devices easy to understand for 3D manipulations, and 70% found that they foster exchanges between users. The average time for this scenario was 13 minutes (from 8 to 15 minutes), and the time needed to understand how to use the devices was approximately 2 minutes (between 1 minute minimum and 3 minutes maximum). Nevertheless, 56% said that the handling device would help them in a design project review, and only 11% agreed that the handling devices are more precise, easier to use, and quicker for achieving a task than a mouse. Two key points have therefore been identified: first, the implemented devices do not require specific learning and they have an added value compared to the mouse for several 3D tasks; second, they suffer from a lack of precision that reduces their added value in a 3D context. Furthermore, the scenario was built for a generic validation, and several kinds of tasks were defined and tested. It must be considered that different tasks should perhaps be associated with different, more specialized types of devices. This conclusion is also justified by the following results, which allow comparing the performance of the handling devices with respect to basic tasks: selection of an object, translation, rotation, positioning and zooming. Positioning, as a combination of selections and links between objects, is a very technical task, and the handling devices received a very poor evaluation for it (less than 2 out of 6). More basic and non-technical tasks such as selection and zooming received a good evaluation: nearly 4 out of 6. Finally, rotation and translation are intermediate tasks considering the technical level required of the user, and they also received a good evaluation (3.3). Such tasks may already be familiar to most users who know CAD systems or video games. As far as scenarios 2 and 3, dedicated to 2D interactions, are concerned, the general ratio is reversed: 3 out of 4 users prefer the pen device to the mouse, arguing that the gestures are more "natural" than using the mouse. This can be explained by the fact that using a pen corresponds to years and years of practice since childhood. More precisely, 70% agreed that the pen device was easy to understand. 70% also said that it helped exchanges between users, and 75% that it was helpful for creative sessions or annotation/marking. Finally, the pen device seemed more precise and simpler to 30% of the users. During the tests, users expected the pen device to work like a traditional pen, but they were surprised by the fact that it was similar but not identical. A traditional pen has a thin and precise lead, whereas the pen device has a larger tip, and the location of the numerical projection depends on the angle between the pen device and the table. This point is also a source of improvements for the pen device. Detailed results illustrate this conclusion by underlining the lack of facilities in the case of marking: the evaluation is only 2.7 out of 6. The zoom task and the draw task were evaluated at 4 and 4.3 out of 6. A commercial pen is certainly the solution for our further experimentation. Using Nintendo Wiimote devices was a way to develop low-cost prototypes to conduct experiments, see e.g. Duval [27]. One of the initial objectives was to propose new handling devices that help collaboration during specific design activities. The starting point was the curb on collaboration induced by the existing keyboard and mouse devices, which cannot easily be shared between people working together on one computer. We consider mixed and interactive interfaces/devices, in particular Tangible User
Interfaces (TUIs), as systems able to mitigate these disadvantages. Leaving the paradigm of "virtual reality", where the user is immersed in the virtual mock-up, we enter the paradigm of "augmented virtuality", where the user interacts with the virtual mock-up by way of real (i.e. physical) objects. These make it possible to add new types of handling devices, that is to say devices manipulable by hand and dedicated to very specific tasks such as design tasks, to the traditional keyboard and mouse, and even to commercial pen and SmartBoard products that correspond to generic collaborative tasks. These new devices allow the user to provide inputs corresponding to specific business functionalities. The first results of the presented experiments demonstrate that the implemented handling devices have real potential for achieving design tasks. Several basic tasks were completed successfully by the user panel, and some of them seemed to be performed better with the handling devices than with a mouse (selection, writing and sketching, for example). Some other tasks were too specific and showed the limits of our prototypes. Several improvements were then considered. First, the precision of the pen device must be improved by identifying better components and focusing on the real manipulations of the users. Second, we used existing software and were thus obliged to use functions implemented for a mouse and a keyboard; as a consequence, we must study similar functions optimized for the type of handling device that we proposed. For example, a rotation can be performed dynamically in Product View by acting upon a thin circle around the part; if we replace it with a large circle, located on or outside the part, we may expect better results concerning precision. Third, working on a table or on the wall may result in different perceptions by the users: our experiments did not analyze this point, which might have influenced the results. More dedicated scenarios must be created in order to identify different practices and then more specific devices. Finally, we used the same handling devices for several tasks. We have to analyze each task and the obtained results more deeply in order to propose, for some of them, evolutions of the handling devices or even other types of handling devices. For example, for a 3D manipulation such as a rotation, a more specific
handling device should be easier to understand and to manipulate. In addition, information should be gathered to evaluate whether the learning effect of the first scenario has an influence on the subsequent scenarios. The same holds for evaluating the impact of the wall or the table configuration during the first scenarios.

6 CONCLUSION AND FUTURE WORK

Our approach, based on the use of tabletop technology and physical devices, aims at proposing a new way of interaction between designers and collaborative design IT tools. The experiments carried out validate this feasibility work by demonstrating the added value of the implemented handling devices compared to standard keyboard and mouse devices for most basic tasks. Technical tasks also show some limitations: the devices are still too generic for very technical tasks, and there is a lack of precision for general tasks. Therefore, in future work their precision and the appropriateness of the devices should be improved. Further work will also focus on the business activities in order to improve both software behavior and devices. First, the IT environment has to be improved in order to propose adequate functions for each handling device. Second, more realistic scenarios related to the context of use (wall/table, one task/one device, etc.) should be defined in order to identify the real added value of dedicated physical devices versus standard devices (mouse, commercial pens, etc.).

7 REFERENCES
[1] Prasad, B. (1996). Concurrent engineering fundamentals - vol. 1. Prentice-Hall, Englewood Cliffs.
[2] Gonzalez, V., Mark, G. (2005). Managing currents of work: Multi-tasking among multiple collaborations. Proceedings of the 9th European conference on CSCW, Paris, p. 143-162.
[3] Legardeur, J., Merlo, C., Franchistéguy, I., Bareigts, C. (2003). Co-operation and coordination during the design process: Empirical studies and characterization. Proceedings of International CIRP Design Seminar, Grenoble, p. 385-396.
[4] Klein, M., Sayama, H., Faratin, P., Bar-Yam, Y. (2003). The dynamics of collaborative design: Insights from complex systems and negotiation research. Concurrent Engineering: Research and Applications, vol. 11, no. 3, p. 201-209.
[5] Gibson, J.J. (1977). The theory of affordances. In: Shaw, R.E., Bransford, J. (eds.), Perceiving, Acting and Knowing, Lawrence Erlbaum, Hillsdale.
[6] Boujut, J.F., Blanco, E. (2003). Intermediary objects as a means to foster co-operation in engineering design. Journal of Computer Supported Collaborative Work, vol. 12, no. 2, p. 205-219.
[7] Maher, M.L., Rutherford, J.H. (1997). A model for synchronous collaborative design using CAD and database management. Research in Engineering Design, vol. 9, p. 85-98.
[8] Rosenman, M.A., Gero, J.S. (1996). Modelling multiple views of design objects in a collaborative CAD environment. Computer-Aided Design, vol. 99, no. 4, p. 193-205.
[9] Moreau, G., Fuchs, P. (2001). Virtual reality in the design process: application to design review and ergonomic studies. Proceedings of ESS, Marseille, p. 123-130.
[10] Lehner, V., DeFanti, T. (1997). Distributed virtual reality: Supporting remote collaboration in vehicle design. IEEE Computer Graphics and Applications, vol. 17, no. 2, p. 13-17.
[11] Yoon, J., Oishi, J., Nawyn, J., Kobayashi, K., Gupta, N. (2004). FishPong: Encouraging human-to-human interaction in informal social environments. Proceedings of the ACM conference CSCW, Chicago, p. 374-377.
[12] van der Auweraer, H. (2008). Frontloading design engineering through virtual prototyping and virtual reality: Industrial applications. Proceedings of the International Symposium series on Tools and Methods of Competitive Engineering TMCE, Izmir.
[13] Johnson, G., Gross, M., Hong, J., Do, E. (2009). Computational support for sketching in design: A review. Foundations and Trends in Human-Computer Interaction, vol. 2, no. 1, p. 1-93.
[14] Rivière, G., Couture, N. (2008). The design of a tribal tabletop. Proceedings of IEEE International Workshop on Horizontal Interactive Human-Computer Systems (Tabletop), p. 29-30.
[15] Stefik, M., Foster, G., Bobrow, D.G., Kahn, K., Lanning, S., Suchman, L. (1987). Beyond the chalkboard: computer support for collaboration and problem solving in meetings. Communications of the ACM, vol. 30, no. 1, p. 32-47.
[16] Wellner, P. (1993). Interacting with paper on the DigitalDesk. Communications of the ACM, vol. 36, no. 7, p. 86-96.
[17] Bérard, F. (2003). The magic table: computer-vision based augmentation of a whiteboard for creative meetings. Proceedings of the Workshop on Projector-Camera Systems (Procams), IEEE International Conference in Computer Vision, Nice.
[18] Rekimoto, J. (2002). SmartSkin: An infrastructure for freehand manipulation on interactive surfaces. Proceedings of CHI conference on Human factors in computing systems, p. 113-120.
[19] Tuddenham, P., Robinson, P. (2007). T3: Rapid prototyping of high-resolution and mixed-presence tabletop application. Proceedings of IEEE International Workshop on Horizontal Interactive Human-Computer Systems, p. 11-18.
[20] Verlinden, J., Horváth, I. (2008). Enabling interactive augmented prototyping by a portable hardware and a plug-in-based software architecture. Strojniški vestnik - Journal of Mechanical Engineering, vol. 54, no. 6, p. 458-470.
[21] Subrahmonia, J., Zimmerman, T. (2000). Pen computing: Challenges and applications. International Conference on Pattern Recognition, vol. 2, p. 20-60.
[22] Bowman, D.A., Kruijff, E., LaViola, J.J., Poupyrev, I. (2005). 3D user interfaces: theory and practice, Addison Wesley.
[23] Fitzmaurice, G., Ishii, H., Buxton, W. (1995). Bricks: laying the foundations for graspable user interfaces. Proceedings of the 13th SIGCHI conference on human factors in computing systems, p. 442-449.
[24] Kobayashi, K., Hirano, M., Narita, A., Ishii, H. (2003). A tangible interface for IP network simulation. Proceedings of CHI conference on Human factors in computing systems, p. 800-801.
[25] Couture, N., Rivière, G., Reuter, P. (2008). GeoTUI: A tangible user interface for geoscience. Proceedings of the second ACM International Conference on Tangible and Embedded Interaction TEI, p. 89-96.
[26] Polson, P.G., Lewis, C., Rieman, J., Wharton, C. (1992). Cognitive walkthroughs: a method for theory-based evaluation of user interfaces. Journal of Man-Machine Studies, vol. 36, no. 5, p. 741-773.
[27] Duval, T., Fleury, C., Nouailhas, B., Aguerreche, L. (2008). Collaborative exploration of 3D scientific data. ACM Symposium on Virtual Reality Software and Technology, p. 303-304.
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, 765-783 UDC 004.3:316.774
Paper received: 27.04.2010 Paper accepted: 23.10.2010
The Upcoming and Proliferation of Ubiquitous Technologies in Products and Processes
Bart Gerritsen1,* - Imre Horváth2
1 TNO Netherlands Organisation for Applied Scientific Research, The Netherlands
2 Delft University of Technology, Faculty of Industrial Design Engineering, The Netherlands
Ubiquitous computing is a research field that started in the late 1980s, and is now believed to be at the brink of a steep acceleration in terms of technology development and applications. Ubiquitous computing is often regarded as the third wave of computing, after a first wave of mainframe computing and a second wave of PC computing. It aims at supporting humans in their daily life activities in a personal, unattended and remote manner. Towards this end, it scatters computing capacity across the environment, and takes out the oblique PC man-machine interface. Instead it employs networked sensors and devices surrounding us. There is no dedication, in the sense that many devices in an environment collectively serve multiple humans around. Both humans and devices are assumed to be nomadic, and may enter or leave the environment. In addition, to materialize a personal and context-dependent interaction, identification and context awareness are also key factors. Although the vision itself has become fairly well-conceived, several technological and non-technological problems are yet to be overcome. This paper provides a comprehensive overview and a critical survey of the current and future state of ubiquitous technologies.
©2010 Journal of Mechanical Engineering. All rights reserved.
Keywords: mobile communication, omni-present sensing, ad-hoc networking, ubiquitous computing, information transformation, pervasive approaches

0 INTRODUCTION

Ubiquitous computing (or UbiComp, or UC) is a research field that started in the late 1980s [1] (Fig. 1), and rapidly expanded in the 1990s. The UC paradigm is centered on the idea of integrating computing power in devices and environments in such a way that they offer optimal support to human daily life activities [2]. Instead of going to a single specific device at a fixed location and explicitly formulating the information or action sought, the environment and the devices that sense us more or less autonomously serve us in a proactive, unattended and hardly noticed but effective manner [3]. Ubiquitous computing is the dawning era of computing, in which individuals are surrounded by many networked, spontaneously yet tightly cooperating scattered 'computers', worn or carried, mobile, embedded and remote. Many of them serve dedicated processes as part of physical objects [4]. Car manufacturers, who traditionally have a relatively strong say over their suppliers, have already successfully integrated safe braking, auto-activated lighting and screen wipers, navigation and active safety protection,
and a wealth of other electronic supplies, to assist in car driving without obtrusively demanding the driver's attention or intervention [5]. Ubiquitous computing is not simply the same as mechatronics, intelligent systems, or the Internet-of-things; it is underpinned by different basic principles [6]. Weiser, M. is often seen as the founder of UC [1]. In fact, he named this field of interest while he was working at Xerox, and also defined its principal goals, most of which still stand today [7]. The terms omnipresent computing, calm technology [8], pervasive computing [9], and ambient intelligence [10] are nowadays also used, and are believed to denote roughly the same as UC [11] and [12]. Several experimental devices, from active badges to smart classroom and conference facilities, served as vehicles to take the emerging technology further. Research in UC is inspired by two grand challenges, namely humane computing [13] and integrative cooperation [14]. From the mid-1990s, humane computing advanced through a number of results in the domain of Human-Computer Interaction (HCI). Research in full-body gesture recognition was proposed in the AliveII research project [15]. Various methods and technologies
* Corr. Author's Address: TNO, Wassenaarseweg 56, 2333 AL, Leiden, The Netherlands, Bart.Gerritsen@planet.nl
have been proposed to recognize moving objects [16], to track pointed-at objects on a desk [17], and to recognize objects drawn in the air by humans [18]. A recent and extensive overview of these developments is presented in [19]. These examples cast light on a specific characteristic of UC: it can be effective on a short distance: (i) inside and on the human body, (ii) between worn and/or carried personal devices, or (iii) between personal devices and environment. Humans are mobile and their personal devices are portable. Therefore UC is generally thought of as a class of mobile technologies. In addition to human-device interaction and mobile computing, a fully fledged UC implementation requires many more technological constituents, such as networked intelligent sensors, agents, data transmitters, and sensation generators.

Fig. 1. Computing technologies advancements of the past decades (timeline from mainframe- and workstation-oriented computing, through personal computing, on-line and Internet-based networking, virtual reality and mobile communication technologies, to ubiquitous technologies, 1930 to 2020)

The goal of this paper is to provide a structured and comprehensive overview of existing and emerging ubiquitous technologies and their application in smart customer products, information appliances and ambient processes. It also explores the potential role of ubiquitous computing in development scenarios for the future. Finally, it seeks to identify and formulate some major technological challenges of the next decade. In order to structure the survey of this widely ranging research field, we applied the reasoning model shown in Fig. 2. This reasoning model identifies five functional clusters of ubiquitous technologies, articulating their classes
of technologies. Actually, these classes reflect the relationships of functionally similar technologies to humans, artifacts and environments. We must note that, given the actual industrial practice, this overview could be neither as structured nor as comprehensive as we would have liked. In structuring the overview of the clusters of ubiquitous technology, we had to face the multi-faceted implementation of these technologies, which hindered their sharp demarcation. To mention just one example: integrated sensor nodes may be simultaneously sensors and transmitters, and elements of sensor networks. Do they then belong to the cluster of sensing, transmission, or networking technologies? Additionally, the wide spectrum of technologies within each of these clusters and classes forced us to neglect certain important domains, developments, and new applications. For instance, we could not consider and address broader societal problems currently under debate. In addition, we had to restrict the focus of the study to enabling technologies and, for the larger part, skip the related applications. Finally, we had to stick to a survey of commonalities and trends, without digging into the finest details. Further to this Introduction, the paper is organized as follows. Section 1 gives a brief and instrumental overview of ubiquitous sensor technologies. Section 2 deals with the various transmission technologies. Section 3 investigates
the various approaches of wired and wireless, and permanent and ad-hoc networking. Section 4 gives an overview of the various sensation generation and interaction simulation technologies. In Section 5, we discuss the major content exploration technologies and techniques with a view to understanding contexts and building context-awareness. In Section 6, we classify and briefly discuss the application of UC technologies in next-generation products. The last section elaborates on the use of ubiquitous technologies in typical creative industrial processes, such as design and manufacturing. Finally, some concluding remarks are offered together with future research outlooks.

Fig. 2. Functional clusters of ubiquitous technologies (sensing, transmission, networking, conversion and exploration technologies, together with their classes)

1 SENSING TECHNOLOGIES

Ubiquitous sensing technologies allow us to elicit and collect information about steady or changing states of artifacts, phenomena, and environments. As shown in Fig. 2, we can further classify them according to their placement as: (i) sensor materials, (ii) embedded sensors, (iii) ambient sensors, and (iv) wearable sensors. Note that these classes are not orthogonal, i.e., not mutually exclusive: embedded sensors can also be
ambient, for instance. Sensors sample physical parameters from their environments, such as chemical agents, temperature, acoustic pressure, force, and light. These measured parameters can be associated with events such as a hazardous gas blow-out, opening doors, human presence, tipping ladders, etc. Sensors not only sense but can also actuate. Sensors come in a variety of forms: embedded, fixed, adhesive, as a gel, movable, wearable, etc. For UC appliances, sensors can also be mounted on an auxiliary device or on clothing, such as a jacket [20] or a helmet. Critical to the (non-)perception of UC is a rich, multi-modal humane HCI model, as outlined in [21] to [24]. Important modalities of sensation in the context of UC are: view (visual modality), requiring image sensing and recognition technology; voice (auditory modality), requiring speech sensing and recognition technology and natural language processing; gesture, requiring gesture sensing and recognition technology; and touch, force and grip, requiring touch sensing and haptic technology. Sensing modalities that are less strictly driven by deliberate human gestures are:
presence and proximity, requiring (body) heat, movement or sound sensing and recognition technology; mood, requiring gesture, vocal and facial sensing and recognition, image analysis and expression analysis technology; and activity and motion (accelerometers), much like presence, possibly combined with mood. Sensors must fit the underlying physics and deliver a fitting output signal. Sensors can function individually, clustered or networked.

Fig. 3. Overview of wireless nano-to-wide area networks and network technologies

1.1 Sensor Materials

Sensor materials are a natural manifestation of physical sensors. These materials change their state and properties when sensing a physical change [25]. Multi-functional materials and bio-engineering materials lend themselves to this class of sensors [26] and [27], giving rise to the emergence of a new class of products, manifesting in smart exo-skeletons and endo-prostheses, as well as in sports, outdoor and wellbeing activity devices, and in veterinary, food and beverage applications [28] to [30].

1.2 Embedded Sensors

Robust, self-contained and self-organizing miniaturized sensor nodes are easily embedded in
products and environments [31]. This includes even fully functional video cameras. Gluing techniques and Shape Deposition Manufacturing (SDM) technology allow for swift embedding [32]. Synthetic multi-functional materials are structured materials that are designed and engineered to possess multiple principal behaviors and properties that can be tuned to specific applications. Nanostructured materials of this kind allow for the embedding of nanoelectromechanical building blocks (NEMS) and will lead to further integration of sensor-processing-actuator systems into multi-functional materials [33]. Apart from synthesized materials, biomaterials that can perform the same functions [26] and [27] are being developed. Bio-chemical and mechanical problems represent significant challenges here, as does operational lifetime. Perhaps the most compelling technological challenge to overcome is energy consumption [34]. Most embedded sensors are still AAA- or AA-battery powered, and battery change is tiresome or even impossible. A technological challenge for the coming decade is recharging from heat, pressure, motion, solar power, or otherwise. A sub-200 µW power consumption per NEMS (node) is believed to be necessary for that. Radio integration and energy consumption remain another challenge, addressed later on.
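A rough, back-of-the-envelope estimate shows why such a low power target matters for unattended, embedded nodes. The battery figures below are illustrative assumptions, not values taken from the text.

```python
# Battery life of an embedded node at a given average power draw
battery_capacity_mAh = 1000.0          # assumed AAA-class cell
battery_voltage_V = 1.5
energy_J = battery_capacity_mAh / 1000.0 * 3600.0 * battery_voltage_V  # ~5400 J

for average_power_W in (1e-3, 200e-6, 50e-6):
    lifetime_h = energy_J / average_power_W / 3600.0
    print(f"{average_power_W * 1e6:6.0f} uW -> {lifetime_h:7.0f} h "
          f"(~{lifetime_h / 24 / 365:.1f} years)")
```

Under these assumptions, a node drawing 1 mW on average lasts only a couple of months, whereas staying in the sub-200 µW regime brings the lifetime close to a year, and energy harvesting is needed beyond that.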
The concept of Subscriber Identity Modules (SIM) was introduced in 1996 to authenticate and bind users to devices, and devices to networks, by means of a device-mounted smart card (SIM card). The use of SIM card technology is one way to authenticate humans by means of a device-mounted smart card. SIM technology is a preconfigured hardware solution with limited capabilities. Since it is preconfigured in the device, no real-time assessment of the authentication of its user is possible. Multi-modal identity tracking is a key technology in UC that may replace SIM authentication for that (and other) purposes.

1.3 Ambient Sensors

Ambient sensors are sensors capable of jointly sensing multiple physical phenomena in various surroundings. Note that in industry, the term is also used for sensor systems that, for instance, combine different classes of light. Apart from smart rooms and buildings [35] to [37], ambient sensing techniques have been applied to urban environments [38] to [40]. The application of ambient sensor technology to human motion tracking is discussed in [41]. A method for human identification and tracking in a smart room using pan-tilt-zoom cameras and phase-array auditory localization was presented in [42]. Group-level and individual-level identification are combined for scene analysis. Several issues related to localization, remoteness and non-intrusiveness have been discussed in [43]. One of the primary technological concerns is sensor timing and the synchronization of heterogeneous signals and data. In mid- and long-distance combined auditory and vision localization, auditory signal timing can bring down the overall sensing bandwidth and thus introduce positioning and tracking errors [43]. At present, global time synchronization is common, but at the cost of considerable overhead [44]. An approach has been proposed that accounts for the total sensor-node residence time of arriving samples and aligns times at the end, in order to reduce this overhead [45]. Signal travel or arrival time differences are not the only cause of asynchronicity; processing, encoding, transmission and presentation can also be sources of timing mismatch. In fact, the whole service pipeline needs to be transparently in sync with the perceiving process. Further technological issues are:
• filtering off noise, for instance traffic noise in an urban setting, and
• juggling the bandwidth, particularly with ambient sensing.
Multi-functional materials will play an important role in future ambient sensor technology [27]. Together with the above issues, the technological potential of ambient disposable, environment-powered, self-organizing, and ambient sensors has also been discussed in [10]. Familiar examples are smart dust and the Carnegie Mellon claytronics project [46].

1.4 Wearable Sensors

Wearable sensors invade many remote spots and scenarios [40]. Essentially, wearable wireless sensor networks can be carried anywhere, but operating conditions may make effective operation hard [41]. The concept of on-body wearable sensors is gaining more and more attention in research [47] and [48]. They can be networked in a Wireless Body Area Network (WBAN) (Fig. 3) [34] and [49]. At present, mood and emotion recognition is an active topic of research. Recognizing emotions and expressions may help to track aggression, violence and suspect behavior, but also illness, boredom, weakness, unconsciousness and death, in tele-surveillance approaches [50]. Technological challenges include positioning the sensors in a controlled, affixed position without hindering body movements. Further technological challenges are posed by body fluids such as sweat, and by shocks.

2 TRANSMISSION TECHNOLOGIES

In telecommunications, transmission is the process of sending, propagating and receiving an analogue or digital information signal over a wired or wireless transmission medium. Transmission technologies support communication by means of transmitting signals in a peer-to-peer (1:1), peer-to-cluster (1:N), or cluster-to-cluster (N:M) manner. In the literature, the terms point-to-point and point-to-multipoint transmission are also used. There are multiple viewpoints and classification principles that can be applied to them. One view is represented in Fig. 2, and this identifies (i) embedded, (ii)
ambient, (iii) portable, and (iv) wearable classes of these technologies. In addition to this exploitation-oriented view, another view arranges UC technologies according to their functionalities on device level and network level, respectively.

2.1 Device Level

As predicted by Moore's law, miniaturization of VLSI microprocessors and memories is progressing rapidly; sub-20 nm CMOS technology at near radio frequencies is the current state of the art. In the 1990s, the development and mass-production of few-dollar, dime-size 8-bit hardware platforms for Wireless Sensor Networks (WSN) for generic industrial applications readily took off [51]. These hardware platforms are known as sensor nodes [52] or motes [31]. ZigBee and Bluetooth were developed as low-cost radio frequency (RF) communication technologies applicable to standard motes, replacing infrared. The development of ZigBee started back in 1998, with initial support from Philips, and a first release appeared in 2004. ZigBee is now under the patronage of the ZigBee Alliance. Bluetooth was developed by Jaap Haartsen at Ericsson in 1994, and is now under the auspices of the Bluetooth SIG. A third industry technology, Wi-Fi, was developed for short-distance wireless networking. Wi-Fi was developed in 1991 by Vic Hayes at Agere Systems, in the Netherlands, and is now under the concern of the IEEE 802.11 working group and the Wi-Fi Alliance. The transfer rate of ZigBee is 40 to 100 KBit/s, Bluetooth 2.0 has a transfer rate up to 2.1 MBit/s, and Wi-Fi up to 54 MBit/s. ZigBee is for low-level and simple control data, Bluetooth is for serializable data, and Wi-Fi for full office data. All three have been unified in the IEEE 802 family of standards (Table 1). Antenna-based Near Field Communication (NFC) technology, used in cellular devices, smart cards, and for Radio Frequency Identification (RFID) labels, emerged from ISO/IEC 14443 [53]. RFID technology was developed by IBM in the 1980s and early 1990s [54]. Originally designed for supermarket products, RFID developed into a general-purpose product identification code in the period from 1999 to 2003. This advancement was fostered by EPCglobal, founded by the Uniform Code Council. Nowadays EPC provides
generic ID-tag technology [55]. For longer distances, exceeding the typical range of a Wireless Personal Area Network (WPAN), connection can be made to a (Wireless) Local Area Network (WLAN) or Wireless Metropolitan Area Network (WMAN) gateway by using Wi-Fi or global communication technologies such as Global System for Mobile Communication (GSM) networks and Universal Mobile Telecommunications System (UMTS) (Fig. 3 and Table 1).
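Using the transfer rates quoted above and the working distances given with Table 1 and in Section 3, the choice of a short-range technology for a given link can be sketched as a simple lookup. The NFC data rate below is an assumption of ours, not a value from the text, and real deployments of course involve many more criteria (power, cost, topology).

```python
# Illustrative lookup over the short-range technologies of Table 1
TECHNOLOGIES = [
    # name, max range [m], max rate [bit/s]
    ("NFC/RFID",  1,       424_000),      # assumed typical NFC rate
    ("ZigBee",    10,      100_000),
    ("Bluetooth", 100,     2_100_000),    # class 1 radio
    ("Wi-Fi",     300,     54_000_000),
]

def pick_technology(distance_m, rate_bps):
    """Return the first technology whose range and data rate suffice."""
    for name, max_range, max_rate in TECHNOLOGIES:
        if distance_m <= max_range and rate_bps <= max_rate:
            return name
    return "GSM/UMTS or WiMAX (wide area)"

print(pick_technology(5, 30_000))        # -> ZigBee
print(pick_technology(150, 1_000_000))   # -> Wi-Fi
```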
Fig. 4. Berkeley ATMEGA103/TR1000 100 m range, 50 KBit/s, radio-based MICA node

The basic technology of GSM was developed in the period 1982 to 1987, focusing mainly on Europe. Founding work on GSM standardization is generally attributed to Torleiv Masing of SINTEF (Norway), who at that time was also working for NATO in the Netherlands. Roaming, a technology allowing a single device or user to be discovered and serviced by different networks in a transparent manner, was introduced in 1993. Currently, GSM EDGE (3G) is the standard technology. UMTS was introduced in 1999. Similar to GSM, it applies USIM cards. Since 2006, many UMTS networks have switched to High Speed Downlink Packet Access (HSDPA). Although the hardware platform for sensor nodes is highly integrated, at a functional level there is a difference between the functions of sensing, processing, transmission and actuation, such as sensation generation. In Fig. 2 the classes of technologies are displayed as separate, but in reality the distinct functions are interlaced, even holistically integrated. A good example of the integration of various components in one device is the Berkeley MICA node shown in Fig. 4. Its basic architecture is shown in Fig. 5. On the I/O connector, analog/digital sensor signals come in,
which are processed locally by a micro controller. The output digital stream of packets is transmitted by the radio. The node is timed by a timer. Master nodes may orchestrate slave nodes in a WSN. Operating system updates are flashed from a separate incremental programming server. A sensor node can organize itself as well as its position in the network topology. It is capable of powering itself up, uniquely identifying itself, establishing a connection with the master node (if present) and preparing messages for transmission up the network. The micro controller does the local signal processing, validation and encoding, and prepares a stream of data packages (assuming a digital radio) for transmission. The timer coordinates the functions in the sensor node, most prominently the radio transmission. Once cleared by the timer, the radio sends off its packets.
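A minimal, purely illustrative simulation of this node behaviour (sample on the I/O side, validate and encode in the micro controller, hand packets to the radio when the timer clears a slot). It mirrors the description above rather than any real mote firmware such as TinyOS.

```python
import random
import time

class SensorNode:
    """Toy model of the sensor node architecture sketched in Fig. 5."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.outbox = []                      # packets waiting for a radio slot

    def sample(self):
        return random.uniform(20.0, 25.0)     # stand-in for an ADC reading

    def encode(self, value):
        if 0.0 <= value <= 100.0:             # local validation in the micro controller
            return {"id": self.node_id, "t": time.time(), "value": round(value, 2)}
        return None                           # drop invalid readings

    def on_timer_cleared(self):
        """The timer clears the radio: flush the queued packets."""
        sent, self.outbox = self.outbox, []
        return sent

    def run_once(self):
        packet = self.encode(self.sample())
        if packet is not None:
            self.outbox.append(packet)
        return self.on_timer_cleared()

print(SensorNode("mote-01").run_once())
```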
2.2 Network Level

Wireless sensor networks are not only typical manifestations of combined information-eliciting and transmission technologies, but are also typical from the point of view of issues related to the operation of transmitters at the network level. Depending on the task, the topology of the WSN, and its synchronization, data transmitted by the nodes may be aggregated and processed further in a master node or a dedicated host. Common topologies are star and tree networks, but more flexible topologies are also possible, at the cost of extra overhead and negotiation time. In that respect, WSNs can be subdivided into broadcast-based and preconfigured connection-oriented networks. To operate WSN motes, work on tiny operating systems and micro-kernels began in the late 1990s. A working group at Berkeley developed TinyOS, in collaboration with Intel and Crossbow, the first release of which was presented in 2002. TinyOS is now in widespread use. Timing and synchronization is a complex issue. Centralized control, slotted time and fixed-topology solutions can be synchronized fairly well, but in industrial practice, ad hoc networking with dynamic topology and changing control tasks cannot. Ambient signal transmission adds additional complexity [56]. Most technological approaches try to tackle this with multi-layer or coarse-and-fine-grain timing, negotiating the allocation of larger time blocks for any of the three states: sending, receiving, idling (sleeping). Within such a coarse-grain time slot, internal timing (at the MAC or media access control level) can control data transmission. Keeping connections alive all the time is overly energy-consuming, and typically sensors enter a sleeping mode when commanded to do so by a master node or otherwise [31].
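The energy argument behind these sleeping modes can be illustrated with a simple duty-cycle calculation. The power figures are assumptions typical of small radios, not values from the text.

```python
P_ACTIVE_W = 30e-3    # assumed power with radio and MCU active
P_SLEEP_W = 10e-6     # assumed deep-sleep power

def average_power(duty_cycle):
    """Average power for a node that is active a given fraction of the time."""
    return duty_cycle * P_ACTIVE_W + (1.0 - duty_cycle) * P_SLEEP_W

for d in (1.0, 0.01, 0.001):
    print(f"duty cycle {d:7.1%}: {average_power(d) * 1e6:9.1f} uW")
```

Under these assumptions, an always-on node draws tens of milliwatts, whereas a node awake only 0.1% of the time comes down to a few tens of microwatts, which is the regime discussed in Section 1.2.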
Fig. 5. Basic sensor node architecture

Table 1. Wireless (top), communication (middle) and future (bottom) technologies
TECHNOLOGY | IEEE 802 MEMBER | APPLICATION/PURPOSE | DISTANCE
ZigBee | 802.15.4 | Industry, command and control | WPAN
Bluetooth | 802.15.1 | Serial protocol, serializable data | WPAN
Wi-Fi | 802.11x | Full office data | WLAN

TECHNOLOGY | STANDARD | APPLICATION/PURPOSE | DISTANCE
RFID | ISO/IEC 14443 | Active/passive antenna | Near field, NFC
GSM | ETSI TISPAN | Voice | Any distance, cells
UMTS | W-CDMA IMT (3G) | Data | Any distance, cells

FUTURE TECHNOLOGY | FUTURE STANDARD | FUTURE APPLICATION | FUTURE DISTANCE
UWB | None (UWB.org) | High rate | FUTURE WPAN
WiMAX | IEEE 802.16 | General purpose | FUTURE WP/LAN, WMAN
Technological research on low energy consumption WSNs will receive strong emphasis in the near future. Many innovations are expected, such as:
• CMOS improvements, multi-cores and Network-on-(a-)Chip (NoC) deep integration [57] and [58];
• digital radio and frequency sharing [59];
• enhanced (zero-conf) stack services [60] and [52];
• new battery technologies and environment-rechargeable batteries;
• power management systems [61];
• prevention of data loss (which may be in excess of 50%) [34];
• new multi-hop strategies and enhanced active-sleep timing strategies;
• enhanced noise control, reducing overhead and processing cost;
• low-cost sleep states [62] and (Tiny)OS enhancements.

Fig. 6. MANET (vehicles: VANET) topology (source: www.Cisco.com)

Ad hoc networking appeared not only as an alternative to traditional fixed-topology networks, but also as a competitor. Significant progress has been achieved in the domain of Mobile Ad-hoc NETworks, often referred to by the acronym MANET [41] and [63] (for vehicles also: VANET). Mobile computing involves moving WPANs. This technology has been developed to be capable of embedding PANs. In a car or a train, it can provide, for instance, vehicle-to-environment (or vehicle-to-infrastructure), vehicle-to-vehicle and vehicle-
to-driver/passenger HCI in parallel. With a fast-moving object, such as a car, complicated steps are the development of multi-homing, employing multiple Internet access channels in combination with vertical and horizontal handover protocols and fast media switching. The recently developed 6loIPv6 technology also provides mobile routing by combining a fixed node in a home network with a temporary care-of node in the local network [64] and [65]. MANETs are not exclusive to traffic but serve general mobile computing purposes. A typical MANET topology is shown in Fig. 6. VPN-like tunneling is used over the locally visited wireless network to connect to a fixed home network. This network topology incorporates mobile routers. Distributed and ambient transmission dialogues have also come into the focus of research. An approach to distributed ambient dialogues, device-capability-dependent device switching, and state- and failure-safe dialogues has been proposed in [66]. The authors propose a client and device abstraction in the form of an Application Programming Interface (API) and/or middleware, in combination with a standard device library, to prevent stalled dialogues. An Interaction
Specification Language (ISL) has been proposed to abstract interaction specification [67]. This proposal combines a versioning model and a separation of user interface and presentation, in the first place between the users themselves and the devices. To process a context-aware service interaction request, interaction specifications are parsed and fitting user interfaces are generated by a selected interaction engine. HTML, Java AWT and Swing, and Tcl/Tk have been considered in this set of engines, but the list can be arbitrarily expanded. The major technological challenge for the next decade is the development of scalable and reconfigurable HCI snippets and a programming platform to construct flexible multi-modal, multi-device UC HCI interfaces [24].

3 NETWORK TECHNOLOGY

UC wireless networking technology relies basically on various WPAN technologies (e.g. on ZigBee and Bluetooth), with an extension upward to LAN technology (Wi-Fi), and downward to NFC (RFID, EPC) for sub-meter distances (Table 1, top). In parallel, GSM and UMTS communication is used for short- and long-distance multi-channel communication (Table 1, middle). WiMAX and UWB (Table 1, bottom) are future technologies. In Table 1, NFC is assumed sub-meter, WPAN covers a distance of 1 to 10 m, up to 100 m for a class 1 Bluetooth radio, while Wi-Fi extends to 300 m. For the future, WiMAX covers up to or even above 10,000 m. Fig. 3 shows the working distances. A gateway is a connection between two different networks. At the current state of technology, Bluetooth and ZigBee gateway easily to IP networks, for instance through a laptop or a dongle, using so-called zero-configuration, whereby only lower-level IP services are used, for which no IP address is required (often at the cost of decreased security and control). Gateways commonly take care of, or are supported by, routers that can establish connections at all OSI layers, as needed.

4 CONVERSION TECHNOLOGIES

Conversion technologies are enablers that allow generating computed feedback on sensations and converting this into human-conceivable forms. In the context of UC,
conversion technologies allow virtual sensation at remote places and environments.

4.1 Visual Sensation Generators

A dominant presentation mode for the near future is video presentation (visual modality). With the advent of high-resolution large displays in various configurations, new technologies are needed for HCI-optimized appliances. A survey of the needs is presented and the emergence of these new technologies has been investigated in [68]. A taxonomy of multi-person-display ecosystems has been proposed in [69]. In a distributed environment, the two major issues are the possible large-scale heterogeneity of the display devices (ranging from airborne volumetric visualization to alphanumerical displays), and displaying different visual contents on display surfaces of various characteristics (physical size, resolution, interactivity, and refresh rates) [70].

4.2 Haptic Sensation Generators

Haptic feedback is an intuitive, culture-free and basically non-distracting modality. At the present state of technology, haptic sensation in mobile devices is generated by cutaneous technology:
• vibration generating technology;
• piezoelectric skin (e.g. finger tip) stretching technology;
• voice coil transduction technology;
• electric sensation generation technology by means of a piezoelectric ceramic film.
The technological potential of low-frequency force by kinesthetic feedback generators is high, but no suitable technology exists at present. From a mechanical point of view, only limited force can be generated in smaller devices. Spinning gyroscopic generators using a change in angular momentum are suited for small device applications, but gyroscopic effects are not translational. In order to achieve a perception of force in all directions, multiple gyroscopic generators spinning in different planes may be combined. A small oscillating-mass haptic generator was proposed in [68], using miniaturized slider-crank, crank-piston or cam-spring mechanisms, perceived as an all-
directional force. Forces can be controlled using frequency control.

4.3 Multi-Modal Sensation Generators

The major challenge in multi-modal sensation generation is the fusion of heterogeneous data and information, at the model level but also at the processing and exchange levels [71]. Mobile spatial interaction is an HCI branch that focuses on showing more than the visible and more than the obvious [72]. It seeks to augment current environment perceptions by augmenting information and sensation. Augmenting sensations can take the form of augmented reality scene generation and high-end graphics, but also of complementary auditory sensations. Hypo-vigilance analysis is the analysis of a human's attention and mood level, by observing expressions such as yawning, eye movement, etc., indicating fatigue and inattention. Multi-modal analysis, among others facial image analysis, was applied to discover hypo-vigilance among drivers in a simulator [73]. These researchers combined ECGs and GSRs with the hypo-vigilance indication. Attention regain is obtained by an alarm and a haptic sensation in the form of steering wheel vibration.

5 EXPLORATION TECHNOLOGIES

In general, ubiquitous exploration technologies allow remote and controlled elicitation, aggregation, filtering, codification and formalization of data, information, and knowledge from various knowledge sources, such as digital repositories, electronic documents, Internet warehouses, and information handling processes. Exploration typically happens according to specific objectives in mind and in varied contexts. Fig. 3 identifies the forms of exploration that are considered in this paper.

5.1 Data and Information Mining

Knowledge management, in this context, is the process of creating value from an organization's intangible assets [74]. It is composed of the combined problems of knowledge coordination, knowledge transfer, and knowledge reuse [75]. Knowledge coordination is the problem of semantic compatibility of
seemingly non-conforming data. Knowledge transfer is the process of exploration and transmission of (codified) data from its discovery location to the process of context-awareness generation in the UC environments. Knowledge reuse generally includes a set of facts (data) that can be inserted into processing rules, so as to infer further facts, asserting them with a certain degree of certainty.

5.2 Knowledge Repositories and Ontology

In the present day, Knowledge Management Systems (KMSs) are commonly combined with or embedded in groupware and online collaboration frameworks. Knowledge repositories can furthermore be databases of structured data, but also wikis, manuals, etc. containing unstructured data. It has to be noted that the larger part of the data on the web is unstructured. Examples of large databases can be found in engineering, health care and pharmaceutical research, crime fighting and defense, etc. Structured online repositories can be subdivided into:
• semi-structured registers, like Universal Description Discovery and Integration (UDDI) repositories for web services and other maintained registries such as OASIS (Organization for the Advancement of Structured Information Standards);
• Electronic Knowledge Repositories (EKRs) for codified explicit and tacit knowledge.
Codified knowledge is knowledge that has been analyzed and validated, and stored in online EKRs for subsequent structured retrieval by a KMS [76] and [77]. Information that can be codified and stored in an EKR includes customer, supplier and product information [78]. In this paper, knowledge use and reuse is primarily considered as an enabler of UC decision making. Data retrieved from the Internet are used as facts in context-aware decision making. Formulating the search context permits dedicated searches; if not, additional structuring algorithms are needed somewhere in the server-to-UC-environment chain. Two principal techniques are usually applied in data mining [79]: (i) supervised classification, and (ii) unsupervised clustering. A variety of algorithms has been developed for these classification and clustering operations, including trees and rules for classification and
The kernel approach writes N vectorial or non-vectorial data items as an N by N matrix of data and features. In kernel-based learning, three classes of methods have been developed for learning vector-based supervised classification [80]:
- Support Vector Machines (SVM);
- Kernel Fisher Discriminants (KFD);
- Kernel Principal Component Analysis (KPCA).
These classes of algorithms have also been applied to unsupervised (no learning vector) clustering. Current trends are moving towards hybrid, highly stable and robust algorithms [79]. Fast k-means algorithms operating on kernel matrices allow for fast and robust unsupervised learning over large data sets. Another challenge for the next decade is controlling the quality of context-aware UC decision making. The Expected Value of Information (EVI) associates the risk of not knowing in decision making with the cost of acquiring additional data, thus quantifying a trade-off between the data acquisition cost and the acceptable uncertainty in the decision [81].
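The sketch below illustrates the kernel-matrix idea on a small scale; it is a straightforward kernel k-means written with NumPy, not the fast large-scale algorithms referred to above, and the RBF kernel width and iteration count are arbitrary assumptions. Points are assigned to the cluster whose implicit feature-space centroid is nearest, using only entries of the N by N kernel matrix.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """N x N kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_kmeans(K, k=2, iters=20, seed=0):
    """Cluster N items given only their kernel matrix K (no explicit feature vectors)."""
    n = K.shape[0]
    labels = np.random.default_rng(seed).integers(0, k, size=n)
    for _ in range(iters):
        dist = np.zeros((n, k))
        for c in range(k):
            members = labels == c
            m = members.sum()
            if m == 0:
                dist[:, c] = np.inf
                continue
            # ||phi(x_i) - mu_c||^2 expressed with kernel entries only.
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, members].sum(axis=1) / m
                          + K[np.ix_(members, members)].sum() / m**2)
        labels = dist.argmin(axis=1)
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (15, 2)), rng.normal(3, 0.3, (15, 2))])
    print(kernel_kmeans(rbf_kernel(X, gamma=0.5), k=2))
```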
5.3 Content Search Engines

Knowing the inquiry context has led to specific search engines and dedicated search channels. Context-aware searching may take into account the currently available output capabilities of the UC devices. To illustrate, consider a domestic UC environment issuing a request that results in a stream of multi-media data, which may be processed as follows:
- A cellular phone that is present receives the textual content;
- Audio content goes to an audio device;
- Video is streamed to a video wall.
A metamodel for context awareness that can be applied in context-aware searches has been proposed in [82]. A UC environment (device) may generate such search requests; the environment or a proxy server (loosely, a server that handles the Internet search request on its behalf) forwards the inquiry to a network service or a dedicated search engine. Content search engines should also support data and information preprocessing, transmission and distribution.

In this context, the concept of Knowledge Transfer Networks (KTNs) deserves attention. A KTN is composed of heterogeneous human nodes, such as (groups of) individuals and organizations, and non-human nodes, such as repositories, web pages, databases, web crawlers and web bots [83] and [84]. Web crawlers and web bots (also called bots or spiders) are intelligent programs (agents) that autonomously search the web for data associated with a topic [85]. Data and information transmission requires various technologies to:
- Discover and access facts (data) in data stores and repositories;
- Verify and assess data quality and rank online repositories according to quality;
plus:
- Ontologies, taxonomies and methods to map unaligned data definitions onto a commonly agreed definition;
- Tools to locate and fetch the data and deliver them timely and in the right format.
Verification and assessment, mapping and other functions may be performed server-side, by the crawler, or downstream in the UC environment [86]. Simple data and information can be delivered directly by simple crawlers [87] and [88]; (semi-)structured data are easily transcribed into XML for immediate transmission. More complex data can be preprocessed by the crawlers themselves, or delivered to remote classifier or clustering applications to further structure and codify them. Servlets are small server-side web programs that can present queried data in a structured manner. Web services are server-side SOAP/XML-based services that can do the same. Preprocessing can also be conducted by a more advanced crawler (metabot), using ontologies to classify unstructured and semi-structured data. Crawlers may even learn new ontologies using unsupervised clustering techniques, and verify and assess the proposed classifications using supervised classification [89]. Preprocessing in the UC environment itself and the overall coordination of the transmission may be dedicated to a proxy or another device in the UC environment capable of doing so.
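As a simple illustration of the crawler idea, the following is a minimal topic-focused crawler using only the Python standard library; the seed URL, keyword and page limit are placeholders, and real crawlers would additionally respect robots.txt, rate limits and content types. It fetches pages, collects links, and keeps the URLs of pages whose text mentions the topic.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkAndTextParser(HTMLParser):
    """Collects href links and visible text from an HTML page."""
    def __init__(self):
        super().__init__()
        self.links, self.text = [], []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)
    def handle_data(self, data):
        self.text.append(data)

def focused_crawl(seed_url, topic, max_pages=10):
    """Breadth-first crawl that returns URLs of pages mentioning the topic."""
    queue, seen, hits = [seed_url], set(), []
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="ignore")
        except OSError:
            continue
        parser = LinkAndTextParser()
        parser.feed(html)
        if topic.lower() in " ".join(parser.text).lower():
            hits.append(url)
        queue.extend(urljoin(url, link) for link in parser.links)
    return hits

# Example with a placeholder seed and topic:
# print(focused_crawl("http://example.com/", "sensor"))
```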
The experiences with inquiries in context and the convenience of Napster- and Kazaa-like tools have led to specific search engines and transmission technologies, such as peer-to-peer and multicast, synchronous and asynchronous overlay networks and distributed content systems. Peer-to-peer (P2P) transmission and content sharing are ways to robustly boost efficiency, potentially entirely machine-to-machine, using web services. This may also call for an infrastructure for paid services; many B2B xxXML-services networks have emerged in industry. It remains a technological challenge for the coming decade to bring distributed hash technology (DHT) to maturity in order to implement more robust industrial content distribution schemes.
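To indicate what a DHT-style scheme involves, here is a minimal consistent-hashing sketch; it is an assumption-laden toy rather than a production DHT (no replication, churn handling or routing tables). Content keys are mapped onto a hash ring and each key is served by the first node clockwise from its position, so adding or removing a node relocates only a small share of the content.

```python
import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring mapping content keys to peer nodes."""
    def __init__(self, nodes=()):
        self._ring = []                      # sorted list of (position, node)
        for node in nodes:
            self.add(node)

    @staticmethod
    def _pos(key: str) -> int:
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def add(self, node: str) -> None:
        bisect.insort(self._ring, (self._pos(node), node))

    def remove(self, node: str) -> None:
        self._ring.remove((self._pos(node), node))

    def lookup(self, content_key: str) -> str:
        """Return the node responsible for a content key (first node clockwise)."""
        pos = self._pos(content_key)
        idx = bisect.bisect(self._ring, (pos, "")) % len(self._ring)
        return self._ring[idx][1]

if __name__ == "__main__":
    ring = HashRing(["peer-a", "peer-b", "peer-c"])
    print(ring.lookup("product-model-42.step"))
    ring.add("peer-d")   # only the keys between peer-d and its predecessor move
    print(ring.lookup("product-model-42.step"))
```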
5.4 Object Retrieval Engines

The goal of object search methods is to provide intuitive shortcuts to 3D objects or classes of objects, and to retrieve models with the requested characteristics that are useful for designing. The basis of retrieval can be the resemblance of global forms or the existence of local shape features. The three major steps of retrieval are input specification, computational search and selection, and output presentation and assessment. Input specification can be based on descriptive parameters, graphical sketches, technical drawings, digital geometric models, and surrogate physical models [90]. The technological challenge for the next decade is to step up from textual descriptions to generic form-feature and semantic descriptions that can be handled by object retrieval engines and shape matching technologies. A further technological challenge is to fit object descriptions (object templates) to observed objects in scenes and images. One of the applications is supply chain management and manufacturing. Design-by-template (extended knowledge-based engineering) is another future application. Technology may be developed in the coming decade to implement and operate EKRs of product designs.
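The following sketch illustrates global-form resemblance in its simplest possible form; it is a didactic example rather than any of the engines cited above. Each object, given as a point cloud, is reduced to a crude shape signature (a histogram of pairwise point distances, normalized for scale), and retrieval ranks the repository by signature similarity to the query.

```python
import numpy as np

def shape_signature(points: np.ndarray, bins: int = 16) -> np.ndarray:
    """Histogram of pairwise distances, normalized for scale (a crude D2-style descriptor)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(len(points), k=1)]
    hist, _ = np.histogram(d / d.max(), bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def retrieve(query: np.ndarray, repository: dict, top_k: int = 3):
    """Rank repository models by similarity of their signatures to the query signature."""
    q = shape_signature(query)
    scores = {name: -np.linalg.norm(q - shape_signature(pts))
              for name, pts in repository.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sphere = rng.normal(size=(200, 3))
    sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)   # points on a unit sphere
    box = rng.uniform(-1, 1, size=(200, 3))                   # points in a cube
    repo = {"housing (sphere-like)": sphere * 2.0, "plate (box-like)": box * 3.0}
    print(retrieve(sphere, repo))
```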
Fig. 7. Four genres of artifacts/products with ubiquitous computing (ambient, portable, wearable and implanted), with examples such as med-aid, KPU, life-ID, corrective infotain, EPWR, partnering, language game, simulator, recording, all-in-one office furniture, house servicer, vehicles and displays

6 UBIQUITOUS COMPUTING IN PRODUCTS
It seems that only our fantasy limits UC applications, whereas additional considerations, such as consumer acceptance and social effects, determine their eventual success. The progress in some specific fields, like smart offices, living environments, urban regions, public spaces, medical care-taking, etc., is evidence of the legacy and the enormous potential of UC-enhanced artifacts and processes [6], [37] and [91]. The practically infinite number of existing, emerging and potential applications is prohibitive; even a discussion of the most representative applications would require a multi-volume book rather than this paper (Fig. 7). Therefore, the survey and analysis below cannot be anything other than sketchy and incomplete. An additional technical issue is that the related literature reflects many different views, concepts, interpretations and definitions of ubiquitous artifacts/products and of the exploitation of ubiquitous technologies in products. There are trends that facilitate, and others that impede, the proliferation of the paradigm of ubiquitous products. Conventionalism, worry about impacts, and a lack of awareness and understanding can be mentioned as major factors slowing down proliferation. On the other hand, many social and technological trends in the fields of electronics, information technology, social well-being and ecological conditions are enabling and urging, respectively, progress in this direction. As concrete examples of these latter trends, we may refer to: (i) the periodic doubling of the computing power and capacities of microelectronics, (ii) the continuing reduction of the size (miniaturization) of
microelectronics components and products, (iii) the growing concerns about the access to and availability of critical material and energy sources, and about the rate of material and energy consumption, (iv) the changing role of artifactual products in society and the growing emphasis on service products, and (v) the influence of information and knowledge as production and business objectives in the knowledge and innovation economy. A good illustration is the EU i2010 Intelligent Car Initiative, which is based on the idea that, with the assistance of ubiquitous technology, traffic can be made safer, more efficient and more comfortable with cars in which some delicate driving tasks are taken over or supervised. Technology for networked vehicles was developed in the InternetCAR project [63] to [65]. Its objective was to incorporate nested WPANs in cars in order to allow auto-configuring usage of mobile personal devices, such as PDAs, within an Internet-connected car with minimal or no driver distraction. The proposed solution is multi-level mobile computing, embedding an interior WPAN child in the WPAN mother of the moving car. UMICS mobile information access gathers additional traffic data and information from net-based locations during travel, with the aim of increasing safety and consuming less fuel [92]. In the coming decades, a significant part of customer durables will manifest as smartly behaving products [93], autonomously intelligent products [94], or emotionally intelligent products [95]. Catoms have been designed in the Carnegie Mellon claytronics project with the aim of having products construct, shape and empower themselves from elementary intelligent particles (catoms), and adapt themselves to environmental demands in an evolutionary manner [46]. Nanostructured multi-functional materials permit continuous monitoring of product structural and state sanity. Recording the mechanical and environmental loads on the product structure may also become a standard product feature [27]. In order to equip designers with such adequate competences, their curriculum should be enriched with UC theory and design methodology [96].
7 UBIQUITOUS COMPUTING IN CREATIVE PROCESSES

Similar to the case of artifacts, there is an extremely large number of real-life processes, no matter whether they are natural organic processes or artificially created processes, which can benefit from ubiquitous computing. In our view, UC will blur the boundary between products and processes even further. Miniaturized devices will melt into processes, like smart dust in agriculture and catoms in intelligent devices. Sensors, detectors, tags and counters embedded in customer products not only help the realization of information processing processes, but their functionality and behavior is also determined through these processes. As smart information appliances, products can collect information not only about their location, but also about the time and frequency of use, the operational circumstances, the status of maintenance, incidental failures and damage of the product, the way of interacting with the product, and the users, unless this conflicts with privacy, personal interest, and confidentiality. These pieces of information can be recorded on a daily basis and forwarded to the designers through wireless communication, with the goal of providing additional intelligence or feedback for their work [97].
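A minimal sketch of such product-to-designer feedback is given below; the record fields and the reporting behavior are illustrative assumptions, not a described implementation. A smart product accumulates usage events, deliberately omits personally identifiable details (reflecting the privacy constraint above), and serializes a daily record for wireless forwarding.

```python
import json
import time

class UsageReporter:
    """Collects daily usage events of a smart product and prepares an anonymized report."""
    def __init__(self, product_id: str):
        self.product_id = product_id
        self.events = []

    def log(self, kind: str, detail: str = "", user: str = ""):
        # The user identity is not stored; only the fact that an interaction occurred.
        self.events.append({"t": time.time(), "kind": kind, "detail": detail,
                            "interaction": bool(user)})

    def daily_report(self) -> str:
        """Serialize the accumulated events for wireless forwarding to the design team."""
        report = {"product_id": self.product_id,
                  "uses": sum(1 for e in self.events if e["kind"] == "use"),
                  "failures": [e["detail"] for e in self.events if e["kind"] == "failure"],
                  "maintenance_due": any(e["kind"] == "maintenance" for e in self.events)}
        self.events.clear()
        return json.dumps(report)

if __name__ == "__main__":
    reporter = UsageReporter("coffee-maker-SN0042")
    reporter.log("use", user="someone")            # identity is dropped in the stored event
    reporter.log("failure", detail="heater overcurrent")
    print(reporter.daily_report())                 # would be sent over, e.g., a WPAN uplink
```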
We believe that UC can have its largest impact in creative processes, such as design, innovation, development and management [98], which in turn raise the need for ubiquitous (on-demand, trans-professional, and lifelong) learning [99]. Creative processes are of high importance for the reason that intense (overdriving) innovation is seen as one of the top trends of the 21st century [100]. Researchers have also dealt with issues related to the application of UC throughout the entire product (part) life cycle [101]. Product realization processes may be extended with the following UC-enabled functionalities:
- Support of operative design research, either in the form of providing means for nomadic observational, explorative and experimental research actions [102], or in the form of indirect data gathering through products, which is becoming a modern form of empirical research;
- Elicitation and aggregation of information about stakeholders, who are either involved in designing new products (e.g. fellow designers, manufacturing experts, service providers), or influenced by the designed products (e.g. customers, operators and asset managers). In addition, in the early stages of design, prototypes may also deliver early feedback on their performance;
- From the earliest stages of production onwards, the supply chain and the adaptive manufacturing processes may organize and supervise themselves using UC [103], intelligent agents [86], and self-aware components [55]. This holds for materials, stock parts, sub-assemblies, and bundled and assembled products, and can be extended in the direction of business organization [4], and across the product life cycle [101];
- As a layer-oriented manufacturing process, Shape Deposition Manufacturing (SDM) can be considered for the deposition of substrates to create films, layers, channels, etc., for instance, for the printing of fiber optic sensors on products [104] and [105]. Digital manufacturing also supports the fabrication of natural (bio-)sensor networks in and on products, while fulfilling the requirements for sustainable products and manufacturing [26] and [106]. SDM is expected to become multi-scale and to become able to print large-scale objects as well;
- During the stage of usage, feedback can be provided to designers, suppliers and manufacturers on a per-item basis, covering, for instance, maintenance, end-of-life and reuse/recycling.
Efficient process organization and control assume an understanding of the context of the process, in addition to the people and artifacts involved. From the perspective of ubiquitous design support, context is considered a construct of any important information that has a direct or indirect influence on the situation of entities, and that can be used to characterize and reason about a particular situation. The source of context information is the data recorded by a configuration of sensors, which can combine measurement with the computer's ability to display, record, and communicate visualizations of the measured data [107]. This facilitates the interpretation of real-life situations and processes. Currently, the main challenge of ubiquitous computing is the limited scope to teach computers about processes.
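To make the notion of context as a sensor-derived construct more tangible, here is an illustrative sketch under assumed sensor names and fields; it is not a formal context model from the cited work. Readings from a configuration of sensors are aggregated into a context record that can be used to characterize, and to reason about, a design-support situation.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class SensorReading:
    sensor_id: str        # e.g. "desk_presence", "room_temp", "cad_activity"
    quantity: str         # what is being measured
    value: float
    timestamp: float

@dataclass
class ContextRecord:
    """A simple context construct aggregated from a configuration of sensors."""
    entity: str                                   # the entity whose situation is characterized
    readings: list = field(default_factory=list)

    def add(self, reading: SensorReading) -> None:
        self.readings.append(reading)

    def characterize(self) -> dict:
        """Summarize the situation; downstream reasoning rules can act on this summary."""
        by_quantity = {}
        for r in self.readings:
            by_quantity.setdefault(r.quantity, []).append(r.value)
        return {"entity": self.entity,
                **{q: mean(vs) for q, vs in by_quantity.items()}}

if __name__ == "__main__":
    ctx = ContextRecord("designer_workplace_3")
    ctx.add(SensorReading("desk_presence", "occupancy", 1.0, 0.0))
    ctx.add(SensorReading("room_temp", "temperature_C", 23.5, 0.0))
    ctx.add(SensorReading("cad_activity", "interaction_rate", 12.0, 0.0))
    print(ctx.characterize())   # a rule could infer "designer present and actively modelling"
```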
8 CONCLUSION

In this paper we have surveyed the extraordinarily wide research field of UC technology. In many parts of the discussion, we restricted ourselves to WSNs and to data and information retrieval technology in WSNs, as a field of current strong interest. We have discussed sensors, transmitters and sensation generators that can be applied in products and environments. More or less autonomous technological developments include the design of motes (generic miniature networked intelligent sensor units that organize themselves), HCI for UC, and MSI for augmented environments. Technological developments not discussed deeply in this paper, but no less important, such as bioengineering and multi-functional materials, NLP, agent technology and AI, and kernel technologies, all favor the above developments. Energy consumption is a constant technological concern (in particular in the case of WSNs), and synchronization issues also need attention, specifically in dynamically changing ad hoc networks (MANETs). Based on the five clusters of ubiquitous technologies, a new generation of products can be expected soon, already able to (i) explore and reason with information and knowledge, (ii) sense their status and surroundings and build awareness of contexts, (iii) transmit signals over short to long ranges, (iv) remotely generate and provide sensations, and (v) network via heterogeneous platforms in a volatile manner. Further research will focus on the route towards true cognitive environments, as expressed in Fig. 1, such that the cognitive overburden is taken off the hands of humans [108]. When migrating from well-bounded task and service models to evolutionary, unattended, unsupervised and autonomously developing open service models, human intervention models and roll-back protocols will have to be developed and installed to guarantee unconditional and persistent human control under all circumstances.

9 REFERENCES

[1] Want, R., Hopper, A., Falcao, V., Gibbons, J. (1992). The active badge location system. ACM Transactions on Information Systems, vol. 10, p. 91-102.
[2] Weiser, M. (1991). The computer for the 21st century. Scientific American, Special Issue on Communications, Computers, and Networks, vol. 265, p. 94-104.
[3] Weiser, M. (1993). Some computer science issues in ubiquitous computing. Communications of the ACM, vol. 36, p. 74-83.
[4] Muhlhauser, M., Gurevych, I. (2008). Handbook of Research in Ubiquitous Computing Technology for Real Time Enterprises. IGI Global, Hershey.
[5] Burnett, G.E., Porter, J.M. (2001). Ubiquitous computing within cars: Designing controls for non-visual use. International Journal of Human-Computer Studies, vol. 55, no. 4, p. 521-531.
[6] Bell, G., Dourish, P. (2007). Yesterday's tomorrows: Notes on ubiquitous computing's dominant vision. Personal and Ubiquitous Computing, vol. 11, p. 133-143.
[7] Mostefaoui, S., Maamar, Z., Giaglis, G. (2008). Advances in ubiquitous computing: Future paradigms and directions. IGI Publishing, Hershey.
[8] Weiser, M., Brown, J. (1997). The coming age of calm technology. In: Denning, P., Metcalfe, R. (eds.), Beyond Calculation: The Next Fifty Years of Computing, Springer-Verlag, Berlin, p. 75-86.
[9] Estrin, D., Culler, D., Pister, K., Sukhatme, G. (2002). Connecting the physical world with pervasive networks. IEEE Pervasive Computing - Mobile and Ubiquitous Systems, vol. 1, no. 1, p. 59-69.
[10] Aarts, E. (2004). Ambient intelligence: A multimedia perspective. IEEE Multimedia, January-February, p. 12-19.
[11] Emiliani, P., Stephanidis, C. (2005). Universal access to ambient intelligence environments: opportunities and challenges for people with disabilities. IBM Systems Journal, vol. 44, p. 605-619.
[12] Remagnino, P., Foresti, G. (2005). Ambient intelligence: A new multidisciplinary paradigm. IEEE Transactions on Systems, Man and Cybernetics-Part A: Systems and Humans, vol. 35, p. 1-6.
[13] Kelly, K. (1994). Out of control: The rise of neo-biological civilization. Perseus Books, Berkeley.
[14] Norman, D. (1998). The invisible computer. MIT Press, Boston.
[15] Maes, P., Darrell, T., Blumberg, B., Pentland, A. (1995). The Alive System: Full-body interaction with autonomous agents. Proceedings of Computer Animation Conference, p. 11-18.
[16] Allen, P., Timcenko, A., Yoshimi, B., Michelman, P. (1993). Automated tracking and grasping of a moving object with a robotic hand-eye system. IEEE Transactions on Robotics and Automation, vol. 9, p. 152-165.
[17] Wellner, P. (1993). The digital desk calculator: Tangible manipulation on a desk top display. Proceedings of the ACM Symposium on User Interface Software and Technology, vol. 91, p. 27-33.
[18] Cohen, C., Conway, L., Koditschek, D. (1996). Dynamical system representation, generation, and recognition of basic oscillatory motion gestures. Proceedings of the 2nd International Conference on Automatic Face- and Gesture-Recognition, p. 60-65.
[19] Agah, A. (2001). Human interactions with intelligent systems: Research taxonomy. Computers and Electrical Engineering, vol. 27, p. 71-107.
[20] Bouwstra, S., Chen, W., Feijs, L., Oetomo, S. (2009). Smart jacket design for neonatal monitoring with wearable sensors. Body Sensor Networks, p. 162-167.
[21] Dourish, P. (2001). Seeking a foundation for context-aware computing. Human-Computer Interaction, vol. 16, p. 229-241.
[22] Ishii, H., Ullmer, B. (1997). Tangible bits: towards seamless interfaces between people, bits and atoms. Proc. of the ACM CHI Conference on Human Factors in Computing Systems.
[23] Iachello, G., Hong, J. (2007). End-user privacy in human-computer interaction. Now Publishers.
[24] Jaimes, A., Sebe, N. (2007). Multimodal human-computer interaction: A survey. Computer Vision and Image Understanding, vol. 108, p. 116-134.
[25] Gerard, M., Chaubey, A., Malhotra, B. (2002). Application of conducting polymers to biosensors. Biosensors and Bioelectronics, vol. 17, p. 3345-3359.
[26] Christodoulou, L., Venables, J.D. (2003). Multifunctional materials systems: the first generation. Journal of the Minerals, Metals and Materials Society, vol. 55, p. 39-45.
[27] Lau, A.K.T., Lu, J., Varadan, V.K., Chang, F.K., Tu, J.P., Lam, P.M. (2008). Multifunctional Materials and Structures. Advanced Materials Research, vol. 47-50.
[28] Raiteri, R., Grattarola, M., Butt, H., Skladal, P. (2001). Micromechanical cantilever-based biosensors. Sensors and Actuators B, vol. 79, p. 115-126.
[29] Udd, E. (1996). Fiber optic smart structures. Proceedings of the IEEE, vol. 84, p. 60-67.
[30] Wang, J. (2005). Carbon-nanotube based electrochemical biosensors: A Review. Electroanalysis, vol. 17, p. 7-14.
[31] Nachman, L., Kling, R., Adler, R., Huang, J., Hummel, V. (2005). The Intel mote platform: A Bluetooth-based sensor network for industrial monitoring. Proceedings of the IPSN, p. 437-442.
[32] Dollar, A., Wagner, C., Howe, R. (2006). Embedded Sensors for Biometric Robotics via Shape Deposition Manufacturing. Proc. of the 2006 RAS-EBMS BioRob.
[33] Lou, J., Wang, J. (2008). Editorial; Nanomechanics and Nanostructured Multifunctional Materials: Experiments, Theories, and Simulations. Special Issue of the Journal of Nanomaterials, paper ID 408606.
[34] Hill, J., Culler, D. (2002). Mica: A wireless platform for deeply embedded networks. IEEE Micro, vol. 22, no. 6, p. 12-24.
[35] Edwards, W., Grinter, R. (2001). At home with ubiquitous computing: Seven challenges. In: Abowd, G., Brumitt, B. and Shafer, S. (eds.), Ubicomp 2001, LNCS 2201, Springer-Verlag, p. 256-272.
[36] Hagras, H., Callaghan, V., Colley, M., Clarke, G., Pounds-Cornish, A., Duman, H. (2004). Creating an ambient-intelligence environment using embedded agents. IEEE Intelligent Systems, p. 12-20.
[37] Intille, S. (2002). Designing a home of the future. Pervasive Computing, vol. 1, no. 2, p. 76-82.
[38] Deshpande, A., Guestrin, C., Madden, S. (2005). Resource-Aware Wireless Sensor-Actuator Networks. IEEE Data Engineering, vol. 28, p. 1-9.
[39] Kostakos, V., Nicolai, T., Yoneki, E., O'Neill, E., Kenn, H., Crowcroft, J. (2009). Understanding and measuring the urban pervasive infrastructure. Personal and Ubiquitous Computing, vol. 13, p. 355-364.
[40] Srivastava, M., Hansen, M., Burke, J., Parker, A., Reddy, S., Saurabh, G., Allman, M., Paxson, V., Estrin, D. (2006). Wireless Urban Systems. Technical Report 65, UCLA-CENS.
[41] Zhou, Y., Xia, C., Wang, H., Qi, J. (2009). Research on survivability of mobile ad-hoc network. J. Software Engineering and Applications, vol. 2, p. 50-54.
[42] Bernardin, K., Ekenel, H., Stiefelhagen, R. (2009). Multimodal identity tracking in a smart room. Personal and Ubiquitous Computing, vol. 13, p. 25-31.
[43] Pnevmatikakis, A., Soldatos, J., Talantzis, F., Polymenakos, L. (2009). Robust multimodal audio-visual processing for advanced context awareness in smart spaces. Personal and Ubiquitous Computing, vol. 13, p. 3-14.
[44] Elson, J., Girod, L., Estrin, D. (2002). Fine-grained network time synchronization in sensor networks using reference broadcasts. Proc. of the 5th Symposium on Operating Systems Design and Implementation Conference, vol. 36.
[45] Whang, D., Xu, N., Rangwala, S., Chintalapudi, K., Govindan, R., Wallace, J. (2004). Development of an embedded networked sensing system for structural health monitoring. Proceedings of the 1st International Workshop for Advanced Smart Materials and Smart Structures Technology.
[46] Goldstein, S.C., Campbell, J.D., Mowry, T.C. (2005). Programmable Matter. IEEE Computer, vol. 38, p. 99-101.
[47] Jones, V., Mei, H., Broens, T., Widya, I., Peuscher, J. (2007). Context aware body area networks for telemedicine. Proceedings of the 8th Pacific RIM Conference on Multimedia (PCM), Hong Kong, p. 590-599.
[48] Park, J.T., Nah, J.W., Kim, S.W., Chun, S.M., Wang, S., Seo, S.H. (2009). Context-aware handover with power efficiency for u-healthcare service in WLAN. Proc. of the Int. Conf. on New Trends in Information and Service Science, Los Alamitos, CA, IEEE Computer Society, p. 1279-1283.
[49] Yick, J., Mukherjee, B., Ghosal, D. (2008). Wireless sensor network survey. Computer Networks: The International Journal of Computer and Telecommunications Networking archive, vol. 52, p. 2292-2330.
[50] Winters, J., Wang, Y., Winters, J. (2003). Wearable sensors and telerehabilitation; integrating intelligent telerehabilitation assistants with a model for optimizing home therapy. IEEE Engineering in Medicine and Biology Magazine, p. 56-65.
[51] Senturia, S. (1998). CAD challenges for micro sensors, micro actuators, and microsystems. Proceedings of the IEEE, vol. 86, p. 1611-1626.
[52] Eliasson, J., Lindgren, P., Delsing, J. (2008). A Bluetooth-based sensor node for low-power ad hoc networks. Journal of Computers, vol. 3, p. 1-10.
[53] Domdouzis, K., Kumar, B., Anumba, C. (2007). Radio-frequency identification (RFID) applications: A brief introduction. Advanced Engineering Informatics, vol. 21, p. 350-355.
[54] Landt, J. (2005). The History of RFID. IEEE Potentials, vol. 24, no. 4, p. 8-11.
[55] McFarlane, D., Sarma, S., Chirn, J., Wong, C., Ashton, K. (2003). Auto ID systems and intelligent manufacturing control. Engineering Applications of Artificial Intelligence, vol. 16, p. 365-376.
[56] Hohlt, B., Doherty, L., Brewer, E. (2004). Flexible power scheduling for sensor networks. IPSN'04, Berkeley.
[57] Friedman, E. (2006). On-chip interconnect: The past, present, and future. Proc. of the Future Interconnects and Networks on Chip; 1st NoC Workshop – DATE '06.
[58] Kundu, P., Peh, L. (2007). On-chip interconnects for multicores; Guest Editors' Introduction. IEEE Micro, p. 3-5.
[59] Felegyhazi, M., Hubaux, J. (2006). Wireless Operators in a Shared Spectrum. Proc. of the 25th IEEE Infocom, Barcelona.
[60] Bonnet, P., Beaufour, A., Dydensborg, M., Leopold, M. (2003). Bluetooth-based sensor networks. SIGMOD Rec, vol. 32, p. 35-40.
[61] Hohlt, B., Doherty, L., Brewer, E. (2004). Flexible power scheduling for sensor networks. IPSN'04, Berkeley.
[62] Puccinelli, D., Haenggi, M. (2005). Wireless sensor networks: applications and challenges of ubiquitous sensing. IEEE Circuits and Systems Magazine, Third Quarter, p. 19-29.
[63] Ernst, T., De La Fortelle, A. (2006). Car-to-car and car-to-infrastructure communications based on NEMO and MANET in IPv6. Proceedings of the 13th World Congress and Exhibition on Intelligent Transport Systems and Services, London.
[64] Ernst, T., Uehara, K. (2002). Connecting automobiles to the internet. Proc. of the ITST workshop, Seoul.
[65] Ernst, T., Mitsuya, K., Uehara, K. (2003). Network mobility from the internetCAR perspective. Journal of Interconnection Networks, vol. 4, p. 15.
[66] Savidis, A., Stephanidis, C. (2005). Distributed interface bits: Dynamic dialogue composition from ambient computing resources. Personal and Ubiquitous Computing, vol. 9, p. 142-168.
[67] Nylander, S., Bylund, M., Waern, A. (2005). Ubiquitous service access through adapted user interfaces on multiple devices. Personal and Ubiquitous Computing, vol. 9, p. 123-133.
[68] Ni, T., Schmidt, G., Staadt, O., Livingston, M., Ball, R., May, R. (2006). A survey of large high-resolution display technologies, techniques and applications. Proc. of the IEEE Conference on Virtual Reality VR, p. 223-236.
[69] Terrenghi, L., Quigley, A., Dix, A. (2009). A taxonomy for and analysis of multi-person-display ecosystems. Personal and Ubiquitous Computing, vol. 13, p. 583-598.
[70] Horváth, I., Rusák, Z., van der Vegte, W., Opiyo, E.Z. (2008). Tangible virtuality: Towards implementation of the core functionality. Proceedings of ASME 2008 International Design Engineering Technical Conferences - IDETC/CIE, New York City, p. 1637-1647.
[71] Karpouzis, K., Soldatos, J., Tzovaras, D. (2009). Editorial: Introduction to the special issue on emerging multimodal interfaces. Personal and Ubiquitous Computing, vol. 13, p. 1-2.
[72] Frohlich, P., Simon, R., Baillie, L. (2008). Editorial: Mobile spatial interaction. Personal and Ubiquitous Computing, vol. 13, no. 4.
[73] Benoit, A., Bonnaud, L., Caplier, A., Ngo, P., Lawson, L., Trevisan, D., Levacic, V., Mancas, C., Chanel, G. (2009). Multimodal focus attention and stress detection and feedback in an augmented driver simulator. Personal and Ubiquitous Computing, vol. 12, p. 33-41.
[74] Liebowitz, J. (2001). Knowledge management and its link to artificial intelligence. Expert Systems with Applications, vol. 20, p. 1-6.
[75] Sambamurthy, V., Subramani, M. (2005). Special issue foreword: special issue on information technologies and knowledge management. MIS Quarterly, vol. 29, p. 1-7.
[76] Zack, M. (1999). Managing codified knowledge. Sloan Management Review, p. 45-58.
[77] Kankanhalli, A., Tan, B., Wei, K. (2005). Understanding seeking from electronic knowledge repositories: an empirical study. J. of the American Society for Information Science and Technology, vol. 56, p. 1156-1166.
[78] Lawton, G. (2001). Knowledge management: Ready for prime time? IEEE Computer, vol. 34, p. 12-14.
[79] Kung, S. (2009). Kernel approaches to unsupervised and supervised machine learning, PCM, LNCS 5879, p. 1-32. Springer-Verlag, Berlin.
[80] Muller, K., Mika, S., Ratsch, G., Tsuda, K., Scholkopf, B. (2001). An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks, vol. 12, p. 181-201.
[81] Hubbard, D. (2007). How to measure anything; finding the value of intangibles in business. John Wiley and Sons, Inc., Hoboken.
[82] Hong, D., Chiu, D., Shen, V. (2005). Requirements elicitation for the design of context-aware applications in a ubiquitous environment. Proc. of the ICEC, p. 590-596.
[83] Carley, K. (2002). Smart agents and organizations of the future. In: Lievrouw, L., Livingstone, S. (eds.), Handbook of new media, Sage, p. 206-220.
[84] Contractor, N., Monge, P. (2002). Managing knowledge networks. Management Communication Quarterly, vol. 16, p. 249-258.
[85] Pinkerton, B. (2000). WebCrawler: Finding what people want. PhD Thesis, University of Washington.
[86] Shen, W., Hao, Q., Yoon, H., Norrie, D. (2006). Applications of agent-based systems in intelligent manufacturing: an updated review. Advanced Engineering Informatics, vol. 20, p. 415-431.
[87] Tweedale, J., Ichalkaranje, N., Sioutis, C., Jarvis, B., Consoli, A., Philips-Wren, G. (2007). Innovations in Multi-agent systems. J. of Network and Computer Applications, vol. 30, p. 1089-1115.
[88] Van der Vecht, B., Dignum, F., Meyer, J. (2009). Autonomy and coordination: controlling external influences on decision making. Proc. of the IEEE/WIC/ACM Int. Conf. on Web Intelligence and Intelligent Agent Technology - Workshops, p. 92-95.
[89] Fenijn, C. (2007). Semi-automatic creation of domain ontologies with centroid-based crawlers. PhD thesis, Utrecht University.
[90] Moustakas, K., Nikolakis, G., Tzovaras, D., Carbini, S., Bernier, O., Viallet, J. (2009). 3D content-based search using sketches. Personal and Ubiquitous Computing, vol. 13, p. 59-67.
[91] Poznanski, V., Corley, S., Edmonds, P., Hull, A., Wise, M., Willis, M., Sato, R., Green, C. (2004). The Ubiquitous Office: A Nomadic Search and Access Solution. Sharp Technical Journal, vol. 89, p. 21-27.
[92] Baresi, L., Dustdar, S., Gall, H., Matera, M. (2005). Editorial: Special issue on ubiquitous mobile information and collaboration systems (UMICS). Personal and Ubiquitous Computing, vol. 9, p. 261.
[93] Strohbach, M., Gellersen, H.-W., Kortuem, G., Kray, N. (2004). Cooperative artifacts: Assessing real world situations with embedded technology. Proceedings of UbiComp, LNCS 3205, Springer-Verlag, Berlin, p. 250-267.
[94] Meyer, G.G., Främling, K., Holmström, J. (2009). Intelligent products: A survey. Computers in Industry, vol. 60, no. 3, p. 137-148.
[95] Picard, R., Vyzas, E., Healey, J. (2001). Toward machine emotional intelligence: Analysis of affective physiological state. IEEE Transactions on Pattern Analysis and
Machine Intelligence, vol. 23, no. 10, p. 1175-1191.
[96] Thomas, P. (2006). Special Issue on personal and ubiquitous computing: Papers from 3AD, the second international conference on appliance design. Personal and Ubiquitous Computing, vol. 10, p. 10.
[97] Sternberg, R.J., Lautrey, J., Lubert, T.I. (2002). Where are we in the field of intelligence, how did we get here and where are we going? Models of Intelligence: International Perspectives, American Psychological Association, Washington D.C., p. 3-25.
[98] Horváth, I., Rusák, Z., Opiyo, E.Z., Kooijman, A. (2009). Towards ubiquitous design support. Proceedings of the ASME International Design Engineering Technical Conferences - IDETC/CIE, DETC2009-87573.
[99] Horváth, I., Peck, D., Verlinden, J. (2009). Demarcating advanced learning approaches from methodological and technological perspectives. European Journal of Engineering Education, vol. 34, no. 6, p. 465-485.
[100] Canton, J. (2007). The extreme future: The top trends that will reshape the world in the next 20 years. A Plume Book, New York, p. 47-88.
[101] Kiritsis, D. (2009). Product lifecycle management and embedded information devices. Nof, S. (ed.), Springer Handbook of Automation (Chapter 43), Springer-Verlag.
[102] Verlinden, J., Horváth, I. (2008). Enabling interactive augmented prototyping by a portable hardware and a plug-in-based software architecture. Strojniški vestnik - Journal of Mechanical Engineering, vol. 54, no. 6, p. 458-470.
[103] Gerritsen, B. (2008). Advances in mass customization and adaptive manufacturing. Proc. of the TMCE, Izmir, Turkey, vol. 2, p. 869-880.
[104] Merz, R., Prinz, F., Ramawami, K., Terk, M., Weiss, L. (1994). Shape Deposition Manufacturing. Proc. of the Solid Freeform Fabrication Symposium, Austin.
[105] Li, X., Golnas, A., Prinz, F. (2000). Shape deposition manufacturing of smart metallic structures with embedded sensors. Proc. of the SPIE's 7th Int. Symp. on Smart Structures and Materials, Newport Beach.
[106] Stratakis, E., Ranella, A., Farsari, M., Fotakis, C. (2009). Laser-based micro/nanoengineering for biological applications. Progress in Quantum Electronics, vol. 33, p. 127-163.
[107] Gellersen, H.W., Schmidt, A., Beigl, M. (2004). Multi-sensor context-awareness in mobile devices and smart artifacts. Mobile Networks and Applications, vol. 7, no. 5, p. 341-351.
[108] Doctor, F., Hagras, H., Callaghan, V. (2005). A fuzzy embedded agent-based approach for realizing ambient intelligence in intelligent inhabited environments. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, vol. 35, p. 55-65.
Instructions for Authors For full instructions see the Authors Guideline section on the journal's website: http://en.sv-jme.eu/ Send to: University of Ljubljana Faculty of Mechanical Engineering SV-JME Aškerčeva 6, 1000 Ljubljana, Slovenia Phone: 00386 1 4771 137 Fax: 00386 1 2518 567 E-mail: info@sv-jme.eu strojniski.vestnik@fs.uni-lj.si All manuscripts must be in English. Pages should be numbered sequentially. The maximum length of contributions is 10 pages. Longer contributions will only be accepted if authors provide justification in a cover letter. Short manuscripts should be less than 4 pages. Prior or concurrent submissions to other publications should be included in the cover letter. We also ask you to specify the typology of the manuscript – you can define your paper as an original, review or short paper. The required contact information (e-mail and mailing address) and a suggestion of at least two potential reviewers should be provided in the cover letter. Reasons for a certain reviewer to be excluded from the review process can also be provided in the cover letter. Every manuscript submitted to the SV-JME undergoes the course of the peer-review process. THE FORMAT OF THE MANUSCRIPT The manuscript should be written in the following format: - A Title, which adequately describes the content of the manuscript. - An Abstract should not exceed 250 words. The Abstract should state the principal objectives and the scope of the investigation, as well as the methodology employed. It should summarize the results and state the principal conclusions. - 6 significant key words should follow the abstract to aid indexing. - An Introduction, which should provide a review of recent literature and sufficient background information to allow the results of the article to be understood and evaluated. - A Theory or experimental methods used. - An Experimental section, which should provide details of the experimental set-up and the methods used for obtaining the results.
- A Results section, which should clearly and concisely present the data using figures and tables where appropriate. - A Discussion section, which should describe the relationships and generalizations shown by the results and discuss the significance of the results making comparisons with previously published work. (It may be appropriate to combine the Results and Discussion sections into a single section to improve the clarity). - Conclusions, which should present one or more conclusions that have been drawn from the results and subsequent discussion and do not duplicate the Abstract. - References, which must be cited consecutively in the text using square brackets [1] and collected together in a reference list at the end of the manuscript. Units - standard SI symbols and abbreviations should be used. Symbols for physical quantities in the text should be written in italics (e.g. v, T, n, etc.). Symbols for units that consist of letters should be in plain text (e.g. ms-1, K, min, mm, etc.) Abbreviations should be spelt out in full on first appearance, e.g., variable time geometry (VTG). Meaning of symbols and units belonging to symbols should be explained in each case or quoted in a special table at the end of the manuscript before References. Figures must be cited in a consecutive numerical order in the text and referred to in both the text and the caption as Fig. 1, Fig. 2, etc. Figures should be prepared without borders and on white grounding and should be sent separately in their original formats. Pictures may be saved in resolution good enough for printing in any common format, e.g. BMP, GIF or JPG. However, graphs and line drawings should be prepared as vector images, e.g. CDR, AI. When labeling axes, physical quantities, e.g. t, v, m, etc. should be used whenever possible to minimize the need to label the axes in two languages. Multi-curve graphs should have individual curves marked with a symbol. The meaning of the symbol should be explained in the figure caption. Tables should carry separate titles and must be numbered in consecutive numerical order in the text and referred to in both the text and the caption as Table 1, Table 2, etc. In addition to the physical quantity, e.g. t (in italics), units (normal text),
should be added in square brackets. The tables should each have a heading. Tables should not duplicate data found elsewhere in the manuscript. Acknowledgement of collaboration or preparation assistance may be included before References. Please note the source of funding for the research. REFERENCES A reference list must be included using the following information as a guide. Only cited text references are included. Each reference is referred to in the text by a number enclosed in a square bracket (i.e., [3] or [2] to [6] for more references). No reference to the author is necessary. References must be numbered and ordered according to where they are first mentioned in the paper, not alphabetically. All references must be complete and accurate. Examples follow. Journal Papers: Surname 1, Initials, Surname 2, Initials (year). Title. Journal, volume, number, pages. [1] Zadnik, Ž., Karakašič, M., Kljajin, M., Duhovnik, J. (2009). Function and Functionality in the Conceptual Design Process. Strojniški vestnik Journal of Mechanical Engineering, vol. 55, no. 7-8, p. 455-471. Journal titles should not be abbreviated. Note that journal title is set in italics. Books: Surname 1, Initials, Surname 2, Initials (year). Title. Publisher, place of publication. [2] Groover, M.P. (2007). Fundamentals of Modern Manufacturing. John Wiley & Sons, Hoboken. Note that the title of the book is italicized. Chapters in Books: Surname 1, Initials, Surname 2, Initials (year). Chapter title. Editor(s) of book, book title. Publisher, place of publication, pages. [3] Carbone, G., Ceccarelli, M. (2005). Legged robotic systems. Kordić, V., Lazinica, A., Merdan, M. (Editors), Cutting Edge Robotics. Pro literatur Verlag, Mammendorf, p. 553-576. Proceedings Papers: Surname 1, Initials, Surname 2, Initials (year). Paper title. Proceedings title, pages. [4] Štefanić, N., Martinčević-Mikić, S., Tošanović, N. (2009). Applied Lean System in Process Industry. MOTSP 2009 Conference Proceedings, p. 422-427.
Standards: Standard-Code (year). Title. Organisation. Place. [5] ISO/DIS 16000-6.2:2002. Indoor Air – Part 6: Determination of Volatile Organic Compounds in Indoor and Chamber Air by Active Sampling on TENAX TA Sorbent, Thermal Desorption and Gas Chromatography using MSD/FID. International Organization for Standardization. Geneva. www pages: Surname, Initials or Company name. Title, from http://address, date of access. [6] Rockwell Automation. Arena, from http://www.arenasimulation.com, accessed on 2009-09-07.
COPYRIGHT
Authors submitting a manuscript do so on the understanding that the work has not been published before, is not being considered for publication elsewhere and has been read and approved by all authors. The submission of the manuscript by the authors means that the authors automatically agree to transfer copyright to SV-JME when the manuscript is accepted for publication. All accepted manuscripts must be accompanied by a Copyright Transfer Agreement, which should be sent to the editor. The work should be original work by the authors and not be published elsewhere in any language without the written consent of the publisher. The proof will be sent to the author showing the final layout of the article. Proof correction must be minimal and fast. Thus it is essential that manuscripts are accurate when submitted. Authors can track the status of their accepted articles on http://en.sv-jme.eu/.
PUBLICATION FEE
For all articles authors will be asked to pay a publication fee prior to the article appearing in the journal. However, this fee only needs to be paid after the article has been accepted for publishing. The fee is 180.00 EUR (for articles with a maximum of 6 pages), 220.00 EUR (for articles with a maximum of 10 pages), and 20.00 EUR for each additional page. The additional cost for a color page is 90.00 EUR.
Contents
Strojniški vestnik - Journal of Mechanical Engineering, volume 56, (2010), number 11, Ljubljana, November 2010, ISSN 0039-2480, published monthly

Editorial  SI 149

Abstracts of papers
Dag Raudberget: Practical Applications of Set-Based Concurrent Engineering in Industry  SI 153
Michael Abramovici, Fahmi Bellalouna, Jens Christian Göbel: Adaptive Change Management for Industrial Product-Service Systems  SI 154
Roberto Raffaeli, Maura Mengoni, Michele Germani: A Software System for the Evaluation of the Impact of "Design for X" in the Redesign Process  SI 155
Tristan Barnett, Elizabeth Ehlers: Cloud Computing for the Synergetic Development of Emotion Models in Multi-Agent Learning Systems  SI 156
Okba Hamri, Jean Claude Léon, Franca Giannini, Bianca Falcidieno: Matching Computer-Aided Design and Finite Element Simulations  SI 157
Ronald Poelman, Zoltán Rusák, Alexander Verbraeck, Leire Sorasu Alcubilla: The Influence of Visual Feedback on the Learnability and Usability of Design Methods  SI 158
Christophe Merlo, Nadine Couture: A User-Centred Approach for a Tabletop Collaborative Design Environment  SI 159
Bart Gerritsen, Imre Horváth: The Upcoming and Proliferation of Ubiquitous Technologies in Products and Processes  SI 160

Instructions for Authors  SI 161

Personal events: Doctorates, master's degrees, specializations and diplomas  SI 163
Editorial
Thematic issue: The role of enablers in the context of industrial product development. It is a well-known fact that the creativity, capabilities and work efficiency of designers and engineers improve when the methods and tools they use are adapted to the context of the applications and work tasks. Nevertheless, the large majority of today's computer-oriented enablers (methodologies, systems, tools, methods, rules) have been developed primarily as neutral means of support, without considering the characteristics of the users and the environments of use. For an optimal exploitation of design and engineering enablers, due attention has to be paid to the dialectic relationship (interaction) between their functional manifestation and the design process they support. For development enablers this means that the needs of users and applications have to be captured as broadly as possible, both in the development and in the implementation of the concepts. In addition, a certain level of context sensitivity and adaptability has to be provided for. On the other hand, for an optimal exploitation of the possibilities offered by the enablers, the application processes also have to be rationalized and redesigned. This is a particularly large challenge when several functionally and externally heterogeneous enablers have to be used in practical processes. A gradual transition is currently taking place from artifact-oriented product development, via combinations of artifacts and services, towards service-oriented product innovation, which is also a particular challenge for the development of computer tools for design and engineering. At the eighth International Symposium on Tools and Methods of Competitive Engineering, which took place between 12 and 16 April 2010 in Ancona, Italy, several contributions were presented that not only recognize the above themes, but also address them from different points of view. One of the most frequently mentioned topics was the adaptation of enablers to changes in processes and in the environment. Among the other popular topics are the assurance of consistency between
different agents in product development processes, and the treatment of the practical impact of advanced existing tools and of possible future tools. For this thematic issue we have selected a set of articles that present the results of these efforts, as well as alternative approaches for the researchers and developers who will be active in these fields in the coming years. The set of articles in this thematic issue naturally cannot give an exhaustive overview of the entire problem area. Nevertheless, the articles shed light on a wide range of strategies by which computer-based design and engineering enablers can be tailored and adapted to individual contexts. The first four articles thematically address the characteristics of processes and the adaptation to process changes, while the other four articles cover the relationship between users, tasks, applications and environments. The article by Dag Raudberget, entitled Practical Applications of Set-Based Concurrent Engineering in Industry, addresses the ever-topical theme of efficient concurrent engineering. The set-based approach means the simultaneous and systematic development of alternative solutions and the study of the trade-offs they require. A project management approach is used, whereby the author relies on three pragmatic principles that are important for reaching solutions with the smallest organizational effort: (1) a broad search for possible solutions without considering the needs or opinions of other departments, (2) the integration of different solutions by removing those that are not compatible with the main set of solutions, and (3) the commitment to developing solutions that match the other sets and fulfil the current specifications. Unpromising solutions are eliminated in an iterative procedure of developing solutions and tightening the specifications, and by enforcing the second principle. Four case studies were carried out with the aim of evaluating the impacts on the incurrence of additional costs, the use of resources and the characteristics of the products, and of predicting
the anticipated effects on the development process. An objective analysis of the opportunities and limitations of set-based concurrent engineering is given. The starting point of the second article, entitled Adaptive Change Management for Industrial Product-Service Systems, written by Michael Abramovici, Fahmi Bellalouna and Jens Christian Göbel, is the observation that industrial product-service systems are characterized by very frequent dynamic changes, not only in the planning phase, but throughout the entire life cycle. The current change management methods and standards offer only weak support for managing such changes and cannot fulfil the requirements of dynamic engineering change management. The authors therefore propose adaptive change management, a concept based on a goal-oriented method of process modelling and management. Business processes are defined as weak links between intelligent agents. The tasks and activities in the process are defined as intelligent agents. These agents determine a suitable path to a (sub)goal independently and in compliance with rules. The concept also gives the persons participating in the process recommendations for making decisions on the next process steps. The agents described above are modelled as modular intelligent services according to the belief-desire-intention principle. The article gives an exhaustive presentation of the concept, while a broader validation is still to come. The subject of the research presented in the article by Roberto Raffaeli, Maura Mengoni and Michele Germani, entitled A Software System for the Evaluation of the Impact of "Design for X" in the Redesign Process, is the provision of support to designers and engineers for quickly finding alternative solutions under changing conditions. Both a methodology and a software tool supporting its realization are proposed. The basic concept is a multi-level framework for product representation that covers both properties and relations. When the need to change a property arises, the proposed change propagation mechanism issues a list of the components and units that have to be changed. The introduction of "design for X" principles as rules allows designers to consider all the possible aspects that influence the product design, and to evaluate the impacts of changes on three levels. The approach and the proposed platform were
applied in the field of refrigerator manufacturing, taking into account the formalized requirements of the company. This research is very promising, but additional investigations, development and tests will be needed for a complete solution. In their contribution Cloud Computing for the Synergetic Development of Emotion Models in Multi-Agent Learning Systems, Tristan Barnett and Elizabeth Ehlers deal with the most powerful form of adaptation, which stems from machine learning. This technology is considered indispensable for improving the adaptability of agent-based systems. Their concrete proposal involves a multi-agent learning architecture with distributed artificial consciousness (MALDAC). It offers a scalable approach to the development of adaptive systems in demanding, realistic environments. To cope with demanding environments, they have adapted the theory of artificial consciousness. They have also included the cloud computing paradigm in the design of the architecture in order to improve the scalability of the system. The goal of MALDAC is a scalable implementation of highly adaptive agents, particularly in partially observable, stochastic and dynamic environments. For better adaptability, MALDAC uses a service agent in the cloud. The architecture uses emotion models to manage adaptability. This research work is worth mentioning because it attempts to establish a cognitive architecture by integrating the robustness of multi-agent learning and the adaptability of emotional learning. Systems for computer-aided design and computer-aided engineering are usually built on different shape representations of models. If the geometric model is adequate, the finite element model can be created in a semi-automatic or fully automatic meshing procedure. However, this is sometimes not possible due to problems with the completeness and the conversion of representations. Okba Hamri, Jean Claude Léon, Franca Giannini and Bianca Falcidieno therefore propose a mixed shape representation in their article entitled Matching Computer-Aided Design and Finite Element Simulations. The essence of their construct is a high-level topology that connects different geometric models in context. Feature-based models are accompanied by difficulties in the conversion of representations, combined with the problematic preservation of the meaning of high-level modelling entities. The article proposes an innovative approach to the preparation of a finite element
simulation model on the basis of a mixed representation, which shortens the duration of the model preparation process and preserves the consistency between the CAD model and the simulation model. The new methodology allows analysts to selectively choose and extract the desired geometric entities from several sources of input shapes (CAD models, shape feature models and existing meshes) to create the FEA model. The authors claim that the proposed approach could be built into any commercial package. With the transition into the era of 3D design and 3D visualization, new research questions and new research problems arise for scientists, developers and users. In the article entitled The Influence of Visual Feedback on the Learnability and Usability of Design Methods, Ronald Poelman, Zoltán Rusák, Alexander Verbraeck and L. Sorasu Alcubilla study the cognitive aspects of free-form surface modelling with different visualization devices and techniques. The study reveals a strong correlation between personal experience and the effort needed for learning. The authors report the interesting finding that the participants do not expect the studied technologies in an industrial environment for at least the next 5 years. It is also worth noting that, in the opinion of the participants, head-mounted displays have a greater potential for 3D design than holographic displays. The authors conclude that a further improvement of quality is needed, since the as yet immature technologies affect the users' perception of the possibilities of 3D visualization. In the article A User-Centred Approach for a Tabletop Collaborative Design Environment, Christophe Merlo and Nadine Couture deal with the direct interaction between team members. The main research question is how digital technologies can assist the collaboration of design teams working at the same location. Despite academic contributions, such collaboration has not yet received sufficient support in most companies. The authors aim at new
tools that allow working with 2D/3D objects in three dimensions, as well as physical and virtual interactions. The proposed solutions were tested in user studies following different scenarios. Although the proposed tabletop technology does not push the current boundaries of knowledge and technology, and is also burdened by many limitations, the authors have observed that the new way of interaction brings performance improvements. The last article, entitled The Upcoming and Proliferation of Ubiquitous Technologies in Products and Processes, written by Bart Gerritsen and Imre Horváth, discusses the paradigms and fundamentals of ubiquitous computing and the latest generation of these technologies. The main assumption of the authors is that ubiquitous technologies can revolutionize both consumer products and design and engineering support tools. The goal is to provide optimal support to designers and engineers in their everyday creative and analytical activities. This support has to be collaborative, proactive, adaptive, context-sensitive and personalized. The article gives an exhaustive overview of the current state of the art in the application of ubiquitous technologies in smart products, ambient environments and creative processes. The authors are convinced that wireless sensor networks are, and will probably remain, the most important enabler, together with bioengineering, multi-functional materials, smart agent technologies and semantic information triggers. The authors emphasize the need for further research on truly cognitive environments, which will reduce the information overload and the intellectual demands on human beings. Finally, we express our gratitude to all the authors and reviewers who have contributed to this thematic issue with their original articles or reviews. Their participation in the review and refereeing process is highly appreciated.
Regine W. Vroom1, Imre Horváth1, Ferruccio Mandorli2
1 Faculty of Industrial Design Engineering, Delft University of Technology, the Netherlands
2 Faculty of Mechanical Engineering, Polytechnic University of Marche, Italy
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, 681-684
Regine W. Vroom received her MSc in industrial design engineering (1986) from Delft University of Technology, the Netherlands. From 1985 to 1987 she worked at Volvo Car as an analyst in the field of computer-aided design. In 1987 she was appointed assistant professor at Delft University of Technology, where she also obtained her PhD in 2001. She has more than 25 years of experience in teaching industrial design engineering to undergraduate and graduate students, and she was the first recipient of the faculty's award for the best design teacher (1994). In 1996 she was elected to the senate of the Faculty of Industrial Design Engineering for several years, after which she became responsible for the quality of education. During her employment at the university she has also acted as a consultant to small and medium-sized enterprises in the field of information and knowledge management in product development. She has served as a research reviewer for the European Union and has experience as chair of the board supervising the bachelor's and master's programmes. Her main research areas are conceptual design, information and knowledge capture, organisation, procedures and communication during product design, methodological and managerial aspects of product development, design tools and ubiquitous design.
Imre Horváth (1954) received his MSc in mechanical engineering (1978) and in engineering education (1980) from the Technical University of Budapest, Hungary. From 1978 to 1984 he worked in Hungarian shipyards and in a crane factory. Between 1985 and 1997 he held various faculty positions at the Technical University of Budapest, where he obtained his dr. univ. title (1987) and his PhD (1994); in 1993 the Hungarian Academy of Sciences awarded him the title of candidate of science. Since 1997 he has been a full professor of computer-aided design and engineering at the Faculty of Industrial Design Engineering of Delft University of Technology. Between 1 January 2005 and May 2007 he served as the faculty's director of research. He initiated the International Symposium on Tools and Methods of Competitive Engineering (TMCE) and has been leading it for 14 years. He has held various positions on the executive committee of the CIE division of ASME. In 2009 he received an honorary doctorate from the Budapest University of Technology and Economics, and in 2010 the University of Miskolc conferred on him the title of honorary professor. His main fields of work are the philosophical and theoretical aspects of design research, computer support for experiential design, design support tools based on ubiquitous technologies, the formalisation and structuring of design knowledge, and product innovation based on technological enablers in social contexts. As a teacher he is currently involved in computer applications for conceptual design, the integration of research into design education and platform-based heterogeneous learning environments.
Ferruccio Mandorli received his MSc in computer science (1990) from the University of Milan and his PhD in industrial production engineering (1995) from the University of Parma. In 1992 he was a visiting researcher at the University of Tokyo (Kimura laboratory, Department of Precision Machinery Engineering, Faculty of Engineering). From 1990 to 1998 he acted as a consultant to several Italian companies in the field of information technology in engineering. In 1998 he became a researcher at the Faculty of Engineering of the Polytechnic University of Marche, where he was also appointed full professor in 2006. He teaches technical drawing, computer-aided design and virtual prototyping. He is a member of the board of the doctoral school of engineering sciences at the Polytechnic University of Marche. His main research areas are related to design support tools and methods, with an emphasis on applications for design automation, life-cycle assessment and product life-cycle management.
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, SI 153 UDC 658.5:303.433.2
Received: 27.04.2010 Accepted: 19.10.2010
Practical Applications of Set-Based Concurrent Engineering in Industry
Dag Raudberget*
Department of Mechanical Engineering, School of Engineering, Jönköping University, Sweden
Set-based concurrent engineering can at times be a means of dramatically improving product design processes. Despite the popularity of this method in the literature, the number of reported successful applications is still small. This paper brings new insights by describing the implementation of set-based concurrent engineering in four product development companies. The research used a case-study approach, and the aim was to investigate whether the principles of set-based concurrent engineering can improve the efficiency and effectiveness of the development process. The study shows that, with appropriate support, set-based projects can be carried out within the existing organisation. The participants state that the set-based approach has a positive effect on development results, particularly on the level of innovation, product cost and performance. The improvements come at the expense of somewhat higher development costs and a longer time to market. The positive effects nevertheless prevail, and all the participating companies intend to use set-based concurrent engineering in future projects as well. ©2010 Strojniški vestnik. All rights reserved.
Keywords: set-based concurrent engineering, set-based design, design method verification, lean product development, case study, industrial collaboration
[Fig. 1 labels: design space; Function 1, Function 2, ..., Function N; sets of possibilities; intersection of independent solutions; time; final design]
Fig. 1. The principle of set-based concurrent engineering (adapted from [9])
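To make the narrowing principle of Fig. 1 concrete, the following minimal Python sketch is offered purely as an illustration; the alternative names and the feasibility test are assumptions and are not taken from the paper. It keeps a set of design alternatives per function and converges towards the final design by eliminating infeasible options and intersecting what survives.

# Hypothetical design alternatives per function; the letters are illustrative only.
function_sets = {
    "Function 1": {"A", "B", "C", "D"},
    "Function 2": {"B", "C", "D", "E"},
    "Function N": {"B", "C", "F"},
}

def narrow(sets_by_function, infeasible):
    """Remove alternatives shown to be infeasible, then intersect what remains."""
    surviving = {f: s - infeasible for f, s in sets_by_function.items()}
    feasible_overlap = set.intersection(*surviving.values())
    return surviving, feasible_overlap

# Each development stage eliminates weak alternatives instead of committing early.
surviving, overlap = narrow(function_sets, infeasible={"D"})
print(overlap)  # prints {'B', 'C'} (order may vary): candidates for the final design
final_design = sorted(overlap)[0] if overlap else None

The point of the sketch is only the mechanism named in the abstract: decisions are delayed by working with sets that shrink over time, rather than by selecting one concept per function at the start.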
* Corresponding author's address: Department of Mechanical Engineering, School of Engineering, Jönköping University, Box 1026, 551 11 Jönköping, Sweden, dag.raudberget@jth.hj.se
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, SI 154 UDC 658.5:338.4
Received: 27.04.2010 Accepted: 21.10.2010
Adaptive Change Management in Industrial Product-Service Systems
Michael Abramovici - Fahmi Bellalouna - Jens Christian Göbel*
Chair of Information Technology in Mechanical Engineering (ITM), Ruhr-University Bochum, Germany
Industrial product-service systems (IPS2) are characterised, in comparison with individual physical products or services, by very frequent dynamic changes, not only in the planning phase but throughout the entire life cycle. These changes can be managed, tracked and documented in a change management process supported by a change management system. Changes and change processes in IPS2 are very difficult to plan during the IPS2 development phase, and existing static and deterministic change management solutions are therefore not suitable for IPS2. This paper describes a new concept of adaptive change management for IPS2. The concept enables the appropriate design, adaptation and execution of IPS2 change processes throughout the entire life cycle, with the aim of implementing IPS2 changes as time- and cost-efficiently as possible. ©2010 Strojniški vestnik. All rights reserved.
Keywords: adaptability, adaptive processes, goal-oriented process modelling, intelligent agents, engineering change management (ECM), industrial product-service systems (IPS2)
Fig. 2. Illustration of an improvement of an electrical discharge machining (EDM) machine
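As a rough illustration of the adaptive idea named in the abstract (change process steps planned at run time with respect to a goal and the current life-cycle phase), the Python sketch below is hypothetical: the class names, phases and steps are assumptions and do not reproduce the authors' ITM concept.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ChangeRequest:
    subject: str                 # affected IPS2 element (product share or service share)
    goal: str                    # desired outcome of the change
    history: List[str] = field(default_factory=list)

def plan_steps(cr: ChangeRequest, phase: str) -> List[Callable[[ChangeRequest], None]]:
    """Goal-oriented planning: choose process steps depending on the life-cycle phase."""
    def log(name):
        def step(c): c.history.append(f"{phase}: {name}")
        return step
    steps = [log("analyse impact"), log("evaluate alternatives")]
    if phase == "operation":
        steps.append(log("schedule service intervention"))   # delivery-phase change
    else:
        steps.append(log("update design documentation"))     # development-phase change
    steps.append(log("verify goal reached"))
    return steps

cr = ChangeRequest(subject="EDM machine generator", goal="reduce cycle time")
for step in plan_steps(cr, phase="operation"):
    step(cr)
print(cr.history)

The sketch only shows why a statically predefined workflow is insufficient: the same change request leads to different process steps when it occurs during operation rather than during development.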
*Corresponding author's address: Chair of Information Technology in Mechanical Engineering (ITM), Ruhr-University Bochum, Universitätsstraße 150, 44801 Bochum, Germany, jenschristian.goebel@itm.rub.de
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, SI 155 UDC 658.512.2:004.8
Received: 27.04.2010 Accepted: 21.10.2010
A Software System for the Evaluation of "Design for X" Impact in the Design Change Process
Roberto Raffaeli* - Maura Mengoni - Michele Germani
Department of Mechanical Engineering, Polytechnic University of Marche, Italy
The market demands ever new products in a short time. It also requires the introduction of new tools and methods for managing continuous product changes, capable of evaluating in real time the impact in terms of design effort, production costs, time to market, etc. The presented research work aims at developing a design platform to support the creation, visualisation and navigation of a multi-level product representation in which functions, modules, assemblies and components are strictly interrelated. The introduction of "design for X" principles as rules linking all the aspects of product design makes it possible to evaluate the impact of changes at the three levels. The approach and the proposed platform were applied to the case of refrigerator production, to support designers and engineers in quickly finding the optimal design solution with respect to the company requirements formalised through the implementation of design-for-X rules. ©2010 Strojniški vestnik. All rights reserved.
Keywords: functional analysis and modularity, design for X, change management, design change impact
Fig. 3. Representation of the product structure with properties and relations
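The following hypothetical Python sketch illustrates the kind of multi-level linkage the abstract describes: functions are linked to modules, modules to components, and design-for-X rules are attached to components, so the levels touched by a proposed change can be collected. The data and rule names are invented for illustration and are not taken from the paper.

# Illustrative three-level product model (refrigerator-like example).
product = {
    "functions": {"preserve food": ["cooling module"]},
    "modules":   {"cooling module": ["compressor", "evaporator", "thermostat"]},
}

# Illustrative design-for-X rules: component -> impacted viewpoints.
dfx_rules = {
    "compressor": ["design for assembly", "design for cost"],
    "thermostat": ["design for serviceability"],
}

def change_impact(component):
    """Walk the links bottom-up and collect the DfX viewpoints affected by a change."""
    modules = [m for m, parts in product["modules"].items() if component in parts]
    functions = [f for f, mods in product["functions"].items()
                 if any(m in mods for m in modules)]
    return {"component": component, "modules": modules,
            "functions": functions, "dfx": dfx_rules.get(component, [])}

print(change_impact("compressor"))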
* Corresponding author's address: Department of Mechanical Engineering, Polytechnic University of Marche, Brecce Bianche, Ancona, Italy, r.raffaeli@univpm.it
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, SI 156 UDC 519.6:004.85
Received: 27.04.2010 Accepted: 22.10.2010
Cloud Computing for the Synergetic Development of Emotion Models in Multi-Agent Learning Systems
Tristan Barnett* - Elizabeth Ehlers
Academy of Information Technology, University of Johannesburg, Republic of South Africa
Machine learning is a technology of exceptional importance for improving the adaptability of agent-based systems. Learning is a desirable property of synthetic characters or 'believable' agents, since it brings a degree of realism into their interactions. The advantages of collaborative efforts in multi-agent learning systems can, however, be overshadowed by concerns about system scalability and adaptive dynamics. The proposed multi-agent learning with distributed artificial consciousness (MALDAC) architecture is a scalable approach to developing adaptive systems in complex, believable environments. To support the MALDAC concept, a cognitive architecture is proposed that uses emotion models and the theory of artificial consciousness to cope with demanding environments. To improve system scalability, the cloud computing paradigm is built into the design of the architecture. A virtual environment using MALDAC demonstrates improved scalability in multi-agent learning systems, particularly in stochastic and dynamic environments. ©2010 Strojniški vestnik. All rights reserved.
Keywords: multi-agent learning, cognitive architecture, emotion models, scalability, intelligent agent, cloud computing
Fig. 8. Exploration robots in the HIVE simulator
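As a loose illustration of two ideas named in the abstract (an emotion model biasing agent behaviour and a shared, cloud-like experience pool), the Python sketch below uses a single scalar "valence" per agent and an in-memory queue as a stand-in for a cloud service; it is not the MALDAC architecture itself, and all names are assumptions.

import random
from queue import Queue

class EmotionalAgent:
    def __init__(self, name):
        self.name = name
        self.valence = 0.0                        # crude one-dimensional emotion state
        self.q = {"explore": 0.0, "exploit": 0.0} # running value estimates per action

    def act(self):
        # Negative valence encourages exploration; positive valence encourages exploitation.
        explore_bias = 0.5 - 0.3 * self.valence
        return "explore" if random.random() < explore_bias else "exploit"

    def learn(self, action, reward, lr=0.1):
        self.q[action] += lr * (reward - self.q[action])
        self.valence = max(-1.0, min(1.0, 0.9 * self.valence + 0.1 * reward))

cloud_experience = Queue()                        # stand-in for a shared cloud back end
agents = [EmotionalAgent(f"robot-{i}") for i in range(3)]
for step in range(100):
    for a in agents:
        action = a.act()
        reward = 1.0 if action == "exploit" else random.uniform(-1.0, 1.0)
        a.learn(action, reward)
        cloud_experience.put((a.name, action, reward))   # experience shared between agents
print({a.name: round(a.valence, 2) for a in agents})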
* Corresponding author's address: Academy of Information Technology, University of Johannesburg, PO Box 524, Auckland Park, 2006, Johannesburg, Republic of South Africa, tdd.barnett@gmail.com
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, SI 157 UDC 658.512.2:004.896:004.94
Received: 27.04.2010 Accepted: 21.10.2010
Consistency between Computer-Aided Design and Finite Element Simulations
Okba Hamri1,* - Jean Claude Léon2 - Franca Giannini3 - Bianca Falcidieno3
1 Department of Mechanical Engineering, University of Bejaia, Algeria
2 G-Scop Laboratory, INP-Grenoble, France
3 Institute of Applied Mathematics and Information Technologies, Italy
Computer-aided design (CAD) and computer-aided engineering (CAE) are two different disciplines that require different representations of shape models. Models created with CAD systems are therefore often unsuitable for finite element analysis (FEA). This paper proposes a new approach that reduces the gap between CAD and CAE software. The approach uses a new mixed shape representation. The mixed shape representation supports both the B-Rep representation (manifold and non-manifold) and the polyhedral representation, and creates a robust link between the CAD model (B-Rep NURBS) and the polyhedral model. Both representations are based on the same high-level topology (HLT), which represents a common requirement for simulation model preparation. The paper describes an innovative approach to the preparation of a finite element simulation model based on the mixed representation. A set of the necessary tools is associated with the mixed shape representation, shortening the model preparation process as much as possible and maintaining consistency between the CAD models and the simulation models. ©2010 Strojniški vestnik. All rights reserved.
Keywords: CAD-CAE consistency, FE simulation model preparation, mixed shape representation, simplification, high-level topology
Fig. 6. Illustration of HLT entities on an example component: a) initial representation of the component as a polyhedron, b) poly-edges and partitions reflecting the B-Rep NURBS decomposition produced by the CAD modeller
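A hypothetical Python sketch of the linkage idea behind the mixed representation: each high-level face keeps both a handle to its B-Rep NURBS patch and the polyhedral facets generated from it, so a simplification operation can merge faces on the FEA side without losing the CAD side. The structure is invented for illustration and is far simpler than the HLT described in the paper.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class HLTFace:
    face_id: int
    nurbs_patch: str                 # handle into the CAD (B-Rep NURBS) model
    facets: List[Tuple[int, int, int]] = field(default_factory=list)  # polyhedral side

@dataclass
class MixedShape:
    faces: List[HLTFace] = field(default_factory=list)

    def merge_faces(self, ids):
        """FEA-oriented simplification: merge faces while keeping the CAD handles of
        all originals, so CAD-to-mesh consistency is preserved."""
        kept = [f for f in self.faces if f.face_id not in ids]
        merged = HLTFace(face_id=min(ids),
                         nurbs_patch="+".join(f.nurbs_patch for f in self.faces
                                              if f.face_id in ids))
        for f in self.faces:
            if f.face_id in ids:
                merged.facets.extend(f.facets)
        self.faces = kept + [merged]

shape = MixedShape([HLTFace(1, "patch_1", [(0, 1, 2)]),
                    HLTFace(2, "patch_2", [(2, 1, 3)])])
shape.merge_faces({1, 2})
print(shape.faces[0].nurbs_patch, len(shape.faces[0].facets))  # patch_1+patch_2 2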
* Corresponding author's address: Department of Mechanical Engineering, University of Bejaia, Route de Targua Ouzemour, 06000 Bijayah, Algeria, okba_enp@yahoo.fr
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, SI 158 UDC 658.512.2:004.5
Received: 28.04.2010 Accepted: 20.10.2010
The Effect of Visual Feedback on the Learnability and Usability of Design Methods
Ronald Poelman1,* - Zoltán Rusák2 - Alexander Verbraeck1 - Leire Sorasu Alcubilla1
1 Faculty of Technology, Policy and Management, Systems Engineering Department, Delft University of Technology, the Netherlands
2 Faculty of Industrial Design Engineering, Computer Aided Design and Engineering, Delft University of Technology, the Netherlands
Design work today mostly relies on 2D displays, while their 3D successors are slowly being introduced as a promising alternative. Understanding the influence of the dimensionality of the displayed image on the efficiency and effectiveness of designers working in virtual-reality design environments is therefore of great importance. This paper presents the results of a comparative usability study focused on evaluating the influence of different display types on learnability and usability. In our study, 2D and 3D displays were compared experimentally. Users had to draw free-form shapes in three different environments following a prescribed human-computer interaction procedure. The research shows that there are significant differences between the preferences of different user groups. Gender and experience with advanced visualisation techniques have a significant influence on the usability and learnability of design methods with each of the three display types. ©2010 Strojniški vestnik. All rights reserved.
Keywords: virtual reality, dimensionality, display, design, usability, learnability
[Fig. 5 labels: Task 1, Task 2, Task 3]
Fig. 5. Results of the different tasks per participant
* Corresponding author's address: Faculty of Technology, Policy and Management, Systems Engineering Department, Delft University of Technology, Jaffalaan 5, 2628BX Delft, the Netherlands, R.Poelman@tudelft.nl
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, SI 159 UDC 659.122:681.337
Received: 27.04.2010 Accepted: 21.10.2010
A User-Centred Approach for a Tabletop Collaborative Design Environment
Christophe Merlo1,2,* - Nadine Couture1,3
1 ESTIA, Biarritz, France
2 IMS/LAPS, UMR 5218, University Bordeaux 1, France
3 LaBRI, University Bordeaux 1, France
In the global product development market, collaboration between team members has become a key success factor for product development projects and for innovation. In such collaborative situations, mostly traditional tools are used, such as paper-based methods or single-user computer tools. Our goal is to improve direct interaction between users by means of computer tools and physical devices intended to support the execution of work tasks. The proposed collaborative environment uses tabletop projection technology as the output device and physical input devices. By combining an electronic pen and a Wiimote device, tools and a tabletop technology were developed that enable direct interactions and improve collaboration in design. An analysis of the specific activities performed by designers was used to define the input devices and the test scenarios. The results of these tests are presented and confirm the usability of such a collaborative system. ©2010 Strojniški vestnik. All rights reserved.
Keywords: collaborative design, user interactions, tabletop projection, physical devices
Fig. 4. Pen for annotation (left); pen and Wiimote device in a 3D situation (right)
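Purely as an illustration of routing heterogeneous physical inputs to a shared design scene (the event names and payloads below are assumptions, not the ESTIA prototype's API), a minimal Python dispatcher might look as follows.

# Shared scene state: 2D annotations and a 3D model orientation.
scene = {"annotations": [], "model_rotation": [0.0, 0.0, 0.0]}

def handle_event(device, event, payload):
    """Pen strokes annotate in 2D; Wiimote tilt rotates the shared 3D model."""
    if device == "pen" and event == "stroke":
        scene["annotations"].append(payload)            # payload: list of (x, y) points
    elif device == "wiimote" and event == "tilt":
        for axis in range(3):
            scene["model_rotation"][axis] += payload[axis]
    return scene

handle_event("pen", "stroke", [(0.1, 0.2), (0.15, 0.25)])
handle_event("wiimote", "tilt", (5.0, 0.0, -2.0))
print(len(scene["annotations"]), scene["model_rotation"])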
* Corresponding author's address: ESTIA, Technopole Izarbel, Bidart, France, c.merlo@estia.fr
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, SI 160 UDC 004.3:316.774
Received: 27.04.2010 Accepted: 23.10.2010
The Arrival and Spread of Ubiquitous Technologies in Products and Processes
Bart Gerritsen1,* - Imre Horváth2
1 TNO - Netherlands Organisation for Applied Scientific Research
2 Delft University of Technology, Faculty of Industrial Design Engineering
Ubiquitous computing is a research field that emerged at the end of the 1980s and is today on the verge of a rapid expansion of technologies and applications. Ubiquitous computing is often regarded as the third wave of computing, after the first wave of mainframe computers and the second wave of personal computers. It is intended to provide personal, remote support to people in their everyday activities without requiring human intervention. Computing capabilities are distributed throughout the environment, and the indirect human-machine interface is eliminated. Instead, the concept networks the sensors and devices in our surroundings. The concept is non-dedicated in the sense that the set of devices in the environment collectively serves several people in the vicinity. Both people and devices are nomadic and can enter or leave the environment. Identification and context awareness are further factors in realising personal, context-dependent interaction. Although the vision itself is by now quite well elaborated, several obstacles of a technological and non-technological nature still have to be overcome. The paper gives an exhaustive overview and a critical assessment of the state of ubiquitous technologies today and in the future. ©2010 Strojniški vestnik. All rights reserved.
Keywords: mobile communications, ubiquitous sensing, ad-hoc networks, ubiquitous computing, information conversion, pervasive approaches
Fig. 4. Berkeley ATMEGA103/TR1000 MICA radio node, 100 m range, 50 kbit/s
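To illustrate, in the most generic terms, the identification and context awareness mentioned in the abstract, the following Python sketch models a nomadic sensor node publishing context-tagged readings to whichever ambient environment it currently joins; no real wireless-sensor-network API is used and all names are illustrative.

import time
from typing import Dict, List

class AmbientEnvironment:
    """Stand-in for an ambient environment that collects published readings."""
    def __init__(self):
        self.readings: List[Dict] = []
    def publish(self, message: Dict):
        self.readings.append(message)

class SensorNode:
    def __init__(self, node_id: str, environment: AmbientEnvironment):
        self.node_id = node_id
        self.environment = environment        # nodes are nomadic: they can join or leave
    def sense_and_publish(self, quantity: str, value: float, location: str):
        self.environment.publish({
            "node": self.node_id,             # identification
            "quantity": quantity,
            "value": value,
            "context": {"location": location, "time": time.time()},  # context awareness
        })

room = AmbientEnvironment()
node = SensorNode("mica-01", room)
node.sense_and_publish("temperature", 21.4, location="design studio")
print(room.readings[0]["context"]["location"])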
* Corresponding author's address: TNO - Netherlands Organisation for Applied Scientific Research, Wassenaarseweg 56, 2333 AL, Leiden, the Netherlands, Bart.Gerritsen@planet.nl
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, SI 161-162 Instructions for Authors
Instructions for Authors
The complete instructions are available in the "Information for Authors" section on the journal's website: http://en.sv-jme.eu/
Send papers to: University of Ljubljana, Faculty of Mechanical Engineering, SV-JME, Aškerčeva 6, 1000 Ljubljana, Slovenia; Tel.: 00386 1 4771 137, Fax: 00386 1 2518 567, E-mail: info@sv-jme.eu, strojniski.vestnik@fs.uni-lj.si
Papers must be written in English. Pages must be numbered consecutively. Papers should be no longer than 10 pages. Longer papers may be accepted for publication for special reasons, which must be stated in the cover letter. Short papers should not exceed four pages. In the cover letter, provide information about any prior or simultaneous submission of the paper for publication elsewhere. Please also assign a typology to the paper; it can be classified as an original, a review or a short paper. Provide all the necessary contact details (postal address and e-mail) and suggest at least two potential reviewers. You may also state the reasons why you do not want a particular reviewer to review your paper.
THE FORMAT OF THE PAPER
The paper should be written in the following format:
- A title that adequately describes the content of the paper.
- An abstract, which should be a condensed version of the paper and should not exceed 250 words. The abstract must contain the background, the core and the aims of the research, the methodology used, a summary of the results and the main conclusions.
- An introduction, which should provide a review of recent work and sufficient background information to allow the results presented in the paper to be understood and evaluated.
- Theory.
- An experimental section, which should contain details of the experimental set-up and the methods used to obtain the results.
- Results, which should be presented clearly, where necessary in the form of figures and tables.
- A discussion, which should present the relationships and generalisations used to obtain the results. The significance of the results and a comparison with previously published work should also be shown. (Because of the nature of some studies, the results and discussion may be combined into a single section for clarity and easier reading.)
- Conclusions, in which one or more conclusions drawn from the results and the discussion should be presented.
- References, which must be numbered consecutively in the text, marked with square brackets [1] and collected in a reference list at the end of the paper.
Units - use standard SI symbols and abbreviations. Symbols for physical quantities should be in italics (e.g. v, T, n, etc.). Symbols for units that consist of letters should be in plain text (e.g. ms-1, K, min, mm, etc.). Abbreviations should be spelt out in full when they first appear in the text, e.g. variable time geometry (VTG). The meaning of symbols and their corresponding units must always be explained or listed in a special table at the end of the paper before the references.
Figures must be numbered consecutively and referred to, in the text and in the caption, as Fig. 1, Fig. 2, etc. They should be saved at a resolution suitable for printing, in any of the common formats, e.g. BMP, JPG, GIF. Diagrams and drawings must be prepared in a vector format, e.g. CDR, AI. All figures must be prepared in black and white, without borders around the figures and on a white background. Additionally, send all figures separately in their original form. When labelling the axes in diagrams, use the symbols of the physical quantities (e.g. t, v, m, etc.) wherever possible. In diagrams with several curves, each curve must be labelled, and the meaning of the label must be explained in the figure caption.
Tables should have their own title and be numbered consecutively, and referred to in the text as Table 1, Table 2, etc. In addition to the physical quantity, e.g. t (in italics), the unit must also be given in square brackets. Tables should not duplicate data that already appear in the text.
An acknowledgement of collaboration or assistance in preparing the paper may be included before the references. State the source of financial support for the research.
REFERENCES
A list of references MUST be included in the paper and must be formatted in accordance with the following instructions. All listed references must be cited in the text. Each cited reference is numbered in the text with a number in square brackets (e.g. [3]
or [2] to [6] for several references). Referring to the author by name is not necessary. References must be numbered and ordered according to where they first appear in the paper, not alphabetically. References must be complete and accurate. Examples:
Journal papers: Surname 1, Initial, Surname 2, Initial (year). Title. Journal name, volume, number, pages.
[1] Zadnik, Ž., Karakašič, M., Kljajin, M., Duhovnik, J. (2009). Function and Functionality in the Conceptual Design Process. Strojniški vestnik – Journal of Mechanical Engineering, vol. 55, no. 7-8, p. 455-471.
The journal name must not be abbreviated and is written in italics.
Books: Surname 1, Initial, Surname 2, Initial (year). Title. Publisher, place of publication.
[2] Groover, M. P. (2007). Fundamentals of Modern Manufacturing. John Wiley & Sons, Hoboken.
The book title is written in italics.
Book chapters: Surname 1, Initial, Surname 2, Initial (year). Chapter title. Editor(s) of the book, book title. Publisher, place of publication, pages.
[3] Carbone, G., Ceccarelli, M. (2005). Legged robotic systems. Kordić, V., Lazinica, A., Merdan, M. (Editors), Cutting Edge Robotics. Pro literatur Verlag, Mammendorf, p. 553-576.
Conference papers: Surname 1, Initial, Surname 2, Initial (year). Title. Conference name, pages.
[4] Štefanić, N., Martinčević-Mikić, S., Tošanović, N. (2009). Applied Lean System in Process Industry. MOTSP 2009 Conference Proceedings, p. 422-427.
Standards: Standard (year). Title. Institution. Place.
[5] ISO/DIS 16000-6.2:2002. Indoor Air – Part 6: Determination of Volatile Organic Compounds in Indoor and Chamber Air by Active Sampling on TENAX TA Sorbent, Thermal Desorption and Gas Chromatography using MSD/FID. International Organization for Standardization. Geneva.
Web pages: Surname, Initials or company name. Title, from http://address, date of access.
Rockwell Automation. Arena, from http://www.arenasimulation.com, accessed on 2009-09-27.
COPYRIGHT
Authors submit a paper to the editorial office on the understanding that it has not been published elsewhere, is not under consideration for publication elsewhere, and has been read and approved by all the authors. Submission of a paper means that the authors automatically agree to transfer copyright to SV-JME when the paper is accepted for publication. All accepted papers must be accompanied by a copyright transfer agreement, which the authors send to the editor. The paper must be the original work of the authors and must not be published elsewhere in any language without the written permission of the publisher. The final version of the paper will be sent to the author for approval. Any corrections must be minimal and returned promptly; it is therefore important that papers are written carefully already at submission. Authors can track the status of their accepted papers at http://en.sv-jme.eu/.
PUBLICATION FEE
For all accepted papers, the authors must pay a publication fee of EUR 180.00 (for a paper up to 6 pages long) or EUR 220.00 (for a paper up to 10 pages long), plus EUR 20.00 for each additional page. The additional cost of colour printing is EUR 90.00 per page.
Strojniški vestnik - Journal of Mechanical Engineering 56(2010)11, SI 163-165 Personal News
Doctorates, Master's Degrees and Diplomas
DOCTORAL DISSERTATION
At the Faculty of Mechanical Engineering of the University of Maribor, the following candidate successfully defended her doctoral dissertation:
on 12 October 2010, Darja JAUŠOVEC with the dissertation entitled "The influence of antimicrobial treatment on the biodegradability of cellulose textile substrates" (supervisor: Assoc. Prof. Dr. Bojana Vončina).
The doctoral dissertation examined the influence of the antimicrobial agent 3-(trimethoxysilyl)propyldimethyloctadecyl ammonium chloride (TMPAC) on the biodegradability of two cellulose substrates. In the first part of the work, the biodegradability of TMPAC-treated cotton fabric was investigated in comparison with untreated fabric using scanning electron microscopy, FT-IR ATR spectroscopy, differential scanning calorimetry and mass-loss determination. The results showed that the antimicrobial agent TMPAC reduces the degradation rate of the cotton fabric, which was demonstrated mainly through the morphological and chemical changes during the degradation process. The chemical changes were investigated by FT-IR ATR spectroscopy and evidenced in particular by the presence of amide and carboxyl functional groups. The amide functional groups arose from the proteins formed by microbial growth, while the carboxyl functional groups resulted from the oxidative degradation of cellulose. The fact that the antimicrobial agent TMPAC reduces the degradation rate of the cotton fabric was explained by the strongly hydrophobic character of the TMPAC-treated surface, which was confirmed by the larger contact angle of the TMPAC-treated cotton compared with the untreated cotton.
In the second part of the work, the enzymatic degradation of a model cellulose film treated with the antimicrobial agent TMPAC was studied in comparison with an untreated film, using atomic force microscopy and ellipsometry. The cellulases used were obtained from the fungi Trichoderma viride and Aspergillus niger. After the enzymes were added to the model cellulose film, the initial adsorption of the enzymes onto the substrate was followed by further degradation of the cellulose. The enzymatic degradation of the cellulose was demonstrated by a constant loss of film mass and by the non-monotonic behaviour of the film thickness, which shows that the enzymes not only degrade the film surface but also penetrate into the film. The efficiency of the cellulases differed, and a much higher degradation rate was observed with the Trichoderma viride cellulase. With this cellulase, the degradation rate decreased markedly when the cellulose film had previously been treated with the antimicrobial agent TMPAC, whereas the antimicrobial agent had no significant effect on degradation in the presence of the Aspergillus niger cellulase. It was shown that, depending on the type of cellulase, the antimicrobial agent TMPAC inhibits the enzymatic activity at the solid-liquid interface. TMPAC makes the model cellulose film hydrophobic and thereby inhibits the adsorption of the enzymes onto the substrate surface.
MASTER'S THESES
At the Faculty of Mechanical Engineering of the University of Maribor, the following candidates successfully defended their master's theses:
on 12 October 2010: Zlatko BOTAK with the thesis entitled "An integrated model of intelligent CNC programming and cutting tool selection" (supervisor: Prof. Dr. Jože Balič);
on 12 October 2010: Aljoša HORVAN with the thesis entitled "Automation of programming in CAM systems using technological building blocks and a rule base" (supervisor: Prof. Dr. Jože Balič);
on 15 October 2010: Miran PUC with the thesis entitled "Hot forming technology for high-strength sheet metal" (supervisor: Assoc. Prof. Dr. Ivo Pahole).
SPECIALIST THESIS
At the Faculty of Mechanical Engineering of the University of Maribor, the following candidate successfully defended his specialist thesis:
on 15 June 2010: Dušan MESNER with the thesis entitled "Design of workplaces along an assembly line for refrigerator-freezer appliances" (supervisor: Prof. Dr. Andrej Polajnar).
DIPLOMAS
At the Faculty of Mechanical Engineering of the University of Ljubljana, the following candidates obtained the degree of university graduate engineer of mechanical engineering:
on 28 October 2010:
Marko IVANUŠIČ with the thesis entitled "Problems of bending thin-walled tubes" (supervisor: Prof. Dr. Karl Kuzman, co-supervisor: Assist. Prof. Dr. Tomaž Pepelnjak);
Andrej JARC with the thesis entitled "Verification of orbital welding of stainless steel tubes" (supervisor: Prof. Dr. Janez Tušek);
Rok SUHOLEŽNIK with the thesis entitled "Neck drawing of sheet-metal products under small-batch production conditions" (supervisor: Prof. Dr. Karl Kuzman);
on 2 November 2010:
Jernej BASLE with the thesis entitled "Determining the accuracy of machine tools" (supervisor: Prof. Dr. Janez Kopač, co-supervisor: Assist. Prof. Dr. Peter Krajnik);
Jože BOVHA with the thesis entitled "A wireless user interface for the LAKOS SCARA robot" (supervisor: Assoc. Prof. Dr. Peter Butala);
Luka REDNAK with the thesis entitled "Optimisation of a cogeneration plant for municipal waste incineration" (supervisor: Prof. Dr. Janez Oman);
Jan ŽILIĆ with the thesis entitled "Maintenance of port cranes" (supervisor: Assist. Prof. Dr. Jožef Pezdirnik);
on 3 November 2010:
Anže KALAN with the thesis entitled "The trolley of a laboratory column crane" (supervisor: Assist. Prof. Dr. Boris Jerman);
Gorazd KRESE with the thesis entitled "The influence of latent loads on the use of electrical energy for cooling buildings" (supervisor: Prof. Dr. Vincenc Butala, co-supervisor: Assist. Prof. Dr. Matjaž Prek);
Gregor OGRADI with the thesis entitled "The influence of the number of impact loads on the dynamic characteristics of elastomer-thermoplastic composites" (supervisor: Prof. Dr. Igor Emri);
Matija TRDAN with the thesis entitled "A transport system for drying and handling wood chips" (supervisor: Assist. Prof. Dr. Boris Jerman).
*
At the Faculty of Mechanical Engineering of the University of Maribor, the following candidates obtained the degree of university graduate engineer of mechanical engineering:
on 27 October 2010:
Uroš AČKO with the thesis entitled "A membrane pilot plant for the preparation of de-ionised water" (supervisor: Prof. Dr. Matjaž Hriberšek, co-supervisor: Dr. Sani Bašič);
on 28 October 2010:
Andrej CELCER with the thesis entitled "Development of shears for cutting threaded rods" (supervisor: Assoc. Prof. Dr. Bojan Dolšak, co-supervisor: Mag. Jasmin Kaljun);
Danijel DURONJIČ with the thesis entitled "Analysis and improvement of punching openings in hollow profiles made of aluminium alloys" (supervisor: Assoc. Prof. Dr. Ivan Pahole, co-supervisor: Assist. Prof. Dr. Leo Gusel);
Peter TAVČER with the thesis entitled "Strength analysis of the supporting frame of seat-belt anchorages" (supervisor: Prof. Dr. Zoran Ren);
Bojan ZORKO with the thesis entitled "Thermodynamic calculation of a screw and a piston compressor" (supervisor: Prof. Dr. Milan Marčič).
*
At the Faculty of Mechanical Engineering of the University of Ljubljana, the following candidates obtained the degree of graduate engineer of mechanical engineering:
on 14 October 2010:
Branislav DIMIĆ with the thesis entitled "Metrological validation of a refrigerated chamber for storing medicines" (supervisor: Assoc. Prof. Dr. Ivan Bajsić);
Blaž PFAJFAR with the thesis entitled "A water and oil separator in compressed-air systems" (supervisor: Prof. Dr. Mirko Čudina);
Valter PISK with the thesis entitled "Design of a U-cell" (supervisor: Assist. Prof. Dr. Janez Kušar, co-supervisor: Prof. Dr. Marko Starbek);
Benjamin RAZTRESEN with the thesis entitled "Determining incipient cavitation in a valve by means of noise measurement" (supervisor: Prof. Dr. Mirko Čudina);
on 15 October 2010:
Jože AJDOVNIK with the thesis entitled "Quality assurance of preventive maintenance in the production of cardboard packaging" (supervisor: Prof. Dr. Mirko Soković);
Milan GALUN with the thesis entitled "Eliminating defects in the production of wire evaporators for refrigerator-freezer appliances" (supervisor: Prof. Dr. Mihael Junkar, co-supervisor: Assist. Prof. Dr. Henri Orbanić);
Luka GRUDEN with the thesis entitled "Development of aircraft for observation and surveillance" (supervisor: Lect. Mag. Andrej Grebenšek, co-supervisor: Assist. Prof. Dr. Tadej Kosel);
Bojan HLAČA with the thesis entitled "Automation of a line for checking the accuracy of the spring angle" (supervisor: Prof. Dr. Janez Diaci, co-supervisor: Assist. Prof. Dr. Primož Podržaj);
Primož ROGINA with the thesis entitled "Problems of the gudgeon pin falling out of the piston of a hermetic compressor" (supervisor: Prof. Dr. Mirko Soković);
on 18 October 2010:
Jernej BOŠTJANČIČ with the thesis entitled "Optimisation of the supporting plates of hydraulic cylinders in a plastics press" (supervisor: Assist. Prof. Dr. Boris Jerman);
Damijan FLIS with the thesis entitled "LENS technology and thermal conditions in the production of a medical implant" (supervisor: Assist. Prof. Dr. Andrej Bombač, co-supervisor: Prof. Dr. Janez Kopač);
Iztok PLANTARIČ with the thesis entitled "Control of the manufacturing process of a vacuum pump" (supervisor: Prof. Dr. Janez Kopač, co-supervisor: Prof. Dr. Mirko Soković);
Jure SRNEC with the thesis entitled "Production of a fixation plate using conventional manufacturing technologies" (supervisor: Prof. Dr. Janez Kopač);
Mitja VESELIČ with the thesis entitled "Venting of dies for high-pressure die casting of aluminium alloys" (supervisor: Prof. Dr. Janez Kopač, co-supervisor: Sen. Lect. Jože Jurkovič).
*
At the Faculty of Mechanical Engineering of the University of Maribor, the following candidates obtained the degree of graduate engineer of mechanical engineering:
on 4 October 2010:
Bojana KUŠIĆ with the thesis entitled "Recrystallisation of the Au-0.5La alloy" (supervisor: Prof. Dr. Ivan Anžel);
Vid PODGORŠEK with the thesis entitled "Characteristics of NIRESIST-D2 cast iron and market opportunities for NIRESIST cast irons with spheroidal graphite" (supervisor: Assoc. Prof. Dr. Franc Zupanič);
on 7 October 2010:
Žiga KADIVNIK with the thesis entitled "A comparison of selective laser sintering and plastic injection moulding in the production of end products" (supervisor: Assoc. Prof. Dr. Igor Drstvenšek, co-supervisor: Assist. Prof. Dr. Marjan Leber);
on 28 October 2010:
Srečko GJERKEŠ with the thesis entitled "Introduction of CAD/CAM software tools in the company "Medicop"" (supervisor: Assoc. Prof. Dr. Ivan Pahole, co-supervisor: Prof. Dr. Jože Balič);
Jani MERKAČ with the thesis entitled "Energy audit of the company Petrol energetika d.o.o. Ravne na Koroškem" (supervisor: Assist. Prof. Dr. Matjaž Ramšak);
Benjamin ROŽMAN with the thesis entitled "Energy audit and rationalisation of lighting" (supervisor: Assist. Prof. Dr. Matjaž Ramšak, co-supervisor: Prof. Dr. Leopold Škerget);
David SENJOR with the thesis entitled "Introduction of high-speed machining for larger-sized products" (supervisor: Assoc. Prof. Dr. Ivan Pahole, co-supervisor: Prof. Dr. Franci Čuš).