JOURNAL of AUTOMATION, MOBILE ROBOTICS & INTELLIGENT SYSTEMS
Editor-in-Chief: Janusz Kacprzyk (Systems Research Institute, Polish Academy of Sciences; PIAP, Poland)

Co-Editors:
Dimitar Filev (Research & Advanced Engineering, Ford Motor Company, USA)
Kaoru Hirota (Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, Japan)
Witold Pedrycz (ECERF, University of Alberta, Canada)
Roman Szewczyk (PIAP, Warsaw University of Technology, Poland)

Executive Editor: Anna Ładan, aladan@piap.pl
Associate Editors: Mariusz Andrzejczak (PIAP, Poland), Katarzyna Rzeplińska-Rykała (PIAP, Poland)
Proofreading: Urszula Wiączek
Webmaster: Tomasz Kobyliński, tkobylinski@piap.pl
Editorial Office: Industrial Research Institute for Automation and Measurements PIAP Al. Jerozolimskie 202, 02-486 Warsaw, POLAND Tel. +48-22-8740109, office@jamris.org
Copyright and reprint permissions Executive Editor
Editorial Board: Chairman: Janusz Kacprzyk (Polish Academy of Sciences; PIAP, Poland) Plamen Angelov (Lancaster University, UK) Zenn Bien (Korea Advanced Institute of Science and Technology, Korea) Adam Borkowski (Polish Academy of Sciences, Poland) Wolfgang Borutzky (Fachhochschule Bonn-Rhein-Sieg, Germany) Oscar Castillo (Tijuana Institute of Technology, Mexico) Chin Chen Chang (Feng Chia University, Taiwan) Jorge Manuel Miranda Dias (University of Coimbra, Portugal) Bogdan Gabryś (Bournemouth University, UK) Jan Jabłkowski (PIAP, Poland) Stanisław Kaczanowski (PIAP, Poland) Tadeusz Kaczorek (Warsaw University of Technology, Poland) Marian P. Kaźmierkowski (Warsaw University of Technology, Poland) Józef Korbicz (University of Zielona Góra, Poland) Krzysztof Kozłowski (Poznań University of Technology, Poland) Eckart Kramer (Fachhochschule Eberswalde, Germany) Andrew Kusiak (University of Iowa, USA) Mark Last (Ben–Gurion University of the Negev, Israel) Anthony Maciejewski (Colorado State University, USA) Krzysztof Malinowski (Warsaw University of Technology, Poland)
Andrzej Masłowski (PIAP, Poland) Tadeusz Missala (PIAP, Poland) Fazel Naghdy (University of Wollongong, Australia) Zbigniew Nahorski (Polish Academy of Science, Poland) Antoni Niederliński (Silesian University of Technology, Poland) Witold Pedrycz (University of Alberta, Canada) Duc Truong Pham (Cardiff University, UK) Lech Polkowski (Polish-Japanese Institute of Information Technology, Poland) Alain Pruski (University of Metz, France) Leszek Rutkowski (Częstochowa University of Technology, Poland) Klaus Schilling (Julius-Maximilians-University Würzburg, Germany) Ryszard Tadeusiewicz (AGH University of Science and Technology in Kraków, Poland)
Stanisław Tarasiewicz (University of Laval, Canada) Piotr Tatjewski (Warsaw University of Technology, Poland) Władysław Torbicz (Polish Academy of Sciences, Poland) Leszek Trybus (Rzeszów University of Technology, Poland) René Wamkeue (University of Québec, Canada) Janusz Zalewski (Florida Gulf Coast University, USA) Marek Zaremba (University of Québec, Canada) Teresa Zielińska (Warsaw University of Technology, Poland)
Publisher: Industrial Research Institute for Automation and Measurements PIAP
If in doubt about the proper editing of contributions, please contact the Executive Editor. Articles are reviewed, with the exception of advertisements and descriptions of products. The Editor does not take responsibility for the contents of advertisements, inserts, etc. The Editor reserves the right to make relevant revisions, abbreviations and adjustments to the articles.
All rights reserved ©
JOURNAL of AUTOMATION, MOBILE ROBOTICS & INTELLIGENT SYSTEMS VOLUME 3, N° 3, 2009
CONTENTS

REGULAR PAPERS

Fuzzy-based positioning for mobile robots
W. Shen, J. Gu, M. Meng

Inversion of fuzzy neural networks for the reduction of noise in the control loop for automotive applications
M. Nentwig, P. Mercorelli

Robotic approaches to seismic surveying
C.M. Gifford, A. Agah

A realization of an FPGA sub system for reducing odometric localization errors in wheeled mobile robots
J. Kurian, P.R. Saseendran Pillai

Using visual and force information in robot-robot cooperation to build metallic structures
J. Pomares, P. Gil, J.A. Corrales, G.J. García, S.T. Puente, F. Torres

Investigation on numerical solution for a robot arm problem
R. Ponalagusamy, S. Senthilkumar

Intelligent control system for HSM
A.J. Vallejo, R. Morales-Menendez, H. Elizalde-Siller

An automatic method to identify human alternative movements: application to the ingress movement
M.O. Ait El Menceur, Ph. Pudlo, J.-F. Debril, Ph. Gorce, F.-X. Lepoutre

Stiffness analysis of multi-chain parallel robotic systems with loading
A. Pashkevich, A. Klimchik, D. Chablat, Ph. Wenger

SPECIAL ISSUE SECTION

Contemporary Approach to Production Processes Management
Guest Editors: Andrzej Masłowski, Józef Matuszek

EDITORIAL
A. Masłowski, J. Matuszek

Specificity of bottlenecks in conditions of unit and small-batch production
J. Matuszek, J. Mleczko

Review, classification and comparative analysis of Maintenance Management Models
M.L. Campos, A.C. Márquez

Benchmarking of ERP systems evaluation: case study
S. Kłos, J. Patalas-Maliszewska

Increasing flexibility and availability of manufacturing systems - dynamic reconfiguration of automation software at runtime on sensor faults
A. Wannagat, B. Vogel-Heuser

Digital factory
M. Gregor, Š. Medvecký, J. Matuszek, A. Štefánik

Two smart tools for control charts analysis
A. Hamrol, A. Kujawińska

Early detection of bearing damage by means of decision trees
B. Kilundu, P. Dehombreux, Ch. Letot, X. Chiementin

DEPARTMENTS

IN THE SPOTLIGHT
EVENTS
FUZZY-BASED POSITIONING FOR MOBILE ROBOTS Received 25th February 2009; accepted 12th March 2009.
Weimin Shen, Jason Gu, Max Meng
Abstract: This paper proposes fuzzy-based positioning algorithms for an iRobot B21r mobile robot, equipped with a 180° scanning laser rangefinder and other sensors, operating in an indoor environment. A novel dynamic error model for the laser rangefinder is built with consideration of the detection distance and the detection angle. A new concept, the virtual angular point, is introduced in this paper as one of the features for positioning a mobile robot. Three kinds of feature points are employed: break points, real angular points, and virtual angular points. Based on a fuzzy evaluation of the accuracy of each feature point, the positions obtained from arbitrary pairs of points are fused together by the weighted mean technique, in which each weight is determined by the uncertainty represented by fuzzy numbers. An experimental study has been carried out to verify the effectiveness and the accuracy of the algorithms.

Keywords: position estimation, laser rangefinder, virtual angular points, fuzzy logic.
1. Introduction
Accurate perception of position is essential in the application of a mobile robot. In particular, when a mobile robot is applied to autonomous tasks, it is required to know precisely where it is in order to navigate successfully to desired locations in its environment. This problem is the so-called "first-location problem" [1]. Theoretical work, practical work, and different approaches to the subject have been reported, but it remains at the cutting edge of research in the field of mobile robotics. Positioning of a mobile robot is the foundation for other application areas, such as trajectory planning [2], obstacle avoidance [3], and robot navigation [4]. Interaction between the mobile robot and objects in its surroundings is performed by using the interoceptive and exteroceptive sensors mounted on the mobile robot. To obtain the position of the mobile robot, many different sensors, systems, and techniques have been developed [5], [6], [7]. Traditionally, position feedback can be achieved through odometry sensors. Odometry is the most widely used navigation method, since it provides good short-term accuracy, is inexpensive, and allows very high sampling rates. However, the fundamental idea of odometry is the integration of incremental motion information over time, which inevitably leads to the accumulation of errors; the resulting position errors grow with the distance traveled by the
robot [8]. Despite these limitations, most researchers agree that odometry is an important part of a robot navigation system, and that navigation tasks will be simplified if odometric accuracy can be improved [9], [10], [11]. Sonar sensors are also popular perception systems in mobile robotics. This kind of sensor has been used by various researchers [12], [13], [14], primarily due to their low cost and ease of integration. Sonar sensors are based on a time-of-flight principle using an ultrasonic wave. Over the past decade, much research has been conducted investigating their applicability in such areas as world modeling and collision avoidance, position estimation, and motion detection [15], [16], [17]. The major drawback of sonar sensors is the poor angular resolution due to the relatively large beam angle. In addition, the distance and spatial resolutions of sonar sensors are limited. They require significant post-processing of data to provide accurate position updating [18]. As an alternative to sonar sensors, the laser rangefinder is also a time-of-flight sensor, but it achieves significant improvements over the ultrasonic range sensor due to the use of laser light instead of sound. In recent years, the laser rangefinder has proved more popular in positioning mobile robots [19], [20]. This is due to the fact that the laser rangefinder can provide dense data about the environment, so it is possible to extract suitable features from the reading of the laser rangefinder, and those features can be used for positioning a robot [21]. Vision systems are often used for recognition of landmarks in the environment [4], [22]. They are frequently used in a stereo vision head. Visual sensing provides a tremendous amount of information about a robot's environment, and it is potentially the most powerful source of information among all the sensors used on robots to date [23].
Due to this wealth of information, however, extraction of visual features for positioning is not an easy task, and this method is hard to use in a real-time application. Each sensor has its advantages and disadvantages. For different mobile robot tasks, different sensors are used. Sometimes, those sensors can be fused together to obtain more accurate positions. Many techniques and methods are used for understanding the environment from sensor readings. Different solutions have been adopted in the robotics literature [24], [25], [26], [27]. In general, there are two approaches: the grid-based approach and the feature-based approach. The grid-based approach uses a 2D array to represent the environment. This low-level grid-based approach proves to be very useful for map building using ultrasonic sensors, because ultrasonic sensors have a large opening angle and their range
data are seriously corrupted by reflection. The feature-based approach represents the structure of the environment by geometrical primitives. They are represented by a set of parameters describing their shape, their position in the environment, and their position uncertainty. In the feature-based approach, the laser rangefinder scans are segmented into a set of features such as break points (a break point is defined as a point around which the laser sensor reading is not continuous), corners or angular points (an angular point is defined as a transition point from one line to another line around which the data points are continuous), line segments, etc. There is a substantial body of previous research in the area of feature-based positioning by laser rangefinders [28], [29]. In [30], break points are used as the feature to position the mobile robot; however, they cannot provide an accurate position, because a pair of break points in separate scans is not located at the same point in the environment. Other features, such as corner points or line segments, have the same problem. With an increase in the detection distance and a decrease in the detection angle, the errors of the laser rangefinder increase dramatically. One error model for the laser rangefinder in [1], the SP model, does not consider uncertainty varying with the detection distance and the detection angle. To our knowledge, there is no published work to date developing an efficient method for positioning which takes the detection distance and the detection angle into consideration. However, fuzzy logic makes it easy to describe uncertainties qualitatively, such as big uncertainty, medium uncertainty, and small uncertainty. That is the reason why fuzzy logic is used in this paper to describe the uncertainties of features. The aim of this paper is to propose algorithms based on fuzzy logic to position the iRobot B21r mobile robot, which is equipped with a 180° scanning laser rangefinder and other sensors.
The B21r mobile robot is ideal for research and development across a broad range of indoor robotics applications because it is easy to control and has various types of sensors, including inertial sensors, infrared sensors, tactile sensors, sonar sensors, a scanning laser rangefinder, and a stereo camera. However, the limitations of these sensors make obtaining a highly accurate position extremely challenging. In this paper, readings from the laser rangefinder and the inertial sensors are fused together to get a precise position. The novelties of this research are a new error model for the laser rangefinder taking into consideration the detection distance and the detection angle, a new concept, the virtual angular point, and a position fusion technique based on the weighted mean technique and fuzzy uncertainty. This paper is organized as follows. In Section 2, a novel dynamic error model for the laser rangefinder is put forward, including a fuzzy-based error description for the features. Section 3 gives feature extraction algorithms and feature point pairing. Section 4 presents the positioning algorithms based on the weighted mean technique and fuzzy uncertainty. Section 5 details experiments designed to verify the effectiveness and accuracy of the method, and finally in Section 6 concluding remarks are discussed.
2. Laser rangefinder error model and fuzzy-based uncertainty description
Typically, the model of a laser rangefinder with uncertainty regarding the detection distance and the scanning angle is shown in Fig. 1. The uncertainty associated with the location of a 2D laser reading is represented by the covariance matrix of its perturbation vector, where a zero-mean Gaussian error distribution is assumed [1]. In general, a scanning laser rangefinder collects scans, i.e., sets of m readings (d_i, φ_i), i = 1, ..., m, where d_i represents the detection distance to an object placed in the way of the laser beam in the direction determined by the scanning angle φ_i. The scanning angle takes m discrete values ranging from 0° to 180°, and the indices i are additive modulo m. Let α denote the angle between the x axis and the target plane (as shown in Fig. 2), and let β_i denote the detection angle, defined as the angle between the laser beam and the target plane, as shown in Fig. 2; β_i is thus determined by α and φ_i. The reading d_i will include large uncertainty at small values of β_i, that is, at sharp angles of observation [31].
Fig. 1. Laser rangefinder model with uncertainty along the detection angle and the detection distance.
Fig. 2. The detection angle in the ith scan.
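To make the scan geometry above concrete, the sketch below converts polar readings (d_i, φ_i) to Cartesian points and approximates the detection angle β_i of each reading from the local surface tangent estimated by its two neighbours. This is an illustrative sketch, not the paper's implementation: the function names are ours, and the neighbour-based tangent estimate is our assumption.

```python
import math

def scan_to_points(dists, angles):
    """Convert polar laser readings (d_i, phi_i) to Cartesian points."""
    return [(d * math.cos(p), d * math.sin(p)) for d, p in zip(dists, angles)]

def detection_angles(dists, angles):
    """Approximate the detection angle beta_i (laser beam vs. local surface
    plane) for each interior reading, using its two neighbours to estimate
    the local surface tangent. Endpoints get None (no neighbours)."""
    pts = scan_to_points(dists, angles)
    betas = [None] * len(pts)
    for i in range(1, len(pts) - 1):
        (x0, y0), (x1, y1) = pts[i - 1], pts[i + 1]
        tangent = math.atan2(y1 - y0, x1 - x0)   # local surface direction
        beam = angles[i]                          # laser beam direction
        beta = abs(beam - tangent) % math.pi      # fold into [0, pi)
        betas[i] = min(beta, math.pi - beta)      # angle to the plane, <= 90 deg
    return betas
```

For a flat wall x = 1 scanned at 40°, 45°, and 50°, the middle reading's detection angle comes out at 45°, as expected.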
2.1. A Novel 2D Laser Rangefinder Error Model In the feature-based approach, features are extracted from the laser rangefinder reading. These features could be line segments, corners, break points, etc. Fig. 3 gives an example of a 180° scan of the gym at Sexton Campus. From this figure we can determine that laser rangefinder readings can be used to represent an indoor environment in the form of polygonal shapes.
Fig. 3. Features in a 180° scanning set of the laser rangefinder reading.

It is obvious that break points, angular points, virtual angular points, and line segments can be used as features. In an indoor environment, angular points could be wall corners or the interfaces of objects with the wall. A break point can be defined as a point before or after a gap in the sequence of data points. Break points may be located at different positions in different laser scans as the robot moves. However, because of the uncertainty in the laser rangefinder, the same break points cannot be detected in different scans; even a break point detected in the first scan may not be detected in the second scan. As a result, break points can only provide rough information. Angular points have the same problem as break points. The virtual angular point refers to the intersection point of two arbitrary lines. Virtual angular points do not exist in the real world and cannot be detected by the laser rangefinder directly. In general, we can obtain highly accurate slopes of line segments, rather than accurate starting points, ending points, and lengths. More importantly, we can obtain some points with a short detection distance and a large detection angle, that is, points whose uncertainties are very small; they might belong to only one section of other feature segments, but we can still use them for positioning in our algorithms. Using virtual angular points, we can get more accurate features to position a mobile robot. Line segments have much more exact tangents than point features, since a line includes more points. Because of light-reflection noise and other disturbances, points scanned by the laser rangefinder which do not exhibit a local alignment within a tolerance are removed from the raw sensed data [32].

However, since the laser rangefinder has uncertainty itself, as mentioned at the beginning of this section, there are two main reasons that affect the laser rangefinder's accuracy, as seen in Fig. 4. First, when the detection distance increases, the uncertainty of the laser rangefinder increases accordingly. Second, when the detection angle is far from 90°, the readings of the laser rangefinder contain large uncertainty due to the discrete scanning. The uncertainty associated with the laser rangefinder is thus a function of both:

u_i = f(d_i, β_i)    (1)

The relationship in the above equation is hard to represent by an explicit mathematical formula, so fuzzy logic could be the best way to represent this kind of uncertainty. In this paper, fuzzy logic is used to describe the uncertainty of the point, the virtual angular point, and the line feature (the line feature is defined as a sequence of points tracing out a line in the environment). Therefore, each feature is evaluated by a fuzzy uncertainty. When the positions are fused, the fuzzy uncertainties are used as weights, namely, less uncertainty gives larger weight. In this way, the accuracy of the position is improved.

Fig. 4. Laser rangefinder scans the object, and there is uncertainty in the laser reading (red area).

2.2. Fuzzy-Based Uncertainty Description for the Point Feature
The uncertainty of each point is determined by a fuzzy logic system with two inputs and one output. The two inputs are the detection distance and the detection angle, and the output is the uncertainty associated with the point. The reading matrix from the laser rangefinder is given by:

R = [ d_1  d_2  ...  d_m
      φ_1  φ_2  ...  φ_m ]    (2)

The detection distance d_i between the laser rangefinder and the object is obtained from the laser rangefinder reading directly. The detection angle β_i is formulated as:

(3)

The membership functions of those three variables, the detection distance, the detection angle, and the
uncertainty, are defined by triangular functions, because they only need their vertex points to be stored, minimizing computer storage, and because the triangular functions sum to 1, which simplifies the expressions. The membership functions are shown in Fig. 5, Fig. 6, and Fig. 7, respectively.

Fig. 5. The membership function of the detection angle.
Fig. 6. The membership function of the detection distance.
Fig. 7. The membership function of the points uncertainty.

The presented fuzzy logic decision system uses 15 rules, listed in Table 1. The proposed fuzzy logic decision system adopts the Mamdani-style inference engine and the max-min method for defuzzification.

Table 1. The fuzzy rules for accuracy estimation.

Therefore, the output uncertainty associated with the point can be given by:

(4)

2.3. Fuzzy-Based Uncertainty Description for the Line Feature
In general, several methods are used for extracting the line feature from the laser rangefinder reading. After this processing, a series of points constructs one line. The accuracy description of the raw points from the laser rangefinder reading can be obtained by the method mentioned in the above subsection. A fuzzy logic system is again designed for describing the uncertainty of the line feature. Generally, one line feature in a 2D environment can be expressed by:

a x + b y + c = 0    (5)

There are two inputs: the mean of the uncertainties of the points which construct the line feature, and the mean of the distances between those points and the corresponding line feature.

The first input can be given by:

f_in1 = (1/m1) * sum_{j=1..m1} u_j    (6)

where m1 is the number of points constructing the line. The distance from a point (x_j, y_j) to the line can be given by:

D_j = |a x_j + b y_j + c| / sqrt(a^2 + b^2)    (7)

Therefore, the second input can be given by:

f_in2 = (1/m1) * sum_{j=1..m1} D_j    (8)

The membership functions of the two input variables are defined by triangular functions for the same reasons mentioned in the above subsection, and are shown in Fig. 8 and Fig. 9, respectively. The membership function of the output is shown in Fig. 7.

Fig. 8. The membership function of the mean of points accuracy.
Fig. 9. The membership function of the mean of the distance between the point and the line.

The presented fuzzy decision system uses nine rules, listed in Table 2.

Table 2. The fuzzy rules for the line feature accuracy estimation.

By the same inference engine and defuzzification, the output uncertainty can be given by:

(9)

2.4. Fuzzy-Based Uncertainty Description for the Virtual Angular Point Feature
According to the definition of the virtual angular point, two arbitrary lines form one virtual angular point, so the accuracy of a virtual angular point depends on the accuracy of those two lines. Another property of such a point feature is that a small variation in the tangent of one line causes a large distance error if the point is far away from the central point of the line. This means that if the virtual angular point is too far away from the line segments, the point may be greatly inaccurate. Therefore, the uncertainty of a virtual angular point depends on three factors: the two lines' uncertainties, and the distance between the virtual angular point and the center points of the line segments. The first input and the second input can be obtained from the above subsection directly, denoted by fL1 and fL2, respectively:

(10)

The third input is the mean of the distances between the virtual angular point and the center points of the two line segments which form it. Let (xv, yv) denote the virtual angular point, and (xc1, yc1), (xc2, yc2) denote the center points of the two line segments, respectively; then we have:

dv = ( sqrt((xv - xc1)^2 + (yv - yc1)^2) + sqrt((xv - xc2)^2 + (yv - yc2)^2) ) / 2    (12)

The membership functions of the input variables are defined by triangular functions, as shown in Fig. 7 and Fig. 10, respectively. The membership function of the output is shown in Fig. 7.

Fig. 10. The membership function of the mean of the distance between the virtual angular point and the center points of the line segments.

The presented fuzzy decision system uses 45 rules, listed in Table 3.

Table 3. The fuzzy rules for the virtual angular point accuracy estimation.

By the same inference engine and defuzzification, the output uncertainty can be given by:

(11)
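A minimal sketch of such a fuzzy uncertainty estimator for a single point is given below. The linguistic terms, membership vertices, reduced rule table, and the simplified centroid-style defuzzification are all our assumptions; the paper's actual 15-rule Mamdani system (Table 1) is not reproduced. Only the overall structure follows the text: triangular memberships over the detection distance and the detection angle, min for rule firing, max-accumulation per output term, and a scalar uncertainty output.

```python
def tri(x, a, b, c):
    """Triangular membership with vertices a < b < c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Assumed linguistic terms; the paper's vertex values are not given.
DIST = {'near': (-1.0, 0.0, 4.0), 'mid': (0.0, 4.0, 8.0), 'far': (4.0, 8.0, 9.0)}
ANG = {'sharp': (-1.0, 22.5, 45.0), 'oblique': (22.5, 45.0, 90.0),
       'normal': (45.0, 90.0, 91.0)}
OUT_CENTER = {'small': 0.1, 'medium': 0.5, 'big': 0.9}  # output term centres

# Reduced, assumed rule table: uncertainty grows with distance and with
# deviation of the detection angle from 90 degrees.
RULES = {('near', 'normal'): 'small', ('near', 'oblique'): 'small',
         ('near', 'sharp'): 'medium', ('mid', 'normal'): 'small',
         ('mid', 'oblique'): 'medium', ('mid', 'sharp'): 'big',
         ('far', 'normal'): 'medium', ('far', 'oblique'): 'big',
         ('far', 'sharp'): 'big'}

def point_uncertainty(dist, beta_deg):
    """Fuzzy uncertainty of one laser point from its detection distance
    (metres, assumed range) and detection angle (degrees)."""
    strength = {}
    for (dt, at), out in RULES.items():
        w = min(tri(dist, *DIST[dt]), tri(beta_deg, *ANG[at]))  # min = AND
        strength[out] = max(strength.get(out, 0.0), w)          # max-accumulate
    den = sum(strength.values())
    if den == 0.0:
        return None
    # centroid-style average over output term centres
    return sum(w * OUT_CENTER[o] for o, w in strength.items()) / den
```

A near, head-on reading should come out far less uncertain than a distant, oblique one.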
3. Feature extraction and data association
The procedure for the mobile robot's feature extraction and point pairing includes: filtering the laser rangefinder reading, which removes erroneous points and assigns an accuracy estimate to each point; finding break points; clustering, which classifies the points into several clusters; finding angular points; extracting line features; finding virtual angular points; and point pairing, which is used by the positioning algorithm.

3.1. Filtering
From the discussion in the above section, we know that the accuracy of laser rangefinder readings depends mainly on the detection distance, the detection angle, and the uncertainty associated with the sensor itself. Before the laser rangefinder reading is processed, erroneous points with a very sharp detection angle need to be removed. Assume that a point has a detection angle β_i. The rule for filtering is: if β_i is below a threshold T_β on the detection angle, then the point is an erroneous point and is removed; we choose T_β = 22.5° in this paper. At the same time, as illustrated in Equation 4, each point is assigned an uncertainty value in terms of the detection distance and the detection angle based on fuzzy logic.

3.2. Finding Break Points
Generally, out of the 180 points of a laser rangefinder reading per scan, break points are those points satisfying the following conditions:

(13)

where T1 and T2 are thresholds on the distance between two consecutive points.

3.3. Clustering
After break points are detected, the scan is broken at those points, thereby finding occlusions. The starting point and the ending point of one cluster should be break points. Line segments and angular points are classified into different clusters.

3.4. Finding Angular Points
Assume the current point needs to be checked. The angular points can be formulated as:

(14)

where the angle between the segments formed by the current point and its neighboring points is compared against a threshold on such an angle.

3.5. Line Feature Extraction
Line features are selected by determining the best fit for all points within the clustered segmented groups. This is accomplished in two steps: a) least-squares line fitting of each segmented group within a cluster, and b) computation of segment endpoints as the intersection points with neighboring line segments. The final result of this process is a set of line features (short ones in the case of non-structured environments) that approximate the contour of the surrounding obstacles.

3.6. Finding Virtual Angular Points
Any two lines with different tangents can form a virtual angular point. Let those two lines be given by:

y = k1 x + b1,  y = k2 x + b2    (15)

Therefore, the intersection point of those two lines can be obtained by:

xv = (b2 - b1) / (k1 - k2),  yv = k1 xv + b1    (16)

However, the number of virtual angular points increases dramatically, as they appear at the intersection of every pair of straight lines in an environment with many straight edges. In this situation, we can still obtain good virtual angular point features by the fuzzy-based uncertainty description: if the uncertainty of a virtual angular point is too big, the point is removed. By this method, the number of virtual angular points is decreased, and the well-chosen features are then used to position the mobile robot by the weighted mean technique.

3.7. Data Association
Assume the points in the ith scan and the (i+1)th scan, which need to be paired, are given, and that the increment measured by the inertial sensors is (Δx, Δy, Δθ). The strategy of pairing is first to map each point of the ith scan into the (i+1)th scan with the parameters obtained from the inertial sensors, and second to find the point in the (i+1)th scan that is closest to this mapped point. If there is only one such point, then those two points are matched. We use the following homogeneous transformation to map a point of the ith scan into the (i+1)th scan:

(17)

Let d denote the distance between the mapped point and a candidate point in the (i+1)th scan; we have:

(18)

Therefore, if d < T, then the two points are one pair of points in the ith scan and the (i+1)th scan, where T is the threshold of the distance.
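The feature steps above (least-squares line fitting, virtual angular points as line intersections, and odometry-aided nearest-neighbour pairing) can be sketched as follows. This is an illustrative sketch under our own conventions, not the paper's code: the slope-intercept line form, the pairing threshold value, and the direction of the odometry transform are assumptions.

```python
import math

def fit_line(pts):
    """Least-squares fit y = k*x + b to a cluster of points.
    (A robust version would handle near-vertical lines, e.g. via total
    least squares; omitted here for brevity.)"""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - k * sx) / n
    return k, b

def virtual_angular_point(line1, line2):
    """Intersection of y = k1*x + b1 and y = k2*x + b2 (cf. Eq. 16)."""
    (k1, b1), (k2, b2) = line1, line2
    if k1 == k2:
        return None  # parallel lines form no virtual angular point
    x = (b2 - b1) / (k1 - k2)
    return x, k1 * x + b1

def pair_points(prev_pts, curr_pts, dx, dy, dtheta, T=0.3):
    """Map feature points of scan i into scan i+1 using the odometry
    increment (dx, dy, dtheta), then pair each mapped point with its
    nearest neighbour within distance threshold T (cf. Eqs. 17-18)."""
    c, s = math.cos(dtheta), math.sin(dtheta)
    pairs = []
    for (x, y) in prev_pts:
        mx, my = c * x - s * y + dx, s * x + c * y + dy  # homogeneous transform
        best = min(curr_pts, key=lambda p: math.hypot(p[0] - mx, p[1] - my))
        if math.hypot(best[0] - mx, best[1] - my) < T:
            pairs.append(((x, y), best))
    return pairs
```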
4. Feature-based positioning for mobile robots
Any two pairs of matched points can position the robot. The following formulas, illustrated in Fig. 11, show this. The transform between two scans, expressed as a homogeneous matrix with translation (a, b) and rotation θ, is given by:

T = [ cos θ  -sin θ  a
      sin θ   cos θ  b
        0       0    1 ]    (19)

so that the relationship between one pair of points (p, p') can be given by:

p' = T p    (20)

The relationship between another pair of points (q, q') can be obtained in the same manner, so we have:

q' = T q    (21)

so that the unknowns a, b, and θ can be solved from the two pairs:

(22)

According to Equation 4, each break point and angular point has its fuzzy uncertainty description. The fuzzy uncertainty description for virtual angular points is given by Equation 11, which is determined by Equation 9. Assume that after point pairing there are pairs of break points, angular points, and virtual angular points, and that the fuzzy uncertainties associated with the two points of the ith pair are denoted fi1 and fi2. From two pairs of points we can compute the ith candidate position, and the accuracy associated with the ith position is given by:

(23)

Therefore, the positions obtained from those pairs of points are denoted by P_i, i = 1, ..., m2, where m2 is the total number of candidate positions obtained from paired points. The final position is then fused by the weighted mean:

P = ( sum_{i=1..m2} w_i P_i ) / ( sum_{i=1..m2} w_i )    (24)

where the weight w_i is determined by the fuzzy uncertainty of the ith candidate position.
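The weighted-mean fusion of the candidate positions can be sketched as follows. The paper determines the weights from the fuzzy uncertainties via Equations 23 and 24; the inverse-uncertainty weight used below is our assumption, as is the circular averaging of the heading.

```python
import math

def fuse_positions(positions, uncertainties):
    """Weighted-mean fusion of candidate poses (x, y, theta), each with a
    fuzzy uncertainty in (0, 1]; lower uncertainty gives larger weight.
    The 1/u weight is an assumed, illustrative choice."""
    weights = [1.0 / u for u in uncertainties]
    total = sum(weights)
    x = sum(w * p[0] for w, p in zip(weights, positions)) / total
    y = sum(w * p[1] for w, p in zip(weights, positions)) / total
    # headings must be averaged on the circle, not linearly
    cx = sum(w * math.cos(p[2]) for w, p in zip(weights, positions))
    cy = sum(w * math.sin(p[2]) for w, p in zip(weights, positions))
    return x, y, math.atan2(cy, cx)
```

Two equally uncertain candidates at (0, 0) and (2, 2) fuse to the midpoint (1, 1).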
5. Experiments
The iRobot B21r mobile robot is an indoor mobile robot system developed by iRobot Corporation. The mobile robot possesses a synchronized drive mode with four steer- and drive-wheels, achieving a maximum speed of 1m/s. An on-board host computer implements the control software required to control both the internal navigation parameters of the vehicle and the interaction of the mobile robot with its surrounding environment using its exteroceptive sensors. The mobile robot is equipped with incremental encoders which return the rotation angle of the wheel, from which an estimation of the relative displacement of the vehicle can be obtained. The exteroceptive sensors mounted on the robot are: a sonar ring, formed by 48 Polaroid ultrasonic sensors, which return distance information from the surrounding obstacles; a binocular stereo rig, formed by two off-the-shelf CCD cameras; and a laser rangefinder which delivers accurate, low noise range information from an actively scanned infrared laser beam.
Fig. 11. Vector transformation with the translation (a, b) and the rotation θ.
Fig. 12. iRobot running in the gym.
The mobile robot was programmed to follow a circle in the gym at Sexton Campus, as shown in Fig. 12.
We chose a circular curve because it is a typical and effective curve for revealing the slippage and integration-drift phenomena of a mobile robot. Suppose that the conditions of the floor are the same everywhere. We define the starting point and measure the ending point. Because of the evenly slippery floor, the actual path of the mobile robot is a spiral curve; therefore, given the starting point and the ending point, the actual path can be calculated. The laser rangefinder reading is filtered, so that incorrect points are removed and the remaining points are assigned uncertainty values. The filtering result of the 25th scan is shown in Fig. 13. Based on the break point finding algorithms, the break points in the 25th scan are found and shown in Fig. 14. Fig. 15 shows the angular point finding result in the 25th scan. The 25th scan is classified into 10 clusters, as shown in Fig. 16. Line features, shown in Fig. 17, are found in the 25th scan. Here, only lines with at least four points are kept, because this decreases the complexity of finding the virtual angular points. Virtual angular points are found and shown in Fig. 18. Finally, the position estimation is calculated, as shown in Fig. 19 and Fig. 20. From these figures, over a short distance the inertial sensors give high accuracy, and the position given by the inertial sensors is better than that of the algorithms in this paper; but as the distance increases, the inertial sensors accumulate large position errors, so that the algorithms in this paper become much better than the odometry method.
Fig. 13. Data filtering at the 25th scan.
Fig. 14. Break-point finding at the 25th scan.
Fig. 15. Angular-point finding at the 25th scan.
Fig. 16. Data clustering.
Fig. 17. Line-feature extraction at the 25th scan.
Fig. 18. Virtual angular point finding at the 25th scan.
Although the experimental setting is relatively simple, complex environments with obstacles will be considered in future work.
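The fuzzy-based positioning compared against the inertial sensors rests on a weighted-mean fusion of candidate pose estimates. A minimal sketch is given below; the triangular membership function and all names are illustrative assumptions, not the paper's exact membership design.

```python
def fuzzy_weight(uncertainty, u_max=1.0):
    """Map an uncertainty value in [0, u_max] to a weight in [0, 1]
    with a simple decreasing triangular membership function."""
    return max(0.0, 1.0 - uncertainty / u_max)

def fuse_poses(poses, uncertainties):
    """Weighted-mean fusion of candidate (x, y, theta) estimates,
    e.g. from break points, angular points, and virtual angular points.
    Note: theta is averaged naively, which is fine for small angles;
    a circular mean would be needed near the +/-pi wraparound."""
    weights = [fuzzy_weight(u) for u in uncertainties]
    total = sum(weights)
    if total == 0:
        raise ValueError("all candidates fully uncertain")
    return tuple(sum(w * p[k] for w, p in zip(weights, poses)) / total
                 for k in range(3))

# Three candidate poses with differing uncertainty values:
poses = [(1.0, 2.0, 0.10), (1.2, 2.1, 0.12), (0.9, 1.9, 0.08)]
fused = fuse_poses(poses, [0.2, 0.5, 0.3])
print(tuple(round(v, 3) for v in fused))  # -> (1.015, 1.99, 0.098)
```

Candidates with larger uncertainty contribute less to the fused pose, which matches the weighted-coefficient scheme described in the conclusion.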
Fig. 19. Comparison of the real trajectory, inertial sensor reading, and fuzzy-based positioning.
Fig. 20. Comparison of the real trajectory, inertial sensor reading, and fuzzy-based positioning.

6. Conclusion and future work
In this paper, the proposed research built a new dynamic error model that accounts for the detection distance and the detection angle. A new concept, the "virtual angular point", was defined and used for positioning. A fuzzy-based weighted-mean position fusion was presented, and the procedure for positioning was put forward. The proposed approach achieved a significant improvement in mobile robot positioning. The positioning procedure can be summarized as follows.
1. Before processing the laser rangefinder readings, it is important to filter the data. Filtering decreases the complexity of the whole positioning process and improves accuracy. Fuzzy-based uncertainty descriptions make it easy and efficient to represent the uncertainty associated with laser measurements.
2. Virtual angular points have higher accuracy than break points and angular points, and provide an innovative way to map between the two coordinate frames.
3. When fusing break points, angular points, and virtual angular points, a fuzzy method decides the weighted coefficients for the corresponding data and then computes the weighted mean position and orientation of the mobile robot. This intuitive method is easy to understand, and the uncertainty is simple to express.

AUTHORS
Weimin Shen* - Department of Electrical and Computer Engineering, Dalhousie University, Halifax, B3J 1Z1, Canada. E-mail: weimin.shen@dal.ca.
Jason Gu - Department of Electrical and Computer Engineering, Dalhousie University, Halifax, B3J 1Z1, Canada. E-mail: Jason.Gu@dal.ca.
Max Meng - Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong. E-mail: max@ee.cuhk.edu.hk.
* Corresponding author
ROBOTIC APPROACHES TO SEISMIC SURVEYING
Received 5th December 2008; accepted 14th April 2009.
Christopher M. Gifford, Arvin Agah
Abstract: Due to the remoteness and harshness of some environments, integrating mobile robotics with seismic surveying to automate the process is very attractive. Because robotic applications to seismic surveying have been extremely limited, this paper provides a base for sparking novel techniques that can potentially be employed in any environment, and potentially on other planets. It also presents a categorization of techniques involving robotic seismic automation. Traditional seismic methods are analyzed in terms of robotic automation possibilities and compared in terms of strengths, weaknesses, reliability, relative cost, and complexity. Futuristic seismic methods such as hybrid streamers and a multi-robot seismic surveying team are also discussed in detail, along with simulation results from a multi-robot grid formation study.
Keywords: robotic seismic, seismic automation, hybrid streamers, mobile robots, seismic surveying.
1. Introduction
At the University of Kansas, the Center for Remote Sensing of Ice Sheets (CReSIS) [9] performs polar research to gather data and model ice sheets in order to better understand global warming and its possible effects. We have designed, built, and utilized mobile robots to autonomously traverse polar terrain in Greenland and Antarctica. The problem we face is to increase the efficiency of seismic data acquisition in these types of environments; integrating automated technology into seismic methods can potentially improve and enhance the process. One of the sensors used to perform this research is a seismic sensor, or geophone. These highly sensitive geophones detect vibrations in the ground which can be recorded and processed into images. Such images can, for example, show characteristics of the subsurface, detect cracking (fault) locations, and provide information on what lies beneath the ice sheets. Although this research focused on a polar environment, the presented techniques could be applied in any environment.
Research in the field of robotics has been focusing on accurate sensing and autonomy, mostly in ordinary environments such as factories and homes. Robotic applications involving seismic surveying in harsh environments have, however, been limited. Not only are navigation and actuation difficult problems in severe environments; autonomous tasks are even more challenging [28]. Another important aspect of integrating robotics and seismic surveying is that it limits human involvement, the most costly portion of a survey. For harsh environments, this becomes extremely important for safety reasons. Furthermore, robotics increases precision and introduces repeatability into a time-consuming and complex human task.
The focus of this paper is robotic deployment and retrieval of seismic sensors. Comparing existing technology associated with seismic surveying, mobile robotics, and robotic manipulation provided insight into what would be reliable under many conditions. Because seismic deployment is labor-intensive, expensive in terms of time and cost, and possibly dangerous, autonomously performing such tasks using mobile robots can be beneficial. Therefore, the goal is to combine robotics research with seismic systems to autonomously image the subsurface. A classification of robotic deployment and retrieval techniques for seismic sensors is presented in this paper, accompanied by related challenges and results from a multi-robot grid formation simulation study.

2. Background
This section provides an overview of seismic sensors, seismic surveying, polar mobility, and our experience with autonomous polar robots. Integration of these efforts is the focus of the remainder of the paper.

2.1. Seismic Sensors and Surveying
Seismic sensors, also known as geophones, are extremely sensitive devices which convert ground vibrations into a series of analog signals, reflecting the composition of the material beneath the surface and the travel times of the measured seismic waves. They are activated by a seismic source, which can range from striking the ground to a very large explosion. The source sends elastic vibration energy down into and through the subsurface, where it eventually reflects and refracts back after interacting with the internal layers. Based on the travel times, wave velocities, and received signals from a series of geophones, seismologists can digitize, filter, and analyze the results to learn such facts as water table depth, fault location, and rock layer boundaries. When attempting to reconstruct the paths that the waves traveled, both refracted and reflected paths can provide structural information about the subsurface [24]. Refracted paths are principally horizontal, such as traveling between two rock layers. Reflected paths travel vertically and involve waves traveling initially downward that are reflected back to the surface by rock layer interfaces. The physical properties of the rocks and layers affect the travel times of seismic waves. These travel times, along with the waveform and spectra,
are then used to deduce information about the subsurface and internal layering. Various styles and models of seismic sensors exist for many applications on land, snow, and at sea. Most models employ a coil hanging from a spring in a magnetic field. When the case and spike are moved, the mass induces small currents into the coil as it moves about in the magnetic field. A geophone element is what contains this technology, which is then placed in a case and attached to a spike to plant into the surface. Primarily used for oil and gas exploration, seismic sensors can be utilized to determine subsurface composition at many scales. They are also available at several frequencies for differing situations, so as to capture lower or higher frequencies. Generally, the higher the frequency, the more expensive the unit becomes due to required sensitivity. A single geophone and linear array of deployed geophones are shown in Figure 1. Deployment of geophones translates into how each is inserted into the ground (or alternatively, rests on the surface). During manual deployments involving tens or hundreds of geophones, each is typically stepped on or hand-pressed into the ground. If necessary, holes are dug prior to deployment to create a shelter for the sensor to record its data. Instruments may rest on the surface, rather than being inserted or buried, if the surface is hard. Many factors affect deployment and the resulting quality of recorded data. The most important characteristic a geophone must exhibit when deployed is how well it is coupled with the ground. Coupling directly affects the data quality and frequencies that can be recorded. For desired coupling, the geophone spike must be tightly surrounded by the ground or snow and, in general, must be accurately deployed in all directions to acquire reliable data. Seismic arrays can be formed to acquire a map of the subsurface, allowing detailed imaging at many resolutions and depths. 
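The travel-time reasoning above can be made concrete with the standard single-flat-layer reflection relation from introductory seismology, t = sqrt(x² + 4d²)/v. This is a textbook formula offered as an illustration, not a description of the authors' processing chain; all function names and the example numbers are assumptions.

```python
import math

def reflection_travel_time(offset, depth, velocity):
    """Two-way travel time (s) of a wave reflected off a flat layer:
    t = sqrt(offset**2 + 4*depth**2) / velocity."""
    return math.sqrt(offset ** 2 + 4.0 * depth ** 2) / velocity

def depth_from_travel_time(offset, t, velocity):
    """Invert the relation above to estimate reflector depth (m)
    from a measured travel time and an assumed wave velocity."""
    return math.sqrt((t * velocity) ** 2 - offset ** 2) / 2.0

# Geophone 100 m from the source, layer 200 m deep, v = 2000 m/s:
t = reflection_travel_time(100.0, 200.0, 2000.0)
print(round(t, 4))                                         # -> 0.2062
print(round(depth_from_travel_time(100.0, t, 2000.0), 1))  # -> 200.0
```

Repeating the inversion across many geophone offsets is what lets a survey map layer boundaries at depth.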
Higher frequencies and close (sub-meter to tens of meters) spacing result in a highly detailed, shallow image of the subsurface. Deeper imaging requires sparse deployment and long distances from a powerful source, with spacing ranging from hundreds to thousands of meters. Furthermore, high-frequency acquisition translates into more accurate images. In order to be reliable, geophones must be arranged in a centimeter-level precision grid of equal spacing while being oriented no more than 10° from the Earth's gravitational vertical. Achieving this level of precision requires tedious attention to detail that can be cumbersome for a human, and it remains an extremely difficult task for mobile robots to perform. If geophones are positioned in a straight line, a seismic survey will result in a two-dimensional (2D) image of the subsurface. Similarly, if the geophones are aligned in a square or rectangular grid pattern, a three-dimensional (3D) view of subsurface characteristics can result. A fourth dimension, namely time, can be introduced to image movement of the subsurface and flow of materials.

2.2. Mobility and the MARVIN II polar rover
Applications for automation have increased over the years. Robots have been used for planetary exploration, homeland defense, and surveillance operations. Environments can range from indoors (factory, home, or museum) to outdoors (deserts and remote locations, such as polar regions). The main application for this work is a polar environment, where robots typically employ tracks for reliable mobility. Although wheels are the most common form of locomotion, they perform poorly over uneven terrain. Traction can also be an issue on predominantly ice or snow surfaces, as wheels offer less contact surface area. Unless the wheels can pivot, obstacles higher than the radius of the robot's wheels can cause difficulty. Wheels are, however, mechanically simple and easy to construct. Tracks represent a more complex and heavier mobility option, but are inherently less susceptible to environmental hazards and can negotiate larger obstacles. The ability to travel on snow and ice makes tracks the desired option for polar travel, as they exhibit a larger contact surface area with the ground. Tracks are, however, inefficient due to friction during turning and slippage within the tracks themselves.
The MARVIN II autonomous polar robot at CReSIS is a fully-tracked, automated All-Terrain Vehicle (ATV) built for towing radar sleds and gathering data in polar environments. Autonomous navigation is performed using high-precision GPS, with which it attempts to drive as straight as possible between a series of waypoints. The path precision of this robot is at the meter level, but decimeter accuracy can be achieved for data after post-processing [3]. MARVIN II is mentioned in later sections as the main robot for certain robotic approaches to seismic surveying. Figure 2 shows the second-generation MARVIN II polar robot in Antarctica in 2006, and Table 1 lists platform specifications. It has been successfully deployed to support radar experiments in Antarctica, as well as long-term survival research for polar environments [2].
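The paper does not detail MARVIN II's waypoint controller, but driving "as straight as possible between a series of waypoints" is commonly done with proportional heading correction. The sketch below is a generic illustration under that assumption; the gain value and function names are hypothetical.

```python
import math

def heading_to_waypoint(pos, waypoint):
    """Bearing (radians) from the current (x, y) position to a waypoint."""
    return math.atan2(waypoint[1] - pos[1], waypoint[0] - pos[0])

def steering_command(pos, heading, waypoint, gain=1.5):
    """Proportional steering: turn rate proportional to heading error,
    wrapped to [-pi, pi] so the robot always takes the shorter turn."""
    error = heading_to_waypoint(pos, waypoint) - heading
    error = math.atan2(math.sin(error), math.cos(error))  # wrap to [-pi, pi]
    return gain * error

# Robot at the origin facing east (0 rad), waypoint due north:
cmd = steering_command((0.0, 0.0), 0.0, (0.0, 10.0))
print(round(cmd, 3))  # -> 2.356 (positive: turn left toward the waypoint)
```

With RTK/DGPS-grade positions, the same loop yields the meter-level path precision quoted for the rover.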
Figures 3 and 4 show the robot involved in towed radar experiments in Antarctica during the 2005-2006 field season.
Fig. 1. Conventional spiked geophone (left) and several deployed by inserting the spikes into the surface (right).
Fig. 2. The MARVIN II polar robot in Antarctica in 2006, used to autonomously gather radar data of ice sheets.
3. Related work
Very little work involving robotic deployment and retrieval of seismic sensors has been done to date. However, work done in ordinary environments can provide helpful information. As stated in [19], the future of seismic surveying on land is the elimination of cables. By eliminating cables, surveying becomes more cost-effective and efficient, and seismic networks can be lighter and easily scalable in terms of network size, structure, and shape. By increasing the overall productivity and abilities of a network, data acquisition will improve in the long run. The following related works are closely relevant to the research presented in this paper.
The University of Kansas Geology Department recently developed an "autojuggie" [29] capable of planting 72 geophones in 2 seconds using a hydraulic press and structured array system. Several variations of the autojuggie have also been developed and field-tested [27]. These variations include automated deployment using farm equipment and deployment of closely-spaced lines of seismic sensors for ultra-shallow imaging. Structures were built to simultaneously press all sensors into the ground and to simultaneously retrieve all geophones when finished. Care was also taken to reduce crosstalk between sensors through the deployment structures. These approaches are still human-operated in that they use existing farm equipment as a means for deployment and retrieval, and their scalability and robustness are limited.
Land streamers are a method inherited from the marine seismic community, in which a series of geophones is deployed by dragging them along the surface. Acquisition takes place when stopped, with all geophones typically resting on metal plates rather than being physically inserted into the ground. This increases deployment efficiency by reducing the time required for insertion and orientation of the sensors, as well as reducing transportation time from one site to another.
In [26], multiple land streamers were pulled alongside each other at the same time using an ATV. Individual land streamers were spaced equidistant from one another on a towing structure so as to create a wider 2D array. Results were acceptable for relaxed seismic requirements, but would not be applicable under higher-frequency situations. Other efforts have also been published [15],[21],[25] that employed single streamers in a polar setting, or were specifically designed for shallow data acquisition [11],[12]. Survey requirements and weather conditions dictated the geophone spacing, streamer length, and materials used to construct the streamers. Several streamer designs have been attempted in these works, ranging from the 1970s to the present. The Kansas Geological Survey made their land streamer more rugged by encasing it and all wiring in a fire hose [20].
NASA, working with Georgia Tech and Metrica, Inc., developed an Extra-Vehicular Activity Robotic Assistant [7] capable of being handed a geophone and inserting it into soil using a seven degree-of-freedom manipulator and a three-fingered gripper [22]. The 4-wheeled mobile robot could not perform the full deployment task and was not made to retrieve the planted geophones. The main purpose of this robot was to assist activity-suited humans in the field by performing some tasks on its own. A trailer containing the geophones was pulled so the human could hand them to the robot or store various other supplies.

Table 1. MARVIN II Platform Specifications.
Dimensions (L x W x H): 2.4 m x 1.6 m x 1.8 m
Mobility:               Tracks (1600 in² ground contact)
Track Width:            394 mm (15.5 in)
Engine:                 34 HP Diesel (950 cc, 3-cylinder)
Tank Size:              10 gallons (4-8 hours runtime)
Transmission:           Hydrostatic
Ground Clearance:       230 mm (9 in)
Weight:                 717 kg (1580 lb)
Hauling Payload:        454 kg (1000 lb)
Towing Capacity:        454 kg (1000 lb)
Fig. 3. MARVIN II polar robot preparing for a bi-static radar experiment.
For agricultural applications, robotic pickers, planters, row croppers, and harvesters have been incorporated into existing farm equipment to increase autonomy and control. Vision, size recognition, and color comparisons are being incorporated for better accuracy. Other robotic agriculture applications and designs are outlined in [13],[14]. As seismic sensors can be distributed like a sensor network, the field of wireless sensor networks [4],[5] will play a pivotal role in the future of wireless seismic surveying.
4. Robotic approaches to seismic surveying
Many possible mechanisms exist for the deployment and retrieval of seismic sensors [16],[17]. Automating the process makes detailed imaging much more reliable. Five categories of robotic approaches were analyzed in terms of cost, complexity, advantages, and disadvantages. The last two categories have not yet been attempted, and represent promising future methods for seismic surveying.
1. Individual Deployment
2. Array Deployment
3. Land Streamers
4. Hybrid Streamers
5. Multi-Robot Seismic Surveying Team

4.1. Individual Deployment
Individual deployment covers those methods that deploy and retrieve a single geophone at a time. The mechanism could be a robotic arm, crane-like apparatus, air-powered device, planter, or any other form of pick-and-place device. In many planting, weeding, and picking projects, robotic manipulators are utilized to help automate the process. This type of mechanism is responsible for pressing into, orienting in, and pulling from the ground all seismic sensors, and for placing them in a transport area, charging station, or organized rack. Size, shape, and weight influence the overall design of the platform(s) performing the task. Issues with this category involve orientation, positioning, and weather.
Fig. 4. CReSIS MARVIN II polar robot turning while pulling a radar apparatus.
Autonomously dealing with and keeping track of the tangling maze of seismic cable also represents a formidable challenge. A positive aspect of this approach is the millimeter repeatability and precision that manipulators offer. However, finding and retrieving geophones, manipulator payload, required pushing power, and gripping the geophone are all major difficulties inherent to this approach.

4.2. Array Deployment
Array deployment involves an array structure to deploy and retrieve a set of geophones, their cables, and all necessary storage equipment. Seismic sensors would be pre-set into the array, taken to the field location, and simultaneously (e.g., hydraulically) pressed into the ground at equal spacing, tilt, and elevation. When ready for retrieval, the structure is raised to remove the sensors from the ground. Multiple arrays could be pieced together to record larger areas. The design could also permit variable sensor spacing to perform imaging at different resolutions. Geophone spacing, orientation, and deployment depth are therefore controlled for the entire seismic array. Scalability in terms of size and imaging resolution is lacking, however, because the array is pre-built, and this approach is also still wired.

4.3. Land Streamers
The idea of land streamers came from marine seismic surveying, which involves constantly towing marine streamers under water and using pulse guns as the sound source. Land streamers are a non-insertion seismic method in which geophones are wired in series and towed on the surface to acquire seismic data. When the recording location is reached, the towing vehicle stops so that seismic acquisition can take place. One or more of these streamers can be towed in parallel to cover larger areas and perform 2D or 3D imaging. Autonomous seismic acquisition can then be accomplished with, for example, the MARVIN II robot. The Webots [10] simulation environment was used to test GPS waypoint navigation, driving, and turning algorithms, whereas MSC visualNastran [23] was employed to simulate the pulling and drag abilities of our autonomous rovers. Figure 5 shows an MSC visualNastran simulation involving three streamers, where each box represents an enclosed geophone. This mechanism could extend to cover a long distance behind the rover as well as widen coverage by using multiple streamers. An attractive aspect of this category is the ability to choose and change the spacing of sensors within and between streamer lines. Other advantages of this approach are its ease of transport, efficiency, simplicity, and lack of need for geophone insertion. The main advantage of these types of systems is speed and the amount of seismic data that can be recorded with fewer personnel. The unattractive characteristic of this approach is its lower coupling, which may cause the geophones to miss higher frequencies, resulting in less detailed seismic images. Some research has shown that, in some environments, the performance of conventional geophones and land streamers is very similar.

4.4. Hybrid Streamers
It has been proposed that a hybrid combination of land streamers with increased coupling would be a good alternative [16]. There are several design options to increase hybrid streamer coupling:
1. Employ a trenching or plowing attachment to prepare the ground, so the streamers can be dragged below the surface for wind protection and rest flat for orientation purposes;
2. Add weight to each streamer node;
3. Change plate size and/or geometry;
4. Increase the surface area the plates share with the ground;
5. Heat streamer plates in snowy/polar environments so the melt can refreeze to ice, giving a more rigid surface contact for the plates; and
6. Drill the geophone into the ground like a threaded screw.
Fig. 5. Simulation image of an autonomous robot towing a three-streamer array, used for studying the towing of streamers and how turning affects strain on the towing structure and travel of the streamer components.
Accordingly, a furrowing, plowing, or trenching apparatus could be attached to a mobile robot. The robot would power all equipment, have seismographs onboard for seismic data conversion and storage, and have a data cable acting as both the data transmission and communication medium for the entire system.
The simulation images in Figure 6 illustrate several variations and configurations that could be utilized. One or more robots could be used and each could tow one or more parallel streamer lines. A single robot can tow a single hybrid streamer, or multiple robots can tow several parallel hybrid streamers and work together to image larger areas. The advantages of such an approach are better coupling, faster travel, and the potential to collect much more data with far fewer personnel involved. Complex coordination, communication, and node collisions are essentially avoided and there is no added attachment/detachment complexity for the streamer to the robot. Disadvantages to this approach are a single point of failure and overcoming coupling issues. Hybrid streamers represent a new seismic technique that has not yet been fully designed or attempted in modern surveying. CReSIS is in the process of designing and implementing these hybrid techniques [8] for polar deployment.
4.5. Multi-Robot Seismic Surveying Team
Based on the demonstrated success of multi-robot systems (distributed robotics) [1],[6], we have proposed the use of a multi-robot seismic surveying team. This approach involves a team of several smaller autonomous mobile robots that deploy geophones and traverse the environment. They work together to precisely align into a seismic grid pattern. Each robot represents a mobile node that deploys and retrieves its own geophone. Power is provided by onboard sources, and each robot contains the necessary digitizing, storage, and communication hardware for seismic acquisition. A mobile robot can inject a geophone into, or place it onto, the ground while protecting the deployed sensor from wind and weather using an environmental enclosure. The team can be relatively small, such as a 25-robot team forming a 5x5 seismic grid, or extremely large, consisting of potentially hundreds of robots forming grids of any size, shape, and spacing for different seismic resolution applications.
There are various ways that the team could move into position. Robots could move one at a time in a certain order, by rows or columns, or dynamically align while all moving at once. Positioning one robot at a time takes longer, but could help increase accuracy and reduce collisions [18]. Dynamically forming the seismic grid would take less time and would likely be a more flexible solution, but would inherently be less precise. Figure 7 shows that a team of 25 mobile robots could be transported to a location on a trailer by a larger robot. Once there, the team could leave the trailer and begin forming the desired shape at the desired spacing. For example, Figure 8 illustrates a shape formation scenario in simulation. The robots coordinate which GPS positions they travel to based on a desired grid shape and spacing, as provided by the main robot.
Fig. 6. Simulation images illustrating variations of hybrid streamers towed by mobile robots: (a) single robot, single hybrid streamer; (b) single robot, hybrid streamer array; (c) multiple robots, single hybrid streamers; (d) multiple robots, hybrid streamer arrays.
Fig. 7. Simulation image of a team of 25 mobile robots leaving a trailer pulled by a larger robot.
Fig. 8. Simulation images showing a team of mobile robots forming a square seismic grid, one-by-one from top-right to bottom-left.

Figure 9 shows completely formed grids, illustrating that spacing can be dense (Figure 9a) or more sparse (Figure 9b). Many grid shapes can be formed, ranging from lines to rectangles to squares. Figures 9a and 9b show a square grid pattern, while Figure 9c shows a rectangular seismic grid. These simulation images show that spacing as well as grid geometry can be varied depending on the survey requirements.

The advantages of the multi-robot seismic sensor network approach are that it would be faster than a human team for large arrays, removes cumbersome wires from the system, and allows safe remote sensing while being able to dynamically adjust to the environment. This distributed methodology removes the single point of failure. This is also a new seismic method that has not yet been attempted, mainly because it remains too challenging at this time. The main bottlenecks lie in highly precise alignment of a team of mobile robots at any scale and in any environment [18], along with properly aggregating the seismic data. This is the most desirable approach based on its mobility and ability to image at any resolution, shape, and scale. It might also provide faster network assembly, especially for a large and remote team. Dropping the robot team from an aerial vehicle to assemble, record, and perform multiple missions represents a futuristic option in this category. A design has been proposed for such a mobile robot team, along with precise grid formation schemes such that the team could form a precise seismic grid one robot at a time or in a dynamic fashion [16],[18]. This category of seismic sensing has not been formally performed, but is currently being studied at CReSIS.

5. Multi-Robot grid formation simulation study
The results presented in this section are based on a simulation study, in which each mobile robot has its own GPS receiver and the ability to communicate with a larger, main robot. Each team robot is small, uses four wheels for mobility, and has an onboard battery. Precise robot positioning and grid alignment are achieved using a GPS-coordinate-based incremental algorithm, which essentially removes collisions while forming grids of chosen shapes and spacings, one robot at a time. As discussed later, this is an example of sacrificing overall time to essentially eliminate robot collisions. More detail can be found in [16],[18].

A larger robot is assigned to transport the team to a remote location on a trailer, if the team is small enough. A GPS base station is located nearby or onboard the carrier robot so each team robot can use differential GPS techniques (e.g., RTK or DGPS) for distance correction and precise positioning. If the team is too large, it can perform the egress (traveling out to the recording location) and ingress (traveling back to base) on its own using GPS waypoints.
5.1. Communication, context, and coordinates
Communication between the main robot and the team robots is established using wireless radio. The main robot is in control of the entire operation, telling each robot specifically where to go and when. The main robot therefore broadcasts a robot ID and context information to all robots. The robot called upon performs the desired action and reports back when it has completed that action (e.g., traveled to the provided waypoint). If a certain time buffer is exceeded before a robot reports back, that robot is assumed to have failed, become stuck, or moved out of communication range; actions could then take place to find or replace it. Here, the main robot is in complete control of computing the grid formation's shape, spacing, and the accompanying GPS coordinates for each mobile robot in the team. Individual robots then only need to travel to the communicated GPS coordinate and fine-tune for precision. This gives the main robot the ability to dynamically change grid shape and spacing, so that a team could perform multiple formation sessions, each potentially of different shape and spacing. Previous figures have shown highly precise grid formations, demonstrating the differences in spacing and formation shape.
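This report-back protocol with a time buffer can be sketched as follows — a minimal simulation with hypothetical function names, standing in for the real wireless radio exchange:

```python
def form_grid(robot_ids, targets, send_and_wait, timeout_s=30.0):
    """Main-robot control loop: command one robot at a time to its grid
    coordinate and collect acknowledgements.

    send_and_wait(robot_id, target, timeout_s) abstracts the radio
    exchange; it returns the elapsed time to the robot's report-back,
    or None if no report arrived within the time buffer. Robots that
    fail to report are flagged for search or replacement."""
    failed = []
    for robot_id, target in zip(robot_ids, targets):
        elapsed = send_and_wait(robot_id, target, timeout_s)
        if elapsed is None or elapsed > timeout_s:
            failed.append(robot_id)  # failed, stuck, or out of radio range
    return failed

# Simulated radio: robot 7 never reports back; all others ack in 1 s.
def fake_radio(robot_id, target, timeout_s):
    return None if robot_id == 7 else 1.0

stragglers = form_grid(range(25), [(i, i) for i in range(25)], fake_radio)
```

The timeout value here is illustrative; a real value would depend on travel distances and radio range. The point of the sketch is the centralized pattern: the main robot owns all coordinates and sequencing, and robots only execute and acknowledge.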
5.2. Precision and positioning
By knowing the relative positions of all team robots with respect to the main robot, spacing between robots can be guaranteed within a threshold. Thus, given high-precision GPS receivers and a nearby base station, the resulting robot grid is guaranteed to be of high precision with a limited amount of positioning error. GPS error for each team robot and for the main robot results in errors in precise absolute positions. Post-processing of location data (GPS logs) can provide positioning accuracy to within several centimeters of the actual robot location, given that GPS devices capable of this level of accuracy are used. In the simulations, GPS receivers have accuracy on the level of several centimeters with randomized positioning error. GPS coordinates are ordered by the main robot so that collisions are essentially avoided, positioning robots from the furthest locations to the closest, row by row, from one diagonal corner of the grid (top-right) to the opposite diagonal corner (bottom-left).

5.3. Assumptions
As there are a large number of variables in such a simulation, some assumptions have to be made to make comparisons more reliable:
- All GPS receivers are precise to several cm;
- Robots are initially placed 0.5 meters apart;
- All robots turn at 5% of full speed;
- All driving speeds are constant (non-changing);
- Robots can stop immediately (instantly);
- Battery usage: 99% motors, 1% CPU;
- No wheel slippage or wind force.
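The furthest-first ordering described above can be sketched as follows — an illustrative simplification (sorting by distance from the team's starting area) rather than the authors' exact row-by-row scheme [16],[18]; function and parameter names are hypothetical:

```python
def deployment_order(waypoints, base=(0.0, 0.0)):
    """Order grid coordinates so robots are sent to the positions
    furthest from the team's starting area first (from the opposite
    diagonal corner inward). Filling the far positions first means a
    newly dispatched robot never has to drive past an already-parked
    one, which is how incremental deployment essentially eliminates
    collisions at the cost of moving one robot at a time."""
    bx, by = base
    return sorted(
        waypoints,
        key=lambda p: (p[0] - bx) ** 2 + (p[1] - by) ** 2,
        reverse=True,
    )

# 3x3 grid at 10 m spacing, team starting near the origin.
order = deployment_order([(x, y) for x in (0, 10, 20) for y in (0, 10, 20)])
```

The far corner (20, 20) is visited first and the near corner (0, 0) last, matching the top-right-to-bottom-left filling direction described in the text.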
Fig. 9. Simulation images of completed robot seismic grids, demonstrating the ability to form various shapes at various spacings: (a) 5x5 square robot array, close spacing; (b) 5x5 square robot array, larger spacing; (c) 3x8 rectangular robot array, close spacing.

All simulations start with all 25 mobile robots positioned 0.5 meters apart in both the X and Y directions. Keeping this initial formation constant ensures proper comparison of variables and results against one another. Some grid shapes do not involve all 25 robots; robots that are not used can serve as backups, as replacements in case of failure, or for direct use in a subsequent, different grid formation.

5.4. Experiments
The goal of this research is to gain a better understanding of the relationships between traveling speed, grid spacing, formation time, energy usage, grid shape, and positioning error. Using an incremental deployment process, high precision can be attained so that these factors can be reliably compared and true relationships studied. Experimenting with several grid formation shapes can provide information on which are more efficient in terms of robot energy usage and average travel time. The grid spacing between robots was kept constant at 10 meters for all experiments in order to concentrate on the other relationships. As square, rectangular, and linear formations are the building blocks for most grid applications, the following formations were simulated:
1. Squares: 5x5, 4x4, 3x3, 2x2
2. Rectangles: 8x3, 6x4, 4x3, 3x2
3. Lines: 4x1, 8x1, 16x1, 24x1
For each of these grid shapes, the driving speed was varied to study its effects on positioning accuracy, average robot travel time, and average battery usage. The speed spectrum was divided into five sections: Very Slow, Slow, Normal, Fast, and Very Fast. In terms of accuracy, driving as slowly as possible (e.g., 1% of full speed) would produce the highest-precision alignment possible, at the cost of increased formation time. On the other hand, driving too fast to the destination would likely yield a noticeably higher level of error. Thus, it is most informative to compare the Slow, Normal, and Fast speeds. The following speed variations were incorporated into the simulations:
1. Slow: 25% of full speed;
2. Normal: 50% of full speed;
3. Fast: 75% of full speed.
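The resulting experimental design — the 12 shapes listed above at the 3 retained speed levels, each combination repeated four times as described later — can be enumerated directly (a sketch; names are illustrative):

```python
from itertools import product

# 12 grid shapes: squares, rectangles, and lines (rows x columns).
SHAPES = ["5x5", "4x4", "3x3", "2x2",        # squares
          "8x3", "6x4", "4x3", "3x2",        # rectangles
          "4x1", "8x1", "16x1", "24x1"]      # lines
# Retained speed levels, expressed as a fraction of full speed.
SPEEDS = {"Slow": 0.25, "Normal": 0.50, "Fast": 0.75}
REPETITIONS = 4  # repeated runs to average out random GPS error

runs = list(product(SHAPES, SPEEDS, range(REPETITIONS)))
# 12 shapes x 3 speeds x 4 repetitions = 144 simulation runs in total.
```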
Each combination of these variations was simulated four times, to obtain more representative averages for battery usage, travel time, formation time, and error. GPS error was random and therefore produced different error results in each iteration. A total of 12 shapes × 3 speed variations × 4 repetitions = 144 simulations were performed to gather the data for comparison.

Positioning error is calculated for each robot as the Manhattan distance between the robot's final location and the desired grid GPS coordinate. Using this error measure, the positioning error for each robot, the overall team error, and the average error given the driving parameters can be computed. For example, if (x1, y1) is the desired coordinate and (x2, y2) is the final robot position, the Manhattan distance (positioning error) is |x1 – x2| + |y1 – y2|.

5.5. Results and analysis
This section discusses the simulation results, comparing formation shapes and speeds with respect to total formation time and positioning error (accuracy tradeoff). Percentages are presented as the increase/decrease amounts for comparisons. The desired grid spacing was kept constant at 10 meters for all simulation experiments.

5.5.1. Formation completion time
Table 2 shows speed variation results for formation completion time in terms of percentages. The variations of doubling speed, tripling speed, increasing speed from Slow to Normal, and increasing speed from Normal to Fast are listed. All variations showed a decrease in time (–) with increasing speed. The table shows that, in general, the more linear the shape extending away from the robot team, the longer it took to completely form. Also, a larger speed-up was experienced when increasing the driving speed from Slow to Normal than for the same 25% speed increase from Normal to Fast.
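The Manhattan-distance error measure defined above reduces to a few lines — a sketch with hypothetical helper names, not the authors' simulation code:

```python
def manhattan_error(desired, actual):
    """Positioning error: |x1 - x2| + |y1 - y2| between the desired
    grid coordinate and the robot's final position."""
    (x1, y1), (x2, y2) = desired, actual
    return abs(x1 - x2) + abs(y1 - y2)

def team_error(pairs):
    """Overall team error and average per-robot error, computed from
    (desired, final) coordinate pairs for each robot in the grid."""
    errors = [manhattan_error(d, a) for d, a in pairs]
    return sum(errors), sum(errors) / len(errors)

# Two robots, each a few centimeters off its target coordinate.
total, average = team_error([((0.0, 0.0), (0.02, -0.01)),
                             ((10.0, 0.0), (10.01, 0.03))])
```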
Formation time was approximately 42% slower for square formations, 41% slower for rectangular formations, and 40% slower for linear formations when increasing speed from Slow to Normal compared to increasing from Normal to Fast. Table 3 shows the five fastest and slowest grid shapes and speeds found during simulation. Driving slower causes formation time to increase because the mobile robots take longer to reach their destination. This table also shows that formations that are more linear took longer to complete, especially when more robots were involved. For example, an 8x3 rectangular grid took longer to form than a 6x4 rectangular grid, even though they both used 24 robots, because the formation extended the furthest from the initial team location. Figures 10 to 12 show formation time versus speed comparisons for square, rectangular, and linear grids. These graphs validate the previously discussed analysis.
Table 2. Formation Time: Speed Variation Results. Decrease in formation time (–) resulted from speed increase.

Variation        Square   Rectangle   Linear
Double Speed     -41%     -43%        -45%
Triple Speed     -55%     -57%        -59%
Slow to Normal   -41%     -43%        -45%
Normal to Fast   -24%     -25.5%      -27%
Table 3. Formation Time: Five Fastest and Five Slowest.

Fastest Times
Rank   Grid Shape      Speed
1      2x2 Square      Fast
2      4x1 Line        Fast
3      2x2 Square      Normal
4      4x1 Line        Normal
5      3x2 Rectangle   Fast

Slowest Times
Rank   Grid Shape      Speed
32     6x4 Rectangle   Slow
33     8x3 Rectangle   Slow
34     16x1 Line       Slow
35     24x1 Line       Normal
36     24x1 Line       Slow
5.5.2. Robot travel time and energy usage
Average robot travel time was directly related to total formation time; in fact, the percentages given for formation time speed increases were exactly the same as those for average robot travel time. The maximum and minimum travel times varied depending on shape and spacing. Increasing travel speed lowered average robot travel time just as it lowered formation time for all shapes, and as grid spacing increased, average robot travel time increased. The robots that had to travel the furthest to reach their desired locations used more energy, as energy/battery usage was directly related to travel time. In general, the longer it takes to complete the formation, the higher the average robot travel time will be.

5.5.3. Positioning precision
For square formations, positioning error was somewhat erratic. For larger square dimensions, doubling speed increased positioning error by much less than for smaller square dimensions; doubling travel speed was therefore a safer option for larger squares. Tripling speed for square formations was also variable, ranging from a 40% to a 60% increase in positioning error. Simulation results again showed that larger square dimensions were less susceptible to an increase in positioning error when speed was increased. Increasing speed from Normal to Fast showed the opposite, however, with smaller square dimensions being much less affected.

Positioning error was somewhat erratic for rectangular formations as well. When speed was doubled, positioning error increased in the range from 25% to 33%, and increased the most for the longest (most linear) and shortest (smallest, most square-like) grids. Tripling speed behaved in a similar manner, with positioning error increasing in the range from 28% to 57%; the longest and shortest grid formations again had the largest positioning error increases. Increasing speed from Normal to Fast exhibited a trend, however, where the grids that were longest (least square-like) had the smallest positioning error increase (10%, compared to 19% for the smallest, most square-like grids).

Linear formations provided more trend-like results. Doubling speed increased positioning error much less for longer lines (20%) than for shorter lines (37%); longer grid lines were thus less susceptible to error increase when speed was increased. Tripling speed behaved similarly for all simulated linear formations, with longer lines having error increased by about 45% and shorter lines by about 52%. On the other hand, increasing speed from Normal to Fast increased positioning error more for longer lines (21%) than for shorter lines (9%).

Table 4 shows the five lowest and five highest positioning errors found during simulation. The table also shows that overall precision decreased as more robots were involved in the grid formation. As described in the next section, these results confirm that travel speed greatly affects positioning error (precision). Figures 13 to 15 illustrate error versus speed comparisons for square, rectangular, and linear grids. These graphs validate the previously discussed analysis.
5.5.4. Speed comparison
As shown in the previous sections, speed greatly affected grid precision, formation time, average travel time, and energy usage. The faster a robot traveled, the lower the resulting precision, formation time, average travel time, and energy usage. Conversely, the slower a robot travels, the more precise the grid will be; however, slower speeds also translate to more energy use and longer travel and formation times.

Table 4. Positioning Error: Five Lowest and Five Highest.

Lowest Errors
Rank   Grid Shape      Speed
1      4x1 Line        Slow
2      8x1 Line        Slow
3      3x2 Rectangle   Slow
4      2x2 Square      Slow
5      16x1 Line       Slow

Highest Errors
Rank   Grid Shape      Speed
32     6x4 Rectangle   Normal
33     4x4 Square      Normal
34     5x5 Square      Normal
35     6x4 Rectangle   Fast
36     5x5 Square      Fast
Fig. 10. Square Grid: Formation Time Versus Speed Comparison.
Fig. 11. Rectangular Grid: Formation Time Versus Speed Comparison.
Fig. 12. Linear Grid: Formation Time Versus Speed Comparison.
Fig. 13. Square Grid: Error Versus Speed Comparison.
Fig. 14. Rectangular Grid: Error Versus Speed Comparison.

Driving slowly decreased overall positioning error (increased precision), as seen in Table 4, where the most precise (lowest error) formations all traveled at Slow speed. The least precise (highest error) formations all traveled at higher speeds. Moving faster created more error because the faster a robot moved, the more ground it covered between sensor updates, especially in the presence of random GPS error. Thus, in general, increasing travel speed lessened precision, though not in a truly linear fashion. The effects of speed can also be seen in Tables 2, 3, and 5.
Fig. 15. Linear Grid: Error Versus Speed Comparison.

5.5.5. Doubling shape dimensions
Table 5 shows the effects of doubling shape dimensions and increasing speed on formation time, average travel time, and overall positioning error. The results are expressed as percentages, all of which increased (+). The shape dimensions used for this experiment were 2x2 to 4x4 (square), 3x2 to 6x4 (rectangle), and 8x1 to 16x1 (line). The table shows that, in general, doubling shape dimensions translated to an increase in all major parameters. Interestingly, driving faster proved to steadily decrease the effects of increased time and error. Also, the more linear the shape, the larger the effect of doubling dimensions on formation and average travel times. As for positioning error, an increase in speed had more of an effect on square formation shapes than on the others; rectangular and linear shapes were not affected as much, with linear formations staying relatively the same. The odd result in this table is the slight increase in positioning error from Normal to Fast speeds; the random GPS error introduced for the experiments may have caused this. Note that this table is not stating that an increase in speed translates to a decrease in positioning error. Rather, these results express the direct comparison between results for shape dimensions that are doubled in size. For example, increasing square dimensions from 2x2 to 4x4 at Normal speed produced a 15% increase (+) in positioning error.
6. Comparison and discussion
Each of the presented robotics-based seismic surveying categories has its own set of distinct advantages and disadvantages. Depending on the desired application, environment, and scale of the surveying mission, certain categories may be more beneficial. Individual deployment would likely be more suitable for an environment with a soft surface and little wind and weather that could damage the deployment and retrieval mechanism. Keeping the deployment mechanism simple (e.g., a linear actuator) also helps reduce the complexity inherent to manipulators with several degrees of freedom. A flat surface would also reduce complexity, removing the need to worry about geophone orientation.

Array deployment is best suited for shallow surveys and environments that have a softer surface. Smaller surveys with a static number of geophones, array shape, and spacing fit this category very well. Limiting the number of channels in the survey also makes the array structure lighter and easier to carry or tow from one location to another.

Land streamers are ideal for missions which can tolerate lower coupling and can afford to sacrifice some of the higher-frequency signals from the seismic source. As they are efficient and can be towed on the surface, this category can be tailored to perform 2D or 3D seismic data collection at reasonable scale.
Table 5. Effects of Doubling Shape Dimensions. Increase in time and error as grid shape doubled.

Square: 2x2 to 4x4      Slow     Normal   Fast
Formation Time          +518%    +478%    +450%
Avg. Travel Time        +55%     +45%     +38%
Positioning Error       +34%     +15%     +1%

Rectangle: 3x2 to 6x4   Slow     Normal   Fast
Formation Time          +548%    +508%    +479%
Avg. Travel Time        +62%     +52%     +45%
Positioning Error       +36%     +29%     +21%

Linear: 8x1 to 16x1     Slow     Normal   Fast
Formation Time          +241%    +227%    +216%
Avg. Travel Time        +71%     +64%     +58%
Positioning Error       +8%      +5%      +6%
Hybrid streamers exhibit the ability to acquire large amounts of data in less time with fewer personnel involved. Eliminating or greatly reducing the need for geophone insertion makes some of the techniques much less complex and more reliable. No matter the environment, these streamers can be towed along the ground by a robotic platform to autonomously gather seismic data. The benefits of increased coupling and protection make this category more suitable for missions that require fast, efficient acquisition with better coupling and higher-frequency content. Hybrid streamers are a step beyond current technology and could therefore provide results in the near future.

A mobile robot seismic surveying team represents the most futuristic and advanced method of robotically acquiring seismic data on a large scale. Compared to other methods, a team of mobile robots can dynamically adjust itself to form arrays of nearly unlimited size, shape, and spacing. It also offers the ability to decentralize the process, and potentially to repair the grid if one or more robots fail or become stuck. This category does, however, represent the most expensive in terms of upfront cost for a complete team of geophone-deploying mobile robots. Significant design and testing time would also be necessary to determine what size of mobile robot would be effective. Simulation results showed that controlled, incremental deployment essentially eliminated collisions at the cost of a large increase in deployment time, while yielding high grid precision. Sub-shapes or groups of robots could be deployed at the same time to decrease overall formation time, but doing so introduces a level of risk into the system due to possible robot collisions. Dynamically forming the grid all at once represents the fastest and likely the most energy-efficient manner of grid formation; however, it exhibits a higher collision risk along with inherently being less precise.

These results show several major patterns that affect grid precision. The faster the robots traveled, the quicker the formation was completed and the lower the average robot travel time. The further robots had to travel, the more error was introduced. Energy usage was highly related to average robot travel time, increasing with travel time. More robots caused more overall grid positioning error. The results confirmed that higher precision could be attained by driving at slower speeds, demonstrating the tradeoff between formation time, precision, and collision risk.

Of the discussed robotic techniques, hybrid streamers and a team of mobile robots are more robust, offer more advantages in terms of time and space efficiency, and require limited levels of human intervention compared to the other methods. Although these methods may incur higher deployment costs, the volume and quality of data will be increased. The team approach is unique in that each geophone is independently mobile, rather than all geophones being transported on the same vehicle or towed in a tethered fashion. Investigations can also take place in the areas of efficient formation change, traveling from one location to another (flocking), and other intelligent techniques to enhance the process.
7. Conclusions
Integrating robotics into traditional seismic surveying helps in several ways. In addition to adding precision and removing the human element, more flexible and scalable seismic solutions can be created. Integrating several methods is likely best; for example, hybrid streamers using heated spikes appear to be a very plausible solution, and a team of mobile robots that drill geophones into the ground could also become a reality for future missions. The main contributions of this paper were introducing the integration of robotics and seismic surveying, outlining challenges related to robotics-based seismic surveying, and presenting a categorization of approaches in which mobile robots are utilized. Simulation results also relate several aspects of multi-robot grid formation to energy usage, position error, and formation time. This research will hopefully allow the field of seismology to expand in terms of robotic automation.

Future work consists of extending the ideas of hybrid streamers and a multi-robot seismic team into designs. Hybrid streamer components will need to be designed and tested. Hybrid streamers will be fully implemented and deployed to autonomously collect seismic data in a future field season. A hybrid streamer system is currently being tested in snowy environments to determine length, plate design, overall weight, drag force, and other possibilities to increase coupling of a hybrid streamer. This system is going to be towed by the MARVIN II polar robot in the field in the near future. A multi-robot seismic team will be designed and implemented on a small scale, including a geophone deployment mechanism for each platform. Designs will be validated using a single robot. Experiments will then take place with a small team, but will not collect field data in the near future. Upcoming field seasons will be the testing grounds for these two new robotics-based seismic methods.

ACKNOWLEDGMENTS
The authors would like to thank Professor Georgios Tsoflias and Anthony Hoch at the University of Kansas and CReSIS for helpful discussions on seismic surveying. This material is based upon work supported by the National Science Foundation under Grant No. ANT-0424589. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

AUTHORS
Christopher M. Gifford* - Center for Remote Sensing of Ice Sheets, University of Kansas, 316 Nichols Hall, 2335 Irving Hill Rd, Lawrence, KS, USA 66045. E-mail: cgifford@cresis.ku.edu.
Arvin Agah - Center for Remote Sensing of Ice Sheets, University of Kansas, 316 Nichols Hall, 2335 Irving Hill Rd, Lawrence, KS, USA 66045. E-mail: agah@ku.edu.
* Corresponding author

References
[1] Agah A., Bekey G.A., "Phylogenetic and Ontogenetic Learning in a Colony of Interacting Robots", Autonomous Robots, vol. 4, no. 1, 1997, pp. 85-100.
[2] Akers E.L., Stansbury R.S., Agah A., "Long-Term Survival of Polar Mobile Robots". In: International Conference on Computing, Communications and Control Technologies (CCCT 2006), Orlando, Florida, July 2006.
[3] Akers E.L., Stansbury R.S., Agah A., Akins T.L., "Mobile Robots for Harsh Environments: Lessons Learned from Field Experiments". In: Proceedings of the 11th International Symposium on Robotics and Applications (ISORA 2006), Budapest, Hungary, July 2006.
[4] Akyildiz I.F., Su W., Sankarasubramaniam Y., Cayirci E., "A Survey on Sensor Networks", IEEE Communications Magazine, vol. 40, no. 8, August 2002, pp. 102-114.
[5] Arampatzis T., Lygeros J., Manesis S., "A Survey of Applications of Wireless Sensors and Wireless Sensor Networks". In: Proceedings of the Mediterranean Conference on Control and Automation, Limassol, Cyprus, June 2005.
[6] Bekey G.A., Agah A., "Group Behavior of Robots". In: Shimon Y. Nof (Ed.), Handbook of Industrial Robotics, 2nd ed., New York: John Wiley & Sons, 1999, pp. 439-445.
[7] Burridge R.R., Graham J., Shillcutt K., Hirsh R., Kortenkamp D., "Experiments with an EVA Assistant Robot". In: Proceedings of the 7th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS-03), 2003.
[8] Gifford C.M., Agah A., Tsoflias G.P., "Hybrid Streamers for Polar Seismic", Eos Trans. AGU, vol. 87, no. 52, Fall Meet. Suppl., Abstract C41B-0335.
[9] CReSIS, Center for Remote Sensing of Ice Sheets, 2006. [Online]. Available: http://www.cresis.ku.edu/
[10] Cyberbotics, "Webots 5", 2006. [Online]. Available: http://www.cyberbotics.com/
[11] van der Veen M., Green A., "Land Streamer for Shallow Seismic Data Acquisition: Evaluation of Gimbal-Mounted Geophones", Geophysics, vol. 63, 1998, pp. 1408-1413.
[12] van der Veen M., Wild P., Spitzer R., Green A., "Design Characteristics of a Seismic Land Streamer for Shallow Data Acquisition". In: Extended Abstracts of the 61st European Association of Geoscientists and Engineers (EAGE) Conference and Technical Exhibition, 1999, pp. 40-41.
[13] Edan Y., "Design of an Autonomous Agricultural Robot", International Journal of Applied Intelligence, Special Issue on Autonomous Systems, vol. 5, no. 1, 1995, pp. 41-50.
[14] Edan Y., Bechar A., "Multi-Purpose Agricultural Robot". In: Proceedings of the International Association of Science and Technology for Development (IASTED) International Conference on Robotics and Manufacturing, Banff, Canada, July 1998, pp. 205-212.
[15] Eiken O., Degutsch M., Riste P., Rod K., "Snowstreamer: An Efficient Tool in Seismic Acquisition", First Break, vol. 7, no. 9, 1989, pp. 374-378.
[16] Gifford C.M., Robotic Seismic Sensors for Polar Environments, Master's Thesis, Department of Electrical Engineering and Computer Science, University of Kansas, August 2006.
[17] Gifford C.M., Agah A., "Robotic Deployment and Retrieval of Seismic Sensors for Polar Environments". In: Proceedings of the 4th International Conference on Computing, Communications and Control Technologies (CCCT), vol. II, Orlando, FL, July 2006, pp. 334-339.
[18] Gifford C.M., Agah A., "Precise Formation of Multi-Robot Systems". In: Proceedings of the IEEE International Conference on System of Systems Engineering (SoSE), San Antonio, TX, April 2007, Paper no. 105, pp. 1-6.
[19] Hollis J., Iseli J., Williams M., Hoenmans S., "The Future of Land Seismic", Hart's E & P, vol. 78, no. 11, November 2005, pp. 77-81.
[20] Kansas Geological Survey, "Land-Streamer", 2006. [Online]. Available: http://www.kgs.ku.edu/Geophysics2/Equip/LandStreamer/LandS4.htm
[21] King E.C., Bell A.C., "A Towed Geophone System for use in Snow-Covered Terrain", Geophysical Journal International, vol. 126, no. 1, 1996, pp. 54-62.
[22] Metrica, Inc., Mars Manipulator, 2006. [Online]. Available: http://www.metricanet.com/mars.htm
[23] MSC Software, visualNastran 4D R2 User Manual, 2002.
[24] Sheriff R.E., Geldart L.P., Exploration Seismology, 2nd ed., Cambridge University Press, 1995.
[25] Sen V., Stoffa P.L., Dalziel I.W.D., Blankenship D.D., Smith A.M., Anandakrishnan S., "Seismic Surveys in Central West Antarctica: Data Processing Examples from the ANTALITH Field Tests", Terra Antarctica, vol. 5, no. 4, 1999, pp. 761-772.
[26] Speece M.A., Miller C.R., Mille P.F., Link C.A., Flynn K.F., Dolena T.M., "A Rapid-Deployment, Three-Dimensional (3-D), Seismic Reflection System", Montana Tech Prototype Design Proposal, 2004.
[27] Spikes K., Steeples D., Ralston M., Blair J., Tian G., "Common Midpoint Seismic Reflection Data Recorded with Automatically Planted Geophones", 2001. [Online]. Available: http://www.dot.ca.gov/hq/esc/geotech/gg/geophysics2002/037spikes_cmp_auto_plants.pdf
[28] Stansbury R.S., Akers E.L., Harmon H.P., Agah A., "Survivability, Mobility, and Functionality of a Rover for Radars in Polar Regions", International Journal of Control, Automation, and Systems, vol. 2, no. 3, 2004, pp. 334-353.
[29] Tsoflias G.P., Steeples D.W., Czarnecki G.P., Sloan S.D., Eslick R.C., "Automatic Deployment of a 2-D Geophone Array for Efficient Ultra-Shallow Seismic Imaging", Geophysical Research Letters, vol. 33, L09301, 2006.
Journal of Automation, Mobile Robotics & Intelligent Systems, Volume 3, N° 3, 2009
A REALIZATION OF AN FPGA SUBSYSTEM FOR REDUCING ODOMETRIC LOCALIZATION ERRORS IN WHEELED MOBILE ROBOTS
Received 14th August 2008; accepted 8th December 2008.
James Kurian, P.R. Saseendran Pillai
Abstract: This paper introduces a simple and efficient method, and its implementation in an FPGA, for reducing the odometric localization errors caused by over-count readings of an optical-encoder-based odometric system in a mobile robot due to wheel slippage and terrain irregularities. The detection and correction are based on redundant encoder measurements. The method relies on the fact that wheel slippage or terrain irregularities cause more count readings from the encoder than correspond to the actual distance travelled by the vehicle. The standard quadrature technique is used to obtain four counts in each encoder period. In this work a three-wheeled mobile robot vehicle with one driving-steering wheel and two fixed rear wheels in-axis, fitted with incremental optical encoders, is considered. The CORDIC algorithm has been used for the computation of the sine and cosine terms in the update equations. The results presented demonstrate the effectiveness of the technique.

Keywords: FPGA, position estimation, robot localization, odometric correction.
1. Introduction
Mobile robots and automated guided vehicles (AGVs) operating in industrial and other environments require accurate sensing of vehicle position and attitude, which in real-world environments is still a very challenging problem. Autonomous and semi-autonomous robot vehicles now find applications in automated inspection systems [1], floor sweepers [2], hazardous environments [3], autonomous truck loading systems [4], agricultural tasks, delivery in establishments such as manufacturing plants, office buildings and hospitals [5], and services for the elderly [6]. In addition, autonomous vehicles are widely utilized in undersea exploration and military surveillance systems [7], [8]. Automated guided vehicles such as cargo transport systems are heavily used in industrial applications. Mobile robots are also finding their way into a growing number of homes, providing security, automation [9], [10], and even entertainment. In all these applications, some type of localization system is essential: in order to navigate to their destination, the robots must have some means of estimating where they are and in which direction they are heading. The need for position and attitude information is not exclusive to the realm of mobile robots; information about the location of an inanimate object, for example a cargo pallet, can streamline inventory and enable warehouse automation. A variety of techniques have been developed and used successfully to provide position and attitude information. However, many of these existing positioning systems have inherent limitations of their own in the workspace.

In mobile robot applications, two basic position estimation methods are employed concurrently, viz. absolute and relative positioning [11]. Absolute positioning methods usually rely on appropriate exteroceptive sensing techniques, such as navigation beacons [12], [13], active or passive landmarks [14], map matching [15], or satellite-based navigation signals [16]. Navigation beacons and landmarks normally require costly installation and maintenance, while map-matching methods are usually slower and demand more memory and computational overhead. Satellite-based navigation techniques are usable only outdoors and have poor accuracy. Relative position estimation is based on proprioceptive sensing systems such as odometry [17], Inertial Navigation Systems (INS) [18] or optical flow techniques [19], where the error growth rate is usually unacceptable. A vehicle performs self-localization using a relative positioning technique called dead reckoning. For implementing a navigational system, most mobile robots use an odometric position system together with traditional inertial navigation systems employing gyros, accelerometers or both. The odometric system provides accurate and precise intermediate estimates of position during path execution. An INS is complex and expensive and requires more information processing to extract the required position and attitude information. Localization based on an INS uses accelerometers or gyros, where the accelerometer data must be integrated twice to yield the position information, making these sensors extremely sensitive to drift.
A very small error in the rate information furnished by the INS can lead to unbounded growth in the position errors with time and distance. Rate information from the gyros can be integrated to estimate the position, and yields better accuracy than accelerometers. Though the odometric system is simple, inexpensive and accurate over short distances, it is prone to several sources of error: wheel slippage, variations in wheel radius, body deflections, surface roughness and undulations. The odometric system is reliable and reasonably accurate on smooth, flat terrain in the absence of wheel slippage, since a wheel revolution then corresponds to a fixed linear travel distance. On paths with terrain irregularities, odometric systems are not considered useful, because the measured rotations of the wheels do not accurately reflect
the distance travelled, owing to wheel slippage and motion over humps and cracks. Such odometric errors need to be corrected in practical mobile robotic applications. This paper presents the realization of a new, simple and efficient system to reduce the odometric errors caused by terrain irregularities such as humps, cracks and other disturbances in a mobile robot vehicle. The technique relies on redundant odometric sensors. In addition, we present an adaptive speed measurement scheme and the standard quadrature technique to improve the position and speed resolution. The system has been realized in an FPGA and the results are presented.
2. The odometric system
In this work a three-wheeled mobile robot vehicle with one driving-steering wheel and two fixed rear wheels in-axis, fitted with incremental optical encoders, is considered. Incremental encoders are the most frequently adopted position transducers in mobile robotic applications. The low-level information provided by the encoder takes the form of two pulse trains, Ch_A and Ch_B, which are 90° out of phase (quadrature); depending on the direction of rotation, one of these pulse trains leads or lags the other. These low-level signals are passed to the control system, which computes the actual position, speed and acceleration information needed by the controller. Incremental encoders are characterized by high accuracy, high resolution, high noise immunity, low maintenance and low cost, and hence are generally preferred.
Fig. 1. Functional block diagram of the optical incremental Encoder Pulse Processing Module (EPPM), which computes the velocity information in period mode and pulse mode depending upon the current speed of the vehicle. It also provides the incremental position (distance) update of the wheel.

2.1. Position measurement
The simplified block diagram of the Encoder Pulse Processing Module (EPPM) is shown in Fig. 1. The encoder pulses Ch_A and Ch_B are fed to a quadrature decoder circuit, which decodes the direction indicator bit and generates pulses on the rising and falling edges of both channels. A 20-bit up/down counter is implemented, and the direction bit sets the counter to count up or down depending upon the direction of movement of the wheel. The computational unit resets the counter after reading the position or position increment. The position measurement involves counting the encoder pulses with the help of the quadrature decoder circuit; the direction of movement of the wheel can also be decoded from the encoder pulses, and the count is incremented or decremented depending on which pulse leads the other. The present position P (or position increment) and the position error noise dP due to quantization are given by

P = πCR / (2NG);   dP = πR / (2NG)

where C is the counter value, R is the radius of the wheel in metres, N is the number of pulses per revolution of the encoder and G is the gear ratio between the encoder shaft and the robot wheel coupling.

2.2. Speed/velocity estimation
The upper part of the encoder pulse-processing module in Fig. 1 shows the velocity measurement sections. Two major techniques for extracting the speed data from an incremental encoder are pulse counting and period measurement [21], [22], [23]. In pulse counting mode the number of pulses C in an observation time window T is counted and the speed is approximated by the discrete incremental ratio

S = πCR / (2NTG)

The speed error due to one-bit quantization is

dS = πR / (2NTG)

The other method for obtaining the speed is to measure the time between two successive pulses from the encoder. In this method a stable high-frequency clock (f) and a gating circuit are normally used at the front end of the counter. The encoder pulses control the gating circuit, so the count value depends on the number of encoder pulses per revolution and the clock frequency. For a given system, the clock frequency and the number of encoder pulses per revolution are constant, hence the count value C is inversely proportional to the speed of the vehicle and the relationship is given by

S = πRf / (2NCG)
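The position and speed relations above can be checked numerically. The sketch below uses made-up parameter values (wheel radius, encoder resolution, gearing, window length are not from the paper) and assumes the period-mode relation includes the wheel radius R, consistent with the other formulas:

```python
import math

# Illustrative parameters (not from the paper): 0.1 m wheel radius,
# 500-pulse encoder, 1:1 gearing, 10 ms observation window.
R, N, G, T = 0.1, 500, 1.0, 0.01

def position(C):
    """P = pi*C*R / (2*N*G): distance for C quadrature (x4) counts."""
    return math.pi * C * R / (2 * N * G)

def position_quantization_error():
    """dP = pi*R / (2*N*G): one-count position uncertainty."""
    return math.pi * R / (2 * N * G)

def speed_pulse_mode(C):
    """S = pi*C*R / (2*N*T*G): C counts seen in the window T."""
    return math.pi * C * R / (2 * N * T * G)

def speed_period_mode(C, f=1e6):
    """S = pi*R*f / (2*N*C*G): C reference-clock ticks between counts."""
    return math.pi * R * f / (2 * N * C * G)

# Sanity check: one wheel revolution is 4*N*G quadrature counts and
# should map to exactly one circumference, 2*pi*R.
assert abs(position(4 * N * G) - 2 * math.pi * R) < 1e-12
```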
In the pulse counting scheme the pulses from the quadrature decoder (x4) circuit are gated with the time base and fed to the counter, and the final count value is latched. Instead of the direct pulses from the encoder, the quadrature-decoded pulses are used, as their rate is four times higher; this reduces the switching between the period measurement and pulse counting schemes. The implementation of the period measurement scheme is very similar to pulse counting, except that the counting clock comes from the reference clock generator and the gating pulse comes from the quadrature decoder. Hence the count value is a measure of the velocity, as it counts the number of period-mode clocks between successive pulse intervals. The system can switch between these two velocity-measuring modes by monitoring the count register value, because period counting is more accurate at low speeds and pulse counting is more accurate at high speeds. The 3D plot in Fig. 2a shows the variation of the error with respect to the time window T and the encoder pulses per revolution N, and Fig. 2b shows the error variation with respect to the encoder pulses per revolution and the period-mode oscillator frequency. From these plots it is evident that proper switching between the two modes facilitates accurate measurement of the velocity. If the pulse count value in the pulse-mode counter is less than a reference value, the velocity-measuring scheme switches to the period-measuring mode, and vice versa. The speed register value, along with the mode bit (period/pulse), can be read to obtain the velocity information.
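Why the mode switch helps can be seen by comparing the one-count relative quantization error of the two schemes as a function of speed. The sketch below uses illustrative parameters (not taken from the paper) and shows that period measurement wins at low speed and pulse counting at high speed:

```python
import math

# Illustrative parameters: wheel radius [m], pulses/rev, gear ratio,
# observation window [s], period-mode reference clock [Hz].
R, N, G = 0.1, 500, 1.0
T, f = 0.01, 1e6

def pulse_mode_rel_error(speed):
    # C = speed * 2*N*T*G / (pi*R) counts in the window; one-count
    # quantization gives a relative error of 1/C.
    counts = speed * 2 * N * T * G / (math.pi * R)
    return 1.0 / counts

def period_mode_rel_error(speed):
    # C = pi*R*f / (2*N*G*speed) clock ticks between successive
    # quadrature counts; relative error is again 1/C.
    ticks = math.pi * R * f / (2 * N * G * speed)
    return 1.0 / ticks

# Period mode is more accurate at low speed, pulse mode at high speed,
# which is exactly the switching rule implemented in the EPPM.
assert period_mode_rel_error(0.01) < pulse_mode_rel_error(0.01)
assert pulse_mode_rel_error(10.0) < period_mode_rel_error(10.0)
```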
Fig. 2. 3D surface plot showing (a) the quantization error plotted against encoder pulses per revolution and observation time window in pulse counting mode, and (b) the relative error plotted against encoder pulses per revolution and oscillator clock frequency in period counting mode.

3. Position and attitude estimation
The kinematics and navigation equations for a three-wheeled mobile vehicle with one driving-steering wheel and two fixed rear wheels in-axis are considered in this study. The odometric navigational system is implemented using four optical incremental encoders. The driving-steering (front) wheel is attached to geared permanent magnet DC motors with inbuilt encoders, which measure the angular increments for the measurement of the steering angle and the distance moved by the vehicle. The rear wheels are also fitted with encoders to estimate the position and attitude of the vehicle. A typical pose of the mobile robot vehicle with a steering angle φ and an orientation θ with respect to the absolute reference frame OXY is shown in Fig. 3. The symbols used in the equations are defined below:

Xk(x, y, θ) - position and attitude vector
θ - the estimated attitude of the vehicle with respect to the fixed reference
θR - the estimated attitude of the vehicle from the rear wheel encoder counts with respect to the fixed reference
θF - the estimated attitude of the vehicle from the front wheel encoder counts (steering & driving) with respect to the fixed reference
φ - the steering angle with respect to the axis of symmetry
R - the wheel radius of the vehicle
nL, nR, nF, nS - the encoder incremental pulse counts from the left, right, front-wheel and steering encoders respectively
N - the number of pulses per revolution of the encoder
L - the distance between the rotation axis of the front (driver) wheel and the axis of the back wheels
D - the distance between the rear wheels

Fig. 3. The kinematic scheme of the three-wheeled mobile robot vehicle having attitude θ, which is the angle between the absolute reference frame OXY and the mobile reference frame PUV. The origin P is attached to the midpoint of the axis joining the rear wheels on the axis of symmetry of the vehicle. φ is the steering angle.

3.1. Position updates from encoder data
The better a vehicle's odometry, the better its ability to navigate and the lower the requirement for frequent position updates from external sensors. For the computation of the position and attitude, consider the pulse counts from the two independent optical encoders attached to the rear non-driven idler wheels of the vehicle, which are only loosely coupled to the steering and driving system and exhibit very little slippage between the point of contact and the floor. The update equations for this model are as follows [20]:
x(k+1) = x(k) + (πR/N)(nR(k) + nL(k)) cos θ(k)      (1)
y(k+1) = y(k) + (πR/N)(nR(k) + nL(k)) sin θ(k)      (2)
θ(k+1) = θ(k) + (2πR/(DN))(nR(k) − nL(k))           (3)
Normally these readings are very accurate and stable compared to the position and attitude computed from the front wheel encoder data, because the rear wheels are less prone to slippage or skidding.
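The rear-wheel update equations (1)-(3) can be sketched directly in code. Parameter values below are illustrative, not from the paper:

```python
import math

# Illustrative parameters: wheel radius R [m], rear track width D [m],
# encoder pulses per revolution N.
R, D, N = 0.1, 0.4, 500

def rear_wheel_update(x, y, theta, n_r, n_l):
    """One odometry step, equations (1)-(3), from rear-encoder
    increments n_R and n_L accumulated since the last update."""
    x += (math.pi * R / N) * (n_r + n_l) * math.cos(theta)
    y += (math.pi * R / N) * (n_r + n_l) * math.sin(theta)
    theta += (2 * math.pi * R / (D * N)) * (n_r - n_l)
    return x, y, theta

# Equal counts on both wheels -> straight-line motion, heading fixed.
x, y, th = rear_wheel_update(0.0, 0.0, 0.0, 50, 50)
assert abs(th) < 1e-12 and abs(y) < 1e-12
assert abs(x - math.pi * R * 100 / N) < 1e-12
```

A right-wheel count excess (n_R > n_L) increases θ, i.e. the vehicle turns toward the left wheel, matching the sign convention of equation (3).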
The distance moved by the wheel's point of contact can also be derived from the incremental pulse count data of the vehicle's front driving-steering wheel. The steering rotation is limited to ±40° about the axis of symmetry of the vehicle. The encoder attached to the steering system generates pulses corresponding to the steering movement, from which the steering angle φ can be computed. From these data the position and attitude of the vehicle can be estimated as follows:

x(k+1) = x(k) + (2πR/N) nF(k) cos θ(k) cos φ(k)     (4)
y(k+1) = y(k) + (2πR/N) nF(k) sin θ(k) cos φ(k)     (5)
θ(k+1) = θ(k) + (2πR/(LN)) nF(k) sin φ(k)           (6)

The pulse count received from certain encoders may indicate an over count due to terrain irregularities and operating conditions, so the least of the values x(k+1), y(k+1) and θ(k+1) estimated from equations (1) to (6) can be used for computing the pose of the vehicle. The necessary hardware has been designed and developed for the independent computation and comparison of the position and attitude values from the rear wheel and front wheel encoder data. Digital comparators manage the switching of multiplexers that select the least values among the computed values.

3.2. The error reduction technique
Conventional systems use data from a single set of encoders to compute the posture of the vehicle. Using equations (1)-(3) and the encoder pulse counts of the rear wheels, one can compute the position and attitude of the robot vehicle. Using equations (4)-(6), the posture of the vehicle can be computed from the encoder data of the front wheel. The technique presented here utilises both sets of encoder values and computes the position and orientation of the vehicle, so redundant information for the computation of the posture is available. Wheel slippage, motion over humps, cracks or any other terrain disturbance causes more pulses than correspond to the actual distance travelled [24], which may lead to over-incremented update values of position and orientation. Independently computing the position increments and angular increments of the system and dropping the higher values eliminates this error: the new position and orientation are updated with the minimum (lower) values.

Fig. 4 shows three typical postures of the vehicle over a hump. Fig. 4a corresponds to an error condition of over counts from the rear wheel encoders, which record more distance than actual; here the front wheel encoders produce the data corresponding to the actual distance moved by the vehicle. Fig. 4b represents a situation where the rear wheel encoders produce over counts and attitude errors, and Fig. 4c shows the front wheel posed over a hump, which may cause over counts from its encoder.

Fig. 4. Plan view of three typical postures of a vehicle over a hump, showing an error condition of over counts when (a) both rear wheels are over the hump, (b) one of the rear wheels is over the hump and (c) the front wheel is over the hump.
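The error-reduction rule of Section 3.2 can be sketched as a component-wise selection between the two independently computed increment sets. The sketch below interprets "least value" as the increment of smaller magnitude (over counts only ever enlarge an increment); the function names and sample numbers are illustrative:

```python
# Sketch of the redundant-encoder error reduction: compute the pose
# increments (dx, dy, dtheta) independently from the rear-wheel
# equations (1)-(3) and the front-wheel equations (4)-(6), then keep,
# component-wise, the increment of least magnitude.

def least(a, b):
    """Return whichever increment has the smaller magnitude."""
    return a if abs(a) <= abs(b) else b

def fuse_increments(rear, front):
    """Component-wise minimum-magnitude fusion of two increment sets."""
    return tuple(least(r, f) for r, f in zip(rear, front))

# Rear wheels passed over a hump and over-counted (Fig. 4a): the
# smaller front-wheel increments are kept for the pose update.
rear_inc = (0.130, 0.010, 0.020)   # inflated by the hump
front_inc = (0.100, 0.008, 0.015)  # close to the true motion
assert fuse_increments(rear_inc, front_inc) == (0.100, 0.008, 0.015)
```

In the FPGA this selection is done by digital comparators steering multiplexers, not by software, but the decision rule is the same.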
4. Implementation
The simplified functional block diagram of the error reduction system is shown in Fig. 5; it comprises two computation units and two switching units. The two computation units independently calculate the position and orientation incremental values by reading the corresponding encoder pulse counters from the encoder pulse processing modules. The switching units compare the incremental values, and the least values are selected for updating.
Fig. 5. The simplified functional diagram of the FPGA subsystem, which computes two sets of position increments and two attitude values. The switching, control and communication blocks are also shown.

The system implementation is achieved using an Altera development board consisting of a Cyclone-II EP2C70F672-C6 FPGA and associated components. The user interface terminations provided on header connectors are used for realizing the system. The system is implemented with the help of the Quartus-II FPGA/CPLD design package and ModelSim simulation tools [27]. Fig. 5 shows the simplified functional block diagram of the system. Two encoder pulse processing modules (EPPM) process the pulses from the rear wheel encoders and place the velocity and position information in the appropriate registers; the pulses from the other two encoders, on the front driving-steering wheel, are not utilized for velocity calculations. The average velocity values measured from the rear wheels are available through the SPI interface of the system. The computation unit solves equations (1)-(6) to calculate the position and attitude incremental update values; dedicated sine and cosine calculation modules are used to speed up the process. The outputs of the computation units are two sets of X-axis and Y-axis increments and two attitude values (θ). These values are stored in the corresponding registers, and the switching module selects the least-value set of position and attitude increments. All these values can be read through the SPI interface module.

4.1. The sine/cosine module
For the computation of the sine and cosine terms the well-known CORDIC (COordinate Rotation DIgital Computer) algorithm is used. CORDIC is an iterative technique based on the rotation of a vector, which allows many transcendental and trigonometric functions to be computed. The highlight of this method is that it uses only shifts, additions/subtractions and table look-ups, which map well onto hardware and are ideal for FPGA implementation. The original work on CORDIC was carried out by Jack Volder [25] in 1959 for computing trigonometric functions as part of developing a digital solution to real-time navigation problems. Since then, much research has been carried out on this algorithm; a thorough survey of this work with respect to FPGAs was published by Andraka [26]. All the trigonometric functions can be computed or derived from vector rotations. The CORDIC algorithm is derived from the Givens rotation transforms:

x(i+1) = x(i) cos θ(i) − y(i) sin θ(i)
y(i+1) = y(i) cos θ(i) + x(i) sin θ(i)

which rotate a vector in the Cartesian plane by an angle θ(i). These can be rearranged so that:

x(i+1) = cos θ(i) [x(i) − y(i) tan θ(i)]
y(i+1) = cos θ(i) [y(i) + x(i) tan θ(i)]

If the rotation angles θ(i) are restricted so that tan θ(i) = ±2^(−i), the multiplication by the tangent term reduces to a simple shift operation. Arbitrary angles of rotation are obtained by performing a series of successively smaller elementary rotations. The iterative rotation can now be expressed as:

x(i+1) = K(i) [x(i) − y(i) · d(i) · 2^(−i)]
y(i+1) = K(i) [y(i) + x(i) · d(i) · 2^(−i)]

where:

K(i) = cos(tan^(−1) 2^(−i)) = 1 / √(1 + 2^(−2i))
d(i) = ±1

Removing the scaling factor K(i) from the iterative equations yields a shift-add algorithm for vector rotation; the product of the K(i) terms can be applied elsewhere in the system. The product approaches a constant value as the number of iterations goes to infinity. The exact gain of the system depends on the number of iterations n and is:

A(n) = Π(i=0..n−1) √(1 + 2^(−2i))

The set of all possible decision vectors forms an angular measurement system based on binary arctangents. Conversions between this angular system and any other can be accomplished with a look-up table, which is easily implemented in an FPGA. The angle accumulator adds a third difference equation to the CORDIC algorithm:

z(i+1) = z(i) − d(i) · tan^(−1)(2^(−i))

The CORDIC rotator is normally operated in one of two modes. The first, called rotation mode, rotates the input vector by a specified angle; the second, called vectoring mode, rotates the input vector to the X axis while recording the angle required to make that rotation.
In this system the rotation mode is used to compute the sine and cosine values needed to solve the position and attitude update equations. In this mode the angle accumulator is initialised with the desired rotation angle, and the rotation decision at each iteration is made to diminish the magnitude of the residual angle in the angle accumulator; the decision at each iteration is therefore based on the sign of the residual angle after each step. For rotation mode the CORDIC equations are:

x(i+1) = x(i) − y(i) · d(i) · 2^(−i)
y(i+1) = y(i) + x(i) · d(i) · 2^(−i)
z(i+1) = z(i) − d(i) · tan^(−1)(2^(−i))

where d(i) = −1 if z(i) < 0, and +1 otherwise. This provides the following results:

x(n) = A(n) [x(0) cos z(0) − y(0) sin z(0)]
y(n) = A(n) [y(0) cos z(0) + x(0) sin z(0)]
z(n) = 0
A(n) = Π(i=0..n−1) √(1 + 2^(−2i))

The rotation mode CORDIC operation can therefore simultaneously compute the sine and cosine of the input angle: by setting x(0) = 1, y(0) = 0 and z(0) to the input angle, and multiplying the results by 1/A(n), the sine and cosine values are obtained. Fig. 6 shows the angle error plot against the variations in residual angle with respect to the number of iterations. The value of 1/A(n) for 16 iterations is approximately 0.60725293510314. More than ten iterations give a satisfactory resolution, and 1/A(n) is then nearly constant.
Fig. 6. The 3D plot showing the variations in residual angle, which is a measure of the computational error, against the input angle and the number of iterations.
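The rotation-mode recurrence above can be modelled behaviourally. The sketch below (Python, floating point — the paper's implementation is sequential fixed-point VHDL) runs 16 iterations, as in the reported design, and checks the gain constant and the angles exercised in the Fig. 8 simulation:

```python
import math

# Behavioural model of 16-iteration rotation-mode CORDIC.
ITERATIONS = 16
ATAN_TABLE = [math.atan(2.0 ** -i) for i in range(ITERATIONS)]
GAIN = 1.0
for i in range(ITERATIONS):
    GAIN *= math.sqrt(1.0 + 2.0 ** (-2 * i))  # A(n) = prod sqrt(1+2^-2i)

def cordic_sincos(angle):
    """Return (cos(angle), sin(angle)) for |angle| <= pi/2, in radians."""
    x, y, z = 1.0, 0.0, angle        # x(0)=1, y(0)=0, z(0)=input angle
    for i in range(ITERATIONS):
        d = -1.0 if z < 0 else 1.0   # steer the residual angle to zero
        x, y, z = (x - y * d * 2.0 ** -i,
                   y + x * d * 2.0 ** -i,
                   z - d * ATAN_TABLE[i])
    return x / GAIN, y / GAIN        # undo the CORDIC gain A(n)

# 1/A(n) for 16 iterations, matching the constant quoted in the text.
assert abs(1.0 / GAIN - 0.607252935) < 1e-9
# The angles used in the Fig. 8 simulation: 30, 45 and 60 degrees.
for deg in (30, 45, 60):
    c, s = cordic_sincos(math.radians(deg))
    assert abs(c - math.cos(math.radians(deg))) < 1e-4
    assert abs(s - math.sin(math.radians(deg))) < 1e-4
```

With 16 iterations the residual angle is bounded by atan(2^-15) ≈ 3×10^-5 rad, which matches the "satisfactory resolution beyond ten iterations" observation above.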
Fig. 7. Screen shot showing the various waveforms of the encoder pulse processing module with a system clock of 50 MHz.
Fig. 8. Screen shot showing the various results of computation along with the control and status signals associated with the sequential CORDIC module.
4.2. Realization details and results
The entire subsystem has been designed and implemented in an Altera Cyclone-II EP2C70F672-C6 FPGA. The main task is the implementation of the computational unit consisting of the CORDIC processors for the sine and cosine calculation. The design has been divided into various blocks: the Encoder Pulse Processing Module (EPPM), the computation unit solving the update equations and incorporating the CORDIC processor, the switching module for selecting the least values, and the SPI communication interface for establishing the link with the control processor. Various read/write registers are involved in the design for scaling, initialisation and data storage. The entire design entry was carried out in VHDL and the design was synthesized for the above-mentioned target device in the Quartus II software. The CORDIC algorithm is implemented sequentially with 16 iterations, the sequential CORDIC design performing one iteration per clock cycle. With a system clock frequency of 50 MHz, the total time taken by this module is less than 400 ns, which is very small, as most vehicle control systems require only one sample per 100 μs or less. The results are therefore ready in the subsystem shortly after initialisation of the system registers, and the role of the controller is simply to read the update values and velocity information through the SPI. Fig. 7 shows a screen shot from the simulation of the encoder pulse-processing module. Fig. 8 shows the various waveforms during calculation of the sine and cosine values of 30, 45 and 60 degrees. The inputs to the CORDIC module, the ANGLEin and LOAD signals, are pulsed, and RESULT_READY indicates the availability of the results in the 16-bit registers after sixteen clock pulses. A photograph of the subsystem is shown in Fig. 9.
Fig. 9. Photograph of the subsystem implemented on the Altera Cyclone-II EP2C70F672-C6 FPGA development board.

5. Conclusions
An efficient and novel technique, and its implementation in an FPGA, for reducing the odometric localization errors caused by over-count readings of an optical-encoder-based odometric system in a mobile robot due to wheel slippage and terrain irregularities has been presented in this paper. The standard quadrature technique is used to obtain four counts in each encoder period. By using this system one can reduce the odometric error so as to increase the travel distance between absolute position updates, thereby lowering the installation and operating costs of the system. For the velocity measurement unit, a change in the direction bit inside a time window may cause an error; this should be eliminated. With the use of this subsystem, most of the odometric computational burden can be offloaded from the control processor associated with the mobile robot vehicle. The uneven loading on the rubber wheels that most mobile robot vehicles use may cause erroneous count values due to changes in wheel diameter; even a tilt of the vehicle may cause uneven loading. This approach may not be recommended for mobile robot vehicles with a high likelihood of frequent skidding during path execution. Under most situations this approach is quite satisfactory, simple and efficient.

AUTHORS
James Kurian*, P.R. Saseendran Pillai - Department of Electronics, Cochin University of Science and Technology, Cochin-682022, Kerala, INDIA. E-mail: james@cusat.ac.in.
* Corresponding author

References
[1] Siegel M., Gunatilake P., "Remote Inspection Technologies for Aircraft Skin Inspection". In: IEEE Workshop on Emergent Technologies and Virtual Systems for Instrumentation and Measurement, Niagara Falls, Ontario, Canada, 15th-17th May 1997, pp. 69-78.
[2] Prassler E., Ritter A., Schaeffer C., Fiorini P., "A Short History of Cleaning Robots", Autonomous Robots, Special Issue on Cleaning and Housekeeping Robots, vol. 9, issue 3, December 2000.
[3] Iborra A., Pastor J., Alvarez B., Fernandez C., Merono J., "Robots in Radioactive Environments", IEEE Robotics and Automation Magazine, vol. 10, no. 4, December 2003, pp. 12-22.
[4] Stentz A., Bares J., Singh S., Rowe P., "A Robotic Excavator for Autonomous Truck Loading", Autonomous Robots, vol. 7, 1999, pp. 175-186.
[5] Rossetti M.D., Kumar A., Felder R., "Mobile Robot Simulation of Clinical Laboratory Deliveries". In: Proceedings of the 30th Conference on Winter Simulation, Washington, D.C., USA, 1998, pp. 1415-1422.
[6] Montemerlo M., Pineau J., Roy N., Thrun S., Verma V., "Experiences with a Mobile Robotic Guide for the Elderly". In: Proceedings of the 18th AAAI National Conference on Artificial Intelligence, Edmonton, Canada, 2002, pp. 587-592.
[7] Bahl R., "Object Classification using Compact Sector Scanning Sonars in Turbid Waters". In: Proc. 2nd IARP Mobile Robots for Subsea Environments, Monterey, CA, USA, vol. 23, 1990, pp. 303-327.
[8] Healey A.J., "Application of Formation Control for Multi-Vehicle Robotic Minesweeping". In: Proceedings of the 40th IEEE Conference on Decision and Control, vol. 2, 2001, pp. 1497-1502.
[9] Everett H.R., Gage D.W., "From Laboratory to Warehouse: Security Robots Meet the Real World", International Journal of Robotics Research, Special Issue on Field and Service Robotics, vol. 18, no. 7, July 1999, pp. 760-768.
[10] Pastore T.H., Everett H.R., Bonner K., "Mobile Robots for Outdoor Security Applications". In: American Nuclear Society 8th International Topical Meeting on Robotics and Remote Systems (ANS'99), Pittsburgh, PA, USA, April 1999. Available at: http://handle.dtic.mil/100.2/ADA422047
[11] Borenstein J., Everett H., Feng L., Wehe D., "Mobile Robot Positioning Sensors and Techniques", Journal of Robotic Systems, Special Issue on Mobile Robots, vol. 14, no. 4, 1997, pp. 231-249.
[12] Kleeman L., "Optimal Estimation of Position and Heading for Mobile Robots Using Ultrasonic Beacons and Dead-reckoning". In: Proceedings of the IEEE International Conference on Robotics and Automation, Nice, France, 1992, pp. 2582-2587.
[13] Leonard J.J., Durrant-Whyte H.F., "Mobile Robot Localization by Tracking Geometric Beacons", IEEE Transactions on Robotics and Automation, vol. 7, no. 3, 1991, pp. 376-382.
[14] Wijk O., Christensen H.I., "Localization and Navigation of a Mobile Robot Using Natural Point Landmarks Extracted from Sonar Data", Robotics and Autonomous Systems, vol. 31, 2000, pp. 31-42.
[15] López-Sánchez M., Esteva F., López de Màntaras R., Sierra C., "Map Generation by Cooperative Low-Cost Robots in Structured Unknown Environments", Autonomous Robots, vol. 5, 1998, pp. 53-61.
[16] Sukkarieh S., Nebot E.M., Durrant-Whyte H.F., "A High Integrity IMU/GPS Navigation Loop for Autonomous Land Vehicle Applications", IEEE Transactions on Robotics and Automation, vol. 15, no. 3, 1999, pp. 572-578.
[17] Borenstein J., Feng L., "Measurement and Correction of Systematic Odometry Errors in Mobile Robots", IEEE Transactions on Robotics and Automation, vol. 12, no. 6, 1996, pp. 869-880.
[18] Barshan B., Durrant-Whyte H.F., "Inertial Navigation Systems for Mobile Robots", IEEE Transactions on Robotics and Automation, vol. 11, no. 3, 1995, pp. 328-342.
[19] Lee S., Song J.-B., "Robust Mobile Robot Localization Using Optical Flow Sensors and Encoders". In: Proc. of the IEEE Int. Conf. on Robotics & Automation, April 2004, pp. 1039-1044.
[20] De Cecco M., "Sensor Fusion of Inertial-Odometric Navigation as a Function of the Actual Manoeuvres of Autonomous Guided Vehicles", Meas. Sci. Technol., vol. 14, 2003, pp. 643-653.
[21] Faccio M., et al., "An Embedded System for Position and Speed Measurement Adopting Incremental Encoders". In: Proc. of the IEEE Ind. Appl. Conf., vol. 2, 2004, pp. 1192-1199.
[22] Ekekwe N., Etienne-Cummings R., Kazanzides P., "A Wide Speed Range and High Precision Position and Velocity Measurements Chip with Serial Peripheral Interface", Integration, the VLSI Journal, vol. 41, 2008, pp. 297-305.
[23] Hebert B., Brule M., Dessaint L.-A., "A High Efficiency Interface for a Biphase Incremental Encoder with Error Detection", IEEE Transactions on Industrial Electronics, vol. 40, no. 1, 1993, pp. 155-156.
[24] Ojeda L., Borenstein J., "Reduction of Odometry Errors in Over-constrained Mobile Robots". In: Proceedings of the UGV Technology Conference at the 2003 SPIE AeroSense Symposium, Orlando, FL, USA, 21st-25th April 2003, vol. 5083, pp. 431-439.
[25] Volder J., "The CORDIC Trigonometric Computing Technique", IRE Transactions on Electronic Computers, vol. EC-8, September 1959, pp. 330-334.
[26] Andraka R., "A Survey of CORDIC Algorithms for FPGA Based Computers". In: Proceedings of the 1998 ACM/SIGDA Sixth International Symposium on Field Programmable Gate Arrays, 22nd-24th February 1998, pp. 191-200.
[27] Altera Corporation, "Cyclone II DSP Development Board Reference Manual", 101 Innovation Drive, San Jose, CA 95134, USA. http://www.altera.com
Journal of Automation, Mobile Robotics & Intelligent Systems, VOLUME 3, N° 3, 2009
INVESTIGATION ON NUMERICAL SOLUTION FOR A ROBOT ARM PROBLEM
Received 5th February 2009; accepted 23rd March 2009.
Ramasamy Ponalagusamy, Sukumar Senthilkumar
Abstract: This article provides numerical solutions to a robot arm problem using the Runge-Kutta sixth-order algorithm. The parameters involved in the robot control problem are also discussed through the RK sixth-order algorithm. The exact solution of the system of equations representing the robot arm model is compared with the corresponding approximate solutions at different time intervals. Experimental results and comparison show the efficiency of the numerical integration algorithm, based on the absolute error between the exact and approximate solutions. The stability polynomial for the test equation y' = λy (λ is a complex number) using the RK-Butcher algorithm obtained by Murugesan et al. [1] is not correct, and the stability regions for the RK fourth-order (RKAM) and RK-Butcher methods were presented incorrectly: a mistake was made in determining the range for the real part of z = λh (h is the step size) involved in the test equation for the RKAM and RK-Butcher algorithms. In the present paper, a corrective measure has been taken to obtain the stability polynomial for the RK-Butcher algorithm, the ranges for the real part of z, and graphical presentations of the stability regions of the RKAM and RK-Butcher methods. The stability polynomial and stability region of the RK sixth-order method are also reported. Based on the numerical results, it is observed that the error in the numerical solution obtained by the RK sixth-order method is smaller than that obtained by the RK fifth-order and RK fourth-order methods respectively.
Keywords: Runge-Kutta (RK) method, RK arithmetic mean, RK fifth-order algorithm, RK sixth-order algorithm, ordinary differential equations (ODE), robot arm problem.
1. Introduction
Extensive research work is still being carried out on a variety of aspects in the field of robot control, especially the dynamics of robotic motion and its governing equations. The dynamics of the robot arm problem was initially discussed by Taha [3]. Research in this area is still active and its applications are enormous, owing to its accuracy in determining approximate solutions and its flexibility. Many investigations [4-8] have analysed various aspects of linear and non-linear systems. Most Initial Value Problems (IVPs) are solved through Runge-Kutta (RK) methods, which in turn are applied to compute numerical solutions for a variety of problems modelled as systems of differential equations, as discussed by Alexander and Coyle [9], Evans [10], Hung [11], and Shampine and Watts [12], [20]. Shampine and Watts [12], [23], [26] developed mathematical codes for the Runge-Kutta fourth-order method. Nanayakkara et al. [25] proposed a method for the identification of complex non-linear dynamics of a multi-link robot manipulator using Runge-Kutta-Gill neural networks (RKGNNs) in the absence of input torque information. A Runge-Kutta formula of fifth order was developed by Butcher [13-15]. The application of non-linear differential-algebraic control systems to constrained robot systems was discussed by Krishnan and McClamroch [7]. Asymptotic observer design for constrained robot systems was analysed by Huang and Tseng [22]. Oucheriah [4] discussed robust tracking and model following of uncertain dynamic delay systems by memoryless linear controllers. Lim and Seraji [5] presented the configuration control of a mobile dexterous robot. Polycarpou and Ioannou [6] discussed a robust adaptive non-linear control design. Zhihua Qu [8] analysed robust control of a class of non-linear uncertain systems; because of their non-linear and coupled characteristics, the design of a robot control system is complex. Using a fourth-order RK method based on the Heronian mean (RKHeM), an attempt has been made to study the parameters concerning the control of a robot arm model along with the Single Term Walsh Series (STWS) technique [24]. Hung [27] discussed the dissipativity of Runge-Kutta methods for dynamical systems with delays. Yang et al. [29] addressed the placement of an open-loop robotic manipulator in a working environment, characterized by defining the position and orientation of the manipulator's base with respect to a fixed reference frame. Jong-Seok Rho et al. [30] presented a disk-type travelling-wave B14 rotary ultrasonic motor (RUSM).
Also, they proposed an analysis and design methodology for the B14 RUSM using a numerical method (3-D FEM) combined with an analytic method, taking the contact mechanism into consideration in linear operation. Mohamed Bakari et al. [31] designed a two-arm mobile delivery platform for application within nuclear decommissioning tasks. They examined the modelling and development of a real-time control method using Proportional-Integral-Derivative (PID) and Proportional-Integral-Plus (PIP) control algorithms in the host computer, with National Instruments functions and tools, to control the manipulators and obtain feedback through wireless communication. The dynamics of a robot can be described by a set of coupled non-linear equations in the form of gravitational torques, Coriolis and centrifugal forces. The significance of these forces depends on the physical parameters of the robot, the load it carries and the speed at which the robot operates. If accuracy is required, compensation for these parameter variations and disturbances becomes much more serious, and the design of the control system becomes much more complex. The theory of variable structure systems (VSS) [28] has been developed and applied to a wide variety of applications in the control process; essentially, it is a system with discontinuous feedback control. Operating such a system in sliding mode makes it insensitive to parameter variations and disturbances.
The rest of the article is organized as follows. Section 2 provides the formulation of the robot arm problem, and its subsection introduces the basics of the robot arm model with variable structure control and controller design. Section 3 presents the RK sixth-order algorithm in detail, and section 4 discusses the stability analysis of the RK sixth-order algorithm. Finally, discussion and conclusions are given in section 5.
The values of the robot parameters used are given below. In the case of the set-point regulation problem, the state vectors are represented as
2. Formulation of the problem
2.1. Robot arm model and essentials of variable structure
It is well known that non-linearity and coupled characteristics are involved in designing a robot control system and its dynamic behaviour. A set of coupled non-linear second-order differential equations in the form of gravitational torques, Coriolis and centrifugal forces represents the dynamics of the robot. The importance of these three forces depends on two physical parameters of the robot, namely the load it carries and the speed at which it operates. The design of the control system becomes more complex when the end user needs more accuracy, based on the variations of the parameters mentioned above. A detailed version of a robot's structure with a proper explanation is given in [21]. Keeping in view the objective of solving the robot dynamic equations in real-time computation, an efficient numerical technique is required. Taha [3] discussed the dynamics of the robot arm problem, which can be represented in the following form.
Here the state variables are the angles at joints 1 and 2 respectively, and the remaining coefficients are constants. Hence, equation (2) may be written in state-space representation as:
(3)
(1)
(4) Here the robot is simply a double inverted pendulum, and the Lagrangian approach is used to develop the equations. It is observed that, by selecting suitable parameters, the non-linear equations (4) of the two-link robot arm model may be reduced to the following system of linear equations [3]:
Here the terms denote, respectively, the coupled inertia matrix, the matrix of Coriolis and centrifugal forces, the gravity matrix, and the input torques applied at the various joints. For a robot with two degrees of freedom, considering lumped equivalent massless links (that is, point loads, with the mass concentrated at the end of each link), the dynamics are represented by
(2)
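The typeset equation did not survive extraction; as a reconstruction in the standard notation for rigid robot-arm models (not necessarily the authors' original symbols), dynamics of this kind take the form:

```latex
% Standard rigid-robot dynamics (reconstructed notation):
%   D(q)         coupled inertia matrix
%   h(q,\dot q)  Coriolis and centrifugal terms
%   g(q)         gravity terms
%   T            input torques applied at the joints
T = D(q)\,\ddot{q} + h(q,\dot{q}) + g(q)
```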
(5) where the values of the parameters concerning joint 1 are given by:
and the values of the parameters concerning joint 2 are given by:
For equation (7), the form of the RK sixth-order algorithm is stated as follows. Choosing both coefficients to be constant, it is not possible to find the complementary functions of equation (5), because the nature of the roots of the auxiliary equations (A.Es) of (5) is unpredictable. For this reason, and for simplicity, we take the values given below. The initial conditions considered are
The corresponding exact solution is given by:
(8) The corresponding RK sixth-order array representing equation (8) takes the following form:
(9) (6) The exact solutions (equation (6)) are similar to those presented in [2].
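The comparison between exact and discrete RK solutions described in this section can be sketched in a few lines. The sketch below is illustrative only: the paper's actual parameter values and sixth-order tableau are not reproduced in this text, so it uses a hypothetical linear joint model x'' = -x (exact solution cos t) and the classical fourth-order step.

```python
import math

def rk4_step(f, t, y, h):
    # One classical fourth-order Runge-Kutta step for the system y' = f(t, y).
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Hypothetical linear joint model x'' = -x, written as a first-order system;
# with x(0) = 1, x'(0) = 0 the exact solution is x(t) = cos(t).
f = lambda t, y: [y[1], -y[0]]

t, y, h = 0.0, [1.0, 0.0], 0.05
for _ in range(20):          # integrate from t = 0 to t = 1
    y = rk4_step(f, t, y, h)
    t += h

abs_error = abs(y[0] - math.cos(t))
print(abs_error)  # small; the step is fourth-order accurate
```

Halving h should reduce the error by roughly a factor of 16, the signature of a fourth-order method; a sixth-order step shrinks it faster still, which is the paper's point.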
3. Outline of the RK Sixth-Order Algorithm
The general s-stage RK method for solving an initial value problem (7) with the initial condition
is defined by

where

with c and b s-dimensional vectors and A an s-by-s matrix. The RK sixth-order array is of the form
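A general s-stage explicit RK step, as defined above, can be written directly from the Butcher tableau (A, b, c). Since the sixth-order coefficients of equation (9) are not reproduced in this text, the sketch below plugs in the classical fourth-order tableau purely as a stand-in:

```python
def rk_step(A, b, c, f, t, y, h):
    # One explicit s-stage Runge-Kutta step for the scalar IVP y' = f(t, y),
    # defined by a Butcher tableau: strictly lower-triangular A, weights b, nodes c.
    s = len(b)
    k = []
    for i in range(s):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(b[i] * k[i] for i in range(s))

# Classical fourth-order tableau, used only as a stand-in for the paper's
# sixth-order array.
A = [[0, 0, 0, 0],
     [1/2, 0, 0, 0],
     [0, 1/2, 0, 0],
     [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0, 1/2, 1/2, 1]

# Integrate y' = y, y(0) = 1, up to t = 1; the exact value is e = 2.71828...
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk_step(A, b, c, lambda t, y: y, t, y, h)
    t += h
print(y)  # close to 2.71828
```

Swapping in any other explicit tableau, including a sixth-order one, changes only the arrays A, b and c, not the stepping code.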
4. Stability Analysis
It is important to determine the upper limit of the step size h in order to have a stable numerical solution of the given ordinary differential equation with an IVP. Keeping this in view, we consider the test equation y' = λy, where λ is a complex constant; it has been used to determine the stability region of the RK sixth-order method.
Applying the method to the test equation and taking z = λh, we have:
(10)
Substituting these expressions into equation (9), one obtains
(11) From equation (11), the stability polynomial becomes
(12) In a similar manner, the stability polynomial for the test equation y' = λy (λ a complex constant) using the RK-Butcher method has been obtained as
(13) At this juncture, it is pertinent to point out that Murugesan et al. [1], Murugesh and Murugesan [32], Park et al. [33], Park et al. [34] and Sekar et al. [2] obtained an incorrect stability polynomial for the same test equation by adopting the RK-Butcher method, given by
Murugesan et al. [1], Murugesh and Murugesan [32], Park et al. [33], Park et al. [34] and Sekar et al. [2] made a wrong comparative study of the stability regions of the RKAM and RK-Butcher methods using the uncorrected version of the stability polynomial (equation (14)) obtained by them. Further, the stability region of the RKAM method is also wrongly illustrated; for more detail, see Figure 1 as presented by Murugesan et al. [1], Murugesh and Murugesan [32], Park et al. [33], Park et al. [34] and Sekar et al. [2]. They also made a critical mistake in determining the range for the real part of z in the cases of the RKAM and RK-Butcher methods: the wrong range for the real part of z in the RK fourth-order (RKAM) method is -3.463 < Re(z) < 0.0, and -2.780 < Re(z) < 0.0 in the RK-Butcher method. Similar severe mistakes have been detected in the paper authored by Sekar et al. [2]. In view of this, we present the corrected stability regions of the RKAM and RK-Butcher methods, shown in Figures 2 and 3.
(14) The stability polynomial for the same test equation using the RKAM method is given by
(15)
Fig. 2. Corrected stability region for the RK fourth-order method (RKAM): horizontal and vertical axes represent the real part of z and the imaginary part of z.
Fig. 1. Uncorrected stability region for the RKAM and RK-Butcher algorithms drawn by Murugesan et al. [1]; horizontal and vertical axes represent the real part of z and the imaginary part of z.
Fig. 3. Corrected stability region for the RK-Butcher fifth-order method: horizontal and vertical axes represent the real part of z and the imaginary part of z.
In these stability regions, the range for the real part of z in the RKAM method is -2.780 < Re(z) < 0.0, and -3.463 < Re(z) < 0.0 in the RK-Butcher algorithm.
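The quoted real-axis range for the fourth-order method can be checked numerically. The classical fourth-order method has the standard stability function R(z) = 1 + z + z²/2 + z³/6 + z⁴/24; bisecting for the point on the negative real axis where |R(z)| = 1 recovers a left endpoint near -2.785, consistent with the corrected RKAM range stated above.

```python
def rk4_stability(z):
    # Stability function of the classical fourth-order Runge-Kutta method,
    # obtained by applying one step to y' = lambda*y with z = lambda*h:
    # R(z) = 1 + z + z^2/2 + z^3/6 + z^4/24.
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

# Bisect on the negative real axis for the left endpoint of the real
# stability interval, i.e. the point where |R(z)| = 1.
lo, hi = -3.0, -2.0          # |R(-3)| > 1 (unstable), |R(-2)| < 1 (stable)
for _ in range(60):
    mid = (lo + hi) / 2
    if abs(rk4_stability(mid)) > 1:
        lo = mid
    else:
        hi = mid
print(round(hi, 3))  # -2.785
```

The same bisection applied to the fifth- or sixth-order stability polynomial yields the corresponding interval once its coefficients are substituted.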
5. Discussion and Conclusion
Using equations (5) and (9), the discrete and exact solutions of the robot arm model problem have been computed for different time intervals and are depicted in Tables 1-4. The values are calculated for time t ranging from 0.0 to 1.0. The absolute error between the exact and discrete solutions for the RK fourth-order, RK fifth-order and RK sixth-order methods is calculated; for times t = 0.0, 0.25, 0.50, 0.75 and 1.0 the values are tabulated in Tables 1-4 respectively. It is pertinent to point out that the discrete solutions obtained for the robot arm model problem using the RK sixth-order algorithm are more accurate than those of the classical RKAM method and the RK fifth-order method. The numerical solutions computed by the RK sixth-order algorithm are very close to the exact solutions of the robot arm model problem, whereas the RKAM method gives rise to a considerable error. Hence the RK sixth-order algorithm is found to be more suitable for studying the robot arm model problem. It is of interest to mention that an effort has been made to obtain the true stability polynomial for the test equation considered in the present paper using the RK-Butcher method, together with the correct ranges for the real part of z in the cases of the RK-Butcher and RKAM algorithms. Further, the stability polynomial for the test equation using the RK sixth-order formula has been obtained.
ACKNOWLEDGMENTS
This research work was fully supported as a part of the Technical Quality Improvement Programme [TEQIP], sponsored by the Government of India, at the National Institute of Technology, Tiruchirappalli-620 015, Tamilnadu, India. Tel.: +91-0431-2501801/10; Fax: +91-0431-2500133. URL: http://www.nitt.edu.
AUTHORS
Ramasamy Ponalagusamy, Sukumar Senthilkumar* - Department of Mathematics, National Institute of Technology, Tiruchirappalli-620 015, Tamilnadu, India. E-mail: rpalagu@nitt.edu, ssenthilkumar1974@yahoo.co.in.
* Corresponding author
References
[1] Murugesan K., Sekar S., Murugesh V., Park J.Y., "Numerical Solution of an Industrial Robot Arm Control Problem using the RK-Butcher Algorithm", International Journal of Computer Applications in Technology, vol. 19, no. 2, 2004, pp. 132-138.
[2] Sekar S., Murugesh V., Murugesan K., "Numerical Strategies for the System of Second Order IVPs Using the RK-Butcher Algorithms", International Journal of Computer Science and Applications, vol. 1, no. 2, 2004, pp. 96-117.
[3] Taha Z., "Approach to Variable Structure Control of Industrial Robots". In: Warwick K., Pugh A. (eds.), Robot Control: Theory and Applications, Peter Peregrinus Ltd, North-Holland, 1988, pp. 53-59.
[4] Oucheriah S., "Robust Tracking and Model Following of Uncertain Dynamic Delay Systems by Memoryless Linear Controllers", IEEE Transactions on Automatic Control, vol. 44, no. 7, 1999, pp. 1473-1481.
[5] Lim D., Seraji H., "Configuration Control of a Mobile Dexterous Robot: Real Time Implementation and Experimentation", Journal of Robotics Research, vol. 16, no. 5, 1997, pp. 601-618.
[6] Polycarpou M.M., Ioannou P.A., "A Robust Adaptive Non-Linear Control Design", Automatica, vol. 32, no. 3, 1996, pp. 423-427.
[7] Krishnan H., Harris McClamroch N., "Tracking in Non-Linear Differential-Algebraic Control Systems with Applications to Constrained Robot Systems", Automatica, vol. 30, no. 12, 1994, pp. 1885-1897.
[8] Zhihua Qu, "Robot Control of a Class of Non-Linear Uncertain Systems", IEEE Transactions on Automatic Control, vol. 37, no. 9, 1992, pp. 1437-1442.
[9] Alexander R.K., Coyle J.J., "Runge-Kutta Methods for Differential-Algebraic Systems", SIAM Journal of Numerical Analysis, vol. 27, no. 3, 1990, pp. 736-752.
[10] Evans D.J., "A New 4th Order Runge-Kutta Method for Initial Value Problems with Error Control", International Journal of Computer Mathematics, vol. 39, issue 3-4, 1991, pp. 217-227.
[11] Hung C., "Dissipativity of Runge-Kutta Methods for Dynamical Systems with Delays", IMA Journal of Numerical Analysis, vol. 20, no. 1, 2000, pp. 153-166.
[12] Shampine L.F., Watts H.A., "The Art of Writing a Runge-Kutta Code. Part I", Mathematical Software, vol. 3, 1977, pp. 257-275.
[13] Butcher J.C., "On Runge-Kutta Processes of High Order", Journal of the Australian Mathematical Society, vol. 4, 1964, p. 179.
[14] Butcher J.C., The Numerical Analysis of Ordinary Differential Equations: Runge-Kutta and General Linear Methods, John Wiley & Sons, UK, 1987.
[15] Butcher J.C., "On Order Reduction for Runge-Kutta Methods Applied to Differential-Algebraic Systems and to Stiff Systems of ODEs", SIAM Journal of Numerical Analysis, vol. 27, 1990, pp. 447-456.
[16] Fehlberg E., "Low Order Classical Runge-Kutta Formulas with Step-size Control and Their Application to Some Heat Transfer Problems", NASA Technical Report 315, 1969; extract published in Computing, vol. 6, 1970, pp. 66-71.
[17] Forsythe G.E., Malcolm M.A., Moler C.D., Computer Methods for Mathematical Computations, Prentice-Hall, Englewood Cliffs, NJ, 1977, p. 135.
[18] Bader M., "A Comparative Study of New Truncation Error Estimates and Intrinsic Accuracies of Some Higher Order Runge-Kutta Algorithms", Computational Chemistry, vol. 11, 1987, pp. 121-124.
[19] Bader M., "A New Technique for the Early Detection of Stiffness in Coupled Differential Equations and Application to Standard Runge-Kutta Algorithms", Theoretical Chemistry Accounts, vol. 99, 1998, pp. 215-219.
[20] Shampine L.F., Gordon M.K., Computer Solutions of Ordinary Differential Equations, W.H. Freeman, San Francisco, CA, 1975, p. 23.
[21] Taha Z., Dynamics and Control of Robots, Ph.D. Thesis, University of Wales, 1987.
[22] Huang H.P., Tseng W.L., "Asymptotic Observer Design for Constrained Robot Systems", IEE Proceedings D: Control Theory and Applications, vol. 138, issue 3, 1991, pp. 211-216.
[23] Shampine L.F., Watts H.A., "Some Practical Runge-Kutta Formulas", Mathematics of Computation, vol. 46, no. 173, 1985, pp. 135-150.
[24] Paul Dhayabaran D., Henry Amirtharaj E.C., Murugesan K., Evans D.J., "Numerical Solution of a Robot Arm Model using STWS and RKHeM Techniques". In: Liu G.R., Tan V.B.C., Han X. (eds.), Computational Methods, 2006, pp. 1695-1699.
[25] Nanayakkara T., Watanabe K., Kiguchi K., Izumi K., "Controlling Multi-link Manipulators by Fuzzy Selection of Dynamic Models". In: 26th Annual Conference of the IEEE - IECON 2000, vol. 1, 2000, pp. 638-643.
[26] Shampine L.F., Watts H.A., "The Art of Writing a Runge-Kutta Code. Part II", Applied Mathematics and Computation, vol. 5, 1979, pp. 93-121.
[27] Hung C., "Dissipativity of Runge-Kutta Methods for Dynamical Systems with Delays", IMA Journal of Numerical Analysis, vol. 20, 2000, pp. 153-166.
[28] Young K.K.D., "Controller Design for a Manipulator Using Theory of Variable Structure Systems", IEEE Transactions on Systems, Man and Cybernetics, vol. SMC-8, no. 2, 1978, pp. 101-109.
[29] Yang J. (James), Yu W., Kim J., Abdel-Malek K., "On the Placement of Open-Loop Robotic Manipulators for Reachability", Mechanism and Machine Theory, vol. 44, issue 4, 2009, pp. 671-684.
[30] Rho J.-S., Oh K.-I., Kim H.-S., Jung H.-K., "Characteristic Analysis and Design of a B14 Rotary Ultrasonic Motor for a Robot Arm Taking the Contact Mechanism into Consideration", IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 54, 2007, pp. 715-728.
[31] Bakari M.J., Zied Kh.M., Seward D.W., "Development of a Multi-Arm Mobile Robot for Nuclear Decommissioning Tasks", International Journal of Advanced Robotic Systems, vol. 4, 2007, pp. 397-406.
[32] Murugesh V., Murugesan K., "RK-Butcher Algorithms for Singular System Based Electronic Circuit", International Journal of Computer Mathematics, vol. 86, no. 3, 2009, pp. 523-536.
[33] Park J.Y., Evans D.J., Murugesan K., Sekar S., Murugesh V., "Optimal Control of Singular Systems using the RK-Butcher Algorithm", International Journal of Computer Mathematics, vol. 81, no. 2, 2004, pp. 239-249.
[34] Park J.Y., Murugesan K., Evans D.J., Sekar S., Murugesh V., "Observer Design of Singular Systems (Transistor Circuits) using the RK-Butcher Algorithm", International Journal of Computer Mathematics, vol. 82, no. 1, 2005, pp. 111-123.
SPECIAL ISSUE SECTION
Contemporary Approach to Production Processes Management
Guest Editors: Andrzej Masłowski, Józef Matuszek
Editorial
Special issue section on Contemporary Approach to Production Processes Management
We would like to warmly welcome you and thank you for reaching for this issue of JAMRIS, the Journal of Automation, Mobile Robotics and Intelligent Systems. The chosen topic, Contemporary Approach to Production Processes Management, is not accidental. The conditions and requirements governing enterprises' functioning become more complicated, demanding, changing and challenging each day. Companies have to constantly search for and apply new ways and methods of managing their inner processes, and implement new manufacturing systems, in order to be competitive and able to exist on the market. This is inevitable, however time-consuming and cost-generating. The papers describe the issues of scheduling production tasks using methods of artificial intelligence, theories of neural nets, automation of processes, and control of the performed tasks. Among the mentioned issues there are also methods of modelling and simulating activities on the basis of virtualization of production systems, using the philosophy of the digital factory. It is worth mentioning that all the papers included in this issue were presented at the Intelligent Manufacturing Systems 2008 conference. We are indebted to the authors and reviewers for their efforts, outstanding contributions and assistance in the preparation of this special issue. We would like to express our sincere gratitude to the editors of JAMRIS for their consent to publish these papers, and to the reviewers for their efforts in reviewing and commenting on them. In the first paper, A. Wannagat and B. Vogel-Heuser focus on sensor failures and ways of managing them by increasing the flexibility and availability of self-adapting manufacturing systems. The proposal introduces a conceptual design of self-adapting system software to manage sensor failures in factory automation.
The approach reconfigures the arrangement of software modules in real time to preserve the required stability of production processes without interruption. Reconfiguration is decided by rules from a knowledge-base system. The paper discusses conventional, object-oriented and agent-based concepts, and focuses on the modelling of these concepts. For discussion purposes, a real industrial application, a continuous thermo-hydraulic press, is presented as an example. The second paper, by A. Vallejo, R. Morales-Menendez and H. Elizalde-Siller, covers intelligent control for next-generation High-Speed Machining (HSM) systems, which demand advanced features such as intelligent control under uncertainty. These require, in turn, an efficient administration and optimization of all resources in the system towards a previously identified objective. The work presents an optimization system based on a Markov Decision Process (MDP). The intelligent control system guides the actions of the operator in peripheral milling processes. Early results suggest that the MDP framework can cope with this application, which opens the way to several benefits. Future work will address the full integration of the developed optimization scheme within a commercial machining center. M.O. Ait El Menceur, P. Pudlo, J.-F. Debril, P. Gorce and F.-X. Lepoutre deal with the identification of alternative movement techniques. Few studies in the literature propose quantitative techniques for this purpose, one example being a biomechanical-index-based technique named JCV. This method finds its limits when dealing with three-dimensional complex movements. In the present study the authors propose a modification of this method so that it can be applied to complex movements. They consider a non-habitual end effector (the midpoint between the hips). They obtain other indices, 3BJCV, to which a non-supervised clustering technique is applied.
They have applied this method to the ingress movements of 37 young and elderly subjects with or without
prosthesis entering a minivan vehicle. The proposed method allows the identification of the two main classes of ingress movements observed by Ait El Menceur et al. B. Kilundu, P. Dehombreux, Ch. Letot and X. Chiementin present a procedure for the early detection of rolling-bearing damage on the basis of vibration measurements. First, an envelope analysis is performed on bandpass-filtered signals. For each frequency range, a feature indicator is defined as a sum of spectral lines. These features are passed through a principal component model to generate a single variable, which allows tracking changes in the bearing health. Thresholds and rules for early detection are learned by means of decision trees. Experimental results demonstrate that this procedure enables early detection of bearing defects. In the fifth paper, A. Pashkevich, A. Klimchik, D. Chablat and P. Wenger present a new stiffness modelling method for multi-chain parallel robotic manipulators with flexible links and compliant actuating joints. In contrast to other works, the method involves an FEA-based link stiffness evaluation and employs a new solution strategy for the kinetostatic equations, which allows computing the stiffness matrix for singular postures and taking into account the influence of external forces. The advantages of the developed technique are confirmed by application examples dealing with the stiffness analysis of a parallel manipulator of the Orthoglide family. M. Nentwig and P. Mercorelli deal with robust throttle valve control, which has been an attractive problem since throttle-by-wire systems were established in the mid-nineties. Control strategies often use a feed-forward controller based on an inverse model; however, mathematical model inversion implies a high order of differentiation of the state variables, resulting in noise effects. In general, neural networks are a very effective and popular modelling tool.
The inversion of a neural network makes it possible to use these networks in control problem schemes. This paper presents a control strategy based upon an inversion of a feed-forward trained local linear model tree. The local linear model tree is realized through a fuzzy neural network. Simulated results from real data measurements are presented, and two control loops are explicitly compared. In the next paper, A Hamrol and A. Kujawińska deal with the analysis of process stability with the use of process control charts. A new idea of pattern recognition and two original methods of data processing, called OTT and MW have been described. The software application CCAUS (Control Charts - Analysis Unnatural Symptoms) supporting process control charts analysis with OTT and MW has been presented as well. Also the paper contains the results of the verification of the proposed methods performed on the basis of data obtained from two machining operations. In the eight paper, J. Pomares, P. Gil, J.A. Corrales, G. J. García, S.T. Puente, F. Torres present a cooperative robot-robot approach to construct metallic structures is presented. In order to develop this task, a visual-force control system is proposed. The visual information is composed of an eye-in-hand camera, and a time of flight 3D camera. Both robots are equipped by a force sensor at the end-effector. In order to allow a human cooperate with both robots, an inertial motion capture system and an indoor localization system are employed. This multisensorial approach allows the robots to cooperatively construct the metallic structure in a flexible way and sharing the workspace with a human operator. J. Matuszek and J. Mleczko: as constant fight for the client led by fulfilling customers' demands by the shortening time of order's performance as well as delivery cycle etc. 
define the reality of companies' existence nowadays, the companies are forced to meet short deadlines with keeping the product price competiveness condition at the same time. That is hardly possible without a proper APS (Advanced Planning System) class advanced planning support system. Though expensive, it's being used in conditions of unit and small-batch production and this paper has been drawn on the basis of the research on overloads-of moving bottlenecks in the mentioned above conditions. It has been proved that due to vast amount of resources and tasks some especially small and medium-sized enterprises (SME) are not able to deal with such big data range and to optimize their production processes. Therefore, the author took on building a heuristic algorithm, which could find a good enough solution and based on TOC (Theory Of Constraints) assumptions and their verification he conducted some tests in real production systems. The mentioned above method found its application in the industrial scale, as extension of the ERP class system. M. López Campos and A. Crespo Márquez present a chronological tour for the most important models of maintenance management, describes them in a general way and classifies them according to their functioning under declarative models and under process oriented models. It distinguishes in addition the innovations proposed by every author and compares the elements that form every model, with some of the points that the norm ISO 9001:2000 mentions, as well as with other criteria considered suitable to the case. From this analysis are derived the results between which are distinguished some desirable characteristics for 44
Journal of Automation, Mobile Robotics & Intelligent Systems, VOLUME 3, N° 3, 2009
Editorial
a modern and efficient maintenance management model are derived. The application of these models to support industrial needs is also discussed, as well as their future challenges. S. Kłos and J. Patalas deal with a computer-based information system for enterprise integration, namely enterprise resource planning (ERP) systems, which support management processes. Today an ERP implementation is a strategic decision, since it influences the enterprise's development by providing all the necessary information from all areas of the enterprise's functioning; it should therefore evolve together with the enterprise. In the last paper, M. Gregor, Št. Medvecký, J. Matuszek, and A. Štefánik present the results of research and development of Digital Factory solutions in industry, covering the design of assembly systems, their processes, simulation models, ergonomic analyses, etc. The paper presents the solutions developed in the framework of co-operation with industrial partners such as Volkswagen Slovakia, Thyssen Krupp PSL, and Whirlpool. It contains results of research on 3D laser scanning and digitization of large-size objects in current production systems. The developed and validated methodology shows the procedure for applying 3D laser scanning to the digitization of production halls, machine tools, equipment, etc. This procedure was tested and validated in selected industrial companies. The paper presents the achieved benefits and future research goals as well.
Guest Editors: Andrzej Masłowski, Industrial Research Institute for Automation and Measurements - PIAP, Al. Jerozolimskie 202, 02-486 Warszawa, Poland. E-mail: amaslowski@piap.pl. Józef Matuszek, Faculty of Mechanical Engineering and Computer Science, University of Bielsko-Biała, Willowa 2, 43-309 Bielsko-Biała, Poland. E-mail: jmatuszek@ath.bielsko.pl
INCREASING FLEXIBILITY AND AVAILABILITY OF MANUFACTURING SYSTEMS - DYNAMIC RECONFIGURATION OF AUTOMATION SOFTWARE AT RUNTIME ON SENSOR FAULTS Andreas Wannagat, Birgit Vogel-Heuser
Abstract: This paper introduces a conceptual design of self-adapting control software to manage sensor failures in factory automation. The approach reconfigures the arrangement of software modules at runtime to preserve the required stability of production processes without interruptions. Reconfiguration decisions are derived from rules in a knowledge-base system. The paper discusses conventional, object-oriented, and agent-based concepts, and focuses on modelling them. For discussion purposes, a real industrial application - a continuous thermo-hydraulic press - is presented as an application example.
Keywords: agents, programmable logic controllers, industrial production systems, availability.
1. Introduction
Industrial production systems have very high requirements regarding their robustness against defects and failures: the production process must not be interrupted or even hampered. This paper describes an agent-based approach that reconfigures automation software at runtime to compensate for failed sensors and actuators. It uses physical dependencies between process values in a production environment to install virtual sensors, and it reconfigures the system input to an optimal representation of the current process state. In plant automation, software systems are used to control technical systems with sensors and actuators as input and output devices. Unlike office environments, the surroundings of such field devices are very rough; humidity and heat shorten the lifetime of sensors. Expensive and unwelcome plant downtimes are the consequence of such failures. In case of a physical defect of a sensor or an actuator, e.g. a drive, human intervention is not possible without expensive delays. In many cases, continuing the production process with some restrictions until the next scheduled service is preferable to interrupting it for unplanned repair work. In case of sensor faults, defective devices can be bridged by reconfiguring the affected control loops at runtime. Sensors and actuators serve as interfaces between technical processes and the control equipment. Some additional sensors may be installed for monitoring purposes. Any faulty sensor that is used in a control cycle will result in uncontrolled behaviour unless an alternative solution is used.
2. Required changes at runtime
Industrial automation can be classified into production technology and process technology, the latter further subclassified into batch processing and continuous flow processing [2]. The consequences of unplanned production stops differ between these categories. In production systems such interrupts are unwelcome but not critical, as the treated material can remain
Fig. 1. Model of a continuous thermo-hydraulic press.
stable during the interrupt. The treatment of material in process technology cannot be interrupted that easily, as the material continues its reaction during the interrupt. Continuous flow processes are the most challenging to interrupt: fast-flowing material requires real-time reactions from the automation system for every change. Therefore, a continuous flow technology example is used for discussion purposes, and the alternative reactions of a continuous thermo-hydraulic press to failed sensors are inspected. Fig. 1 gives an overview of this application. A press is composed of up to 80 frames. Each frame contains 5 separately controlled hydraulic cylinders equipped with pressure and distance transducers. The sensors are connected via a field bus system to programmable logic controllers (PLCs), which execute the control software. One PLC may control between 10 and 20 frames; the failure of one component results in a failure of the entire control chain. The press produces fibreboards from raw material such as wood fibres and glue [14]. The raw material is sandwiched between steel belts that are pressurized by the hydraulic cylinders. Large differences in pressure between two neighbouring frames are not allowed. Basically, there are three different ways for a control system to react to a sensor failure. First, the system may shut down the production process and stop until the defective sensor is replaced. However, interrupting a running system can be very expensive or even impossible. Continuous processes in particular cannot be aborted and may react with unpredictable behaviour if an actuator is driven by an uncontrolled value. Fig. 2 illustrates this behaviour at the thermo-hydraulic press: if the distance sensor of a hydraulic cylinder malfunctions, the control function may set the pressure too high, jamming the steel belt and the material in the press.
Fig. 2. Sketch of a continuous fibreboard press with an uncontrolled pressure value.
Fig. 3. Sketch of a continuous fibreboard press with two controlled units in a safe position.
Secondly, the jam can be avoided by forcing the controlled device into a stable state, e.g. a valve that is either completely opened or completely closed. Such a stable state
has to be predefined for each use case; it allows continuing production for a short time or shutting down the process safely (Fig. 3). The third way to react to sensor failures is a dynamic reconfiguration of the system at runtime using redundant devices or information. However, the backup solution is constrained by several requirements. Backup devices are expensive: a machine for pressing fibreboards has up to 200 sensors of the same kind, and a hardware backup for each sensor would double the cost of the sensor equipment, extend the necessary number of PLC inputs, and increase the wiring effort for the additional devices. A solution can be the use of virtual devices, i.e. model-based calculations of a measuring point. These calculations are based on analytical dependencies on neighbouring sensors. Software solutions are cheap, easy to duplicate, and require no additional cabling; the quality of the calculations depends on the quality of the model. The automation software should switch to backup devices at runtime. Dynamic reconfiguration at runtime requires more than having a number of suitable solutions; it also needs a decision as to which solution is an appropriate compensation for the existing failure. The automation system has to be reconfigured in real time: the material of a fibreboard moves at more than one meter per second, and the material could jam very quickly if a failure is not handled immediately. Therefore, the decision to replace a defective sensor by an alternative must be part of the control software.
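A virtual device of the kind described above can be as simple as a linear interpolation between the corresponding sensors of the two neighbouring frames. The following Python sketch illustrates the idea; the positions and readings are hypothetical values, not data from the press application.

```python
def virtual_distance(x, x_left, s_left, x_right, s_right):
    """Model-based substitute for a failed distance sensor:
    linear interpolation between the two neighbouring frames.

    x       -- position of the failed sensor along the press
    x_left  -- position of the left neighbour, s_left its reading
    x_right -- position of the right neighbour, s_right its reading
    """
    # Linear interpolation evaluated at the failed sensor's position
    return s_left + (s_right - s_left) * (x - x_left) / (x_right - x_left)

# Neighbouring frames read 10.0 mm and 12.0 mm; the failed sensor sits halfway
print(virtual_distance(1.5, 1.0, 10.0, 2.0, 12.0))  # -> 11.0
```

The same scheme generalizes to any analytical dependency between neighbouring measurement points; only the interpolation function changes.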
3. Predefined alternatives
Traditional imperative programming languages offer different strategies to combine virtual sensor devices, replacement techniques, and quality checks. Established design principles are based on modular approaches: functional coherences are encapsulated in modules. The programming languages used on PLCs do not contain constructs to assign replacement functions at runtime; sensor inputs are read directly from input variables. A solution can only be realized by hard-coded, nested "if-then-else" clauses. The decision has to be re-evaluated each time an actual sensor value is needed. Such decisions have to cover every possible failure and link to a corresponding handling strategy. In case of a sensor failure, a virtual value is calculated using a predefined calculation rule. To handle combinations of sensor failures, every possible sequence has to be considered and implemented. In these classical approaches, each element and the relations between elements are predefined and described in a static way. Furthermore, a high dependency between these elements arises from the low level of abstraction at which they are described (e.g. function calls) [17]. Object orientation provides mechanisms (late binding) that allow the creation of new structures at runtime [9]. However, the object-oriented approach uses the same low abstraction level, and the first available object-oriented IEC 61131-3 platform does not support this concept of late binding [4]. To react to changes of the system structure at runtime, it would be necessary to handle all possible changes that can occur already at design time and to define the appropriate behaviour. This is particularly difficult when the behaviour of the system is subject to real-time requirements. It is feasible for systems with a limited number of elements, dependencies, and behaviour variations. For systems consisting of many elements affected by many factors, however, the overall behaviour grows with the number of possible dependencies between the elements, i.e. much faster than the number of elements [9]. The attempt to describe all possible states of the system a priori leads to an extraordinary software design effort. Agent-based approaches promise truly dynamic reconfiguration at runtime, as decisions can be concluded from rules.
4. An agent approach for rule-based decisions
A well-suited paradigm for developing decentralized, complex, and dynamic software systems is agent-oriented software engineering. In agent-oriented software development, an agent is defined as an encapsulated software unit with a defined goal. An agent autonomously fulfils its goal and continuously interacts with its environment and other agents [16]. Unlike in a static approach, in an agent-oriented approach the structure and its behaviour do not have to be fully specified at design time. The behaviour is generated dynamically at runtime according to the current situation and within defined variations. Strategies for faulty sensors, for example, are determined by a set of rules instead of predefining a calculation as a replacement for each faulty sensor. The agent retrieves the best alternative from a set of possible references at runtime. Decisions concerning the best reaction to the current situation are thus moved to runtime, which reduces complexity at design time [17]. The lack of suitable methods for the design of agents in industrial applications is currently being addressed by several working groups. The national project AgentAut [7] as well as the European projects Pabadis [6] and PABADIS'PROMISE [10] work on an integrated method for distributed control systems, but focus only on the integration of the PPC/MES and control levels. The European projects SOCRADES [12] and RI-MACS [11] use agents to organize the coordination of communication networks between distributed devices; agents are not applied for open- or closed-loop control purposes at the field control level. In all these projects, the flexibility of agents is primarily used to realize optimised planning of the production program at runtime, not to increase the dependability of the system with regard to real-time requirements.
However, neither methods nor tools adapted to designing agents for industrial real-time applications on a PLC basis exist [8]. An exception is the project AVE [1], which developed such a method to support the systematic design of an agent-based system for embedded real-time systems in terms of safety and real-time requirements. The difficulty in developing an agent system for real-time applications is to define the action space of an agent precisely enough to ensure the requirements regarding the availability of the system, the required performance, and the product quality. In [15] we presented a SysML [5] based approach that supports developers in defining the requirements and constraints of an automation system. This model is used as a template for the definition of the action space as the main part of the agent's knowledge base. A main part of the agents' duties, in the context of increasing the availability of an automation system, is to detect, analyse, and handle faults. Currently, the agents focus on instrument fault detection using analytical redundancy between different measurement points. For every real sensor that has functional dependencies on other sensors, we calculate additional virtual sensors using the values of neighbouring real sensors. The virtual sensors are used to validate the corresponding measurements and to detect faults (parity space approach). If more than one virtual or real sensor is available at one measurement point, it is possible to isolate a single fault. In principle, virtual sensor values will never be as precise as real sensor values. For fault diagnosis, this implies the risk of false alarms or, conversely, of reduced sensitivity to faults. When substituting a real sensor by a virtual one, this loss of precision is relevant not only for closed-loop control but for the whole control strategy of the process. Therefore, it is insufficient just to calculate virtual sensors and to substitute faulty real sensors; the consequences of such substitutions and the constraints of the automated system also have to be taken into consideration. An agent knowledge base contains two main components: constraints and knowledge. Constraints define the margin of the activity space that is used by the agents to take decisions. Knowledge comprises the possible alternatives at a certain point inside the activity space. Both aspects require an exact orientation of the agents in the action space.
Next, a knowledge base that allows detecting sensor failures, calculating a surrogate value, and estimating the resulting precision at runtime is introduced. One important requirement on the design of such a knowledge base is that it be easy to design and implement in a PLC environment. A very simple and powerful notation for this purpose, well known in the automation domain, is the directed graph [3]. In this graph, each node represents a measurement point. It is equipped with a value source that can be either a real or a virtual sensor. A quality value at each node describes the accuracy of the measured or calculated value; it ranges continuously from 0 to 1. The edges of the graph describe functional correlations (f, Fig. 4) between the measurement points and represent the analytical dependencies that are used to calculate virtual sensor values at runtime. The direction of the arrows indicates sensor values that are appropriate for a substitution (Fig. 4). The black dots are used if more than one sensor is required to calculate a virtual sensor. For example, sensor "S2_1" can be calculated using the function "f_s2" and the sensor values "P1_1" and "S1_1". The function "f_s2" expresses the dependency between the thickness of the incoming material "s1", the
pressure of the hydraulic cylinder "p1", and the thickness of the outgoing material, using a spring model. The spring constant (C, Fig. 4) represents the elasticity of the material and depends on the actual temperature, density, and humidity of the wood. The time delay (tv), which is caused by the moving material and the distance between the sensors, is accounted for by using recorded values of "S1_1" and "P1_1". The functions "f_s1" and "f_p1" are transformations of the same equation. The function "f_lin" is a linearization between two measurement points (x1,y1; x2,y2) whose value is evaluated at the position of the virtual sensor (x).
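To make the idea of one equation yielding several substitution functions concrete, assume a linear spring law p = C*(s1 - s2); the paper does not state the exact model, so this form and the constant below are assumptions for illustration only.

```python
C = 50.0  # spring constant: material elasticity (assumed value)

def f_s2(p1, s1, c=C):
    # outgoing thickness from incoming thickness and pressure
    return s1 - p1 / c

def f_p1(s1, s2, c=C):
    # pressure from the thickness difference
    return c * (s1 - s2)

def f_s1(p1, s2, c=C):
    # incoming thickness from pressure and outgoing thickness
    return s2 + p1 / c

# The three rearrangements of the one spring equation are consistent:
s1, p1 = 12.0, 100.0
s2 = f_s2(p1, s1)    # 12.0 - 100.0/50.0 = 10.0
print(f_p1(s1, s2))  # -> 100.0
print(f_s1(p1, s2))  # -> 12.0
```

Each rearrangement serves as the virtual-sensor function for the edge pointing at the corresponding measurement point.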
Fig. 4. Analytical dependencies of sensor values (P - pressure, S - distance) using the material property (C).
The substitution of a real sensor by a virtual one does not change the structure of the graph. It is possible to use virtual sensors as sources for other virtual sensors, and the probability of this rises with the number of failures and corresponding substitutions. The precision of virtual sensor values may be reduced by inaccurate models and by timing aspects, e.g. dead times or delays caused by the underlying measurement, the field bus, or the calculation of virtual sensor values in the PLC. Reduced precision lowers the quality of the virtual sensors compared with original measurements. This loss of precision is captured by a so-called quality factor (q, Fig. 4), which is bound to every arrow of the graph and takes values between 0 and 1. In addition, a quality value (Q, Fig. 4) at every node represents the precision of a sensor measurement. Real sensors get a quality value that is initially determined from vendor specifications. The quality value of a virtual sensor is the product of the quality factor and the quality value of its source; it indicates the coherence between the model and reality. A low quality value shows a high uncertainty about the calculated or measured value.
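The propagation rule above can be sketched as follows: the quality of a derived value is the product of the edge quality factor and the source quality, substitutions may chain, and cyclic calculations are cut off (their quality would tend towards zero). The small graph and the numbers are illustrative, not the actual sensor graph of Fig. 4.

```python
# edges[target] = list of (source, q): target can be calculated from
# source via a virtual-sensor function with edge quality factor q.
edges = {
    "S2_1": [("S1_1", 0.8), ("S2_2", 0.9)],
    "S2_2": [("S2_1", 0.9)],
}
# quality value of the real measurement at each node (0 = failed sensor)
real_q = {"S1_1": 0.95, "S2_1": 0.0, "S2_2": 0.9}

def quality(node, visited=frozenset()):
    """Best achievable quality at a node: either its real sensor, or the
    best virtual sensor q * quality(source). The visited set cuts off
    cyclic calculations, preventing self-confirming virtual values."""
    best = real_q[node]
    for source, q in edges.get(node, []):
        if source not in visited:
            best = max(best, q * quality(source, visited | {node}))
    return best

# Failed sensor S2_1: the neighbour route via S2_2 wins (0.9 * 0.9)
print(round(quality("S2_1"), 3))  # -> 0.81
```

Because real measurements keep their full quality value, the strategy automatically prefers them over chained virtual substitutes whenever they are available.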
While each replacement impairs the accuracy of calculated values, the quality values represent the estimation of uncertainty. The described strategy prefers real measurements, as they have the lowest divergence, i.e. the highest quality value. It cuts off cyclic calculations, as their quality value would tend towards zero. This prevents complete virtual process images that are coherent but not validated. In case of a sensor failure, the sensor's quality value tends towards zero. Following the arrows of the graph and using the corresponding functions, it is possible to calculate a virtual sensor value as well as its precision from the quality factor and the quality value of the source. Every arrow leaving a node represents a virtual sensor, which can be benchmarked at runtime by comparing quality values. The virtual sensor with the highest quality value replaces the defective sensor without changing the model structure, only the quality value; references to other sensors remain. If a virtual sensor is used as a backup of a sensor that is the source of other virtual sensors, the quality of these derived sensor values is recalculated. The agents that had assigned the defective sensor as a source then have to compare the available alternatives again. In doing so, the agent system optimises the use of the remaining real sensor values: local decisions of agents lead to an optimal quality of controller input values over the entire system, and the use of measurements is optimised dynamically at runtime. The correctness of values can be reasoned about by combining the assigned quality values of all possible virtual sensors. A discrepancy between a measured value and a virtual sensor with a low quality value does not necessarily indicate a sensor fault; only a very high discrepancy would indicate that either the measured sensor or the source of the virtual sensor is faulty.
In contrast, a sensor is detected as defective if all possible substitute values with high quality values diverge in the same way. The threshold that defines the agent's decision is derived from user-defined safety requirements for the specific part of the process or the technical system. Furthermore, the agents use the quality value of a virtual sensor to determine the effect on the availability of the plant operation and to compare it with the given requirements and constraints. The reliability of sensor values is essential for automated production systems: while the substitution of real sensors by calculated virtual sensors increases the readiness in case of partial faults, it risks the accuracy of the process flow. While the correctness of possible alternative strategies for static systems is determined during development time, an agent-based dynamic system decides this during runtime. Both have to decide whether the production process can be continued with replaced, calculated values or whether it has to be suspended. The loss of a sensor reconfigures the control behaviour automatically. The automated result may be a single parameter adjustment or an immediate shutdown of an entire plant; this is what characterizes dynamic reconfiguration at runtime. The effort to calculate virtual sensor values depends on the complexity of the related mathematical terms and on the number of dependent sensors. As long as virtual
sensors are used only for diagnostic purposes, the effort may be reduced by extending the calculation period of the virtual sensors.
5. Implementation and evaluation
The introduced concept was evaluated by applying it to the thermo-hydraulic press. A Matlab model of the process was used to simulate the real industrial application. The agent system was implemented on a classical PLC [13], programmed according to the IEC 61131-3 standard. Each agent is coded in IEC 61131-3 programming languages and is assigned to a separate function block with duties for control, messaging, and diagnosis. The agents are linked together by a common process image and the option to communicate via messages. The agents can either be bound to different tasks or PLCs, or all run in the same task on one PLC. All control agents are identical; each agent controls one of the 23 frames of the thermo-hydraulic press. This architecture mirrors the structure of the technical system, which is composed of the mechanical elements of the plant. Agent parameters are set according to the requirements of the local sub-processes; they depend on the location of the agent's controlled elements in the press. One supervisor agent stores this knowledge and, on request of the control agents, submits the settings and constraints. As part of all control agents, the common knowledge base represents relations between the sensors of one frame and its neighbouring frames.
Fig. 5. Distribution of sensors in a frame.
Due to their identical structure, all agents use a nearly congruent part of the knowledge base. Differences concern only the parameters of their calculation terms and result from the transformation of material properties: the processing of wood fibre and glue significantly influences the dependencies between the values measured at different sensors.
Fig. 6. Matrix representation of the knowledge base.
The graph is mapped to a matrix (Fig. 6) that is implemented on an IEC 61131-3 runtime system. The columns and rows are labelled with references to all nodes. The main diagonal is periodically updated with the values and qualities of the real measurements. All other intersections of rows and columns refer to tuples of quality factors and values; the column id is the source and the row id is the target, so replacements for the sensor in a given column can be identified through the row ids. These table cells represent the marked edges of the graph. Table cells that do not represent an edge are set to the tuple (0,0). Using this mapping, an agent can access the results of real measurements and all alternative calculations. This simple access allows easy diagnosis and straightforward reconfiguration. All table cells contain references to variables that are written by function blocks with measured or calculated values. The function blocks of real measurements pre-process the measured input values. The function blocks of virtual sensors are more complex: they have to compensate delays caused by the material transport in the press, which requires measured values to be recorded; the calculation of virtual sensor values is done using this recorded data. As long as a related sensor is operating, the difference between measured and calculated value is recorded. This information is used to quantify the quality value; additionally, it helps to improve the precision of a simple initial model. The application example uses three different types of virtual sensors. The first type calculates backup values by interpolation of neighbouring sensor values. The second uses physical relations of the process and calculates the resulting material thickness at a frame from the incoming material and the current pressure.
The third virtual sensor type calculates the pressure from the difference between incoming and outgoing material thickness. The function blocks deliver a sensor value and a quality value, independently of whether the source is measured or calculated. A status variable indicates the operation status of a sensor and is used for local diagnosis. Although pointers are not part of the IEC 61131-3 language specification, they are supported by some of the leading manufacturers of automation equipment, for example TwinCAT by Beckhoff or Step7 by Siemens. The use of pointers was helpful to build references to the measured and calculated values. It was possible to
build one knowledge base that could be used by all agents without copying the values. Additionally, pointers simplified the software structure, as reconfiguration was realized by changing pointers from references to real sensors to references to virtual sensors. The agents continually check the plausibility of the current values using their alternatives. In case of a detected fault, the current sensor is replaced by the best alternative sensor: the address of the replacement substitutes the address of the predecessor. The new current sensor is then available to all other functions, function blocks, and programs at the accustomed place, delivering new values for sensor readings and quality. Fault detection is done at the beginning of every PLC cycle and the replacement is done immediately, so that further accesses to this measurement point are redirected to the newly calculated value and no delay occurs. The exchange is realized by using the values of the virtual sensor instead of the real one in the corresponding node (measurement point); all relations of the structure remain valid. A decreasing quality value causes further decreasing quality values at the derived virtual sensors. All related agents include this new information in their decision for the selected virtual sensor and thereby dampen the negative effect of a sensor breakdown on the entire system.
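The pointer-based exchange can be mimicked in Python by rebinding a name in a shared lookup table: consumers keep reading the same key and transparently receive the virtual value after reconfiguration. The sensor name, stub functions, and values below are illustrative, not the actual IEC 61131-3 implementation.

```python
def real_s2():
    # stub for the physical sensor read; here it has failed
    raise RuntimeError("sensor defective")

def virtual_s2():
    # stub for the model-based backup value
    return 10.1

# Shared "process image": all consumers read through this table,
# analogous to the pointer that all function blocks dereference.
process_image = {"S2_1": real_s2}

def read(name):
    return process_image[name]()

# Reconfiguration at the start of the PLC cycle: swap the reference.
process_image["S2_1"] = virtual_s2

# Further access is redirected; no consumer code changes.
print(read("S2_1"))  # -> 10.1
```

As in the pointer solution on the PLC, the swap is a single reference assignment, so all readers see the replacement from the next access onward.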
6. Conclusions
Due to the power of modern controller hardware, it is possible to make even very complex software-based decisions at runtime. This flexibility can be used to detect and react to failures, in order to increase availability, or to improve efficiency by adapting to changing requirements. Compared with a classical approach, the self-adaptation of the application at runtime was flexible and easy to implement. Decision rules on the basis of the knowledge base facilitate the definition of constraints and dependent requirements. If all measures and their parameters (min., max.) have to be predefined before runtime, worst-case scenarios will be used and parameters chosen accordingly. The agent's knowledge base and replacement mechanism, in contrast, allow reactions based on changes in environmental conditions; the system thereby handles changed precision of sensors and/or actuators during runtime. The solution adapts the actual values with appropriate changes instead of using worst-case values in predefined replacements. The process operation time will be longer, under the prerequisite that the process operation is still beneficial with reduced precision, speed, etc. This leads to higher availability of the production line. The current work showed that it is possible to set up an agent-based decision system that adapts data sources for sensor values automatically, and that complex decisions are possible with rules defined in simple structures that can be implemented in an IEC 61131-3 environment. Future work will also evaluate the usability of this approach. The required effort and skills for modelling such agent-based solutions will be compared with conventional procedures. The usability of creating, understanding, and modifying agent-based and conventional solutions will be considered separately.
The self-adapting agent approach raises the prospect that modular systems may be composed easily. Agents promise better reuse because they negotiate their relations instead of being bound strongly and inflexibly. The benefit of this approach will be examined for different classes of applications, such as hybrid combinations of process and production systems.
AUTHORS
Andreas Wannagat* - Chair of Embedded Systems, University of Kassel, 34121 Kassel, Germany. E-mail: wannagat@uni-kassel.de.
Birgit Vogel-Heuser - Chair of Embedded Systems, University of Kassel, 34121 Kassel, Germany. E-mail: vogelheuser@uni-kassel.de.
* Corresponding author

References
[1] AVE, "Agents for flexible and reliable embedded systems", DFG Project VO 937/5-1, 2005-2007.
[2] Braun E., Technology in Context - Technology Assessment for Managers, Routledge: New York, 1998.
[3] Chartrand G., "Directed Graphs as Mathematical Models", in: Introductory Graph Theory, Dover: New York, 1985, pp. 16-19.
[4] CoDeSys 3, 3S-Smart Software Solutions GmbH, Kempten, www.3s-software.com.
[5] Hause M., "The SysML Modelling Language", Fifth European Systems Engineering Conference, Edinburgh, 2006.
[6] Klemm E., Lüder A., "Agentenbasierte Flexibilisierung der Produktion bei Verwendung von vorhandenen Steuerungssystemen", atp Automatisierungstechnische Praxis, vol. 45, 2003.
[7] Lüder A., Peschke J., Sanz R., "Design Patterns for Distributed Control Applications", atp international, vol. 3, 2006, pp. 32-40.
[8] Mubarak H., Göhner P., Wannagat A., Vogel-Heuser B., "Evaluation of agent oriented methodologies", atp international, vol. 1, 2007.
[9] Palsberg J., Schwartzbach M., "Object-Oriented Type Inference", ACM SIGPLAN Sixth Annual Conference on Object-Oriented Programming Systems, Languages and Applications, 1991.
[10] Peschke J., Lüder A., Kühnle H., "The PABADIS'PROMISE architecture - a new approach for flexible manufacturing systems", Industrial Electronics Society (EFTA 2005), Catania, 2005, pp. 491-496.
[11] RI-MACS, "Radically Innovative Mechatronics and Advanced Control Systems", European Project FP6 NMP-IST Joint Call 2, 2005-2008.
[12] Socrades, "Service-oriented cross-layer infrastructure for distributed smart embedded devices", Integrated Project, European Commission, Information Society Technologies, Framework Programme 6, 2006-2009.
[13] TwinCAT, Beckhoff Automation GmbH, Verl, www.beckhoff.de.
[14] Vogel-Heuser B., "Automation in wood and paper industry", in: Nof S.Y. (Ed.), Handbook of Automation, Springer-Verlag: New York, 2009, to appear.
[15] Wannagat A., Vogel-Heuser B., "Agent oriented software development for networked embedded systems with real time and dependability requirements in the domain of automation", IFAC World Congress, 2008.
[16] Wooldridge M.J., Jennings N.R., "Intelligent agents: Theory and practice", The Knowledge Engineering Review, vol. 10, issue 2, 1995, pp. 115-152.
[17] Jennings N.R., "On agent-based software engineering", Artificial Intelligence, vol. 117, 2000, pp. 277-296.
Articles
53
Journal of Automation, Mobile Robotics & Intelligent Systems, Volume 3, N° 3, 2009
INTELLIGENT CONTROL SYSTEM FOR HSM
Antonio Vallejo, Ruben Morales-Menendez, Hugo Elizalde-Siller
Abstract: Next-generation High-Speed Machining (HSM) systems demand advanced features such as intelligent control under uncertainty. This requires, in turn, an efficient administration and optimization of all of the system's resources towards a previously identified objective. This work presents an optimization system based on a Markov Decision Process (MDP), in which an intelligent control guides the actions of the operator in peripheral milling processes. Early results suggest that the MDP framework can cope with this application, yielding several benefits, which are discussed in detail. Future work will address the full integration of the developed optimization scheme within a commercial machining center.

Keywords: Markov Decision Process, optimization, High-Speed Machining, milling process, neural network.
1. Introduction
High-Speed Machining (HSM) requires high spindle speeds, feed rates, and acceleration and deceleration rates. At the same time, it is subject to stringent requirements such as low machining cost and time, as well as high precision and accuracy. Intelligent machines have the potential to meet the strong competitiveness demanded by new businesses. For example, intelligent CNC machines offer advanced features such as prediction of operations, reduction of setup time, detection of the cutting tool condition, and acquisition of knowledge and inference from incomplete information [1]. However, process planners still have great difficulty measuring on-line process data in machining processes, such as cutting tool life and surface roughness [5].

This paper presents the design and implementation of a novel Intelligent Control System for HSM, exhibiting several desirable features: prediction of key variables (surface roughness and cutting tool condition), definition and adaptation of optimal cutting conditions and operation policy, and an objective-function-based optimization. Special emphasis is given to the decision-making module of this system.

This paper is an extended version of that presented at the "IFAC Workshop on Intelligent Manufacturing Systems '08", and is organized as follows: Section 2 describes the state of the art in HSM, where key areas for improvement are identified. Section 3 introduces the industrial HS-1000 Kondia machining center and the data acquisition system where the experiments took place. Section 4 briefly describes the proposed optimization scheme. Section 5 illustrates the intelligent system, while Section 6 discusses results. Finally, Section 7 closes with some concluding remarks.
2. State of the art
Several optimization methods have been developed around process planning systems for machining processes. A procedure for tool selection in milling operations was proposed in [2]. First, several alternative cutting tools were considered by an iterative method. Then, the cutting data were refined by a set of technological constraints including tool life, surface finish, machine power, and the available spindle speeds and feed rates. Three user-defined optimization strategies were available (minimum cost, maximum production rate, or predefined tool life).

In [3], a Cutting Parameters Optimization System (CPOS), based on a two-stage methodology, was introduced. First, a tentative number of passes and depths of cut were determined through the so-called Volume Sectioning method. Then, the cutting speed and feed rate for each pass were optimized using Genetic Algorithms (GA). The cutting tools were selected from predefined libraries. Two optimization criteria were considered (minimum production time and minimum production cost), accounting for several technological constraints.

A second-order mathematical model was developed for Ra prediction as a function of the cutting speed, feed rate, depth of cut, and nose radius of the cutting tool in turning operations [10]. The minimization of Ra was taken as the objective function and was optimized with a GA. Based on previous work by [3], an algorithm for the selection of optimal cutting conditions was proposed in [8], allowing the calculation of the number of cuts required and the machining time.

[20] presented a new hybrid optimization technique based on the maximum production rate criterion and ten technological constraints. A general algorithm, called OPTIS, was used in conjunction with Artificial Neural Networks (ANN) in order to solve the complex optimization problem.
OPTIS selects the optimum cutting conditions (based on minimum machining costs) from commercial databases. The ANN ensured an efficient and fast selection of the optimum cutting conditions and processing of the available technological data. Compared to GA and Linear Programming (LP) approaches, this hybrid optimization technique improved the optimal cutting parameter selection by around 30.41% and 20%, respectively. Based on OPTIS, [21] proposed an adaptive neural controller for the on-line optimal control of a milling process. The milling state was estimated via cutting force measurements, and the feed rate was selected as the optimized variable.

A two-phase optimization strategy based on the Taguchi dynamic characteristic theory was proposed in [12]. Experimental results showed that the machining time could be reduced with low process variance and increased robustness of the CNC milling processes. [19] presented a Taguchi method coupled with Principal Component Analysis (PCA) for the optimization of high-speed CNC milling processes. Optimal process conditions were selected for producing the best dimensional precision and accuracy, surface roughness, and tool wear. The selected control factors were: milling type, cutting speed, feed per tooth, film material, tool material, number of teeth, rake angle, and helix angle. Based on the PCA technique, an index for the inter-correlated multiple performance features of a high-speed CNC milling process was computed, obtaining optimized settings.

A Genetically Optimized Neural Network System (GONNS) that selects the optimal cutting conditions for milling processes was proposed by [11]. A GA was used to maximize the rate of metal removal and minimize the surface roughness based on different ANN models. A mathematical model based on both material behavior and machine dynamics, able to determine the cutting forces in end-milling operations, was described in [9]. A GA optimized the cutting parameters, minimizing machining time and maximizing tool life for a constant rate of material removal.

[7] reviewed different optimization techniques in metal cutting processes, discussing a general framework for process parameter optimization. The optimization methods currently applied include the Taguchi method, Response Surface Methodology, mathematical iterative search algorithms, Genetic Algorithms, and Simulated Annealing.
Furthermore, typical objective functions include minimum production cost, maximum production rate, increased tool life, and maximum profit rate, as well as weighted combinations of these. Cutting constraints that should be considered in machining economics include tool life, cutting force, power, chip-tool interface temperature, and surface finish. Table 1 compares previous works on the optimization of machining processes. Almost all of them are usable only within narrow operating conditions; some do not consider process variables, while others require HSM handbooks that are not available. In this research, an intelligent control system, which includes a planning module, guides the operator in the decision-making process in order to minimize operating costs.
3. Experimental set-up
Experiments were carried out in an industrial HSM center, an HS-1000 Kondia, featuring a 25 kW drive motor, 3 axes, a 24,000 rpm maximum spindle speed, and a Siemens open Sinumerik 840D controller (shown in Figure 1). Several sensors were installed as follows (Figure 2):
1. Three accelerometers and one Acoustic Emission (AE) sensor were installed on a ring. The ring was fixed to the spindle of the machining center (Figure 3).
2. Two accelerometers were fixed in the "x"- and "y"-axis directions on the work-piece.
3. One AE sensor was fixed on the table.
4. A Kistler 3-component force dynamometer was fixed to the work-piece, in order to record the force signals.
The signals were fed to two data acquisition boards with sample rates of 40,000 and 1,000,000 samples per second, respectively (due to technical requirements of the AE sensors). A milling process was carried out on test pieces of size 100 x 170 x 25 mm, with different Aluminium alloys (5083-H111, 6082-T6, 2024-T3, 7022-T6, 7075-T6), several cutting tools (25° helix angle, 2-flute, Sandvik Coromant, of 8, 10, 12, 16, and 20 mm diameter), and several geometries (concave, convex, or straight path), as shown in Figure 4. Table 2 lists the variables and their descriptions.
Table 1. Comparison of previous works in optimization of machining processes (MMC = Minimum Machining Cost).

Reference | Machining process [Optimization method] | Objective function
Carpenter & Maropoulos, 2000 | Milling [Iterative method] | MMC, maximum production rate & tool life
Dereli et al., 2001 | End/face milling [Volume Sectioning & GA] | Minimum machining time & MMC
Suresh et al., 2002 | Turning [GA] | Minimum surface roughness
Mursec & Cus, 2003 | Turning, milling [Data from tool manufacturers] | MMC
Zuperl et al., 2004 | Turning [OPTIS algorithm & ANN] | Maximum production rate & MMC
Tzeng & Chen, 2005 | Milling [Taguchi dynamic characteristic theory] | High machining efficiency & geometrical accuracy
Yih-Fong, 2005 | Milling [Taguchi and PCA] | Dimensional precision & accuracy, surface roughness, tool wear
Zuperl et al., 2006 | Milling [Adaptive neural controller] | Regulation of the cutting force by adjusting the feed rate
Tansel et al., 2006 | Milling [GONNS] | Maximum metal removal rate & minimum surface roughness
Palanisamy et al., 2007 | End milling [Mathematical model & GA] | Minimum machining time & maximum tool life
Fig. 1. HS-1000 Kondia machining center.

Fig. 2. Data acquisition system. The monitoring system integrates: (1) supporting ring; (2a) Brüel & Kjær accelerometers (charge sensitivity: 98 ± 2 pC/g; resonant frequencies: 16 kHz and 42 kHz); (2b) PCB Piezotronics accelerometers (x-axis and y-axis; sensitivity: 10 mV/g; frequency range: 0.35-20,000 Hz); (3) charge amplifiers; (4) conditioning amplifier; (5) Kistler 3-component force dynamometer (force sensitivity: -7.5 pC/N in x- and y-axes and 3.5 pC/N in z-axis; natural frequency: 3-5 kHz); (6) Kistler charge amplifier; (7) National Instruments data acquisition card; (8) Kistler Piezotron acoustic emission sensor (sensitivity: 700 V/(m/s); frequency range: 50-400 kHz); (9) AE Piezotron coupler; (10) CompuScope card. Finally, (11) an HMI based on LabVIEW was developed for real-time control and monitoring operations.

Fig. 3. Accelerometers and Acoustic Emission (AE) sensors installed on a ring fixed to the spindle of the CNC machining center.

Fig. 4. Cutting tools (Sandvik Coromant 8, 10, 12, 14, 16 mm) and several geometries (concave, convex, and straight path).

4. Optimization scheme proposed
[14-16] proposed an intelligent control system, illustrated in Figure 5. This control system integrates four main modules: data acquisition, cutting tool monitoring, surface roughness monitoring, and planning. The first three are briefly described here, while the planning module is presented in detail in the following section.

Fig. 5. Intelligent Control System. The system considers four integrated modules: data acquisition, cutting tool, surface roughness, and planning.

4.1. Data Acquisition Module
Based on the aforementioned data acquisition system, standard filtering was applied to the process-variable signals. Some signals were pre-processed with the Mel Frequency Cepstrum Coefficients (MFCC), widely used in speech recognition systems [18], in order to find particular features.
Table 2. Definition of Variables.

Variable | Description
Vf | Feed rate
n | Spindle speed
ap | Axial depth of cut
ae | Radial depth of cut
Curv | Curvature of the geometry
Dtool | Cutting tool diameter
fz | Feed per tooth
Fy | y-axis work-piece cutting force
HB | Brinell hardness
Ra | Surface roughness
Ra_p | Predicted Ra
Ra_d | Desired Ra
VB | Flank wear in cutting tools
CC | Cutting conditions: n, Vf, ap, ae
PC | Cutting parameters: selection of the cutting tool, work-piece hardness, etc.
PG | Geometric parameters: geometry of the cutting tool and path of the cutting process
The procedure for computing the MFCC can be summarized as follows:

1. A small segment of the signal is selected and a Discrete Fourier Transform (DFT) is applied, in order to compute the magnitude of the energy spectrum on a logarithmic scale.
2. The real frequency scale (f) is mapped to the perceived frequency scale (f_mel), using the standard mel mapping:

   f_mel = 2595 log10(1 + f/700)

3. After a triangular band-pass filter is applied to smooth the scaled spectrum, the MFCC are computed using the inverse DFT:

   MFCC(c) = sum over j = 1..Np of log(y(j)) cos(pi c (j - 0.5)/Np)

where y(j) is the output of the j-th triangular band-pass filter, Np is the number of band-pass filters, c is the Cepstrum coefficient number (c = 1, 2, ..., Nc), and Nc is the total number of Cepstrum coefficients.

4.2. Cutting tool Module
Cutting tool wear is defined as a gradual loss of tool material at the contact zones with the work-piece. It has a direct impact on the final dimensions of the product, surface finishing, and surface integrity. Direct monitoring is not easily implemented due to non-standard measuring methods, so an indirect monitoring approach based on vibration measurements was developed [13]. Vibration signals were characterized by MFCC and associated with the cutting tool condition. The cutting tool states were defined as: new (0 < VB < 75 μm), half-new (75 μm < VB < 150 μm), half-worn (150 μm < VB < 250 μm), and worn (250 μm < VB), where VB is the flank wear according to the ISO 8688-2 norm. A Hidden Markov Model (HMM) framework was developed in order to identify VB based only on the MFCC of the vibration signals in the work-piece (y-axis).

4.3. Surface Roughness Module
Several factors affect the surface roughness (Ra), such as feed per tooth, cutting tool diameter, radial depth of cut, and work-piece hardness. A Response Surface Methodology (a statistical and mathematical technique) was applied for modeling Ra. Applying an ANOVA, four models were obtained for computing Ra:

   Ra = f(fz, Dtool, ae, HB, Curv)

Each model was developed for a single cutting tool condition (VB). The models were statistically validated by verifying that the residuals followed a normal distribution. It is also possible to predict Ra during the machining process by applying an Artificial Neural Network (ANN) model, built from the cutting parameters (fz, Dtool, ae, HB, Curv) and on-line measurements of process variables (the MFCC of Fy, the y-axis work-piece cutting force):

   Ra = ANN(fz, Dtool, ae, HB, Curv, Fy, VB)

An estimator based on multi-sensor data fusion provides an improved and robust estimation; for details see [17].
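The MFCC procedure described above can be sketched numerically. The following minimal sketch (a non-authoritative illustration, assuming NumPy; the filter-bank and coefficient counts are chosen arbitrarily, and the 2595/700 constants are the standard mel mapping) mirrors the DFT, mel-scaled triangular filtering, and inverse-DFT stages:

```python
import numpy as np

def mel(f):
    # Standard mel mapping (assumed form of the paper's frequency warping)
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mfcc(segment, fs, n_filters=20, n_coeffs=12):
    """Sketch of the MFCC steps from Section 4.1."""
    # 1. Magnitude spectrum of a short signal segment (DFT)
    spectrum = np.abs(np.fft.rfft(segment))
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    # 2. Triangular band-pass filters spaced evenly on the mel scale
    mel_points = np.linspace(mel(freqs[1]), mel(freqs[-1]), n_filters + 2)
    hz_points = 700.0 * (10.0 ** (mel_points / 2595.0) - 1.0)
    energies = np.empty(n_filters)
    for j in range(n_filters):
        lo, ctr, hi = hz_points[j], hz_points[j + 1], hz_points[j + 2]
        up = np.clip((freqs - lo) / (ctr - lo), 0.0, None)
        down = np.clip((hi - freqs) / (hi - ctr), 0.0, None)
        energies[j] = np.sum(spectrum * np.minimum(up, down).clip(min=0.0))
    # 3. Inverse DFT (cosine sum) of the log filter-bank outputs
    j = np.arange(1, n_filters + 1)
    return np.array([np.sum(np.log(energies + 1e-12) *
                            np.cos(np.pi * c * (j - 0.5) / n_filters))
                     for c in range(1, n_coeffs + 1)])

# Example: MFCC of a synthetic 1 kHz vibration-like signal
fs = 40_000                      # sample rate of the first board in Section 3
t = np.arange(2048) / fs
coeffs = mfcc(np.sin(2 * np.pi * 1000 * t), fs)
print(coeffs.shape)
```

In the paper these coefficients are computed for the vibration and force signals and then fed to the HMM-based wear classifier and the ANN roughness estimator.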
5. Planning module
A CNC machining center could have three main intelligent areas: cutting tool monitoring, operation & machine tool modeling, and adaptive control [6]. The planning module of the proposed system has two main tasks: the computation of the optimal cutting parameters that minimize the surface roughness (with two operating modes, pre-process and in-process), and the computation of the machining policy that minimizes the production cost.

5.1. Cutting parameters (off-line optimization)
One of the key tasks of the planning module is the computation of the optimal cutting parameters before the cutting operation (off-line optimization). Given a set of variables provided by the operator (CC, PC, PG, Ra_d), the surface roughness (Ra_p) is estimated, and the cutting parameters are optimized with a Genetic Algorithm (GA). Figure 6 shows the detailed procedure.

5.2. Cutting parameters (on-line optimization)
The second key task of the planning module is the computation of the optimal cutting parameters during the machining process (on-line optimization). Considering some process variables and the on-line estimate of VB, the actual surface roughness (Ra_p) is estimated. First, the relative difference between Ra_p and the desired surface roughness (Ra_d) is computed:

   DError = (Ra_p - Ra_d) / Ra_d
Based on this error and the previous feed per tooth (fz,n-1), the new feed per tooth fz,n is re-computed:

   fz,n = fz,n-1 (1 - DError)

Finally, Ra_p is re-estimated on-line, based on the current process variables and the new cutting parameters. This iterative scheme can yield improved strategies for the next work-piece. Figure 7 shows the detailed procedure.
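The iterative correction above can be sketched in a few lines. The linear `ra_predict` model below is a hypothetical stand-in for the paper's ANN-based estimator; the update rule itself is the one given in the text:

```python
# Sketch of the on-line correction loop: the predicted roughness Ra_p
# drives the feed-per-tooth update fz_n = fz_{n-1} * (1 - DError),
# with DError = (Ra_p - Ra_d) / Ra_d.

def ra_predict(fz):
    # Hypothetical surrogate: roughness assumed to grow linearly with fz
    return 8.0 * fz

def adapt_fz(fz, ra_desired, steps=5):
    history = []
    for _ in range(steps):
        d_error = (ra_predict(fz) - ra_desired) / ra_desired
        fz = fz * (1.0 - d_error)      # paper's update rule
        history.append(fz)
    return fz, history

fz_final, hist = adapt_fz(fz=0.12, ra_desired=0.8)
print(round(ra_predict(fz_final), 3))  # → 0.8
```

Under this surrogate the loop converges rapidly to the feed per tooth whose predicted roughness matches the desired value, which is the intent of the on-line mode.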
5.3. Machining Policy
The third key task of the planning module is the optimization of the machining policy, based on a minimization of the production costs. The policy, which generates guidelines for the operator, is limited to the available universe of variables of this problem (different Aluminium alloys, cutting tool diameters, and cutting tool wear conditions). A methodology based on the Markov Decision Process (MDP) [4] was implemented here. The key characteristic of a Markov model is a probability law in which the future behavior of the system is independent of the past behavior, given its current condition. An MDP is therefore a controlled stochastic process satisfying the Markov property, with a cost assigned to state transitions. A solution to an MDP is a policy mapping states to actions, which determines the state transitions that minimize the cost according to the performance criterion.

Fig. 6. Off-line optimization of cutting parameters.

Fig. 7. On-line optimization of the cutting parameters and machining policy.

A formal description of the MDP is as follows:

S = {s1, s2, s3, s4, s5} is a finite set of states of the system. The possible states of the cutting tool wear condition are: s1, new; s2, half-new; s3, half-worn; s4, worn; and s5, tool fracture.

A = {a1, a2, a3} is a finite set of actions that the operator can take. The possible actions are:
- a1, no action. This represents an aggressive condition, because the operator uses the cutting tool until it reaches the maximum VB.
- a2, change the cutting tool. This is a conservative condition, which implies changing the cutting tool as soon as the Cutting Tool Module predicts the worn condition.
- a3, stop the machine and inspect the cutting tool. This is an intermediate condition between a1 and a2.

P: S x A is the state transition probability distribution function. For each action and state of the system, there is a probability distribution over the states that can be reached after the action. These transition matrices were defined so that the tool fracture condition can be reached from any state of the cutting tool. The function P(s|s', a) is defined as the probability of reaching state s starting in state s' and given action a. As shown in Figure 8, the transition matrices were computed by considering the evolution of the cutting tool life.
Let {x_n; n = 0, 1, ...} be a Markov chain with Markov matrix P, let f be the cost function, and let α be a discount factor (α = 0.925 is recommended). The expected total discounted cost is then given by

   E[ sum over n = 0..inf of α^n f(x_n) ]

Additionally, the tool fracture state was included to simulate a random failure of the cutting tool, which can happen at any time during the machining process.

The expected discounted cumulative cost with respect to a state i, for a particular policy π and fixed discount factor α, is defined (for all i in S) by

   J_π(i) = E_π[ sum over n = 0..inf of α^n f(x_n, a_n) | x_0 = i ]

which is the total expected discounted cost under the probability law specified by the policy π. Thus, the discounted-cost optimization problem can be stated as follows [4]: find π* in u such that

   J_π*(i) = min over π in u of J_π(i), for all i in S.

Fig. 8. Four states of the cutting tool condition can be identified. The measured VB is a function of the removed metal volume v.

The optimal total-cost function is defined as u*(i) = min over π of J_π(i), which can be shown to satisfy the following optimality equations (for all i in S):

   u*(i) = min over a in A of { f(i, a) + α sum over j in S of P(j|i, a) u*(j) }

The instantaneous cost function f is defined for each action as:

   f_a1 = {44.15, 46.89, 49.28, 87.84, 320.52}
   f_a2 = {44.15, 46.89, 49.28, 300.37, 320.52}
   f_a3 = {49.20, 51.94, 54.32, 52.59, 320.52}

These cost functions were computed by considering: a) the decision cost of a right or wrong action (Decision Theory); b) operating costs, namely the energy cost and the operator labor; and c) the cost of the cutting tool. The cost function was defined for all the cutting tool wear conditions and for each action. For this demonstration, the cost functions were computed for the 6082-T6 Aluminium alloy, a cutting tool of 16 mm, and a machining time of 1.2 minutes.

R: S x A is a reward function for executing action a in state s, assigning a real number to each action in each state of the system. b denotes a vector that maps the state space into the action space, that is, an action function, which assigns an action to each state. These are evaluated by the MDP algorithm to compute the optimal policy. A stationary policy π is a policy that can be defined by an action function: the stationary policy defined by the function b takes action a(i) at time n if x_n = i, independent of previous states, actions, and time-steps. The set of all policies is denoted by u. The expected discounted cumulative cost is used to compute the optimal minimum cost; the total discounted cost problem is equivalent to a present-worth calculation as the basis of decision-making.

The optimal policy can be obtained from the total-cost function as follows (for all i in S):

   π(i) = argmin over a in A of { f(i, a) + α sum over j in S of P(j|i, a) u*(j) }
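The optimality equations above can be solved numerically. The sketch below implements value iteration for the five-state, three-action tool-wear MDP, using the discount factor and instantaneous costs given in the text; the transition matrices are illustrative placeholders (the paper derives its matrices from tool-life data), so the resulting policy will not necessarily match the published one:

```python
import numpy as np

ALPHA = 0.925
COSTS = np.array([
    [44.15, 46.89, 49.28, 87.84, 320.52],    # f_a1: no action
    [44.15, 46.89, 49.28, 300.37, 320.52],   # f_a2: change the tool
    [49.20, 51.94, 54.32, 52.59, 320.52],    # f_a3: stop and inspect
])
# ILLUSTRATIVE transition matrices P[a][i][j] = P(j | i, a); rows sum to 1.
P = np.array([
    [[0.7, 0.3, 0.0, 0.0, 0.0],              # a1: wear always progresses
     [0.0, 0.7, 0.3, 0.0, 0.0],
     [0.0, 0.0, 0.6, 0.35, 0.05],
     [0.0, 0.0, 0.0, 0.6, 0.4],
     [1.0, 0.0, 0.0, 0.0, 0.0]],             # fracture -> new tool
    [[0.7, 0.3, 0.0, 0.0, 0.0],              # a2: worn tool is replaced
     [0.0, 0.7, 0.3, 0.0, 0.0],
     [0.0, 0.0, 0.6, 0.35, 0.05],
     [0.9, 0.0, 0.0, 0.0, 0.1],
     [1.0, 0.0, 0.0, 0.0, 0.0]],
    [[0.8, 0.2, 0.0, 0.0, 0.0],              # a3: inspection slows wear
     [0.0, 0.8, 0.2, 0.0, 0.0],
     [0.0, 0.0, 0.7, 0.28, 0.02],
     [0.5, 0.0, 0.0, 0.45, 0.05],
     [1.0, 0.0, 0.0, 0.0, 0.0]],
])

def value_iteration(tol=1e-8):
    u = np.zeros(5)
    while True:
        # q[a, i] = f(i, a) + alpha * sum_j P(j|i, a) * u(j)
        q = COSTS + ALPHA * (P @ u)
        u_new = q.min(axis=0)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new, q.argmin(axis=0)   # optimal costs and policy
        u = u_new

u_star, policy = value_iteration()
print(u_star.round(2), policy)
```

With the paper's actual transition matrices, the same procedure would reproduce the reported u* and policy {a1, a2, a1, a3, a1}.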
6. Experimental results A further set of experiments was defined for different cutting conditions (see Table 3), in order to evaluate the system performance. The test pieces were designed to represent three typical geometries used in the molding industry (see Figures 9-11). The cutting conditions were defined to include central points, limit points, and external points into the domain. Also, the different workpiece materials were defined for these validation tests.
Fig. 9. Test piece number 01 with the three machining geometries: straight, concave and convex paths.
6.1. Off-line Optimization
The optimization step was validated with several tests:
1. An operator defines the cutting conditions, the cutting and geometric parameters, and the desired Ra value.
2. The planning module computes Ra_p under these conditions.
3. The GA computes new cutting conditions (fz) based on the difference between Ra_p and Ra_d (Figure 13).
4. If Ra_p > Ra_d, the GA re-computes new cutting conditions based on fz and Dtool (Figure 14).
5. If Ra_p is still greater than Ra_d, the GA re-computes the new cutting conditions based on fz and ae (Figure 15).

Fig. 10. Test piece number 02 with the three machining geometries: convex, concave and straight paths.

Fig. 11. Test piece number 03 with the three machining geometries: straight, concave and convex paths.

Fig. 12. Comparison between the measured and predicted Ra.

The GA was configured with 100 generations, a population size of 20, a crossover probability of 0.8, and a mutation probability of 0.2. The feed per tooth ranged between 0.025-0.13 mm/tooth, the radial depth of cut between 1-5 mm, and the cutting tool diameter between 8-20 mm.
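A minimal GA with this configuration might look as follows. This is a sketch, not the paper's implementation: the `ra_model` surrogate and the target roughness `RA_DESIRED` are hypothetical stand-ins for the response-surface/ANN models, while the population size, generation count, crossover/mutation probabilities, and variable bounds are the ones stated above:

```python
import random

def ra_model(fz, ae):
    # Hypothetical surrogate for Ra = f(fz, Dtool, ae, HB, Curv)
    return 0.2 + 4.0 * fz + 0.05 * ae

RA_DESIRED = 0.8                                   # example target Ra
BOUNDS = {"fz": (0.025, 0.13), "ae": (1.0, 5.0)}   # ranges from Section 6

def fitness(ind):
    return abs(ra_model(ind["fz"], ind["ae"]) - RA_DESIRED)

def clip(v, lo, hi):
    return max(lo, min(hi, v))

def ga(pop_size=20, generations=100, p_cross=0.8, p_mut=0.2, seed=1):
    rng = random.Random(seed)
    pop = [{k: rng.uniform(*b) for k, b in BOUNDS.items()}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        next_pop = pop[:2]                         # elitism
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:10], 2)         # select among the fittest
            child = dict(a)
            if rng.random() < p_cross:             # arithmetic crossover
                w = rng.random()
                child = {k: w * a[k] + (1 - w) * b[k] for k in BOUNDS}
            if rng.random() < p_mut:               # Gaussian mutation
                k = rng.choice(list(BOUNDS))
                lo, hi = BOUNDS[k]
                child[k] = clip(child[k] + rng.gauss(0, 0.1 * (hi - lo)), lo, hi)
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=fitness)

best = ga()
print(best, ra_model(best["fz"], best["ae"]))
```

The same loop, driven by the real Ra models and extended with Dtool as a third gene, corresponds to steps 3-5 of the validation procedure.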
Table 3. Different cutting conditions and geometric parameters defined for the new experiments (VB: N = New, HN = Half-New, HW = Half-Worn, W = Worn; geometry suffixes: L = Line, I = Island, B = Box).

Experiment | fz | ae | Dtool | HB | Curv | VB
P1-2024-L | 0.075 | 3 | 12 | 109 | 0 | N
P1-2024-I | 0.075 | 3 | 12 | 109 | 0.037 | N
P1-2024-B | 0.075 | 3 | 12 | 109 | -0.019 | N
P3-2024-L | 0.075 | 3 | 12 | 110 | 0 | HN
P3-2024-I | 0.075 | 3 | 12 | 110 | 0.083 | HN
P3-2024-B | 0.075 | 3 | 12 | 110 | -0.556 | HN
P4-2024-I | 0.075 | 3 | 12 | 110 | 0.025 | N
P5-2024-B | 0.075 | 3 | 12 | 109 | -0.025 | N
P1-5083-I | 0.047 | 2 | 8 | 71 | 0.077 | HN
P1-7075-L | 0.115 | 4.5 | 20 | 158 | 0 | HN
P1-6082-I | 0.04 | 4 | 12 | 89 | 0.038 | HW
P1-6082-B | 0.04 | 4 | 12 | 89 | -0.0185 | HW
P2-6082-L | 0.04 | 4 | 12 | 94 | 0 | HW
P2-6082-I | 0.04 | 4 | 12 | 94 | 0.0385 | HW
P2-7075-B | 0.08 | 5 | 16 | 151 | -0.0286 | W
P2-7075-L | 0.08 | 5 | 16 | 151 | 0 | W
P3-7075-B | 0.08 | 5 | 16 | 151 | -0.0286 | HW
P3-7075-I | 0.08 | 5 | 16 | 151 | 0.0222 | HW
P3-7075-L | 0.08 | 5 | 16 | 151 | 0 | HW
P1-CERTAL-B | 0.04 | 4 | 16 | 144 | -0.0185 | W
P9-2024-B | 0.05 | 2 | 20 | 110 | -0.0313 | W
P9-2024-I | 0.05 | 2 | 20 | 110 | 0.0208 | W
P3-5083-L | 0.05 | 2 | 16 | 67 | 0 | HW
P3-5083-I | 0.05 | 2 | 16 | 67 | 0.0357 | HW
P3-5083-B | 0.05 | 2 | 16 | 67 | -0.0192 | HW
6.2. Machining Policy The Markov Decision Process (MDP) was validated in the industrial HS-1000 Kondia machining center.
Fig. 13. Ra optimization based on fz. GA computes the optimum cutting parameters.
Fig. 14. Optimization based on fz and Dtool.
Based on this result, the recommendations are: for s1 (new cutting tool), action a1 should be applied; for s2, a2; for s3, a1; for s4, a3; and for s5, a1.

Given that the MDP is a stochastic model, the Markov system defined by the transition matrices and an initial distribution over the states (starting at s1) was simulated several times, in order to illustrate the variability of the results. Figure 16 shows two simulations, given the Pa1 (aggressive condition) and Pa2 (conservative condition) matrices. Figure 16 (top plot) shows a normal evolution of the VB of the cutting tool, where the operator takes no action and risks a tool fracture when the cutting tool reaches the maximum wear condition. Figure 16 (bottom plot) depicts the conservative condition, where the operator decides to change the cutting tool as soon as the worn condition is detected during the machining process.

Figure 17 illustrates the variability of the Markov system over 30 evaluations. The box-and-whisker plot compares the costs of the different actions with those of the optimal policy determined by the MDP. The boxes have lines at the lower quartile, median, and upper quartile values; the whiskers extend from each end of a box to show the extent of the rest of the data. The boxes are notched to represent a robust estimate of the uncertainty about the medians for box-to-box comparison. These results demonstrate that the optimal policy yields lower costs than the aggressive, intermediate, and conservative actions.
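The Monte-Carlo evaluation described above can be sketched as follows. The transition matrix and per-state costs are illustrative placeholders standing in for the paper's Pa1 and f_a1 (the paper derives its matrices from tool-life experiments); the loop structure, 100 machining cycles, and 30 repeated evaluations follow the text:

```python
import random

COST = [44.15, 46.89, 49.28, 87.84, 320.52]   # f_a1 values from Section 5.3
P_A1 = [                                       # ILLUSTRATIVE aggressive matrix
    [0.7, 0.3, 0.0, 0.0, 0.0],
    [0.0, 0.7, 0.3, 0.0, 0.0],
    [0.0, 0.0, 0.6, 0.35, 0.05],
    [0.0, 0.0, 0.0, 0.6, 0.4],
    [1.0, 0.0, 0.0, 0.0, 0.0],                 # fracture resets to a new tool
]

def simulate(P, cost, cycles=100, seed=0):
    """Accumulated cost of one run over `cycles` machining cycles."""
    rng = random.Random(seed)
    state, total = 0, 0.0                      # start with a new tool (s1)
    for _ in range(cycles):
        total += cost[state]
        state = rng.choices(range(5), weights=P[state])[0]
    return total

runs = [simulate(P_A1, COST, seed=s) for s in range(30)]
print(min(runs), max(runs))
```

Collecting such runs for each action and for the optimal policy, and plotting their distributions, yields the box-and-whisker comparison of Figure 17.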
Fig. 15. Optimization based on fz and ae.

The MDP can be solved using different algorithms, such as policy iteration and value iteration. The optimal total-cost function was computed from the defined MDP and the information presented in Section 5.3. The optimal total-cost function computed with the policy iteration algorithm is

   u* = {659.56, 704.04, 803.51, 868.92, 4273.6}

The optimal policy π is obtained by an iteration step that defines the actions of the operator that minimize the cost:

   π = {a1, a2, a1, a3, a1}
Fig. 16. Simulation of the Markov system with the two transition matrices. The top plot presents the aggressive condition (action a1), and the bottom plot the conservative condition (action a2).
It can also be observed that machining policies based only on a1, a2, or a3 have a greater accumulated cost than the one yielded by the MDP over the 100 machining cycles. In Figure 17 (upper plot), the average accumulated cost for the 100 machining cycles is USD $4973.79, $4755.87, and $4385.18 for action a1, action a3, and the optimal policy, respectively. The potential savings of the optimal policy are therefore USD $588.6 with respect to a1 and $370.7 with respect to a3.
AUTHORS
Antonio J. Vallejo* - Tecnológico de Monterrey, Campus Monterrey, Av. E. Garza Sada #2501 Sur, 64849 Monterrey, NL, México.
Ruben Morales-Menendez - Associate Director of Research, Tecnológico de Monterrey, Campus Monterrey, Av. E. Garza Sada #2501 Sur, 64849 Monterrey, NL, México.
Hugo Elizalde-Siller - Professor, Mechanical Engineering Department, ITESM-Campus Monterrey, Av. E. Garza Sada #2501 Sur, 64849 Monterrey, NL, México.
E-mails: avallejo@itesm.mx, rmm@itesm.mx, hugo.elizalde@itesm.mx.
* Corresponding author
Fig. 17. Comparison of costs for the different actions and the optimal policy, using box-and-whisker plots. Results with the Pa1 (upper plot) and Pa2 (lower plot) transition matrices.
7. Conclusions
In this work, a planning module for High-Speed Machining was designed and incorporated within an intelligent control system. This module is based on the Markov Decision Process (MDP) framework, yielding novel features in an optimization process. In particular, the MDP framework allows modeling decision-making under uncertainty, where the outcomes are only partly under the operator's control. Although early results are promising, the full integration of an MDP framework will require more research into the cross-relationships between key variables. This will be investigated in future work.
ACKNOWLEDGMENTS
The authors would like to thank J.R. Alique (Instituto de Automática Industrial, www.iai.csic.es, Spain) for the financial support and technical advice during the experimental program. The authors are also grateful for the important suggestions made on this work during the 9th IFAC Workshop on Intelligent Manufacturing Systems, which motivated this extended version.
References
[1] Balic J., "Intelligent CAD/CAM Systems for CNC Programming - An Overview", Advances in Production Eng. & Management, no. 1, 2006, pp. 13-22.
[2] Carpenter I., Maropoulos P., "Automatic Tool Selection for Milling Operations. Part 1: Cutting Data Generation". In: Proc. Instn. Mech. Engrs. 214, 2000, pp. 271-282.
[3] Dereli T., Filiz I., Baykasoglu A., "Optimizing Cutting Parameters in Process Planning of Prismatic Parts by using Genetic Algorithms", Int. J. Prod. Research, vol. 39, no. 15, 2001, pp. 3303-3328.
[4] Feldman R.M., Valdez-Flores C., Applied Probability and Stochastic Processes, 1st ed., Thomson Brooks/Cole, Pacific Grove, CA, 2004.
[5] Jawahir I., Wang X., "Development of Hybrid Predictive Models and Optimization Techniques for Machining Operations", Journal of Materials Processing Technology, no. 185, 2007, pp. 46-59.
[6] Monostori L., "Intelligent Machines". In: Proc. of 2nd Conf. on Mechanical Eng., Hungary, 2000, pp. 24-36.
[7] Mukherjee I., Ray P.K., "A Review of Optimization Techniques in Metal Cutting Processes", Computers and Industrial Engineering, no. 50, 2006, pp. 15-34.
[8] Mursec B., Cus F., "Integral Model of Selection of Optimal Cutting Conditions from Different Databases of Tool Makers", J. of Materials Processing Technology, no. 133, 2003, pp. 158-165.
[9] Palanisamy P., Rajendran I., Shanmugasundaram S., "Optimization of Machining Parameters Using Genetic Algorithm and Experimental Validation for End-Milling Operations", Int. J. Adv. Manuf. Technol., no. 32, 2007, pp. 644-655.
[10] Suresh P.V.S., Venkateswara P., Deshmukh S.G., "A Genetic Algorithm Approach for Optimization of Surface Roughness Prediction Model", Int. J. of Machine Tools and Manufacture, no. 42, 2002, pp. 675-680.
[11] Tansel I.N., Ozcelik B., Bao W.Y., Chen P., Rincon D., Yang S.Y., Yenilmez A., "Selection of Optimal Cutting Conditions by Using GONNS", Int. J. of Machine Tools and Manufacture, no. 46, 2006, pp. 26-35.
[12] Tzeng Y., Chen F., "Optimization of High Speed CNC Milling Process Using Two-Phase Parameter Design Strategy by the Taguchi Methods", JSME Int. J. Series C, vol. 48, no. 4, 2005, pp. 775-783.
[13] Vallejo A., Nolazco-Flores J., Morales-Menendez R., Sucar L., Rodríguez C., "Tool-Wear Monitoring Based on Continuous Hidden Markov Models". In: 10th Iberoamerican Congress on Pattern Recognition, 2005, pp. 880-890.
[14] Vallejo A., Morales-Menendez R., Alique J.R., "Designing a Cost-effective Supervisory Control System for Machining Processes". In: IFAC-CEA, Monterrey, México, IFAC-PapersOnLine.net, October 2007.
[15] Vallejo A., Morales-Menendez R., Alique J.R., "Intelligent Monitoring and Decision Control System for Peripheral Milling Process". In: IEEE Int. Conf. on Systems, Man, and Cybernetics, Singapore, 2008, pp. 1620-1625.
[16] Vallejo A., Morales-Menendez R., "Decision Control System for HSM". In: IFAC-IMS, Szczecin, Poland, IFAC-PapersOnLine.net, October 2008.
[17] Vallejo A., Morales-Menendez R., Elizalde-Siller H., "Surface Roughness Modeling in Peripheral Milling Processes", to appear in NAMRC 37, USA, May 2009.
[18] Wong E., Sridharan S., "Comparison of Linear Prediction Cepstrum Coefficients and Mel-Frequency Cepstrum Coefficients for Language Identification". In: Proc. of Int. Symp. on Intelligent Multimedia, Video and Speech Processing, pp. 95-98.
[19] Yih-Fong T., "A Hybrid Approach to Optimize Multiple Performance Characteristics of High-Speed Computerized Numerical Control Milling Tool Steels", Materials and Design, 2005, p. 110.
[20] Zuperl U., Cus F., Mursec B., Ploj T., "A Hybrid Analytical-Neural Network Approach to the Determination of Optimal Cutting Conditions", J. of Materials Processing Technology, no. 157-158, 2004, pp. 82-90.
[21] Zuperl U., Cus F., Kiker E., "Intelligent Adaptive Cutting Force Control in End-Milling", Technical Gazette, vol. 13, no. 1-2, 2006, pp. 15-22.
AN AUTOMATIC METHOD TO IDENTIFY HUMAN ALTERNATIVE MOVEMENTS: APPLICATION TO THE INGRESS MOVEMENT
Mohand Ouidir Ait El Menceur, Philippe Pudlo, Jean-François Debril, Philippe Gorce, François-Xavier Lepoutre
Abstract: The identification of alternative movement techniques is essential in many studies in biology, medicine and ergonomics. Few studies in the literature propose quantitative techniques for this purpose. Park et al. [19] proposed a technique based on a biomechanical index, the JCV. This method reaches its limits when dealing with complex three-dimensional movements. In the present study we propose a modification of this method so that it can be applied to complex movements. We consider a non-habitual end effector (the midpoint between the hips). We obtain new indices, the 3BJCVs, to which a non-supervised clustering technique is applied. We have applied our method to the ingress movements of 37 young and elderly subjects, with or without prostheses, entering a minivan. Our method allows the identification of the two main classes of ingress movements observed by Ait El Menceur et al. [4].
Keywords: joint contribution vector, automatic classification, automobile ingress movement strategies, young and elderly drivers, biomechanics.
1. Introduction
Alternative movement techniques contribute to movement understanding, ergonomic analysis and movement simulation. Many alternative movement techniques (motion strategies) are presented in the literature: Assiante [10] described the evolution of equilibrium strategies in children, Burgess-Limerick and Abernethy [11] and Zhang et al. [21] determined lifting strategies, and Alexandrov et al. [6], [7] studied human trunk forward-bending strategies. Some authors have identified alternative movement techniques in order to apply them in industrial applications, among them studies dealing with vehicles. For instance, Monnier et al. [18] identified three belt-fastening strategies, and Andreoni et al. [8] identified two seating strategies when studying drivers' posture. Some alternative movement techniques for ingress and egress are presented in the literature as well. Andreoni et al. defined three ingress movement strategies and one common egress strategy in 1997; Ait El Menceur et al. [4] present 5 automobile ingress strategies and 3 egress strategies observed on 4 vehicles, covering a wide range of vehicles present on the market, for a young and elderly population with or without hip and/or knee prostheses; Kawachi et al. [14] present two ingress strategies; Lempereur et al. [15] present another two ingress strategies; and Lempereur [17] presents three other ingress strategies.
Few studies propose quantitative methods to identify alternative movement techniques. For example, some objective indices have been defined in the literature to differentiate the squat and stoop strategies of weight lifting. Among these studies, Burgess-Limerick and Abernethy [11] defined a static index, based on the ratio between the knee flexion and the sum of the ankle, hip and lumbar flexions, to describe the initial position of the lifting motion. Zhang et al. [21] combined an inverse kinematics technique with a trial-and-error heuristic optimization procedure to determine an index, made from two parameters assigned to the two legs, that quantifies the contributions of the velocities of the back and the legs relative to the linear velocity of the shoulder. Park et al. [19] proposed an index based on biomechanical parameters, the JCV (Joint Contribution Vector), to quantify three-dimensional whole-body movements. They consider the contribution of the joint articulation angles through the estimation of the distance between one movement and another, similar movement in which a joint degree of freedom (DOF) is eliminated. A semi-automatic classification is then applied to these JCVs. Lempereur et al. [15] used this technique to identify two classes of ingress movement strategies. However, the approach as presented by Park et al. [19] has some limits, such as considering only goal-directed task movements; their method cannot quantify complex movements like running or jumping (Adams and Cerney [5]). Although Lempereur et al. [15] used this method to identify alternative ingress movement techniques, they considered just one body chain (the right lower limb), whereas the ingress movement is a three-dimensional, complex movement involving whole-body motion coordination. The ingress movement consists of setting the centre of gravity of the driver far enough towards the rear of the vehicle (Way et al. [20]).
That contributes to keeping it near the body. Some studies assimilate the body centre of gravity to the centre of gravity of the trunk (De Leva [13]). We define the ingress movement as the action of setting the pelvis on the seat while respecting many constraints (minimizing discomfort, avoiding collision with some vehicle parts, etc.). In the present study we propose a modification of the method of Park et al. [19] so that it can be applied to movements other than goal-directed tasks. We propose to consider the midpoint between the two hips (MPH) as the end effector. The position of this end effector can be determined from different body parts (head or feet). Our method consists of quantifying the contribution of each degree of freedom (DOF) of each body chain to the positioning of the end effector (MPH). In the end an index, the 3BJCV, is defined. We have applied our method to quantify and distinguish the ingress movement strategies of 37 young and elderly people, with or without prostheses, on a minivan. With this application we want to confirm the ingress strategies observed by Ait El Menceur et al. [4] on the minivan. This step of observation proves very useful for the comprehension of human behaviours; however, it requires great expertise and unquestionably suffers from the lack of numerical reference marks. The present study may fill this gap.
2. Method
2.1. Humanoid model
To reconstruct the ingress movement, we consider a 20-degrees-of-freedom, three-dimensional humanoid model (Ait El Menceur et al. [3]). The model is made up of three open kinematic chains representing the two lower limbs and the trunk with the head.
Fig. 1. Humanoid model.
The 20 DOF of the humanoid model are partitioned as follows: 3 DOF for each hip, 3 DOF for the joint linking the two bodies of the trunk (Lempereur et al. [16]), 3 DOF for the joint linking the head to the upper trunk, and 2 DOF for each knee and ankle. The humanoid's articulations are revolute joints. To apply our approach, the extremities of the lower limbs and the head are seen as the roots of the 3 open kinematic chains of our model (Fig. 1). We consider three bases with one end effector. The bases are the extremities of the two feet and the head. The end effector corresponds to the midpoint between the two hips (MPH). The humanoid model is represented in Fig. 1.
2.2. Movement reconstruction
Depending on the base from which we compute, the end effector's (MPH) trajectories are extracted from expression (1), where:
: the spatial position of the origin of the reference frame linked to each root (N=1 for the right foot, N=2 for the left foot and N=3 for the head). All references are expressed in the vehicle reference system.
: the three joint articulation angles giving the spatial orientation of each root in the vehicle reference system.
: the 3D position of each ankle, and of the midpoint between C7 and the manubrium, expressed in the corresponding local reference system linked to each root.
(with J=7 for the lower limbs and J=6 for the trunk + head chain) is the homogeneous matrix giving the spatial position of the MPH in the reference system having the same base as each root, but expressed either at the ankles or at the midpoint between C7 and the manubrium, according to the considered chain.
2.3. Three Bases Joint Contribution Vector (3BJCV)
Our method rests on the same principle as the method of Park et al. [19]. We consider that the ingress movement is achieved by occupying a set of postures during a defined time t, with 0 ≤ t ≤ tf (tf being the final time of the ingress movement). The objective is to set the midpoint between the two hips (MPH) inside the vehicle. Each person has one's own way of doing that. The different ways (motion strategies) are characterized by particular body motion configurations. The objective is to represent the characteristics of each movement quantitatively, then to classify the movements into broad groups. The movement characteristics are the individual contributions of each DOF to the final positioning of the MPH on the vehicle's seat. Each kinematic chain is represented by a set of joint articulation angles, with J=7 for the lower-limb chains and J=6 for the head + trunk chain.
For each chain the following algorithm is applied:
Compute the contribution of the motion of each degree of freedom during a movement by comparing it with a hypothetical "almost identical" movement in which the motion of that joint articulation angle is eliminated (2). The first DOF to be eliminated are the most distal ones.
Compare the two movements in the end effector's trajectory domain. The contribution of the motion of the joint articulation angles is defined in the task space as in (3)-(5), where the homogeneous matrix is the one from which the spatial position of the MPH (expressed in the vehicle reference system) is extracted.
where the two trajectories are the end effector's trajectories in the task space corresponding to the original and modified motions. These trajectories are computed with the movement reconstruction method (see Section 2.2).
Normalize the motion contributions so that they are represented on a proportional scale defined between [-100, 100] (6)-(8).
Build the 3 vectors (9)-(11), with J = 7 for the "lower limbs" chains and J = 6 for the "trunk + head" chain, and N = 1 for the right foot, N = 2 for the left foot and N = 3 for the head. Build the per-chain vectors (12)-(14), indexed by the right lower limb, the left lower limb and the trunk-and-head chain. Gather the three vectors in a single vector (15). This vector is used as a movement index for each subject.
2.4. Identification of movements
Our method receives the ingress movements as inputs. Each motion is represented in terms of its 3BJCV. A hierarchical clustering method using the furthest-neighbour aggregation distance is applied to the 3BJCV dataset to form clusters of 3BJCVs by similarity. Each cluster represents a distinct movement strategy. In a previous study (Ait El Menceur et al. [1]), we partitioned the ingress movement into three sub-phases. Only the second sub-phase is considered in the present study, as it is the sub-phase most characteristic of the ingress movement strategies (Ait El Menceur et al. [4]). This sub-phase is the ingress movement adaptation phase, which starts with the take-off of the right foot from the ground (prior to its positioning inside the vehicle) and ends with the take-off of the left foot from the ground (prior to its positioning inside the vehicle).
3. Experimentation
The experiments were conducted as part of the French HANDIMAN (RNTS 2004) project. This project aims at integrating the ingress/egress discomfort of elderly and/or disabled persons into the first stages of the conception of new vehicles for these populations. The project considers several ingress and egress trials of 41 test subjects on four vehicles representative of a large part of the vehicles on the market (Ait El Menceur et al. [2]). In the present study only the ingress trials of 37 subjects on one vehicle, a minivan, are considered. An optoelectronic motion capture system (Vicon® 612) with a sampling rate of 60 Hz and 8 CCD cameras is used to capture the movements. Fifty-three anatomical markers are placed on the body segments of the subjects to capture their movements during the acquisitions (Ait El Menceur et al. [2]). The subjects are asked to enter the vehicle in an ordinary manner. From these experiments we obtain the three-dimensional positions of the markers. These positions constitute our data.
3.1. Off-line data processing
Some off-line processing was performed on the data issued from the acquisitions, comprising data filtering, joint centre estimation, body segment length computation and joint articulation angle computation. Most of this processing is similar to that presented in Lempereur et al. [15].
3.2. Joint articulation angles computation
The joint articulation angles are computed for each chain. The recommendations of the ISB are followed in the definition of the body reference frames. To apply our approach, the angle computation starts from the most distal body (base) and ends at the end effector (the midpoint between the two hips). The joint angles express, in this case, the spatial orientation of the proximal bodies compared to their immediately distal bodies.
3.3. Movement correction
Due to experimental discrepancies, the computed angles are biased (Cappozzo et al. [12]). We have recently proposed a multi-objective optimization-based procedure to correct these problems (Ait El Menceur et al. [3]). In the end we obtain a movement database, allowing a movement reconstruction close to the measured one, to be used in our method.
3.4. Three bases joint contribution vector computation
Once the movement database is obtained, the 3BJCV algorithm is applied to the ingress movements of the 37 subjects. For each subject's movement, a 3BJCV characterizing the contribution of each DOF to the positioning of the MPH is computed.
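The per-subject computation can be sketched in a few lines. The sketch below is a minimal illustration, not the authors' code: the forward-kinematics routine stands in for the homogeneous-matrix reconstruction of Section 2.2 (here a fictitious planar chain), the contribution measure is an unsigned trajectory deviation (the sign convention of the original normalization equations is not reproduced), and all function names are ours.

```python
import numpy as np

def reconstruct_trajectory(joint_angles):
    """Placeholder forward kinematics: maps a (T, J) array of joint
    angles to a (T, 3) end-effector (MPH) trajectory.  In the paper this
    is the homogeneous-matrix reconstruction of Section 2.2; here it is
    a fictitious planar chain of unit-length links."""
    cum = np.cumsum(joint_angles, axis=1)
    x = np.cos(cum).sum(axis=1)
    y = np.sin(cum).sum(axis=1)
    return np.stack([x, y, np.zeros_like(x)], axis=1)

def chain_contributions(joint_angles):
    """Contribution of each DOF: deviation of the end-effector path when
    that DOF's motion is frozen at its initial value (the 'almost
    identical' movement of Section 2.3)."""
    ref = reconstruct_trajectory(joint_angles)
    contrib = np.empty(joint_angles.shape[1])
    for j in range(joint_angles.shape[1]):
        frozen = joint_angles.copy()
        frozen[:, j] = joint_angles[0, j]      # eliminate DOF j's motion
        dev = reconstruct_trajectory(frozen) - ref
        contrib[j] = np.linalg.norm(dev, axis=1).sum()
    return contrib

def three_bases_jcv(chains):
    """Concatenate per-chain contributions and scale so the largest
    entry has magnitude 100 (the [-100, 100] proportional scale)."""
    c = np.concatenate([chain_contributions(q) for q in chains])
    return 100.0 * c / np.abs(c).max()
```

With three chains of 7, 7 and 6 DOF, the result is a 20-component index vector, one 3BJCV per subject.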
4. Data analysis
Once the set of 3BJCVs is obtained, an aggregation method is used to identify the different ingress movement strategies. We have used the furthest-neighbour aggregation distance. The 37 3BJCVs and their proximity relationships are represented on a dendrogram (Fig. 2). The dendrogram contains 69 nodes. The aggregation distances are represented in Figure 3. The aggregation distance of node 69 is 25, and 24.5 for node 68, while it is 19.5 for node 67. The largest drop between two successive aggregation distances is observed between nodes 68 and 67 (5). This suggests cutting the dendrogram at the level of node 68. A visual inspection of the dendrogram and of the aggregation distances histogram also confirms the existence of two big classes of movements. The first class contains 31 subjects and the second class contains 6 subjects. In a former study (Ait El Menceur et al. [4]) we observed two big families of ingress movements: the one-foot ingress movement family and the two-foot ingress movement family. The subjects of the first class identified in the present study correspond to the subjects entering with the one-foot strategy, and those of the second class correspond to the two-foot ingress movement.
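The furthest-neighbour aggregation distance corresponds to complete-linkage hierarchical clustering. A sketch with SciPy (on simulated stand-in data, not the study's 3BJCVs) reproduces the cutting rule used above: cut the dendrogram where the drop between successive aggregation distances is largest.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Simulated stand-in for the 37 x 20 matrix of 3BJCVs: two synthetic,
# well-separated groups of 31 and 6 "subjects".
X = np.vstack([rng.normal(0.0, 1.0, (31, 20)),
               rng.normal(8.0, 1.0, (6, 20))])

Z = linkage(X, method="complete")       # furthest-neighbour aggregation
heights = Z[:, 2]                       # aggregation distance of each node

# The largest drop between successive merge heights suggests where to cut.
drops = np.diff(heights)
cut = (heights[np.argmax(drops)] + heights[np.argmax(drops) + 1]) / 2.0
labels = fcluster(Z, t=cut, criterion="distance")
print(np.bincount(labels)[1:])          # cluster sizes
```

On these synthetic data the rule recovers the two planted groups, mirroring the 31/6 split reported above.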
Fig. 3. Aggregation distances histogram.
Fig. 4. Ingress movement of the first class.
Fig. 2. Horizontal dendrogram.
Animations of the motions (at 0%, 25%, 50%, 75% and 100% of the motion) corresponding to the two motion clusters are provided in Figures 4 and 5. These postures will be used as phases to present the identified strategies.
The two classes of ingress movement identified in the present study are characterized by the particular body motion coordination that the subjects adopt. The subjects of the first big class take off their right feet from the ground to put them on the vehicle's floor. At 25% of their movements, the subjects of this class pass the vehicle's sill with various knee flexions and hip flexions and rotations. They curve their heads and trunks to prepare the next phase. At 50% of the movement they set their right feet on the vehicle's floor and adapt their bodies so as to drive them inside the vehicle. At 75% of the movement more than half of the body is already inside the vehicle, and the angles of knee flexion, hip flexion and rotation, and the curving of the trunk and head take important values. These angle accentuations are provoked by the body's ingress movement coordination, influenced mainly by the vehicle's geometry (Ait El Menceur et al. [4]). The subjects finalize their movements by taking off their left feet from the ground and starting the positioning on the seat.
Fig. 5. Ingress movement of the second class.
The subjects of the second big class make a first step at 25% of the movement so as to turn their backs to the vehicle, aiming to adopt their sitting position. At 50% of the movement, the subjects start to cross the vehicle's door with their pelvises. They curve their trunks and heads so as to watch the vehicle's cockpit and to adapt their movements. At 75% of the movement the subjects already sit on the seat, and a large part of their bodies is inside the vehicle. They finalize their movements by sitting completely on the seat, lifting their left feet from the ground and thus starting the positioning-on-the-seat phase.
5. Conclusion
We have proposed an adaptation of the joint contribution method, presented initially by Park et al. [19], so that it can be applied to complex movements. An experimental protocol and device were set up to record the ingress movements. Several off-line processing steps allowing the exploitation of the obtained data were performed. Unlike the method of Park et al. [19], which considers simple end effectors and is well adapted to goal-directed tasks, our method proposes to consider a non-habitual end effector (the MPH in our case), and therefore to take into account the contribution of all body parts, which allows its application to quantifying complex movements. To apply our method, the joint articulation angles are computed between the most distal and the immediately proximal joints. Geometric indices, the 3BJCVs, quantifying the contributions of each DOF to the final positioning of the MPH, are computed for each subject. An automatic, non-supervised classification method is applied to the 37 3BJCVs: hierarchical clustering using the furthest-neighbour aggregation distance. The number of clusters was determined by visual inspection of the obtained dendrogram. Two big motion clusters were identified. The first group contains 31 subjects whereas the second group contains 6 subjects. These two motion clusters are coherent with the two big families of ingress movement identified by Ait El Menceur et al. [4]. In the present study we showed the applicability of our method to the ingress movement. To generalize our method, we propose to test it on other movements.
ACKNOWLEDGMENTS
The presented research work has been supported by the International Campus on Safety and Intermodality in Transportation, the Nord-Pas-de-Calais Region, the European Community, the Regional Delegation for Research and Technology, the Ministry of Higher Education and Research, the French National Research Agency and the National Center for Scientific Research. The authors gratefully acknowledge the support of these institutions. The authors would like to thank their colleagues at SMPR of Lille, INRETS, and Renault, who participated in the HANDIMAN project.
AUTHORS
Mohand Ouidir Ait El Menceur, Philippe Pudlo*, Jean-François Debril, François-Xavier Lepoutre - LAMIH UMR CNRS 8350, Université de Valenciennes et du Hainaut Cambrésis, Le mont Houy, 59313 Valenciennes, Cedex 9, France. Tel: +33 (0)3 27 51 13 50; Fax: +33 (0)3 27 51 13 16; e-mail: philippe.pudlo@univ-valenciennes.fr.
Philippe Gorce - HANDIBIO-ESP EA 43-22, Université du Sud Toulon-Var, La Garde, France.
* Corresponding author
References
[1] Ait El Menceur M.-O., Pudlo Ph., Découfour N., Bassement M., Gillet C., Chateauroux E., Gorce Ph., Lepoutre F.-X., "An experimental protocol to study the car ingress/egress movement for elderly and pathological population". In: Proc. of the European Annual Conference on Human Decision-Making and Manual Control, Valenciennes, September 2006, ISBN 2-905725-87-7.
[2] Ait El Menceur M.-O., Pudlo Ph., Découfour N., Bassement M., Gillet C., Chateauroux E., Gorce Ph., Lepoutre F.-X., "Towards dynamic studying of the car ingress/egress movement for elderly and disabled population". In: Proc. of IEEE HUMAN'07, Human Machine Interaction Conference, Timimoun, Algeria, 12th-14th March 2007.
[3] Ait El Menceur M.-O., Pudlo Ph., Gorce Ph., Lepoutre F.-X., "An optimization procedure to reconstruct the automobile ingress movement". In: Proc. of the 5th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Funchal, Madeira, Portugal, 11th-15th May 2008, ISBN 978-989-8111-35-7.
[4] Ait El Menceur M.-O., Pudlo Ph., Gorce Ph., Thévenon A., Lepoutre F.-X., "Alternative movement identification in the automobile ingress and egress for young and elderly population with or without prostheses", International Journal of Industrial Ergonomics, vol. 38, 2008, pp. 1078-1087.
[5] Adams D.C., Cerney M.M., "Quantifying biomechanical motion using Procrustes motion analysis", Journal of Biomechanics, vol. 40, 2007, pp. 437-444.
[6] Alexandrov A.V., Frolov A.A., Massion J., "Biomechanical analysis of movement strategies in human forward trunk bending. I. Modeling", Biological Cybernetics, no. 84, 2001, pp. 425-434.
[7] Alexandrov A.V., Frolov A.A., Massion J., "Biomechanical analysis of movement strategies in human forward trunk bending. II. Experimental study", Biological Cybernetics, no. 84, 2001, pp. 435-443.
[8] Andreoni G., Rabuffetti M., Pedotti A., "New approaches to car ergonomics evaluation oriented to virtual prototyping". In: EURO-BME Course on Methods & Technologies for the Study of Human Activity & Behaviour, Italy, March 1997.
[9] Andreoni G., Santambrogio G.C., Rabuffetti M., Pedotti A., "Method for the analysis of posture and interface pressure of car drivers", Applied Ergonomics, no. 33, 2002, pp. 511-522.
[10] Assiante C., "La construction des stratégies d'équilibre chez l'enfant au cours d'activités posturo-cinétiques", Ann. Réadaptation Méd. Phys., no. 41, 1996, pp. 239-249 (in French).
[11] Burgess-Limerick R., Abernethy B., "Qualitatively different modes of lifting", International Journal of Industrial Ergonomics, no. 19, 1997, pp. 413-417.
[12] Cappozzo A., Catani F., Leardini A., Benedetti M.G., Croce U.D., "Position and orientation in space of bones during movement: experimental artifacts", Clinical Biomechanics, no. 11, 1996, pp. 90-100.
[13] De Leva P., "Joint center longitudinal positions computed from a selected subset of Chandler's data", Journal of Biomechanics, no. 29, 1996, pp. 1231-1233.
[14] Kawachi K., Aoki K., Mochimaru M., Kouchi M., "Visualization and classification of strategy for entering car". In: SAE International Conference and Exposition of Digital Human Modelling for Design and Engineering, Iowa City, Iowa, USA, 14th-16th June 2005, SAE paper 2005-01-2683.
[15] Lempereur M., Pudlo Ph., Gorce Ph., Lepoutre F.-X., "Identification of alternative movement techniques during the ingress movement". In: Proc. of the IEEE International Conference on Systems, Man and Cybernetics, Hawaii, USA, October 2005.
[16] Lempereur M., Pudlo Ph., Gorce Ph., Lepoutre F.-X., "Mannequin virtuel adapté à la simulation du mouvement d'entrée-sortie au véhicule automobile", Journal Européen des Systèmes Automatisés, no. 38, 2005, pp. 959-976 (in French).
[17] Lempereur M., Simulation du mouvement d'entrée dans un véhicule automobile, PhD thesis, Université de Valenciennes et du Hainaut Cambrésis, 06/04, 2006 (in French).
[18] Monnier G., Wang X., Dolivet C., Verriest J.-P., Lino F., Dufour F., "Experimental investigation on the discomfort of safety belt handling", VDI Berichte, no. 1675, 2002, ISSN 0083-5560, pp. 467-483, 518.
[19] Park W., Martin B.J., Choe S., Chaffin D.B., Reed M.P., "Representing and identifying alternative movement techniques for goal-directed manual tasks", Journal of Biomechanics, vol. 38, 2005, pp. 519-527.
[20] Way M.L., Berndt N., Jawad B., "The study of a cockpit with a fixed steering wheel position: methods and model". In: SAE International Conference and Exposition of Digital Human Modelling for Design and Engineering, SAE paper 2003-01-2180, 2003.
[21] Zhang X., Nussbaum M.A., Chaffin D.B., "Back lift versus leg lift: an index and visualization of dynamic lifting strategies", Journal of Biomechanics, no. 33, 2000, pp. 777-782.
EARLY DETECTION OF BEARING DAMAGE BY MEANS OF DECISION TREES
Bovic Kilundu, Pierre Dehombreux, Christophe Letot, Xavier Chiementin
Abstract: This paper presents a procedure for the early detection of rolling bearing damage on the basis of vibration measurements. First, an envelope analysis is performed on band-pass filtered signals. For each frequency range, a feature indicator is defined as the sum of spectral lines. These features are passed through a principal component model to generate a single variable, which allows tracking changes in the bearing health. Thresholds and rules for early detection are learned by means of decision trees. Experimental results demonstrate that this procedure enables early detection of bearing defects.
Keywords: damage detection, bearing damage, envelope detection, decision trees, preventive maintenance.
1. Introduction
Rolling bearing degradation can be very detrimental in certain situations. Its progressive character raises the question of determining the right moment to perform a replacement, at the cost of stopping machines. If it is possible to detect incipient bearing damage and to identify all its evolution stages, one can estimate the reliability curve of the bearing and its remaining life, and thus optimize the maintenance schedule. The use of vibrations for rolling bearing monitoring is explained by the degradation process. Indeed, bearing degradation generally results in subsurface or surface fatigue of one of the races. A fatigue crack can then occur and propagate until a large pit or spall appears in the surface [1], [2]. This generates repetitive impacts during the rotation of the rolling elements (balls or rollers) over the race. These shocks excite defect frequencies, which depend on the number of rolling elements, the rotational speed and the geometry of the bearing. These frequencies are given by the following expressions:
Outer race defect frequency:
f(Hz) = (n/2) fr (1 - (BD/PD) cos b)   (1)
Inner race defect frequency:
f(Hz) = (n/2) fr (1 + (BD/PD) cos b)   (2)
Ball (roller) defect frequency:
f(Hz) = (PD/(2 BD)) fr [1 - ((BD/PD) cos b)^2]   (3)
where n is the number of balls or rollers, fr is the relative rotational frequency between the inner and outer races, BD is the ball (roller) diameter, PD is the pitch diameter and b is the contact angle. Consequently, vibration analysis is well suited to bearing monitoring and diagnosis tasks. Tandon and Choudhury [3] have presented an exhaustive review of vibration-based methods for the monitoring and diagnosis of rolling bearings. For incipient damage, the bearing defect frequencies are usually buried under noise and other frequency components in the spectral representation. Denoising methods can be applied to improve damage detection. According to industrial standards, the fatal spall size is fixed at 6.25 mm² (0.01 in²) [4]. When a defect of the fatal size is detected, an emergency stop, which likely involves expensive disturbances, must occur. Therefore, it is important to detect defects in their early phase. In this work, we apply a procedure that uses envelope analysis of band-pass filtered signals and decision trees to automate the detection of incipient defects. Features extracted from the signals are processed by principal component analysis to define a residue, which accurately reveals the alteration of the bearing health. In Section 2, envelope analysis is introduced. Section 3 presents the principles of principal component analysis, as well as its application in fault detection. In Section 4, we present decision trees and their use in fault diagnosis. Experimental validation of these concepts and the obtained results are discussed in Section 5.
2. Envelope analysis
Vibration signals measured on degraded bearings contain repetitive shocks, which excite high-frequency resonances. A direct frequency analysis does not always give access to the interesting information when the energy content of the signal, as a consequence of these resonances, is located at these high frequencies. However, these repetition frequencies can easily be highlighted in the envelope signal. Classically, the signal is first band-pass filtered around the frequency range where a significant broadband increase has been detected [5]. From the filtered signal, which must contain only the repetitive impulses, one performs envelope detection or amplitude demodulation, which gives the outline of the signal. The usual methods proceed by squaring and low-pass filtering or by Hilbert transform.
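Equations (1)-(3) are simple to evaluate; the helper below is an illustrative sketch with made-up bearing geometry (the function name and the numeric values are ours, not from the paper).

```python
from math import cos, radians

def defect_frequencies(n, fr, BD, PD, beta_deg=0.0):
    """Characteristic defect frequencies of a rolling bearing, Eqs. (1)-(3).
    n: number of rolling elements, fr: relative rotational frequency [Hz],
    BD: ball (roller) diameter, PD: pitch diameter, beta_deg: contact angle."""
    c = (BD / PD) * cos(radians(beta_deg))
    bpfo = (n / 2.0) * fr * (1.0 - c)             # outer race, Eq. (1)
    bpfi = (n / 2.0) * fr * (1.0 + c)             # inner race, Eq. (2)
    bsf = (PD / (2.0 * BD)) * fr * (1.0 - c**2)   # ball/roller, Eq. (3)
    return bpfo, bpfi, bsf

# Example with hypothetical geometry: 8 balls, 25 Hz shaft speed,
# 7.9 mm balls on a 39 mm pitch circle, zero contact angle.
bpfo, bpfi, bsf = defect_frequencies(8, 25.0, 7.9, 39.0)
```

A quick sanity check on Eqs. (1)-(2): the outer- and inner-race frequencies always sum to n·fr, whatever the geometry.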
2.1. Squaring and low-pass filtering
This method proceeds by squaring the signal before low-pass filtering it. Squaring the signal effectively demodulates the input by using the signal itself as the carrier wave. If necessary, one can correct the scale by applying a gain of 2 to the signal: since only the lower half of the signal energy is kept, this gain boosts the final energy to match the original energy. The square root of the signal is finally taken to reverse the scaling distortion introduced by squaring. This method is easy to use, but one has to make a judicious choice of the cutoff frequency of the low-pass filter.
2.2. Hilbert transform
This approach creates the analytic signal from the input signal. The analytic signal is a complex signal: its real part is the original real signal and its imaginary part is the Hilbert transform of the signal:
y(t) = x(t) + jH(x(t))   (4)
The envelope of the original signal is obtained as the magnitude of the analytic signal.

2.3. The use of a root mean square detector
In a similar way to the squaring and low-pass filtering method, we propose the use of a root mean square (RMS) detector to obtain the envelope signal. This procedure is very simple to implement and proceeds as follows. Let x_n be the vibration signal. One defines a sequence of vectors using a sliding window of judicious length and shape, applied with an overlap of 50%. Then, the root mean square value calculated on each of these vectors is assigned to the time position corresponding to the beginning of the window. The time series made up of these root mean square values represents, on a certain scale, the envelope of the signal (Figure 1).
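The RMS detector described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it uses a plain rectangular window (the experiments later use an exponential window), and the window length and 50% overlap follow the text.

```python
import numpy as np

def rms_envelope(x, win_len=100, overlap=0.5):
    """Sliding-window RMS envelope detector (Section 2.3).

    The window advances by win_len * (1 - overlap) samples, and the RMS
    value of each frame is assigned to the frame's starting position.
    """
    hop = int(win_len * (1.0 - overlap))           # 50-sample hop for 50% overlap
    n_frames = 1 + (len(x) - win_len) // hop
    env = np.empty(n_frames)
    for k in range(n_frames):
        frame = x[k * hop : k * hop + win_len]
        env[k] = np.sqrt(np.mean(frame ** 2))      # RMS of the frame
    return env

# Example: the RMS envelope of an amplitude-modulated tone follows the
# (slow) modulation, scaled by 1/sqrt(2) for a sinusoidal carrier.
fs = 50_000                                        # sampling rate used in the paper
t = np.arange(fs) / fs
carrier = np.sin(2 * np.pi * 5_000 * t)
modulation = 1.0 + 0.5 * np.sin(2 * np.pi * 10 * t)
env = rms_envelope(modulation * carrier, win_len=100, overlap=0.5)
```

The resulting series is sampled at fs/hop, so a 100-point window with 50% overlap on a 50 kHz signal yields a 1 kHz envelope signal.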
Fig. 1. Envelope detection by root mean square detector.

Envelope analysis is the FFT frequency spectrum of the modulating signal (the envelope of the original signal). In this work, this step is performed after filtering the envelope signal around the bearing frequency in order to emphasize the effect of bearing damage over other spectral lines. Thus, low-pass filtering is carried out in two steps: application of the sliding window, then filtering around the bearing frequency. Figure 2 illustrates this process of envelope detection.

Fig. 2. Envelope analysis scheme.

3. Principal component analysis
Principal component analysis (PCA) transforms a set of data by finding, in the feature space, an orthogonal basis whose dimension is determined by the principal directions. PCA first maps a set of vectors x_1, x_2, …, x_M, stored in a matrix X_{M×N}, to Y_{M×N}, a set of vectors y_1, y_2, …, y_M. The components of the vectors x_i are the original variables and those of y_i are the factors or factor scores. The new variables avoid any redundancy in the information they carry. One then retains in Y only the components that satisfy an informational criterion [6]. PCA performs this transformation linearly: the factors are built as linear combinations of the variables, and in this linear context the non-redundancy condition on the factors is simply the condition that the factors be uncorrelated. Spectral decomposition is thus applied to the covariance matrix of X. The main idea behind PCA is that high information corresponds to high variance. To transform X into Y = XA, the matrix A_{N×N} must be chosen such that Y has the largest variances; A is then the orthogonal matrix used in the spectral decomposition of the covariance matrix, and its columns are the eigenvectors. The directions of largest variance are parallel to these eigenvectors. As the true covariance matrix is not known in practice, one uses the sample covariance matrix S, defined as

S = (1/(M−1)) X^T X    (5)

The vectors y_i are such that their components are uncorrelated, and they are characterized by the fact that most of the information is stored in a few components. Thus, only a reduced number of components need be considered to describe the data set. One keeps the components with high variance, i.e. those corresponding to the largest eigenvalues. This means that principal components contributing less than a given fraction (threshold) of the total variation in the data set are eliminated. This criterion can be written

λ_a / Σ_{a=1}^{N} λ_a ≥ threshold    (6)

where λ_a are the eigenvalues of S.
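The construction of the transform matrix and the selection criterion (6) can be sketched as follows. This is a minimal illustration under stated assumptions: the data are mean-centred before computing S (the paper's eq. (5) assumes mean-adjusted variables), and the 5% threshold and the toy data are invented for the example.

```python
import numpy as np

def pca_basis(X, threshold=0.05):
    """Build the PCA transform matrix E from data X (M samples x N features).

    Components whose eigenvalue contributes less than `threshold` of the
    total variance are discarded, following criterion (6).
    """
    Xc = X - X.mean(axis=0)                       # centre the variables
    S = (Xc.T @ Xc) / (len(X) - 1)                # sample covariance matrix, eq. (5)
    eigval, eigvec = np.linalg.eigh(S)            # spectral decomposition of S
    order = np.argsort(eigval)[::-1]              # sort by decreasing variance
    eigval, eigvec = eigval[order], eigvec[:, order]
    keep = eigval / eigval.sum() >= threshold     # criterion (6)
    return eigvec[:, keep]                        # E, of size N x P

# Toy data: 3 variables, the third nearly a copy of the first,
# so only 2 components carry a significant share of the variance.
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 2))
X = np.column_stack([z[:, 0], z[:, 1], z[:, 0] + 0.01 * rng.normal(size=200)])
E = pca_basis(X, threshold=0.05)
```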
Assume now that one has kept the first P factors; the transform matrix is then E_{N×P} instead of A_{N×N}. This yields

Y'_{M×P} = X_{M×N} E_{N×P}    (7)
3.1. Damage detection with PCA
PCA as a means of fault detection has already been studied intensively, but its application has mainly concerned the field of chemical process monitoring, where the number of sensors is generally large [7], [8]. For the detection of mechanical damage on the basis of vibration measurements, this technique has rarely been used, probably because the number of variables to be supervised is generally not large. However, it can be very interesting to use this technique for the detection of mechanical defects in combination with machine learning methods. The basic concept of the use of PCA for detection is summarized hereafter. In a multi-sensor context, or if several features are extracted from the vibration signals, it is worthwhile to use PCA for damage detection. Let X be a data matrix representing normal (healthy) operating conditions. We transform X by PCA to get Y. Retaining the significant components of Y, we obtain Y'. Back-transforming Y' to the original variables gives

X*_{M×N} = Y'_{M×P} E^T_{P×N}    (8)
Since only the significant components were retained in the constitution of the transform matrix E, the data in X* are reconstructed with only the significant variances, i.e. the insignificant noise effects have been removed. The difference X − X* between the two matrices is therefore insignificant. Suppose now that a set of new operating conditions is given in a data matrix X1. One transforms X1 by applying the transform matrix built on the healthy data; its back transform into the space of the original variables then gives a data matrix X1*. The residual matrix is computed as

R = X1 − X1* = X1 (I − E E^T)    (9)
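The residual computation of eq. (9) and the per-observation deviation can be sketched as follows. This is an illustrative example, assuming (as the derivation above does) that the columns of E are orthonormal principal directions; the toy matrices are invented.

```python
import numpy as np

def residue(X_new, E):
    """Deviation from the healthy state, eqs. (9)-(10).

    E holds the retained principal directions (orthonormal columns) built
    on healthy data. Each row of R is a residual vector r, and the
    deviation R(x) = r r^T is its squared norm.
    """
    N = E.shape[0]
    R = X_new @ (np.eye(N) - E @ E.T)      # residual matrix, eq. (9)
    return np.sum(R ** 2, axis=1)          # one deviation value per observation

# A sample inside the retained (healthy) subspace gives a near-zero
# residue; a sample leaving that subspace gives a large one.
E = np.array([[1.0], [0.0], [0.0]])        # retained direction: first axis only
X_new = np.array([[2.0, 0.0, 0.0],         # lies in the healthy subspace
                  [0.0, 3.0, 0.0]])        # deviates from it
d = residue(X_new, E)
```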
This matrix indicates the deviation from the healthy state. For a vector x with residual vector r (of dimension 1×N), the deviation is given by

R(x) = r r^T    (10)

This number indicates how far an operating condition is from the healthy one, and constitutes an ideal feature for detection. The detection process is illustrated in Figure 3.

4. Decision trees
A decision tree is a hierarchical representation used to determine the classification of an object (observation) by testing the values of some of its attributes (variables). In a decision tree, the final nodes are decision or classification nodes, called leaves; the intermediate nodes test properties of the objects. The construction process of decision trees is recursive. In fault detection and identification [9], [10], [11], two questions arise when building the tree structure: which attribute to choose, and which value of that attribute should constitute the decision threshold for segmentation at a test node? The principle is to select, at each node, the variable that presents the greatest information gain, related to purity. The concept of purity simply reflects the fact that the resulting sets should contain data of the most similar type, ideally belonging to a single class [12]. Instead of purity, one calculates a measure of impurity given by the Shannon statistical entropy

− Σ_i P_i log2 P_i    (11)

where P_i is the proportion of data concerned by an attribute or by a particular value of that attribute. The attribute that presents the best gain, i.e. minimal entropy, is selected as the root of the tree or as a test node. The process is then carried out hierarchically until the final nodes are reached, i.e. nodes containing objects that belong to the same class. The most widely used algorithm to build decision trees is the C4.5 algorithm [13]. A decision tree can be used to learn the structure of monitoring data, and thus to establish rules and thresholds to detect bearing damage at an early stage. The only requirement is the extraction of sensitive features by means of adequate signal processing.
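The entropy criterion (11) and the threshold search performed at each test node can be sketched as follows. This is a minimal illustration of the splitting step only (not the full C4.5 algorithm); the feature values and labels are invented to mimic a residue-like feature.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label set, eq. (11)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_threshold(feature, labels):
    """Scan candidate thresholds on one attribute and return the split that
    minimises the weighted child entropy (i.e. maximises information gain),
    as done at each test node of the tree."""
    best = (None, np.inf)
    for thr in np.unique(feature)[:-1]:
        left, right = labels[feature <= thr], labels[feature > thr]
        w = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
        if w < best[1]:
            best = (thr, w)
    return best

# Residue-like feature: healthy states have small values, faulty ones large.
rfilt = np.array([0.01, 0.02, 0.03, 0.04, 0.30, 0.40, 0.50, 0.60])
state = np.array([0, 0, 0, 0, 1, 1, 1, 1])
thr, impurity = best_threshold(rfilt, state)
```

Here the split at 0.04 separates the two classes perfectly, so the weighted child entropy is zero.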
Fig. 3. Damage detection with PCA.

5. Experimental validation
5.1. Experimental setup
In order to apply the detection procedure, several sizes of faults were induced on the inner race of a FAG NU206 roller bearing. The test rig consists of a shaft supported by two roller bearings mounted in a housing (Figure 4). During the tests, the shaft is driven at three different speeds: 2000 rpm, 1500 rpm and 1000 rpm. Three different radial loads are applied to the shaft and bearing with the help of a hydraulic jack. The test bearing is not lubricated. Three accelerometers are used to measure the vibration in the horizontal, vertical and axial directions. Vibration data are collected at a sampling rate of 50 kHz. Table 1 gives the size of the induced faults as well as the number of signals collected at each measurement point. In all, 353 signals were collected.

Table 1. Size of induced faults.

Condition           Size (mm²)   Level   Number of recorded signals
Healthy             0            0       8
Very slight fault   0.196        1       8
Slight fault        0.250        2       176
Advanced fault      1.50         3       71
Severe fault        3            4       90
Fig. 4. The test rig.

5.2. Data preparation and feature extraction
Signals are band-pass filtered in 5 frequency ranges with a Butterworth filter: ≤1000, 1000-3000, 3000-5000, 5000-10000 and ≥10000 Hz. An envelope signal is extracted from each filtered signal by a root mean square detector with exponential sliding windows, applied with an overlap of 50%. The window length is chosen to be 100 points, which amounts to an under-sampling of the signal and gives an envelope frequency range that goes up to 500 Hz. The envelope signal is then filtered around the BPFI (ball pass frequency on the inner race) before a frequency spectrum is computed. To characterize a filtered signal, sums of spectral lines are considered. These sums are normalized by the load and the square of the speed, and then fused by concatenation into a 15-dimensional vector (3 directions × 5 frequency ranges), which represents the operating condition. A PCA transform matrix was constructed from the vectors representing healthy bearings, i.e. the first 16 (level 0 and level 1). The transformation and back-transformation of the other vectors gave a residue whose evolution is represented in Figure 5. Significant components are retained on the basis of a scree test (Figure 6). This residue, denoted RFILT, constitutes the feature used for detection.

5.3. Results and discussion
Assuming that the evolution of bearing damage is continuous, early detection relates to very small defect sizes. As the damage at level 1 is very small, early detection concerns the transition from level 1 to level 2. With the aim of automating the detection task, we propose the use of a decision tree. The advantage of this choice is that the structure learned by a decision tree can easily be translated into rules, and from rules one can define decision thresholds.

Figure 7 shows the decision tree learned from the data. One can observe that the threshold for the transition from level 1 to level 2 is located at RFILT = 0.055102. This decision tree allowed early detection with an error rate of 0.2%. The learning evaluation is made by cross-validation. The confusion matrix (Figure 8) shows that levels 0 and 1 are well separated from the other levels, except for one object of level 1 which is recognized as a non-defect-free one. This demonstrates that the methodology proposed in this work allows early detection of bearing degradation. Another fact observed from the confusion matrix is that level 0 and level 1 are not very different, since the decision tree failed to separate them accurately.
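The processing chain of Section 5.2 (band-pass filtering, RMS envelope, envelope spectrum) can be sketched as follows. This is a simplified stand-in, not the authors' code: an FFT mask replaces the Butterworth filter, a rectangular window replaces the exponential one, and the 100 Hz "BPFI", the 5 kHz resonance and the 4-6 kHz band are invented for the illustration.

```python
import numpy as np

def bandpass_fft(x, fs, f_lo, f_hi):
    """Zero-phase band-pass via FFT masking (stand-in for a Butterworth filter)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

def envelope_spectrum(x, fs, win_len=100):
    """RMS envelope (hop = win_len/2, i.e. 50% overlap) followed by an FFT
    of the envelope, as in Section 5.2."""
    hop = win_len // 2
    n = 1 + (len(x) - win_len) // hop
    env = np.array([np.sqrt(np.mean(x[k*hop:k*hop+win_len]**2)) for k in range(n)])
    env = env - env.mean()                       # drop the DC component
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), hop / fs)  # envelope sampling rate = fs/hop
    return freqs, spec

# Simulated inner-race signature: a 5 kHz resonance amplitude-modulated at a
# hypothetical BPFI of 100 Hz, plus broadband noise.
fs = 50_000
t = np.arange(fs) / fs
x = (1 + np.sin(2*np.pi*100*t)) * np.sin(2*np.pi*5_000*t) \
    + 0.1 * np.random.default_rng(1).normal(size=fs)
filt = bandpass_fft(x, fs, 4_000, 6_000)
freqs, spec = envelope_spectrum(filt, fs)
```

With a 100-point window and 50% overlap on a 50 kHz signal, the envelope is sampled at 1 kHz, so the envelope spectrum extends to 500 Hz, as stated in the text; the dominant spectral line appears at the modulation frequency.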
Fig. 5. Evolution of the residue with bearing damage.

Fig. 6. Choice of significant components by scree test.

Fig. 7. Decision tree.

Fig. 8. Confusion matrix for early fault detection with decision tree.
For each new operating condition, the data will be processed as represented in Figure 9.

Fig. 9. Data processing.

6. Conclusions
This paper addressed the important issue of the early detection of bearing faults, which can allow an optimal organization of maintenance interventions. Principal component analysis was used to construct a single detection feature from which a decision tree learned rules and thresholds for early detection. Envelope analysis was used to emphasize the effect of bearing damage over the other spectral lines in the frequency domain. The results obtained in this study show that it is possible to detect incipient bearing defects by using decision trees, provided that suitable signal processing has been carried out.

AUTHORS
Bovic Kilundu*, Pierre Dehombreux, Christophe Letot - Pôle Risques, Faculté Polytechnique de Mons, rue du Joncquois 53, B-7000 Mons, Belgium. E-mail: bovic.kilundu@fpms.ac.be.
Xavier Chiementin - Université de Reims Champagne-Ardenne, BP 1039, 51687 Reims Cedex 2, France.
* Corresponding author

References
[1] Gebraeel N., Lawley M., Liu C.R., Parmeshwaran V., "Residual life predictions from vibration-based degradation signals: A neural network approach", IEEE Transactions on Industrial Electronics, vol. 51, no. 3, 2004, pp. 694-700.
[2] Harris T., Kotzalas M., Advanced Concepts of Bearing Technology, CRC Press, Taylor & Francis Group, 2007.
[3] Tandon N., Choudhury A., "A review of vibration and acoustic measurement methods for the detection of defects in rolling element bearings", Tribology International, vol. 32, no. 8, 1999, pp. 469-480.
[4] Li Y., Zhang C., Kurfess T., Danyluk S., Liang S., "Adaptive prognostics for rolling element bearing condition", Mechanical Systems and Signal Processing, vol. 13, no. 1, 1999, pp. 103-113.
[5] Courrech J., Eshleman R., "Condition monitoring of machinery". In: Harris' Shock and Vibration Handbook, McGraw-Hill, 2002, pp. 16.1-16.25.
[6] Hand D., Mannila H., Smyth P., Principles of Data Mining, MIT Press, 2001.
[7] McGregor J., Kourti T., Nomikos P., "Analysis, monitoring and fault diagnosis of industrial processes using multivariate statistical projection methods". In: Proceedings of IFAC Congress, San Francisco, vol. M, 1996, pp. 145-150.
[8] Mohamed-Faouzi H., Détection et localisation de défaut par analyse en composantes principales, Ph.D. thesis (in French), Institut National Polytechnique de Lorraine, 2003.
[9] Isermann R., Fault-Diagnosis Systems. An Introduction from Fault Detection to Fault Tolerance, Springer, 2006.
[10] Sugumaran V., Ramachandran K., "Automatic rule learning using decision tree for fuzzy classifier in fault diagnosis of roller bearing", Mechanical Systems and Signal Processing, vol. 21, no. 5, 2007, pp. 2237-2247.
[11] Chen Y., "Impending failure detection for a discrete process", Mechanical Systems and Signal Processing, vol. 7, no. 2, 1993, pp. 121-132.
[12] Chen Z., Computational Intelligence for Decision Support, New York: CRC Press LLC, 2000.
[13] Quinlan R., C4.5: Programs for Machine Learning, San Mateo: Morgan Kaufmann Publishers, 1993.
STIFFNESS ANALYSIS OF MULTI-CHAIN PARALLEL ROBOTIC SYSTEMS WITH LOADING Anatol Pashkevich, Alexandr Klimchik, Damien Chablat, Philippe Wenger
Abstract:
The paper presents a new stiffness modelling method for multi-chain parallel robotic manipulators with flexible links and compliant actuating joints. In contrast to other works, the method involves a FEA-based link stiffness evaluation and employs a new solution strategy for the kinetostatic equations, which allows computing the stiffness matrix for singular postures and taking into account the influence of external forces. The advantages of the developed technique are confirmed by application examples, which deal with the stiffness analysis of a parallel manipulator of the Orthoglide family.

Keywords: parallel robotic manipulators, stiffness analysis, kinetostatic modelling, loaded mode, Orthoglide robot.
1. Introduction
In modern manufacturing systems, parallel manipulators have become more and more popular for a variety of technological processes, including high-accuracy positioning and high-speed machining [1], [2]. This growing attention is inspired by their essential advantages over serial manipulators, which have already reached their dynamic performance limits. In contrast, parallel manipulators are claimed to offer better accuracy, lower mass/inertia properties, and higher structural rigidity (i.e. stiffness-to-mass ratio) [3]. These features are induced by their specific kinematic structure, which resists error accumulation in the kinematic chains and allows a convenient location of the actuators close to the manipulator base. This makes them attractive for innovative robotic systems, but practical utilization of the potential benefits requires the development of efficient stiffness analysis techniques, which satisfy the computational speed and accuracy requirements of the relevant design procedures. Generally, stiffness analysis evaluates the effect of the applied external torques and forces on the compliant displacements of the end-effector. Numerically, this property is defined through the "stiffness matrix" K, which gives the relation between the translational/rotational displacement and the static force/torque causing this transition. As follows from mechanics, K is a 6×6 positive-semidefinite matrix, whose structure may be non-diagonal in order to represent the coupling between translation and rotation [4], [5]. Similar to other manipulator properties (kinematic ones, for instance), the stiffness essentially depends on the force/torque direction and on the manipulator configuration [6].
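The force-to-displacement relation just described can be illustrated numerically. The sketch below builds a 6×6 stiffness matrix with one translation/rotation coupling term and solves K·Δt = w for the compliant displacement; all stiffness values are hypothetical and chosen only for illustration.

```python
import numpy as np

# Illustrative 6x6 stiffness matrix (hypothetical values; translational
# entries in N/m, rotational entries in N*m/rad), with one off-diagonal
# term coupling x-translation and rotation about y.
K = np.diag([2e6, 2e6, 5e6, 1e4, 1e4, 2e4]).astype(float)
K[0, 4] = K[4, 0] = 3e4                           # translation/rotation coupling

w = np.array([100.0, 0.0, 0.0, 0.0, 0.0, 0.0])    # 100 N force along x
dt = np.linalg.solve(K, w)                        # compliant displacement: K dt = w
```

Because of the coupling term, a pure force along x produces not only a translation dt[0] but also a small rotation dt[4], which is exactly the effect a diagonal (uncoupled) spring model cannot represent.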
Several approaches exist for the computation of the stiffness matrix, such as Finite Element Analysis (FEA), matrix structural analysis (MSA), and the virtual joint method (VJM). The FEA method has proved to be the most accurate and reliable, since the links/joints are modelled with their true dimensions and shapes, and its accuracy is limited only by the discretisation step. However, because of the high computational expense required for the repeated re-meshing, this method is usually applied at the final design stage. The MSA method incorporates the main ideas of the FEA but operates with rather large flexible elements (beams, arcs, cables, etc.). This obviously reduces the computational expense and, in some cases, even allows an analytical stiffness matrix to be obtained. The method gives a reasonable trade-off between accuracy and computational time, provided that the approximation of a link by a beam element is realistic; however, because it involves rather high-dimensional matrix operations, it is not attractive for parametric stiffness analysis. Finally, the VJM method, also referred to as "lumped modelling", is based on an expansion of the traditional rigid model by adding virtual joints, which describe the elastic deformations of the manipulator components (links, joints and actuators). This approach originates from the work of Gosselin [7], who evaluated parallel manipulator stiffness taking into account only the actuator compliance. At present, there are a number of variations and simplifications of the VJM method, which differ in their modelling assumptions and numerical techniques. Generally, lumped modelling provides acceptable accuracy in short computational time. However, it relies on strong simplifying hypotheses and operates with simplified stiffness models composed of one-dimensional springs that do not take into account the coupling between the rotational and translational deflections.
Recent modifications of this method allow extending it to over-constrained manipulators and applying it at any workspace point, including singular ones [8]. It should be stressed that standard stiffness analysis focuses on unloaded structures, for which several efficient semi-analytical techniques have been proposed [9]-[11]. However, for the loaded working modes, the stiffness analysis is still an open problem. Besides, with respect to this case, several authors introduced a concept of an asymmetric Cartesian stiffness matrix [12]-[14], but this concept was recently revised by Kövecses and Angeles [5]. This paper presents a new stiffness modelling method for loaded parallel manipulators, which is based on a multidimensional lumped-parameter model that replaces the link flexibility by localized 6-dof virtual springs describing both the linear/rotational deflections and the coupling between them. The spring stiffness parameters are evaluated using FEA modelling to ensure higher accuracy. In addition, the method employs a new solution strategy for the kinetostatic equations, which allows computing the stiffness matrix for over-constrained architectures, including singular manipulator postures. This gives almost the same accuracy as FEA but with essentially lower computational effort, because it eliminates the model re-meshing through the workspace.
2. Problem of stiffness modelling
2.1. Manipulator Architecture
Let us consider a general n-dof parallel manipulator, which consists of a mobile platform connected to a fixed base by n identical kinematic chains. Each chain includes an actuated joint "Ac" (prismatic or rotational) followed by a "Foot" and a "Leg" with a number of passive joints "Ps" inside (Fig. 1). Generally, certain geometrical conditions are assumed to be satisfied with respect to the passive joints, in order to eliminate undesired platform rotations and to achieve stability of the desired motions. Typical examples of such architectures include the 3-PUU translational parallel kinematic machine [15], the Delta parallel robot [16], and the Orthoglide parallel manipulator, which implements the 3-PRPaR architecture with parallelogram-type legs and translational active joints [17]. Here R, P, U and Pa denote the revolute, prismatic, universal and parallelogram joints, respectively.
Fig. 1. Schematic diagram of a general n-dof parallel manipulator (Ac – actuated joint, Ps – passive joints).

2.2. Basic Assumptions
To evaluate the manipulator stiffness, let us apply a modification of the virtual joint method (VJM), which is based on the lumped modelling approach [7]. According to this approach, the original rigid model should be extended by adding virtual joints (localized springs), which describe the elastic deformations of the links. Besides, virtual springs are included in the actuating joints to take into account the stiffness of the control loop. Under such assumptions, each kinematic chain of the manipulator can be described by a serial structure, which includes, sequentially:
(a) a rigid link between the manipulator base and the i-th actuating joint (part of the base platform), described by a constant homogeneous transformation matrix;
(b) a 1-d.o.f. actuating joint with a supplementary virtual spring, described by a homogeneous matrix function of the actuated coordinate and the virtual spring coordinate;
(c) a rigid "Foot" linking the actuating joint and the leg, described by a constant homogeneous transformation matrix;
(d) a 6-d.o.f. virtual joint defining three translational and three rotational foot-springs, described by a homogeneous matrix function of the corresponding elementary translations and rotations;
(e) a 2-d.o.f. passive U-joint at the beginning of the leg allowing two independent rotations, described by a homogeneous matrix function of the two rotation angles;
(f) a rigid "Leg" linking the foot to the movable platform, described by a constant homogeneous transformation matrix;
(g) a 6-d.o.f. virtual joint defining three translational and three rotational leg-springs, described by a homogeneous matrix function of the corresponding elementary translations and rotations;
(h) a 2-d.o.f. passive U-joint at the end of the leg allowing two independent rotations, described by a homogeneous matrix function of the two rotation angles;
(i) a rigid link from the manipulator leg to the end-effector (part of the movable platform), described by a constant homogeneous transformation matrix.
The expression defining the end-effector location subject to variations of all coordinates of a single kinematic chain may be written as the product of these homogeneous transformations

(1)

where the actuating-joint matrix function is either an elementary rotation or translation, the U-joint matrix functions are compositions of two successive rotations, and each spring matrix is composed of six elementary transformations.

2.3. Problem statement
In general, the stiffness model describes the resistance of an elastic body or a mechanism to deformations caused by an external force or torque [18]. For relatively small deformations, this property is defined through the "stiffness matrix" K, which defines the linear relation (2) between the six-dimensional translational/rotational displacements and the static forces/torques causing this transition. Here, the vector of passive joint coordinates and the vector of virtual joint coordinates are involved, n being the number of passive joints and m the number of virtual joints. Usually, the manipulator is assembled without internal preloading and the virtual spring coordinates are equal to zero. However, for the loaded mode, a similar relation is
defined in the neighbourhood of the static equilibrium, which corresponds to another configuration of the manipulator, caused by the external forces/torques. In this case, the stiffness model describes the relation between the increments of the force and of the position (3), involving the loaded position of the manipulator and the deviations of the passive joint and virtual spring coordinates. Let us also define the geometry of the manipulator in the Cartesian space as
(4)
where the function is defined by the transformation (1), and the vector describes the three-dimensional position and the orientation of the end-effector with respect to the Cartesian axes. Hence, the problem is to find the static equilibrium of the considered manipulator and to linearise the relevant force/position relations.
3. Stiffness model for the loaded mode
To derive the desired stiffness model, let us divide the problem into three sequential subtasks that are solved for each kinematic chain separately: (i) computing the stiffness matrix for the unloaded mode, (ii) finding the static equilibrium for the loaded configuration, and (iii) obtaining the stiffness model for the loaded mode. At the final stage, the results for the separate kinematic chains are aggregated in order to obtain the stiffness of the entire manipulator.

3.1. Stiffness model in the neighbourhood of the unloaded configuration
Let us define the unloaded configuration as the one in which the passive joint coordinates are computed via the inverse kinematics and the virtual spring coordinates are equal to zero (since there are no preloads in the springs). Let us also assume that the external force F relocates the manipulator to a new position, which for small displacements may be expressed as

(5)

where the two kinematic Jacobians, with respect to the passive joint coordinates and to the virtual spring coordinates, may be computed from (1) analytically or semi-analytically using the factorisation technique proposed in [11]. For the kinetostatic model, which describes the force-and-motion relation, it is necessary to introduce additional equations that define the virtual joint reactions to the corresponding spring deformations. For analytical convenience, all relevant expressions may be collected in a single matrix equation

(6)

where the aggregated vector of the virtual joint reactions is related, through the aggregated spring stiffness matrix of size m×m (assembled from the spring stiffness matrices of the corresponding links), to the spring deformations. Similarly, one can define the aggregated vector of the passive joint reactions, but at the equilibrium all its components must be equal to zero:

(7)

Further, let us apply the principle of virtual work, assuming that the joints are given small, arbitrary virtual displacements in the equilibrium neighbourhood. The virtual work of the external force F applied to the end-effector along the corresponding displacement is then balanced against the internal work. For the internal forces, the virtual work includes only the virtual-spring component, since the passive joints do not produce force/torque reactions (the minus sign takes into account the adopted directions of the virtual spring forces/torques). Therefore, since in the static equilibrium the total virtual work is equal to zero for any virtual displacement, the equilibrium conditions may be written as

(8)

This gives additional expressions describing the force/torque propagation from the joints to the end-effector. Hence, the complete kinetostatic model consists of the four matrix equations (5)…(8), where either F or the end-effector displacement is treated as known and the remaining variables are considered as unknowns. Since the aggregated spring stiffness matrix is non-singular (it describes the stiffness of the virtual springs), the spring variables can be expressed via F using equations (5)…(8). This substitution reduces the kinetostatic model to a system of two matrix equations, which can be written in the matrix form as

(9)

where one sub-matrix describes the spring compliance relative to the end-effector, and the other takes into account the influence of the passive joints on the end-effector motions. Therefore, for a separate kinematic chain, the desired stiffness matrix K defining the motion-to-force mapping

(10)

can be computed by direct inversion of the (6+n)×(6+n) matrix on the left-hand side of (9) and extraction of the 6×6 sub-matrix with indices corresponding to the end-effector displacement.

3.2. Static equilibrium for the loaded configuration
Let us assume that, due to the external force F, the manipulator is relocated from the initial (unloaded)
position to a new position which satisfies the condition of mechanical equilibrium. If the displacement is rather small, the new configuration can be computed easily using the results of the previous subsection. In the general case, however, the stiffness model is highly non-linear and computing the new configuration requires additional effort. For computational reasons, let us consider the dual problem, which deals with determining the external force F and the manipulator configuration that correspond to a given output position t. For the considered problem, the basic equations can be written as

(11)
where the first equation defines the manipulator geometry and the remaining ones are derived from statics. It is evident that there is no general method for an analytical solution of this system, so numerical techniques must be applied. To derive the numerical algorithm, let us linearise the kinematic equation in the neighbourhood of the current configuration

(12)

and rewrite the static equations as

(13)

This leads to a linear algebraic system of equations with respect to

(14)

which gives the following iterative scheme

(15)

where the starting point can be chosen using the non-loaded configuration. As follows from computational experiments, for typical values of deformations the proposed iterative algorithm possesses rather good convergence (3-5 iterations are usually enough). However, in the case of buckling or in the area of multiple equilibriums, convergence becomes rather critical and highly depends on the initial guess. Further enhancement of this algorithm may be based on the full-scale Newton-Raphson technique (i.e. linearisation of the static equations in addition to the kinematic one); this obviously increases computational expense but potentially improves convergence.

3.3. Stiffness model for the loaded configuration
In the neighbourhood of the loaded configuration, the stiffness model is defined with respect to the force and position increments , , which are assumed to be small (see equation (3)). To derive this model, let us consider two equilibriums corresponding to the manipulator variables and respectively. For these settings, the kinematic equation is reduced to

(16)

while the statics yields two sets of equations

(17)

and

(18)

where and are the differentials of the Jacobians due to changes in . After relevant transformations and neglecting high-order small terms, equations (17), (18) may be rewritten as

(19)

where , are the Hessian matrices of the scalar function :

(20)

This allows applying a substitution for and obtaining a system of two matrix equations with unknowns and

(21)

which generalizes (9) for the case of the loaded equilibrium. Here .
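The iterative scheme of Section 3.2 can be sketched as a generic fixed-point loop; this is a minimal sketch under assumptions, since equations (12)-(15) are given symbolically — the one-dimensional update map below is a hypothetical stand-in for the linearised kineto-static update.

```python
def solve_equilibrium(update, q0, tol=1e-9, max_iter=50):
    """Fixed-point loop in the spirit of scheme (15): starting from the
    unloaded configuration, iterate q_{k+1} = update(q_k) until the
    increment becomes negligible."""
    q = q0
    for k in range(1, max_iter + 1):
        q_next = update(q)
        if abs(q_next - q) < tol:
            return q_next, k
        q = q_next
    raise RuntimeError("no convergence: bad initial guess (e.g. near buckling)")

# Hypothetical 1-d contraction standing in for the linearised update;
# its fixed point (the loaded equilibrium) is q* = 0.2.
q_star, iterations = solve_equilibrium(lambda q: 0.2 + 0.1 * (q - 0.2), 0.0)
```

When the update map is a strong contraction, as in this toy case, only a handful of iterations are needed, mirroring the 3-5 iterations the text reports for typical deformations; near buckling the contraction property is lost and the loop would hit the iteration limit instead.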
Therefore, for a separate kinematic chain, the desired stiffness matrix defining the displacement-to-force mapping (3) can be computed by direct inversion of the
matrix in the left-hand side of (21) and extracting from it the upper-left 6×6 sub-matrix. Finally, when the stiffness matrices for all kinematic chains are computed, the stiffness of the entire multi-chain manipulator can be found by simple summation . This follows from the superposition principle, since the total external force corresponding to the end-effector displacement (the same for all kinematic chains) can be expressed as , where . It should be stressed that usually the matrices are not invertible, but for the entire manipulator the stiffness matrix is positive definite and invertible for all non-singular postures.
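The summation over kinematic chains can be sketched as follows. The per-chain matrices below are illustrative placeholders, each singular on its own (as the text notes), while their sum is invertible.

```python
def add_matrices(a, b):
    """Element-wise sum of two equally sized matrices (nested lists)."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def total_stiffness(chain_stiffness):
    """Superposition: the manipulator Cartesian stiffness is the sum of the
    per-chain 6x6 stiffness matrices, K = sum_i K_i."""
    total = [[0.0] * 6 for _ in range(6)]
    for K_i in chain_stiffness:
        total = add_matrices(total, K_i)
    return total

# Three hypothetical chains, each stiff only in a subset of directions
# (rank-deficient diagonal matrices), so each K_i alone is not invertible.
K1 = [[1.0 if i == j and i < 2 else 0.0 for j in range(6)] for i in range(6)]
K2 = [[1.0 if i == j and 2 <= i < 4 else 0.0 for j in range(6)] for i in range(6)]
K3 = [[1.0 if i == j and i >= 4 else 0.0 for j in range(6)] for i in range(6)]
K = total_stiffness([K1, K2, K3])
```

With these placeholder values the sum is the identity matrix: the chains jointly constrain all six directions even though no single chain does, which is exactly the situation described in the text.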
4. Evaluating the model parameters

4.1. Actuator compliance
The actuator compliance, described by the scalar parameter and by the 6×6 matrix , depends on both the servomechanism mechanics and the control algorithm. Since most modern actuators implement digital PID control, the main contribution to the compliance is produced by the mechanical transmissions. The latter are usually located outside the feedback-control loop and consist of screws, gears, shafts, belts, etc., whose flexibility is comparable with that of the manipulator links. Because of the complicated mechanical structure of the servomechanisms, these parameters are usually evaluated from static load experiments, by applying linear regression to the experimental data.

4.2. Link compliance
Following the general methodology, the compliance of a manipulator link (foot or leg) is described by 6×6 symmetrical positive-definite matrices , corresponding to 6-d.o.f. springs with relevant coupling between translational and rotational deformations. This distinguishes our approach from other lumped-parameter techniques, where the coupling is neglected and only a subset of deformations is taken into account (represented by a set of 1-d.o.f. springs). The simplest way to obtain these matrices is to approximate the link by a beam element, for which the non-zero elements of the compliance matrix may be expressed analytically. However, for certain link geometries, the accuracy of a single-beam approximation can be insufficient. In this case the link can be approximated by a serial chain of beams, whose compliance is evaluated by applying the same method (i.e. considering the kinematic chain with 6-d.o.f. virtual springs, but without passive joints). This leads to the resulting compliance matrix , where and incorporate the Jacobian and the compliance matrices for all virtual springs.

4.3. FEA-based evaluation of model parameters
For complex link geometries, the most reliable results can be obtained from FEA modelling. To apply this approach, let us introduce an auxiliary 3D object, a "pseudo-rigid" body, which is used as a reference for the compliance evaluation. Besides, the link origin must be fixed relative to the global coordinate system. Then, by sequentially and separately applying forces and torques to the reference object, it is possible to evaluate the corresponding linear and angular displacements, which allow computing the stiffness matrix columns. The main difficulty here is to obtain accurate displacement values by using a proper FEA discretization ("mesh size"). As follows from our study, the single-beam approximation of the Orthoglide links gives an accuracy of about 50%, and the four-beam approximation improves it only up to 30% (compared to the FEA-based method, which has proven to produce very accurate results). It is worth mentioning that here, in contrast to straightforward FEA modelling, which requires re-computing for each manipulator posture, only a single evaluation of the link stiffness is needed. The latter essentially improves the computational speed.

5. Application examples
To demonstrate the efficiency of the proposed methodology, let us apply it to the comparative stiffness analysis of two 3-d.o.f. translational mechanisms, which employ the Orthoglide architecture. CAD models of these mechanisms are presented in Fig. 2.

Fig. 2. Kinematics of two 3-d.o.f. translational mechanisms employing the Orthoglide architecture.
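The serial-chain aggregation of beam compliances described in Section 4.2 amounts to a sum of congruence transforms, C = Σk Jk·Ck·JkT. The sketch below uses illustrative 2×2 matrices standing in for the 6×6 spring compliances and Jacobians; all numerical values are placeholders.

```python
def matmul(a, b):
    """Multiply two matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(col) for col in zip(*a)]

def chain_compliance(jacobians, compliances):
    """Aggregate a serial chain of flexible elements: C = sum_k J_k C_k J_k^T,
    i.e. each local spring compliance mapped into the chain end-frame."""
    n = len(jacobians[0])
    C = [[0.0] * n for _ in range(n)]
    for J, Ck in zip(jacobians, compliances):
        JCJt = matmul(matmul(J, Ck), transpose(J))
        for i in range(n):
            for j in range(n):
                C[i][j] += JCJt[i][j]
    return C

# Illustrative 2x2 stand-ins for two virtual springs along the chain.
J1 = [[1.0, 0.0], [0.5, 1.0]]
J2 = [[1.0, 0.2], [0.0, 1.0]]
C1 = [[2.0, 0.0], [0.0, 1.0]]
C2 = [[1.0, 0.0], [0.0, 3.0]]
C = chain_compliance([J1, J2], [C1, C2])
```

The aggregated matrix stays symmetric because each term is a congruence transform of a symmetric compliance, matching the symmetry required of the link matrices in Section 4.2.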
First, let us derive the stiffness model for the simplified Orthoglide mechanism (3-PUU), where the legs are comprised of equivalent limbs with U-joints at the ends. Accordingly, to retain the major compliance properties, the limb geometry corresponds to the parallelogram bars with doubled cross-section area. The geometrical models of the separate kinematic chains can be described by expression (1), where the product components are defined via the standard translational/rotational operators. Because for the rigid manipulator the end-effector moves with purely translational motions, the nominal values of the passive joint coordinates are subject to the specific constraints , which are implicitly incorporated in the direct/inverse kinematics. For the second architecture (3-PRPaR), it is necessary first to derive the stiffness matrix of the parallelogram. Using the adopted notation, the parallelogram equivalent model may be written as

(22)

where, compared to the above case, the third passive joint is eliminated (it is implicitly assumed that ). On the other hand, the original parallelogram may be split into two serial kinematic chains (the "upper" and "lower" ones). Hence, the parallelogram compliance matrix may also be derived using the proposed technique, which yields an analytical expression [11].
Using this model and applying the proposed technique, the compliance matrices were computed for both architectures and for three typical manipulator postures Q0, Q1 and Q2 (see Tables 1 and 2). As follows from the comparison, the parallelograms allow increasing the rotational stiffness roughly by a factor of 10. The second conclusion is related to the stiffness comparison for the unloaded and loaded modes. It was assumed that the loading (Table 3) leads to a translational deflection of 0.5 mm in all Cartesian directions while the platform orientation remains the same. The obtained results confirm the influence of the loading on the manipulator stiffness. In particular, some elements of the stiffness matrix may increase by up to 45%, depending on the working point (Q0, Q1 or Q2). Also, the 3-PUU manipulator is more sensitive to the external loading than its counterpart 3-PRPaR. This justifies the application of the 3-PRPaR architecture for high-speed machining.
6. Conclusions
The paper proposes a new systematic method for computing the stiffness matrix of multi-chain parallel robotic manipulators in the presence of external loading applied to the end-platform. It is based on a multidimensional lumped model of the flexible links, whose parameters are evaluated via FEA modelling and describe both the translational/rotational compliances and the coupling between them. In contrast to previous works,
Table 1. Translational and rotational stiffness of the 3-PUU manipulator (unloaded and loaded modes).
Table 2. Translational and rotational stiffness of the 3-PRPaR manipulator (unloaded and loaded modes).
Table 3. Wrenches for the loaded mode (t = (0.5, 0.5, 0.5, 0, 0, 0)).
the method employs a new solution strategy for the kinetostatic equations and allows computing the stiffness matrices for any given manipulator posture, including singular ones. The efficiency of the proposed method was demonstrated through application examples, which deal with the comparative stiffness analysis of two parallel manipulators of the Orthoglide family. Relevant simulation results have confirmed the essential advantages of the parallelogram-based architecture and validated the adopted design of the Orthoglide prototype. In future work, the method will be extended to other parallel architectures composed of several identical kinematic chains and to other types of external loading.
ACKNOWLEDGMENTS The work presented in this paper was partially funded by the Region “Pays de la Loire”, France and by the EU commission (project NEXT).
AUTHORS
Anatol Pashkevich*, Alexandr Klimchik - Ecole des Mines de Nantes, 4 rue Alfred-Kastler, Nantes 44307, France; Institut de Recherches en Communications et Cybernetique de Nantes, UMR CNRS 6597, 1 rue de la Noë, 44321 Nantes, France. E-mail: anatol.pashkevich@emn.fr.
Damien Chablat, Philippe Wenger - Institut de Recherches en Communications et Cybernetique de Nantes, UMR CNRS 6597, 1 rue de la Noë, 44321 Nantes, France.
* Corresponding author
References
[1] Brogardh T., "Present and future robot control development - An industrial perspective", Annual Reviews in Control, vol. 31, no. 1, 2007, pp. 69-79.
[2] Chanal H., Duc E., Ray P., "A study of the impact of machine tool structure on machining processes", International Journal of Machine Tools and Manufacture, vol. 46, no. 2, 2006, pp. 98-106.
[3] Merlet J.-P., Parallel Robots, Dordrecht: Kluwer Academic Publishers, 2000.
[4] Duffy J., Statics and Kinematics with Applications to Robotics, New York: Cambridge University Press, 1996.
[5] Kövecses J., Angeles J., "The stiffness matrix in elastically articulated rigid-body systems", Multibody System Dynamics, vol. 18, no. 2, 2007, pp. 169-184.
[6] Alici G., Shirinzadeh B., "Enhanced stiffness modeling, identification and characterization for robot manipulators", IEEE Transactions on Robotics, vol. 21, no. 4, 2005, pp. 554-564.
[7] Gosselin C.M., "Stiffness mapping for parallel manipulators", IEEE Transactions on Robotics and Automation, vol. 6, no. 3, 1990, pp. 377-382.
[8] Pashkevich A., Chablat D., Wenger P., "Stiffness analysis of 3-d.o.f. overconstrained translational parallel manipulators". In: IEEE International Conference on Robotics and Automation, 2008, pp. 1562-1567.
[9] Chen S.F., Kao I., "Conservative congruence transformation for joint and Cartesian stiffness matrices of robotic hands and fingers", International Journal of Robotics Research, vol. 19, no. 9, 2000, pp. 835-847.
[10] Quennouelle C., Gosselin C.M., "Stiffness matrix of compliant parallel mechanisms". In: Springer Advances in Robot Kinematics: Analysis and Design, 2008, pp. 331-341.
[11] Pashkevich A., Chablat D., Wenger P., "Stiffness analysis of overconstrained parallel manipulators", Mechanism and Machine Theory, vol. 44, no. 5, 2009, pp. 966-982.
[12] Griffis M., Duffy J., "Global stiffness modeling of a class of simple compliant couplings", Mechanism and Machine Theory, vol. 28, no. 2, 1993, pp. 207-224.
[13] Pigoski T., Griffis M., Duffy J., "Stiffness mappings employing different frames of reference", Mechanism and Machine Theory, vol. 33, no. 6, 1998, pp. 825-838.
[14] Ciblak N., Lipkin H., "Asymmetric Cartesian stiffness for the modeling of compliant robotic systems", ASME Robotics: Kinematics, Dynamics and Controls, vol. 72, 1994, pp. 197-204.
[15] Li Y., Xu Q., "Stiffness analysis for a 3-PUU parallel kinematic machine", Mechanism and Machine Theory, vol. 43, no. 2, 2008, pp. 186-200.
[16] Clavel R., "DELTA, a fast robot with parallel geometry". In: Proceedings of the 18th International Symposium on Industrial Robots, IFR Publication, 1988, pp. 91-100.
[17] Chablat D., Wenger P., "Architecture optimization of a 3-DOF parallel mechanism for machining applications, the Orthoglide", IEEE Transactions on Robotics and Automation, vol. 19, no. 3, 2003, pp. 403-410.
[18] Timoshenko S., Goodier J.N., Theory of Elasticity, 3rd ed., New York: McGraw-Hill, 1970.
INVERSION OF FUZZY NEURAL NETWORKS FOR THE REDUCTION OF NOISE IN THE CONTROL LOOP FOR AUTOMOTIVE APPLICATIONS Mirko Nentwig, Paolo Mercorelli
Abstract:
A robust throttle valve control has been an attractive problem since throttle-by-wire systems were established in the mid-nineties. Control strategies often use a feed-forward controller based on an inverse model; however, mathematical model inversion implies a high order of differentiation of the state variables, resulting in noise effects. In general, neural networks are a very effective and popular tool for modelling, and the inversion of a neural network makes it possible to use these networks in control schemes. This paper presents a control strategy based upon the inversion of a feed-forward trained local linear model tree. The local linear model tree is realized through a fuzzy neural network. Simulated results from real data measurements are presented, and two control loops are explicitly compared.

Keywords: neural networks, fuzzy control, inversion of neural networks, automotive control, noise reduction.

1. Introduction
The automobile industry often models its engines using characteristic diagrams or, more specifically, engine operating maps. These models require a large amount of measured data acquired with advanced instrumentation. Alternatively, physics-based models may be used, but these are very complex and must be simplified for use, which degrades the model. Neural networks are another option for modelling complex systems: they are relatively straightforward, yet may be appropriate even for highly complex modelling problems. The purpose of our work is to show that neural networks can be applied successfully, even in the presence of noise. We apply an inverse local linear model tree, realized through a fuzzy neural network, to a control loop in the presence of noise.

The considered system is the throttle valve shown in Fig. 1, which is displayed along with the parts of the internal combustion engine. The right side of Fig. 1 shows an enlargement of the throttle valve with its most important parameters. C1 and C2 are the mass flow rates at the input and output, respectively. Also, T1, P1 and T2, P2 represent the input and output temperatures and pressures. A1 is the total surface area of the plate, and AD = A1·cos(γ).

A robust throttle valve control has been an attractive problem since throttle-by-wire systems were established in the mid-nineties. An already tested technology is currently available, and recent advanced studies have appeared, [1] and [13]. Mercorelli [8] presented a controller based on inversion using a physical model approach. In particular, the author adopted the following model:

∂i(t)/∂t = −(RCoil/LCoil)·i(t) + (uin(t) − Cm·ω(t))/LCoil,   (1)

∂γ(t)/∂t = ω(t),   (2)

∂ω(t)/∂t = gr·(Cm/J)·i(t) + (−kr·ω(t) − Tkpre − kf(γ)·γ(t))/J,   (3)

in which equation (1) represents the electrical system of the actuator, and equations (2) and (3) describe the mechanical behaviour of the actuator. The coil current i(t), the angular position γ(t), and the angular velocity ω(t) are the state variables, and uin(t) is the input voltage. RCoil and LCoil are the resistance and the inductance of the coil windings. Cm·ω is normally called the induced voltage, Cm is the constant of the motor, and J is the moment of inertia. The gear parameter gr indicates the ratio of the teeth. With this approach, the backlash effect does not generate a stationary error. The parameters kr·ω and kf(γ)·γ represent the viscous friction torque and the total spring torque, respectively. Tkpre is the pretension torque of the spring, which can be considered as a disturbance. It should be noted that in our case, kf(γ)·γ is a non-linear function of the angular position. The expression TL = Cm·i(t) describes the Lorentz torque generated by the actuator. Mercorelli [8] showed that the adopted model is a flat model and that

uin(t) = [−kr·LCoil·(Cm·i(t) − kf(y)·y) + kr·LCoil·Cm·J·(dy/dt)]/(Cm·J) − [J·(Cm·RCoil·i(t) + Cm²·(dy/dt) + kf(y)·LCoil·(dy/dt) + J·LCoil·(d³y/dt³))]/(Cm·J)   (4)

is the inverse system with y = γ. Because of the noise which can affect the signal from the foot pedal, a feed-forward inverse controller may generate spikes and low tracking performance. From (4), it should be noted that mathematical inversion of models implies a high order of differentiation of the state variables, and consequently noise effects. Mercorelli [8] avoided this noise effect by developing an approximated proportional-derivative (PD) regulator, in which the D-part was replaced with a special algorithm. Since a structurally inexact description of the model with imperfect inversion is always present, and external disturbances were not modelled, it was necessary for the control loop to contain a feedback structure.

Fig. 1. Top: Overview. Bottom: Schematic structure of throttle valve.

Fig. 2. Complete control scheme.
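As a rough illustration of how the state-space model (1)-(3) evolves, the three equations can be integrated with an explicit Euler step. All parameter values and the spring law below are illustrative placeholders, not the identified values of the real actuator.

```python
# Illustrative parameters (NOT the paper's identified values).
R_COIL, L_COIL, C_M, J, G_R, K_R, T_KPRE = 2.0, 1e-3, 0.05, 1e-4, 10.0, 1e-3, 0.02

def k_f(gamma):
    """Hypothetical non-linear spring coefficient k_f(gamma)."""
    return 0.5 + 0.1 * gamma ** 2

def step(state, u_in, dt=1e-5):
    """One explicit-Euler step of the model (1)-(3):
    electrical equation, angle kinematics, and torque balance."""
    i, gamma, omega = state
    di = -(R_COIL / L_COIL) * i + (u_in - C_M * omega) / L_COIL
    dgamma = omega
    domega = G_R * (C_M / J) * i + (-K_R * omega - T_KPRE - k_f(gamma) * gamma) / J
    return (i + dt * di, gamma + dt * dgamma, omega + dt * domega)

# Apply a constant positive voltage for 10 ms: the plate opens against
# the spring pretension.
state = (0.0, 0.0, 0.0)
for _ in range(1000):
    state = step(state, u_in=6.0)
```

With a positive input voltage the Lorentz torque quickly dominates the pretension term, so the angle becomes positive; the step size must stay well below the electrical time constant L_Coil/R_Coil for the explicit scheme to remain stable.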
2. Model inversion
The inversion problem in neural networks has attracted many researchers and mathematicians. This is a difficult problem, which involves the inversion of the non-linear membership functions. Fig. 2 presents a schematic structure of a possible control system. To achieve an inversion, we develop an algorithm which allows us to obtain the required model input ur, depending on the desired model output y and the other model inputs u. This procedure applies the LOLIMOT algorithm [9], which is based upon neuro-fuzzy models of Takagi-Sugeno type. During the execution of this algorithm, a "divide and conquer" strategy is applied to the modelling problem, so that the major problem is split into smaller ones. The basic network structure of a local linear neural fuzzy model is depicted in Fig. 3. Every neuron consists of a local linear model (LLM) and a validity function Φ defining the validity of the LLM within the input space. The local linear model output is defined by

ŷi = wi0 + wi1·u1 + wi2·u2 + … + wip·up,

where wij is the LLM parameter at each neuron i. If the validity functions are chosen as normalized Gaussians, then it follows that

Σ_{i=1}^{M} Φi(u) = 1  with  Φi = μi(u) / Σ_{j=1}^{M} μj(u),

where the membership function μi is

μi(u) = exp(−(u1 − ci1)²/(2σi1²)) · exp(−(u2 − ci2)²/(2σi2²)) · … · exp(−(up − cip)²/(2σip²)).

Fig. 3. Top: Network structure of local linear neural fuzzy model. Bottom: Partition of the input space by validity functions.
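The forward pass of such a local linear model tree — normalized Gaussian validity functions weighting local affine models — can be sketched as follows. The two-neuron, one-input parameters are illustrative, not taken from the trained network of Section 4.

```python
import math

def lolimot_output(u, centers, sigmas, weights):
    """Forward pass of a local linear model tree: each neuron i has an
    axis-orthogonal Gaussian membership mu_i(u) (product over inputs),
    a normalized validity Phi_i, and a local affine model
    w_i0 + sum_j w_ij * u_j."""
    mus = []
    for c, s in zip(centers, sigmas):
        mu = 1.0
        for uj, cj, sj in zip(u, c, s):
            mu *= math.exp(-((uj - cj) ** 2) / (2.0 * sj ** 2))
        mus.append(mu)
    total = sum(mus)
    phis = [m / total for m in mus]          # normalized validity functions
    y = 0.0
    for phi, w in zip(phis, weights):
        y_local = w[0] + sum(wj * uj for wj, uj in zip(w[1:], u))
        y += phi * y_local                   # validity-weighted blend
    return y, phis

# Two-neuron, one-input example (illustrative parameters).
y, phis = lolimot_output([0.3],
                         centers=[[0.0], [1.0]],
                         sigmas=[[0.4], [0.4]],
                         weights=[[0.0, 1.0], [1.0, 0.5]])
```

By construction the validities form a partition of unity, so the model output is always a convex combination of the local affine predictions.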
Fink and Toepfer [4] offer some strategies in their analysis of the inversion of non-linear models:
- Inverse access by numerical inversion. Only one model of the non-linear function is created and used for both standard and inverse access. The inverse access equals a numerical inversion and requires the application of optimization methods to determine the input for the requested output.
- Data-driven generation of an inverse model. A model for inverse access is created in addition to the model for standard access.
- Analytical inversion of models. A direct inversion of the forward trained model is applied. Hence, it is an advantage to use model architectures which allow the direct calculation of the inverse model using its own parameters.

The developed algorithm applies an analytical/numerical inversion of a given local linear model structure, and is explained below. The following constraints are required to set up the algorithm:
1. An existing forward trained local linear model tree of the process is available.
2. The expected model output y is known.
3. Input values exist for those inputs upon which y is dependent.

2.1. The Validity Functions Issue
Consider the model output function ŷ with M local linear models and inputs u = [u1, …, up],

ŷ = Σ_{i=1}^{M} (wi0 + wi1·u1 + wi2·u2 + … + wip·up)·Φi(u),   (6)

the validity function

Φi = μi(u) / Σ_{j=1}^{M} μj(u),

and the membership function

μi(u) = exp(−(u1 − ci1)²/(2σi1²)) · exp(−(u2 − ci2)²/(2σi2²)) · … · exp(−(up − cip)²/(2σip²)).   (7)

A difficulty is that, due to the exponential quadratic non-linearity, the model is not invertible. Hence, it is necessary to convert the functions into a linear type, as shown in Fig. 4. The membership function is split into a spline function that consists of linear pieces:

μir(ur) = { (ks/(1.6·σr))·(ur − c) + 1  for −1.6·σr/ks + c ≤ ur ≤ c;  −(ks/(1.6·σr))·(ur − c) + 1  for c ≤ ur ≤ 1.6·σr/ks + c }.   (8)

Using the function defined in equation (8), it is now possible to invert the local linear model tree.

2.2. Algorithm for the Inversion of the LLM
We apply the following algorithm to invert the model:
1. Calculate the LLMs represented in equation (6), omitting the required input variable ur. That is, calculate the LLMs with the available input data until only a linear equation in one input remains, e.g. ŷi = wr·ur + ucalc. To be more comprehensible, if ŷi = wi0 + wi1·u1 + wi2·u2 + wi3·u3 and u1 is required, then ŷi = wi1·u1 + ucalc, with wr·ur = wi1·u1 and ucalc = wi0 + wi2·u2 + wi3·u3.
2. Calculate the membership functions represented in equation (7), omitting the required input variable ur. The membership function of the LLMs is calculated with the available input data to the extent possible. Since the non-linear term depending on ur is omitted, the membership function is a constant number. To be more precise,

μi(u) = exp(−(u2 − ci2)²/(2σi2²)) · … · exp(−(up − cip)²/(2σip²)).   (9)

Then, the input ur is reconsidered in the final membership function, and the following expression is obtained:

μi(u) = exp(−(ur − cir)²/(2σir²)) + μc.   (10)

3. Create the linear membership function for the required input as from equation (8).
Fig. 4. Left: Linear and non-linear validity function. Right: Linear functions on the intervals.
4. Partition the input space of ur. The input space of ur is partitioned into q search intervals, which are used in the later estimation of ur. Every interval describes the validity of half of a local linear model; thus, the input space of every LLM is divided into two intervals. This is necessary due to the structure of the new linear membership functions, because they consist of two linear functions as mentioned above. For every interval, a "left function" and a "right function" are considered (Fig. 7). For the sake of brevity, equation (8) is represented as follows:

μ(ur)i,r = { μ(ur)i,r,1 ; μ(ur)i,r,2 }.   (11)

5. In the following loop, consider every interval for a possible solution of ur. Determine which of the i membership functions μ(ur)i,r are valid for the currently considered interval, by checking every spline:

μ(ur)i,r,1 is valid if μ(interval_left)i,r,1 ≥ 0 ∧ μ(interval_right)i,r,1 ≥ 1,
μ(ur)i,r,2 is valid if μ(interval_left)i,r,2 ≥ 0 ∧ μ(interval_right)i,r,2 ≥ 0,

where ∧ indicates the logical "and". Use the valid membership spline functions to create validity functions for each local linear model: take the previously calculated part of the membership function i, μi,calc as in (10), and sum it with the valid linear spline membership function μ(ur)i,r,{1,2}, where μ(ur)i,r,{1,2} represents a valid spline membership function within the range of functions. This yields:

μ(ur)i = μ(ur)i,r,{1,2} + μi,calc  and  Φ(ur)i = μ(ur)i / Σ_{j=1}^{M} μ(ur)j.

If there are no valid linear membership functions for a local linear model, then the model will not be considered for further actions. Create the output function for every local linear model by multiplying its validity function with the local linear model function:

ŷ(ur)LLM,i = ŷi(ur) · Φ(ur)i.

Next, sum the output functions to create the model output:

ŷ(ur) = Σ_{i=1}^{M} ŷ(ur)LLM,i.

Finally, equate the model output function to the desired model output value, and solve the resulting equation with respect to the variable ur:

ŷ = ŷ(ur).

Verify that the calculated ur is inside the currently considered interval, interval_left ≤ ur ≤ interval_right. If so, accept it as one possible solution; if not, disregard it.

Due to the structure of the validity functions, an inversion of the model is possible only within the input space of the required variable. Beyond these borders, the model input will drift to zero, which is comparable to the normal LOLIMOT behaviour. That is, once a nominal working point is chosen, the inversion is possible within the input domain. The worst case occurs when the nominal working point is close to the border of the domain; then a more suitable division of the input space is needed, as described in the next section.

2.3. Boosting
With a continuous system, it is possible to accelerate the algorithm's runtime behaviour by reducing the time needed to select the validity functions during the inversion. This makes it necessary to perform some off-line calculations on the linear validity functions, which otherwise remain unchanged during the execution of the inversion. For every linear validity function, two points are calculated and saved in a look-up table: the starting point ps and the point pe where the function intersects the input domain axis. This information is used at runtime to pre-select the relevant validity functions; therefore, a region of interest (ROI) around the previously calculated desired input value ur(k-1) has to be defined as an interval, e.g.,

ROI(ur(k-1)) = [ur(k-1) − c·ur(k-1), ur(k-1) + c·ur(k-1)],

where c is a parameter describing the size of the region. If the function crosses, starts, or ends in the ROI, the validity function should be considered. A logic validity function (LVF) can be defined as follows:

LVF = (ps ≤ ROImax ∨ ps ≥ ROImin) ∨ (pe ≤ ROImax ∨ pe ≥ ROImin) ∨ (ps > ROImax ∧ pe < ROImin) ∨ (pe > ROImax ∧ ps < ROImin),   (12)

where ∨ and ∧ indicate the logical "or" and "and" functions, respectively. If the variable LVF assumes a value equal to 1, then the validity function should be considered; if LVF equals 0, it should not be considered. The proposed boosting algorithm is similar to the algorithms used to solve clipping problems in computer graphics, in which the ROI plays the role of the camera field of view.
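The ROI-based pre-selection of Section 2.3 can be sketched as a plain interval-overlap test, a simplification of the logic in (12) in the spirit of clipping; the validity-function endpoints below are hypothetical.

```python
def lvf(p_s, p_e, roi_min, roi_max):
    """Logic validity function in the spirit of (12): keep a linear validity
    function if it starts, ends, or crosses inside the region of interest
    (written here as a plain interval-overlap test)."""
    lo, hi = min(p_s, p_e), max(p_s, p_e)
    return int(hi >= roi_min and lo <= roi_max)

def preselect(functions, u_prev, c=0.2):
    """Keep only the validity functions relevant to the ROI around u_r(k-1),
    where c sets the size of the region as in Section 2.3."""
    roi_min = u_prev - c * abs(u_prev)
    roi_max = u_prev + c * abs(u_prev)
    return [f for f in functions if lvf(f[0], f[1], roi_min, roi_max)]

# Hypothetical (start, end) points of four linear validity functions,
# pre-computed off-line and stored in a look-up table.
funcs = [(0.0, 0.4), (0.45, 0.9), (1.2, 1.6), (-0.5, 2.0)]
kept = preselect(funcs, u_prev=0.5)
```

With u_r(k-1) = 0.5 and c = 0.2 the ROI is [0.4, 0.6], so the function living entirely on [1.2, 1.6] is discarded while the other three survive; at runtime only the surviving splines need to be checked interval by interval.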
3. Analysis of the algorithm

3.1. Stability
The algorithm is mostly stable; however, due to the limited character of the linear validity functions, it is possible that no solution will be found in the border regions of the model. In that case, there are two possible solutions:
- The first solution is to use a more complex approximation of the validity function, e.g., a linear spline consisting of n line segments. This will increase the computational effort; in fact, each line segment represents a single interval which must be considered to solve the inversion problem. Since the validity function is symmetric, the effort is increased by a factor of two. Applying the proposed boosting method will reduce the complexity.
- Another solution is to adjust the zeros of the linear approximation function so that the function covers a larger region of the input domain. This will reduce the accuracy of the estimated input value, but it keeps the computational costs low.

3.2. Ambiguities
Since the algorithm solves a squared equation, it is possible to obtain two valid solutions for each of the q intervals. In the worst case this results in 2·q solutions. Therefore, we introduce a decision criterion to select the correct input value. If the system is continuous, as it is in most real applications, this decision criterion can be based on the previous input value ur(k-1). Other criteria could also be used.

3.3. Accuracy
The accuracy of the inversion is mainly influenced by the linear validity function. Therefore, a linear spline validity function will lead to more accurate results compared with the simplest linear function. As mentioned above, this will increase the computational complexity, so the best compromise between speed and accuracy has to be found. Boosting can be used to reduce these negative effects. With the simplest validity functions, with one linear spline per side, a difference of 10-15 percent between the predicted input value and the real input value can occur in the worst case. However, the result can be improved by applying a numerical optimization process.
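The "numerical optimization process" mentioned above (and the first Fink-Toepfer strategy of Section 2) can be sketched as a bisection search on a monotone slice of a forward model. The forward function below is an illustrative stand-in with the other inputs held fixed, not the trained throttle model.

```python
import math

def forward(u_r):
    """Stand-in forward LLM output over the required input u_r (the other
    inputs fixed): a two-model Gaussian blend, monotone on the search range."""
    mu1 = math.exp(-((u_r - 0.0) ** 2) / 0.32)   # sigma = 0.4, so 2*sigma^2 = 0.32
    mu2 = math.exp(-((u_r - 1.0) ** 2) / 0.32)
    phi1 = mu1 / (mu1 + mu2)
    return phi1 * (1.0 * u_r) + (1.0 - phi1) * (0.5 + 0.8 * u_r)

def invert_numeric(y_target, lo, hi, tol=1e-10):
    """Bisection on a monotone slice: find u_r such that forward(u_r) = y."""
    f_lo = forward(lo) - y_target
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = forward(mid) - y_target
        if abs(f_mid) < tol:
            return mid
        if (f_lo < 0) == (f_mid < 0):
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check: invert the output produced at u_r = 0.37.
u_r = invert_numeric(forward(0.37), lo=-1.0, hi=2.0)
```

Such a search recovers the input to high precision but costs many forward evaluations per query, which is exactly the overhead the analytical spline inversion of Section 2.2 is designed to avoid.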
4. A real application: The Throttle Valve
The training data was obtained from measurements on an experimental setup. We examined the manifold of the throttle angle using a network consisting of three neurons. We set up the inversion problem to depend on four inputs (Fig. 5): the input voltage (u1), the ambient air pressure in mbar (u2), the manifold air-mass pressure in mbar (u3), and the ambient temperature in Celsius (u4). Normally the model output would be the manifold air mass flow in kg/h, but here we used the throttle angle as output to correspond with the model presented in equations (1), (2) and (3). The model is trained with a k-sigma equal to 0.33. In this example, the inversion is tested on the throttle angle, depending on the desired angular trajectory (Fig. 5). The scheme of Fig. 5 shows two possible simulations corresponding to the control schemes represented in Figs. 6 and 7. Fig. 8 shows a phase of acceleration and deceleration using a control scheme with the inversion defined in equation (4) and a proportional-integral-derivative (PID) controller in feedback configuration. Fig. 9 shows a simulation using a control scheme with an inverse model from the proposed neural network and a PID controller in feedback configuration. Both cases are tested with the same PID controller parameters. The input parameters u2, u3 and u4 are chosen as static values.
Fig. 5. Simulink block scheme of the used model.
5. Conclusions and Outlook
This paper describes a powerful algorithm for the inversion of local linear model trees at runtime, applied to an automotive throttle valve control problem as an attractive example. The algorithm was analysed in terms of stability and accuracy. The results demonstrate that an inverse local linear model tree based upon Takagi-Sugeno models can be integrated into a control loop, yielding very good noise reduction performance. To extend this work, possible future efforts should focus on error detection and diagnostics, especially model-based system diagnostics.

Fig. 9. Desired and obtained angular position with noise in superposition using the control scheme as in Fig. 7.
AUTHORS Mirko Nentwig* and Paolo Mercorelli - University of Applied Sciences, Braunschweig/Wolfenbuettel, Wolfsburg, Germany. E-mails: mail@mnentwig.de; p.mercorelli@fh-wolfenbuettel.de. * Corresponding author
Fig. 6. Control scheme with inversion defined in (4) and using a PID control in feedback configuration.

Fig. 7. Control scheme with inverse model from the proposed neural network and using a PID control in feedback configuration.

Fig. 8. Desired and obtained angular position with noise in superposition, using the control scheme of Fig. 6.

References
[1] Nakano K., et al., "Modelling and observer-based sliding-mode control of electronic throttle systems", ECTI Trans. Electrical Eng., Electronics and Communications, vol. 4, no. 1, 2006, pp. 22-28.
[2] Fink A., Nelles O., "Nonlinear internal model control based on local linear neural networks". In: IEEE Systems, Man, and Cybernetics, Tucson, USA, 2001.
[3] Fink A., Nelles O., Fischer M., "Linearization based and local model based controller design". In: European Control Conference (ECC), Karlsruhe, Germany, 1999.
[4] Fink A., Toepfer S., On the Inversion of Nonlinear Models. Technical report, University of Darmstadt, 2003.
[5] Fink A., Toepfer S., Isermann R., "Neuro and neuro-fuzzy identification for model-based control". In: IFAC Workshop on Advanced Fuzzy/Neural Control, Valencia, Spain, 2001, pp. 111-116.
[6] Fink A., Toepfer S., Isermann R., "Nonlinear model-based control with local linear neuro-fuzzy models", Archive of Applied Mechanics, vol. 72, no. 11-12, 2003, pp. 911-922.
[7] Fischer M., Nelles O., Fink A., "Supervision of non-linear adaptive controllers based on fuzzy models". In: 14th IFAC World Congress, Beijing, China, vol. Q, 1999, pp. 335-340.
[8] Mercorelli P., "An optimal minimum phase approximating PD regulator for robust control of a throttle plate". In: 45th IEEE Conference on Decision and Control (CDC 2006), San Diego, USA, 13th-15th December 2006.
[9] Nelles O., Nonlinear System Identification with Local Linear Neuro-Fuzzy Models. Shaker Verlag, 1999.
[10] Nelles O., Nonlinear System Identification. Springer Verlag, 2001.
[11] Nelles O., Fink A., Isermann R., "Local linear model trees (LOLIMOT) toolbox for nonlinear system identification". In: 12th IFAC Symposium on System Identification (SYSID), Santa Barbara, USA, 2000.
[12] Nentwig M., Mercorelli P., "A Matlab/Simulink toolbox for inversion of local linear model trees". In press.
[13] Rossi C., Tilli A., Tonielli A., "Robust control of a throttle body for drive by wire operation of automotive engines", IEEE Trans. Contr. Syst. Technology, vol. 8, no. 6, 2000, pp. 993-1002.
TWO SMART TOOLS FOR CONTROL CHARTS ANALYSIS
Adam Hamrol, Agnieszka Kujawińska
Abstract: The paper deals with the analysis of process stability with the use of process control charts. A new idea of pattern recognition and two original methods of data processing, called OTT and MW, are described. The software application CCAUS (Control Charts - Analysis of Unnatural Symptoms), supporting control chart analysis with OTT and MW, is presented as well. The paper also contains the results of verification of the proposed methods, performed on data obtained from two machining operations. Keywords: process, control chart, process stability, trend, pattern recognition.
1. Introduction The process control chart (PCC) is a statistical tool for supervising and improving process quality. Nowadays quality is usually viewed as conformance to customer needs and expectations. But from the pure manufacturing perspective quality means simply conformance to specifications (no defects) and, additionally, possibly low variability (process stability). The latter condition means that the product characteristics, e.g. dimensions, roughness, etc., obtained in the manufacturing process should not change from item to item. In practice a PCC can be viewed as a statistical procedure in which data is collected, organized, analyzed and interpreted in order to state whether the process is stable. In the past PCCs were applied to production processes, but their use has since evolved to any work where data can be gathered.

2. Process variability and process control chart All processes (characteristics) show some variability. The variability results from two types of causes: the first, which can usually be recognized and controlled, are called special causes. The second type, called common causes (random causes), is inherent to the process and cannot practically be eliminated in an easy way. Process control charts have to indicate whether special-cause variation is present (Fig. 1). Usually PCCs are graphic presentations of process statistics, such as the process average and process variance, which are calculated on samples taken from the running process. Sample values result from measurements performed on chosen characteristics, usually viewed as critical, of the product or of the manufacturing process itself. The chart shows how the statistics change over time. An important part of PCCs are the so-called control (action) and warning lines, which enable decisions about process stability to be made even by a user without mathematical background [8], [3]. When the process is in statistical control (it is stable), the points on the control chart should follow a completely random pattern. The process is said to be out of statistical control when the pattern of the points provides hints to the source of the special causes.
Fig. 1. Idea of a conventional process control chart (with action lines and warning lines) [3].
There are many patterns indicating process instability. The identification of such patterns is oftentimes restricted to four cases: a point beyond the action line, a trend, a run, and shifts (several points above or under the central line) (Fig. 1). The literature indicates many other patterns. Most of them are not easy to identify, and it is difficult to expect the operator to be capable of making the right decision. What to do with a process suspected of being unstable, and how to bring it back under control, is a matter for the process operator. The operator's decisions are the result of his experience, and sometimes the decisions are intuitive. A disadvantage of such a solution is the necessity of constant observation of the pattern on the control charts by a worker whose attention should be focused mainly on the machine's operation. Another weakness is the operator's often insufficient knowledge of the sources of special-cause variation and of correcting actions. Moreover, there is always a risk that an experienced worker will resign from his post; the company then loses his knowledge. In order to solve the above-mentioned problems, automatic pattern recognition should be applied (Fig. 2).

Fig. 2. Process control using standard and smart tools of stability analysis.

In such a case the operator will no longer decide about the stability of the process and will no longer take up corrective actions to the process. This can be achieved by designing and programming certain methods of pattern classification on the charts. Pioneer researchers in this area are Cheng and Hubele from Arizona State University, and Hamrol and Kalka from Poznan University of Technology.
3. Software aided SPC During the last decades a lot of software supporting statistical process control has been devised. One of the most useful methods is the one called "3 zones". It is strictly connected with the assumed normal distribution, which imposes limitations on generating unconventional signals (patterns). The "3 zones" method is applied in well-known IT systems like Statistica and QDAS, and it is based on standards designed for technological processes, described by AT&T in 1959. The zones are defined for the normal distribution and are multiples of the process standard deviation (sigma). The above limitation makes defining non-standard signals, such as cycles, groups of points or mixtures, impossible. The method enables the researcher to define and recognize symptoms on the chart by calculating the probability of point occurrence in the zones of the control chart labeled A, B and C (Fig. 3), [2], [6].

Fig. 3. ABC zones on control chart.

4. Authors' methods The analysis of the methods concerning pattern recognition has led the authors to develop theoretical assumptions and implement software for two new solutions. The new methods were called One Two Three (OTT) and Matrix Weights (MW). The methods are called "smart" because they are based on very simple observations and do not need sophisticated mathematical tools. The first tool is based on the pattern recognition algorithm for control charts by Cheng and Hubele [1]. The other one is a completely new idea [5], [4], [7]. 4.1. OTT method The idea is to correlate segments joining two successive points on the control chart with their slope in relation to the x-coordinate. To each segment an integer 1, 2 or 3 is attributed. The integers reflect the slope direction (Fig. 4). The resulting sequence of integers attributed to successive segments is conceived as a picture, which reflects the process state from the point of view of its stability. The picture is then compared with patterns gathered in a pattern database (Fig. 7). If the difference between the observed picture (under analysis) and a pattern is smaller than a fixed threshold value, a signal about process instability is generated.

Fig. 4. Associating the segment with its slope to the horizontal axis.
The procedure of using the OTT method is as follows (Fig. 5):
Fig. 5. Flowchart of OTT method.

Setting parameters
Step 1. Classification of segment slopes into three classes according to the scheme in Fig. 4. The slope is described by integers: 1 (positive slope) if (ti - ti-1) > j; 2 (no slope) if -j < (ti - ti-1) < j; 3 (negative slope) if (ti - ti-1) < -j, where j is an arbitrarily assumed value.
Step 2. Dividing the control chart sheet into k stripes and attributing to each of them a weight coefficient wi from the interval <1, k/2>. The maximal weight wi is attributed to the central stripe (Fig. 6); the value wi thus determines the location of the stripe. The number of stripes k is fixed by an expert.

Fig. 6. Dividing PCC sheet into k stripes (e.g. k=10) and attributing to them the weights w.

Step 3. Defining the pattern database P. A pattern Pi is a sequence of the integers 1, 2 and 3 (a sequence of segment slopes). Each combination of the integers is correlated with a process instability symptom (Fig. 7).

Fig. 7. Examples of patterns.

Recognizing process instability
Step 4. A picture on the PCC, i.e. a sequence of points (see Fig. 1) to be examined in order to make a decision about process stability, is presented as a vector [Oi; Mi]. The component Oi is a sequence of the integers 1, 2 or 3, and the component Mi reflects the location of the vector on the PCC sheet. Mi is the product of the weights of the areas in which the beginning and the end of the i-th segment under observation are located: Mi = wi * wi+1.
Step 5. The vector recorded in the way defined in Step 4 is compared in a row with the patterns Pi defined earlier in the database. The distance di between the pattern Pi and the picture Oi is examined. The distance is a measure of how similar the examined vector and the specific pattern are. For each segment of the analyzed vector the value of di is calculated, where gi = log M.
Step 6. The similarity coefficient S is calculated, where L is the maximum value of the sum of di (e.g. for a chart with ten stripes the maximum sum of di is equal to seven).
Step 7. Decision making: the process is unstable if S > Sth. When the calculated S value is greater than the limit value Sth and close to 1, the analyzed picture is said to be strongly similar to the given pattern in the database.

The steps presented above concern trends only. In order to assign the right pattern class to shifts or fluctuations (Fig. 1), an additional condition must be satisfied. An example of using the method described is shown in Fig. 8.
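Step 1 of the procedure above can be sketched as follows; the threshold value j and the sample data are illustrative, not taken from the paper.

```python
def encode_slopes(points, j=0.5):
    """Step 1 of the OTT method: attribute to each segment between two
    successive chart points the integer 1 (positive slope), 2 (no slope)
    or 3 (negative slope), using the arbitrary threshold j."""
    code = []
    for prev, curr in zip(points, points[1:]):
        diff = curr - prev
        if diff > j:
            code.append(1)
        elif diff < -j:
            code.append(3)
        else:
            code.append(2)
    return code

picture = encode_slopes([0.0, 1.0, 2.1, 2.2, 1.0])
# rising, rising, flat, falling -> the "picture" [1, 1, 2, 3]
```

The resulting integer sequence is the Oi component of the vector compared against the pattern database in Steps 5-7.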
4.2. The MW method The idea of the MW (Matrix Weights) method is oriented towards processes in which instability is revealed, first of all, in the form of trends. In an analytical approach, searching for a trend means comparing subsequent values of a given sequence of points on the PCC sheet. Denoting by xi the subsequent values of a variable X, a rising trend occurs if the following condition is met: x1 < x2 and x2 < x3 and ... and xn-1 < xn. Unfortunately, in practice trends seldom have such a pure shape. For example, the second sequence in Fig. 9 can be described by relations of the form: x1 < x2 and x2 > x3 and x3 < x4 and x4 > x5 and ... and xn-1 < xn. Information in this form is not convenient for making a proper decision about the process
Fig. 8. Example of pattern recognition.

stability, because there is an indefinite number of various combinations. One can notice, however, that no matter how the points are situated relative to each other, they are located in specific strips on the control chart. This observation is the starting point of the method described below. The developed method is based on division of the chart into a matrix of [k × n] size (k columns, n rows), with each cell assigned a weight wi ∈ <0, 1>. The idea of the method is shown in Fig. 10.

Preparation stage
Step 1. Dividing the PCC field into a matrix of [k × n] dimensions.
Step 2. Attributing to the matrix fields weights wi ∈ <0, 1>. The distribution of the weights is correlated with specific symptoms of process instability. Defining the pattern database.
Fig. 9. Three examples of trends.
Fig. 10. Flowchart of MW method.

Recognition stage
Step 3. For a given sequence of points on the PCC sheet a value S is calculated. The measure S is the sum of the weights wi attributed to the points of the sequence.
Step 4. Comparison of S with the threshold value Sth, which is fixed by an expert. If S is greater than or equal to the limit value, a signal is produced indicating the appearance of the pattern.
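The recognition stage can be sketched as follows, assuming one matrix column per sample and one row per value strip; the weight matrix and the grid mapping are illustrative, not the authors' tuned matrices.

```python
import numpy as np

def mw_score(points, weights, y_min, y_max):
    """MW recognition sketch: `weights` is an [n_rows x k_cols] matrix laid
    over the PCC sheet (column i <-> sample i, rows split the value range
    [y_min, y_max]). S is the sum of the weights of the cells in which the
    successive points fall (Steps 3-4)."""
    n_rows, k_cols = weights.shape
    s = 0.0
    for col, y in enumerate(points):
        frac = (y - y_min) / (y_max - y_min)            # position in range
        row = min(n_rows - 1, max(0, int(frac * n_rows)))
        s += weights[row, col]
    return s

# Rising-trend matrix: weight 1 along the diagonal band, 0 elsewhere
# (cf. Fig. 11); only a rising sequence collects the full score.
W = np.eye(5)                         # 5 samples, 5 strips (illustrative)
rising = [0.1, 0.3, 0.5, 0.7, 0.9]
flat = [0.5] * 5
S_rising = mw_score(rising, W, 0.0, 1.0)
S_flat = mw_score(flat, W, 0.0, 1.0)
```

With a threshold Sth near the full diagonal sum, only sequences that climb through the strips trigger the rising-trend signal.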
A limitation of the method is the necessity of creating a matrix for each class of patterns, as well as of choosing the limit value Sth that determines recognition of the picture as a symptom (Fig. 11).
Fig. 11. Example of rising and falling trend matrix.

4.3. Verification of the developed methods Verification of the developed methods was carried out on sets of data obtained from a grinding process (process 1) and from superfinishing of the surface of a TV screen (process 2). In order to verify the efficiency of the methods, they were programmed in the DELPHI 7.0 language; the software was called CCAUS (Control Charts - Analysis of Unnatural Symptoms). The software enables: introduction of measurement data; carrying out basic statistical analysis of the data; creating control charts; analysing charts and searching for symptoms using the OTT and MW methods; creating a database with the sources of process instability; and creating a database of corrective actions. The main aim of CCAUS is the analysis of control charts. The analysis is carried out by comparing the pictures created on the chart with patterns defined by an expert in the database.

The verification was carried out according to the following plan: defining a k-element set of patterns for processes 1 and 2 (P1, P2) - the patterns were defined according to the control chart instructions obligatory in the companies from which the information about the processes was obtained, and in both cases 7-element pattern sets were obtained; collecting data from the processes (V1, V2), as well as the results of picture recognition by the machine operator (OP); analysis of the data classes V1 and V2 - marking all the patterns indicating changes in process stability, according to the experts and the person responsible for the process; recognizing the symptoms in the classes V1 and V2 by applying the OTT method, the MW method and artificial neural networks (ANN); comparing the effectiveness of recognition by the measures MPr and MBr,

where: MPr – indicator of correct recognition, MBr – indicator of incorrect recognition, n – the number of all the patterns, nr – the number of patterns recognized by the r-method, N – the number of all the indications, nbr – the number of incorrect indications of the r-method. The results of the verification are presented in Table 1 and in Fig. 13. As a benchmark, recognition using a neural-network-based algorithm (ANN) was applied.

Table 1. Verification results.

The values of the MB measure are not presented because in all cases they were less than 1%.
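Since the equations for the two measures are not reproduced here, the following sketch assumes the ratios implied by the symbol definitions above (MPr = nr/n and MBr = nbr/N, expressed in percent); this reading is an inference, not a formula quoted from the paper.

```python
def mp(n_recognized, n_patterns):
    """Correct-recognition indicator: assumed MPr = nr / n, in percent."""
    return 100.0 * n_recognized / n_patterns

def mb(n_bad, n_indications):
    """Incorrect-recognition indicator: assumed MBr = nbr / N, in percent."""
    return 100.0 * n_bad / n_indications

mp_example = mp(6, 7)    # e.g. 6 of the 7 defined patterns recognized
mb_example = mb(1, 200)  # e.g. 1 incorrect indication out of 200
```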
Fig. 13. The values of the MP measure for the OP, OTT, MW and ANN methods.

The weakest of the methods is the traditional one (assessment by operators, OP), which has the least value of the MP measure (Fig. 13). The percentages of correct recognition by the MW and ANN methods almost approximate each
other, but there is a slightly better performance of the artificial neural network (for both process 1 and process 2). The verification of the methods' efficiency has confirmed that the operator is a weak element in the analysis of control charts. The operator performs well when dealing with a trend, run or shift pattern, but not with mixture-type patterns or with 2-of-3 points in the warning area. The classification based on the OTT method provides an average result, and it is much better than human recognition. Similarly to the operator, the OTT method does not "like" the patterns called mixtures. The best results were obtained for the artificial neural networks and the matrix weights (MW) method.
5. Conclusions The control charts are pictures of process stability. They enable the use of picture recognition techniques in the analysis of the charts. The developed OTT and MW methods provided good results of process state recognition. They proved to be more efficient than the human operator. The developed methods enable experts to create unconventional patterns of process instability, which significantly widens the possibility of their application.

ACKNOWLEDGMENTS This work was supported by the AGH University of Science and Technology under Grant No. 11.11.120.612.
AUTHORS Adam Hamrol*, Agnieszka Kujawińska - Institute of Mechanical Technology, Poznan University of Technology, ul. Piotrowo 3, 61-138 Poznań, Poland. Tel: (61) 665 2774 (A. Hamrol), (61) 665 2798 (A. Kujawińska). E-mail: adam.hamrol@put.poznan.pl. * Corresponding author
References
[1] Cheng Ch., Hubele N., "A pattern recognition algorithm for an x control chart", IIE Transactions, no. 28, 1996.
[2] Dietrich E., Schulze A., Statistic Methods for Measuring Resources' Qualification, Notika System: Warsaw, 2000.
[3] Hamrol A., Quality Management with Example, PWN: Warsaw, 2008.
[4] Hamrol A., Kujawińska A., New Method of Control Charts Analysis, Poznan University of Technology ATMiA: Poznań, 2006.
[5] Hamrol A., Kujawińska A., "Solving of tasks symptoms classification with neural networks". In: Komputerowo Zintegrowane Zarządzanie (Computer Integrated Management), vol. 1, WNT: Warsaw, 2003, pp. 375-382.
[6] Juran J.M., Godfrey A.B., Juran's Quality Handbook, McGraw-Hill, 2000.
[7] Kujawińska A., "Automation of the Quality Control Charts". In: 13th International DAAAM Symposium "Intelligent Manufacturing & Automation: Learning From Nature", Vienna University of Technology, Vienna, Austria, 2000.
[8] Woodall W.H., "Controversies and Contradictions in Statistical Process Control", Journal of Quality Technology, 2000.
USING VISUAL AND FORCE INFORMATION IN ROBOT-ROBOT COOPERATION TO BUILD METALLIC STRUCTURES Jorge Pomares, Pablo Gil, Juan Antonio Corrales, Gabriel J. García, Santiago T. Puente, Fernando Torres
Abstract: In this paper, a cooperative robot-robot approach to constructing metallic structures is presented. In order to carry out this task, a visual-force control system is proposed. The visual information is provided by an eye-in-hand camera and a time-of-flight 3D camera. Both robots are equipped with a force sensor at the end-effector. In order to allow a human to cooperate with both robots, an inertial motion capture system and an indoor localization system are employed. This multisensorial approach allows the robots to cooperatively construct the metallic structure in a flexible way, sharing the workspace with a human operator. Keywords: visual servoing, force control, sensor fusion, estimation algorithms, robot vision.
1. Introduction Automatic assembly processes involve different disciplines such as assembly sequence generation, assembly interpretation, robot positioning techniques based on vision and other sensors, and handling of the objects of the assembly [7]. Sensors are an important subject within machine vision for intelligent manipulation of objects in situations with a high degree of randomness in the environment. Sensors increase the ability of a robot to adapt to its working environment. Currently, visual sensory feedback techniques are widely considered by researchers for manufacturing process automation. Over the last few years, these techniques have been used for inspection and handling of objects [11], for estimation of pose with range data and three-dimensional image processing [4], or with stereo vision [9]. Human-robot interaction to help in the modelling and localization of objects [10], and sensor fusion and control techniques to pose and insert objects [14] in assembly processes, are employed more and more. The assembly system proposed in this paper has important advantages over classic assembly systems, mainly due to its interaction between human and robot. In this system, the human performs assistance tasks in the manipulation and positioning of objects. Another important aspect is the extensive use of sensors in the different phases of the task. The implemented system is composed of several subsystems; among them, a visual-force control subsystem to guide the movement of the robot and control the manipulation of objects in each planned task stands out. On the one hand, the basic task of the visual information is to control the pose of the robot's end-effector using
information extracted from images of the scene. On the other hand, the force information is used to control the handling and grasping of the manipulated objects. The visual information is obtained from a camera mounted on a robot's end-effector, and the force data is obtained from a force sensor. The metallic structure to be assembled is manipulated with different tools, which are interchanged automatically depending on the task that has been planned. Furthermore, the movement of a human who interacts with the robot in the same workspace is tracked, and his positions are modelled with an RTLS (Real-Time Location System) based on UWB (Ultra-WideBand) radio frequency and with a full-body human motion capture suit. This suit is based on inertial sensors, a biomechanical model and sensor fusion algorithms. Finally, the proposed assembly system is complemented with a time-of-flight 3D camera to help the visual control subsystem determine the localization of objects. To show how each subsystem works in an assembly process, a complex metal structure has been built. The key to constructing it is to combine gripping and insertion movements among several types of metal pieces, using robotic and human manipulators jointly to perform collaborative tasks that facilitate correct and robust assembly. This paper is organized as follows: The system architecture is presented in Section 2. Section 3 briefly describes the different phases of the system. These phases are presented in detail in the following sections. The visual servoing and the visual-force control approach employed to guide the robot are described in Sections 4 and 5 respectively. The robot-robot and human-robot cooperation during the task are shown in Sections 6 and 7. The final section presents the main conclusions reached.
2. System architecture The system architecture is composed of two 7-d.o.f. Mitsubishi PA-10 robots which are able to work cooperatively. Both robots are equipped with a tool interchanger to employ the tools required during the task (gripper, robotic hand, screwdriver, camera, etc.). Both robots are equipped with a force sensor. An inertial human motion capture system (GypsyGyro18 from Animazoo) and an indoor localization system (Ubisense) based on Ultra-WideBand (UWB) pulses are used to precisely localize the human operator who collaborates in the assembly task. The motion capture system is composed of 18 small inertial sensors (gyroscopes) which measure the orientation (roll, pitch and yaw) of the operator's limbs. The UWB localization system is composed of 4 sensors which are situated at fixed positions in
the workplace, and a small tag carried by the human operator. The tag sends UWB pulses to the sensors, which estimate the global position of the human.
Fig. 1. System architecture.
3. Phases in the assembly system The different phases which compose the assembly system are illustrated in Fig. 2. These phases are the following: Phase 1. Visual servoing. This system is employed to guide the robot by using visual information. Phase 2. Visual-force control. This approach is employed during the insertion to control not only the robot position but also the robot interaction forces. Phase 3. Robot-robot cooperation. The two robots are required to work jointly, in order for one robot to detect visual features of the insertion task performed by the other robot. Phase 4. Robot and human sharing the workspace. The system coordinates the robot behaviour between the human and the robot. In the next sections these phases are described in detail.
4. Visual servoing In this section, an approach to guide the robot using visual information is presented. To do this, it is necessary to track the desired trajectories by using a visual servoing system employing an eye-in-hand camera system.
In a robotic task, the robot must frequently be positioned at a fixed location with respect to the objects in the scene. However, the position of these objects is not always controlled, so it is not possible to guarantee in advance the location of the robot's end-effector needed to correctly accomplish the task. Visual servoing is a technique that allows positioning a robot with respect to an object using visual information [8]. Basically, the visual servoing approach consists of extracting visual data from an image acquired by a camera and comparing it with the visual data obtained at the desired position of the robot. By minimizing the error between the two images it is possible to drive the robot to the desired position. Image-based visual servoing uses only the visual data obtained in an image to control the robot movement. The behaviour of these systems has been proved to be robust in local conditions (i.e., when the initial position of the robot is very near its final location) [2]. However, for large displacements, errors in the computation of the intrinsic parameters of the camera affect the correct behaviour of the system [1]. Image-based visual servoing is adequate to position a robot from an initial point to a desired location, but it cannot control intermediate 3D positions of the end-effector. A solution to this problem is to reach the correct location by following a desired path. The desired path, T = { ᵏs / k ∈ 1..N } (with ᵏs being the set of M points or visual features observed by the camera at instant k, ᵏs = { ᵏfi / i ∈ 1..M }), is sampled, and these references are sent to the system as the desired references for each moment. In this way, the current and the desired positions are always very close together, and the system takes advantage of the good local behaviour of image-based visual servoing. A visual servoing task can be described by an image error function, et, which must be regulated to 0:

et = s - s*   (1)

where s is an M × 1 vector containing the M visual features of the current state, while s* denotes the visual feature values in the desired state. Ls represents the interaction matrix, which relates variations in the image with the velocity of the camera:

ṡ = Ls · ṙ   (2)

where ṡ is the time derivative of the image features and ṙ indicates the velocity of the camera.
By imposing an exponential decrease of et (ėt = -λ1 et), it is possible to obtain the following control action for a classical image-based visual servoing system:

vc = -λ1 Ls⁺ (s - s*)   (3)

where Ls⁺ is the pseudoinverse of an approximation of the interaction matrix [8].

Fig. 2. Phases in the assembly system.
The method employed to track a previously defined path in the image space must be able to control the desired tracking velocity. The set of visual features observed at the initial camera position is represented by ¹s. From this initial set of image features it is necessary to find an image configuration which provides the robot with the desired velocity, |vd|. To do so, the system iterates over the set T. For each image configuration ᵏs the corresponding camera velocity is determined considering an image-based visual servoing system (at this first stage s = ¹s):

ᵏv = -λ1 Ls⁺ (s - ᵏs)   (4)

This process continues until |ᵏv| is greater than the desired velocity, |vd|. At this moment, the set of features ᵏs becomes the desired features to be used by the image-based visual servoing system (see Equation (3)). However, the visual features ʲs which provide exactly the desired velocity lie between ᵏs and ᵏ⁻¹s. To obtain the correct image features the method described in [5] is employed. Therefore, once the control law represented in Equation (4) is executed, the system searches again for a new image configuration which provides the desired velocity. This process continues until the complete trajectory is tracked.
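The reference-selection loop above can be sketched as follows. The refinement of [5] that interpolates between ᵏs and ᵏ⁻¹s is omitted, and the one-feature setup with an identity interaction matrix is a toy illustration only.

```python
import numpy as np

def select_reference(s, T, L_pinv, lam, v_des):
    """Walk along the sampled image path T and return the first reference
    ks for which the velocity demanded by equation (4) exceeds |vd|.
    (The interpolation step of [5] between ks and k-1s is omitted.)"""
    chosen = T[0]
    for ks in T:
        v = -lam * L_pinv @ (s - ks)   # equation (4)
        chosen = ks
        if np.linalg.norm(v) > v_des:
            break
    return chosen

# Toy one-feature path from 0 to 1 with an identity "interaction matrix":
# the demanded velocity grows with the distance to the candidate reference.
T = [np.array([k / 10.0]) for k in range(11)]
s = np.array([0.0])
ref = select_reference(s, T, np.eye(1), lam=1.0, v_des=0.35)
```

Because the chosen reference is always a bounded distance ahead of the current features, the commanded velocity stays near |vd| throughout the tracking.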
Journal of Automation, Mobile Robotics & Intelligent Systems, VOLUME 3, N° 3, 2009

5. Visual-force control
Now we consider the task of tracking a path using visual and force information. The visual loop carries out the tracking of the desired trajectory in the image space. To do this, the method to track trajectories in the image described in Section 4 is employed:

v_c = -λ1 L_s^+ (s - s^j)    (5)

where s^j is the set of features in the path obtained by the system to maintain the desired velocity. Before defining the visual-force controller employed, the meaning of the force-image interaction matrix, L_FI, is described. Consider F as the interaction forces obtained with respect to the robot end-effector and r as the end-effector location. The interaction matrix for the interaction forces, L_F, is defined in this way:

L_F = ∂F/∂r  →  L_F^+ = (L_F^T L_F)^(-1) L_F^T    (6)

Through this last relationship and by applying (2) it is obtained:

ṡ = L_s · ∂r/∂t = L_s · (∂r/∂F) · (∂F/∂t) = L_s · L_F^+ · Ḟ  →  ṡ = L_FI · Ḟ    (7)

where Ḟ is the time derivative of the interaction forces and L_FI = L_s · L_F^+ is the force-image interaction matrix. This matrix is estimated using exponentially weighted least-squares [6]. As described in previous works [12], in order to guarantee the coherence between the visual and force information, it is necessary to modify the image trajectory through the interaction forces. Therefore, in an application in which a constant force must be maintained against the workspace, the image trajectory must be modified depending on the interaction forces. To do so, using the matrix L_FI, the new desired features used by the controller during the contact are:

s_d = s^j + L_FI · (F - F_d)    (8)

Applying (8) in (3), the system is able to track a previously defined path in the image while being compliant with the surface of the interaction object:

v_c = -λ1 L_s^+ (s - s_d)    (9)
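One step of the compliant tracker combining (8) and (9) can be sketched as follows; this is an illustrative toy (the function name, the 1-D contact force and the identity matrices are assumptions, not the authors' code):

```python
import numpy as np

def visual_force_control(s, s_j, F, F_d, L_s_pinv, L_FI, lam):
    """Shift the desired image features by the force error through the
    force-image interaction matrix (eq. 8), then apply the image-based
    control law on the corrected features (eq. 9)."""
    s_d = s_j + L_FI @ (F - F_d)          # eq. (8)
    v_c = -lam * L_s_pinv @ (s - s_d)     # eq. (9)
    return s_d, v_c

# Toy example: 2 image features, a 1-D contact force, identity Ls+ (assumptions).
s = np.array([1.0, 1.0])                  # current features
s_j = np.array([0.0, 0.0])                # desired features from the path tracker
F, F_d = np.array([2.0]), np.array([1.5]) # measured and desired force
L_FI = np.array([[1.0], [0.0]])           # hypothetical: force shifts feature 0 only
s_d, v_c = visual_force_control(s, s_j, F, F_d, np.eye(2), L_FI, lam=0.5)
```

When the measured force exceeds the desired one, the desired features move away from the contact, so the image trajectory is re-shaped on-line as described above.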
Fig. 3. 3D evolution of the end-effector in a bar insertion task.

Figure 3 shows the 3D path followed to perform one of the assemblies needed to construct the structure. The desired path has been modified taking into account the forces measured at the end-effector of the robot. In this way, the robot is able to correctly introduce the bar into the aluminium holder. Figure 4 shows the desired image path and the path modified by the visual-force controller described in this section. The task can be accomplished thanks to the force-image interaction matrix, which allows the robot to modify the desired image trajectory. The trajectory in the image space is recomputed on-line.

Fig. 4. On-line modification of the features in the image in an insertion task by using the visual-force controller.
6. Robot-robot cooperation
Once the bar has been inserted, a screw must be inserted to join the new bar to the structure. Before the insertion of the screw, the hole in the structure and the hole in the tube must be made coincident. As previously described, one robot (robot 1) rotates the tube until its hole coincides with the structure hole. To distribute tasks between the robots, a global planner is employed [15]. The global planner generates two tasks: "Detecting the hole" (T1) and "Inserting the bolt" (T2). Task T1 is divided into two actions: "Location of the bar hole" (A11) and "Rotating the bar to find the hole" (A12). Task T2 has only one action: "Inserting the bolt" (A2). Once the actions to be performed are generated, the task planner has to distribute them among the robots. Considering the tools available (both robots are equipped with a force sensor; robot 1 has a parallel gripper, a screwdriver and a vacuum tool; robot 2 has a Barrett hand and a range camera), action A11 must be performed by robot 2. Action A12 requires both robots: robot 1 rotates the bar using the gripper while robot 2 locates the hole with the range camera. Action A2 must be performed by robot 1 because it is the one that has the screwdriver.
Action A11 has to be performed before the tube insertion; otherwise the hole would not be visible to locate it. To locate the hole, its position is approximately known from a CAD model of the workspace. With that information, the robot has to position the camera in front of the hole. According to the geometric restrictions, the trajectory planner determines the movements of the robot that maximize the visibility of the hole. Fig. 5 shows the sequence of images captured by the range camera along the movement of the robot. In that sequence the hole in the structure is located, maximizing its visibility.
Initially, the bar is not visible at all in the image. With the movement of the camera, the visibility of the structure increases, improving the visibility of the hole that is the target of this action. Once this action is done, robot 1 has to insert the bar into the structure. After this, the bar must be oriented to achieve correct visibility of the hole. While robot 2 holds the camera, robot 1 rotates the bar. These are the actions assigned by the task planner to each robot. If the hole is not visible, the bar is rotated until the correct orientation is found and the hole becomes accessible for inserting the bolt. This last action is performed cooperatively: one robot rotates the bar while the other controls the range camera. Once the bar is properly oriented, the robot exchanges the gripper for a screwdriver to insert the bolt into the hole [13].
7. Robot and human sharing the workspace
A human operator collaborates in the assembly task in order to add a T-connector at the end of each tube of the metallic structure. The operator places the connectors because this task is difficult for the robots to perform. Meanwhile, the two robots place the tubes because they might be too heavy for the human. When the human approaches the metallic structure to perform this task, he/she may enter the workspace of the robots. Because of this, the system has to ensure the safety of the human operator by precisely tracking his/her location.
Fig. 5. Location of the bar hole: a) the hole is not visible in the range camera view, b) the hole starts to be visible in the range camera view, c) the hole is visible in the range camera view, d) grey-level and real image of the hole.

An inertial motion capture system is used to avoid possible collisions between the human operator and the robots. This system is able to track all the movements of the full body of the human and represents them on a 3D hierarchical skeleton (Fig. 6). Thereby, this system not only estimates the global position of the operator in the environment but also determines the location of all the limbs of his/her body. Although this system registers the relative positions of the different parts of the skeleton very precisely, it accumulates a significant error in the global displacement of the skeleton in the workplace. Therefore, an additional localization system is needed in order to correct this error.
Fig. 6. 3D representation of the skeleton registered by the motion capture system. The other components of the environment (robots and turn-table) are also represented.
A UWB localization system is used to correct the global translational error of the motion capture system. The UWB localization system registers more precise global translation measurements but has a lower sampling rate (5-9 Hz instead of 30-120 Hz). The fusion of the global translation measurements from both tracking systems combines their advantages: the motion capture system keeps a high sampling rate (30 Hz) while the UWB system corrects the accumulated translation error. A fusion algorithm based on a standard Kalman filter [3] has been applied in order to combine the translation measurements from both trackers. The measurements from the motion capture system are introduced in the prediction step of the Kalman filter while the measurements from the UWB system are introduced in the correction step. Therefore, the prediction step is executed with a higher frequency than the correction step. Each time a measurement from the UWB system is received, the correction step of the filter is executed and the transformation matrix between the coordinate systems of both trackers is re-calculated. This new transformation matrix is applied to the subsequent measurements from the motion capture system, and thus their accumulated error is corrected. Between each pair of UWB measurements, several measurements from the motion capture system are registered. Thereby, the tracking system keeps a high sampling rate (30 Hz), which is appropriate for human motion detection.
In Fig. 7 two types of movement of a human in the workspace are shown:
• A movement with linear displacement: used when the human walks towards the structure that is being built.
• A movement with rectangular displacement: used when the human walks around the metallic structure in the workspace.
For each movement, the global translation of the human in the workspace is computed by fusing both systems: UWB and human motion capture.
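A minimal, per-axis sketch of this prediction/correction scheme (a hypothetical 1-D constant-position model with made-up noise values, not the authors' implementation) could look like:

```python
class TranslationFusion:
    """1-D sketch of the mocap/UWB fusion: high-rate motion-capture
    displacements drive the Kalman prediction step; sparse UWB position
    fixes drive the correction step."""
    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.05):
        self.x, self.p = x0, p0     # state estimate and its variance
        self.q, self.r = q, r       # process and UWB measurement noise (assumed)

    def predict(self, mocap_dx):    # 30 Hz: integrate mocap displacement
        self.x += mocap_dx
        self.p += self.q

    def correct(self, uwb_x):       # 5-9 Hz: absolute UWB position fix
        k = self.p / (self.p + self.r)
        self.x += k * (uwb_x - self.x)
        self.p *= (1.0 - k)

f = TranslationFusion()
for _ in range(6):                  # mocap drifts: +0.2 m reported per step
    f.predict(0.2)
f.correct(1.0)                      # UWB fix: true position is about 1.0 m
```

Several predictions run between two corrections, mirroring the rate mismatch described above; the correction pulls the drifting mocap estimate back towards the UWB fix.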
We can observe that the position estimate is better when the fusion is employed. The result of the fusion algorithm is a set of translation measurements which determine the global position of the human operator in the workplace. These measurements are applied to the relative measurements of the motion capture skeleton in order to obtain the global position of each limb of the human operator's body.

Fig. 7. Position estimates obtained with the fusion algorithm.

The algorithm that controls the robots' movements verifies that the distance between each limb of the human and the end-effector of each robot is always greater than a specified threshold (1 m). When the human-robot distance is smaller than this safety threshold, the robot stops its normal behaviour and initiates a safety behaviour: it remains still until the human-robot distance is again greater than the threshold. Thereby, collisions between the human and any of the robots are avoided and the human's safety is ensured.
8. Conclusions
In this paper a robotic system to assemble a metallic structure has been presented. An important aspect of the proposed application is the flexibility provided by the multisensorial system employed. These sensorial systems, developed in our previous works, cooperate in this application in order to provide a high degree of flexibility. Furthermore, in order to carry out the task successfully, the human and the robots must work in the same workspace. To do so, an inertial motion capture system is used to avoid possible collisions between the human operator and the robots.
We have also presented different ways to carry out different assembly tasks. We have used a time-independent visual servoing system to guide the robots. This system is robust to interruptions of the task, which would otherwise make the system lose the references needed to follow the trajectory. In addition, the visual servoing has been complemented with force control to correct the robot position. We have also studied how the information provided by the human motion capture system and the UWB system determines the position of the human in the workspace, in order to maintain the security distance between robot and human and avoid collisions. To do this, a data fusion method based on a Kalman filter has been used. Finally, we have shown the utility of combining assembly and inspection tasks between robots: for example, the manipulation of a bar by one robot while another, equipped with a range camera, detects the suitable position in the insertion task.
ACKNOWLEDGMENTS
This work was funded by the Spanish MEC project "Design, Implementation and Experimentation of Intelligent Manipulation Scenarios for Automatic Assembly and Disassembly Applications (DPI2005-06222)" and by the FPU grant AP2005-1458.
AUTHORS
Jorge Pomares*, Pablo Gil, Juan Antonio Corrales, Gabriel J. García, Santiago T. Puente, Fernando Torres - Physics, Systems Engineering and Signal Theory Department, University of Alicante, PO Box 99, 03080, Alicante, Spain. Tel. +34 965903400. Fax +34 965909750. E-mails: {jpomares, pablo.gil, jcorrales, gjgg, santiago.puente, fernando.torres}@ua.es
* Corresponding author

References
[1] Chaumette F., Hutchinson S., "Visual Servo Control, Part I: Basic Approaches", IEEE Robotics and Automation Magazine, vol. 13, no. 4, 2006, pp. 82-90.
[2] Chaumette F., "Potential problems of convergence in visual servoing", Int. Symposium on Mathematical Theory of Networks and Systems, Padova, Italy, 1998.
[3] Corrales J.A., Candelas F.A., Torres F., "Hybrid Tracking of Human Operators using IMU/UWB Data Fusion by a Kalman Filter", In: 3rd ACM/IEEE International Conference on Human-Robot Interaction, Amsterdam, 2008.
[4] Dongming Z., Songtao L., "A 3D image processing method for manufacturing process automation", Computers in Industry, vol. 56, 2005, pp. 975-985.
[5] Garcia G.J., Pomares J., Torres F., "A new time-independent image path tracker to guide robots using visual servoing", In: 12th IEEE International Conference on Emerging Technologies and Factory Automation, Patras, Greece, 2007.
[6] Garcia G.J., Pomares J., Torres F., "Robot guidance by estimating the force-image interaction matrix", IFAC International Workshop on Intelligent Manufacturing Systems, Alicante, Spain, 2007.
[7] Gil P., Pomares J., Puente S.T., Diaz C., Candelas F., Torres F., "Flexible multi-sensorial system for automatic disassembly using cooperative robots", Computer Integrated Manufacturing, vol. 20, no. 8, 2007, pp. 757-772.
[8] Hutchinson S., Hager G.D., Corke P.I., "A tutorial on visual servo control", IEEE Trans. Robotics and Automation, vol. 12, no. 5, 1996, pp. 651-670.
[9] Kosmopoulos D., Varvarigou T., "Automated inspection of gaps on the automobile production line through stereo vision and specular reflection", Computers in Industry, vol. 46, 2001, pp. 49-63.
[10] Motai Y., "Salient feature extraction of industrial objects for an automated assembly system", Computers in Industry, vol. 56, 2005, pp. 943-957.
[11] Pauli J., Schmidt A., Sommer G., "Vision-based integrated system for object inspection and handling", Robotics and Autonomous Systems, vol. 37, 2001, pp. 297-309.
[12] Pomares J., Torres F., "Movement-flow based visual servoing and force control fusion for manipulation tasks in unstructured environments", IEEE Transactions on Systems, Man, and Cybernetics-Part C, vol. 35, no. 1, 2005, pp. 4-15.
[13] Puente S.T., Torres F., "Automatic screws removal in a disassembly process", In: 1st CLAWAR/EURON Workshop on Robots in Entertainment, Leisure and Hobby, 2004.
[14] Son C., "Optimal control planning strategies with fuzzy entropy and sensor fusion for robotic part assembly tasks", International Journal of Machine Tools and Manufacture, vol. 42, 2002, pp. 1313-1335.
[15] Torres F., Puente S., Aracil R., "Disassembly planning based on precedence relations among assemblies", The International Journal of Advanced Manufacturing Technology, vol. 21, no. 5, 2003, pp. 317-327.
SPECIFICITY OF BOTTLENECKS IN CONDITIONS OF UNIT AND SMALL-BATCH PRODUCTION
Józef Matuszek, Janusz Mleczko
Abstract: The manufacturing industry has evolved over the past several decades in response to changing customer needs. Customers have become more demanding and want products that meet their specific individual requirements. The standard products previously produced in large batches are no longer sufficient to meet the variety demanded. Given the increased competition, both local and global, companies must also respond faster to win and keep customers. Enterprises have thus been forced into unit and small-batch production. Advanced planning systems are currently coming into use; however, their cost exceeds the possibilities of small and medium enterprises, and the algorithms used often require substantial customization to the needs of particular industries and to the conditions of unit and small-batch production. The paper is based on research on overloads of moving bottlenecks in unit and small-batch production, in real conditions with a large number of resources and tasks. The methods used so far are not capable of finding the global optimum over such large data ranges. The authors set out to build a heuristic algorithm that finds a good-enough solution based on the assumptions of TOC (Theory of Constraints), and to verify those assumptions using tests in real production systems. The method has found application on an industrial scale, as an extension of an ERP-class system.
Keywords: Theory of Constraints, job shop scheduling, moving bottlenecks, heuristic algorithm.
1. Introduction
The guarantee of success on the contemporary, increasingly competitive and changeable market is fast and flexible implementation of production processes, which assures immediate adjustment of production to changes both in the environment and in the requirements of increasingly demanding customers. If the 1970s were the times of cost reduction and the characteristic of the 1980s was quality improvement, the 1990s were focused on flexible production, and the beginning of the 21st century is characterized by a focus on customer satisfaction. This trend translates into production of articles adapted to the customer's needs and into shortening delivery times, very often below the production cycle. Today, manufacturers in many industries are faced with very high product variety and much smaller batches, which can approach one unit.
To implement the tasks connected with controlling production in such conditions it is necessary to construct operational plans determining the order in which production tasks are performed by individual production sections. For the plans not to be a random set of tasks, it is necessary to order them properly and to optimize the routing of processes. Since production is aimed at fulfilling the specific needs of demanding customers, and not at filling warehouses, the production volume should reflect the volume of orders. In times of fighting for the client, every order has to be performed on time. What is more, in times of fighting for shorter delivery cycles, meeting safe deadlines, that is deadlines distant in time, is not enough. Companies are forced to meet short deadlines while keeping the product price competitive. This is hardly possible without a proper advanced planning support system of the APS (Advanced Planning System) class. Currently, advanced planning systems are coming into use; however, their cost exceeds the possibilities of small and medium enterprises, and the algorithms used often require substantial customization to the needs of particular industries and to the conditions of unit and small-batch production.
2. Formal description of the issue
This paper contains some propositions regarding optimization of the production plan (ordering of tasks and orders) of real businesses, and a description of the problems related to this issue in real conditions, narrowed to the abovementioned conditions. Particular attention was given to issues of uncertainty and to verification of the algorithm assumptions using positive feedback in the plan. They are the key to obtaining a model which properly reflects the reality, in which we often encounter vague or even incomplete information.
Given is a finite set of tasks which have to be performed by machines from a finite set. Each task is a sequence of operations whose arrangement (order) is determined by a set of constraints, usually described with a graph. Each operation is performed by only one specific machine in a specified period of time [7]. To find a solution to the task-ordering problem, a measure connected with the utilization rate of the machine job fund was used:

(2.1)

where the quantities involved are: the sum of the jobs' process times on machine j, the standard working time of machine j, and the rate of carrying out the standard of the operation for machine j.
If the available job fund of machine j is smaller than the work loaded onto it, machine j is overloaded. The criterion of the optimization was formulated as the requirement that this condition does not hold for any machine j (2.2). It is hardly possible to determine a priori whether the condition is fulfilled, so the DP_APS_1 procedure performs a preliminary selection of machines.
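The preliminary selection can be sketched as follows; this is a hypothetical reading of (2.1)-(2.2) built only from the quantities named above (the function names, data layout and sample hours are assumptions, not the DP_APS_1 implementation):

```python
def utilization_rate(process_times_h, standard_time_h, norm_rate=1.0):
    """Hypothetical form of (2.1): the available job fund of machine j
    (standard working time scaled by the norm-fulfilment rate) divided
    by the work actually loaded onto it."""
    return (standard_time_h * norm_rate) / sum(process_times_h)

def preselect_bottlenecks(machines):
    """DP_APS_1-style preliminary selection: keep machines whose
    available fund is smaller than the loaded work (rate < 1)."""
    return [m for m, (jobs, fund, nr) in machines.items()
            if utilization_rate(jobs, fund, nr) < 1.0]

machines = {                            # hypothetical weekly data, hours
    "lathe":  ([10, 12, 8], 40, 1.0),   # 30 h load on a 40 h fund
    "miller": ([25, 20, 15], 40, 1.0),  # 60 h load on a 40 h fund
}
overloaded = preselect_bottlenecks(machines)
```

Only the machines returned here need to be examined by the later, more expensive procedures.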
3. Alternatives of the manufacturing process
The abovementioned assumptions should be supplemented with additional ones, which bring the issue closer to real conditions.

Fig. 1. Graph of the model solution for scheduling with alternatives of the manufacturing process.

The first assumption relates to alternatives of the manufacturing process. For some products and elements we possess a database of alternative itineraries. The itinerary of a production process can be presented in the form of a graph: every type of scheduling issue can be presented as a disjunctive graph weighted in its nodes. Let G = {N, A ∪ E} be such a graph, where N is the set of vertices, A the set of conjunctive arcs (edges), and E the set of disjunctive arcs. Every operation, represented by a vertex of the graph, is performed by a given machine within the period of time defined by the weight of the vertex representing it. Additionally, two dummy vertices (operations) of zero weight were introduced: the source, a direct predecessor of the first operation of every task, and the sink, a direct follower of the last operation of every task. Order limitations between operations are represented by conjunctive arcs from the set A. Numbers in brackets by the vertices represent the duration of the operation. For every operation there is an arc leading from it directly to the subsequent operation, and there exist conjunctive arcs leading from the dummy source operation to the first operations of every task. A conjunctive arc between two operations represents the requirement that the first operation be finished before the second one starts.
From the practical point of view, the abovementioned issue was broadened by alternatives of the production process. Each task can be performed in alternative production processes: a task is a sequence of operations, for alternative a(l), whose arrangement (order) is defined by a set of limitations usually described using a graph. A single performance of a task in some alternative a is enough for the task to be performed. In the example (see Fig. 1), for the first task there are 3 variants a11, a12, a13 of the production process and for the second task there are 2 variants a21, a22. The number of possible scheduling solutions grows dramatically. The model solution (see Fig. 1) is based on choosing the alternative a12 for the first task and a21 for the second; the schedule of the machines' tasks follows from this choice. Practically, the number of alternatives is unlimited. In principle we can define the variant that is optimal in statistical terms (e.g. without taking into account the availability of machines or the influence of this variant on other processes). The criterion of choice of this variant in the statistical sense is usually cost, but a large-area analysis can also be conducted. The main variant, optimal in the statistical sense, plays a crucial part in searching for a globally optimal solution. The phrase "searching for" is the key one here, since finding such a solution for an NP-hard problem is rather a chance event.

4. Solutions already used
There exist many algorithms used to solve task-arrangement (scheduling) problems, which can be divided into two main groups: optimizing (exact) and approximating (heuristic) [8]. The first group consists of algorithms guaranteed to find an optimal solution. Practically speaking, when solving problems of a bigger scale only approximating techniques are used; they do not guarantee finding an optimum but require fewer resources and are faster. The main problem of approximating algorithms is getting stuck in one of the local extremes; their basic strength is finding an acceptable, "good enough" solution. One of the main problems to be solved is the starting point, which determines the pace of reaching the objective function and the possibility of avoiding getting stuck in a local extreme. The algorithm presented below can also be counted among the approximating methods. This article is based on a model of automated data collection for simulation [6] from an ERP system [3]. Problems of this kind are also solved using methods of engineering optimization [5].
5. Solution algorithm
The solution in the method above is based on assumptions of the Theory of Constraints formulated by Dr. Eliyahu M. Goldratt in a business novel [2]. In this solution we use those elements of the Theory of Constraints which refer to a bottleneck [4]. Since the flow of the material stream is limited by the flow in the bottleneck, the profit of a company directly correlates with this flow. The theory, simple in its essence, is strengthened by a mathematical apparatus and by the data repository of an ERP-class system.
Creating the algorithm (see Fig. 2), we focused on the bottleneck's work. The problem, however, is the fact that the bottleneck moves: it appears periodically at some machines, while very seldom or never at others. In the first step of the algorithm we perform standard backwards scheduling without resource limitations. For production orders, the work tasks are originally generated in the variant described as the major one; it is usually the statistically optimal variant of the process. Applying the backwards scheduling function to the tasks prepared in this way allows us to schedule tasks as late as possible. Limitless scheduling allows for identifying overloads of particular resources in specific periods of time. Additionally, this method of scheduling enables calculating the normative length of the cycle and the sum of the lengths of cycles, which are the basis for calculating the rates of the lengthening of the production cycle: the maximum lengthening of the production cycle and the mean lengthening of the production cycle.

Fig. 2. General scheme of the optimization algorithm.

Since we focus only on critical resources, from the whole range of tasks and resources we pick only those machines for which the sum of tasks is greater than the accepted norm; the multiple overloads remain under relation (2.3). The problem still to be solved is the density of the time-axis division. Generally, we can use daily, weekly or monthly granularity. Assuming monthly granularity is burdened with too great an error: in the sum of a month there may be no excess, while peaks can still appear in particular days and weeks. On the other hand, daily accuracy seems exaggerated.
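The weekly aggregation of load against fund can be sketched as follows (a hypothetical illustration of the idea, not the production code; the data layout and numbers are assumptions):

```python
from collections import defaultdict

def overload_matrix(tasks, fund):
    """Aggregate task hours to (machine group, week) cells and subtract
    the weekly job fund; positive entries are overloads.
    tasks: iterable of (group, week, hours); fund: {(group, week): hours}."""
    load = defaultdict(float)
    for group, week, hours in tasks:
        load[(group, week)] += hours
    return {key: load.get(key, 0.0) - avail
            for key, avail in fund.items()}

# Hypothetical data: group F02, weeks 12-13, a 40 h weekly fund.
tasks = [("F02", 12, 30.0), ("F02", 12, 25.0), ("F02", 13, 10.0)]
fund = {("F02", 12): 40.0, ("F02", 13): 40.0}
over = overload_matrix(tasks, fund)
```

Week 12 shows a 15 h overload while week 13 has 30 h of slack, exactly the kind of peak that a monthly granularity would average away.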
Optimization within daily plans should be left to the subsequent optimization phases. The above assumptions have been verified by production practice.

5.1. Bottleneck identification - DP_APS_1 procedure
The procedure for bottleneck identification, DP_APS_1, consists in assigning aggregated overloads to weekly ranges and groups of workstations. In the first step, the availability of workstations is calculated on the basis of data from the ERP-class system. In an ERP-class system the availability of workstations, in other words the job time fund, follows from 3 basic attributes of a workstation group: the number of workstations (machines), the work calendar (working days, non-working days: planned renovations, failures, holidays etc.) and the regulations of the work scheme (1-shift, 2-shift, continuous work etc.). The aim of this step is to create the matrix Hkt of the job fund H: {1,2,…,n} x {1,2,…,m} aggregated to particular weeks, where {1,2,…,n} is the set of machine groups and {1,2,…,m} is the set of week numbers of the year. In the next step, the sum of the job intensity in particular term ranges is investigated, where the terms of task performance follow directly from the previously executed scheduling function. We then calculate the matrix of overloads, whose elements are the differences between the elements of the task matrix and the elements of the job fund matrix. The matrix is presented in the D_APS_1 table. In order to speed up calculations, the outcomes are written into a supporting table D_APS_2. To depict the functioning of the procedure, the achieved outcomes are presented below (see Fig. 3).

Fig. 3. Graph of tasks' overloading in the weekly periods.

5.2. Searching for tasks from time ranges in the bottlenecks - DP_APS_2 procedure
The procedure for searching for tasks in bottlenecks, DP_APS_2, consists in defining the area of possible exchanges of alternatives of the production process. To do this, it is necessary to find all tasks for which operations in a weekly time range show an overload. In order to speed up the calculations, the outcomes are written down in the supporting table D_APS_3. Additionally, apart from the task list, the element code and the number of units produced are also retrieved. This information is the input data to the procedure of searching for alternatives of a production process for the elements.

5.3. The analysis of possible production variants for tasks in the bottleneck - DP_APS_3 procedure
The procedure for analysing tasks in bottlenecks, DP_APS_3, consists in the analysis of the possibilities connected with changing the process variant into one less overloading for the bottleneck. To do that, it is necessary to build the matrix of possible solutions for the elements from the D_APS_3 matrix and, for every variant ai, to calculate the job intensity of the variant in the bottleneck and the rate of value stream flow through the bottleneck. The essence of the procedure is the evaluation of the value stream flow through the bottleneck. What was used in this case were the assumptions of the Theory of Constraints connected with the cost evaluation of a variant, as used in solving the traditional PQ problem [1]. We calculate the rate of throughput per time of the capacity constraint resource (CCR):

(5.1)

where P is the price of the product and TVCi is the total variable cost in variant i. The total variable cost is calculated according to Throughput Accounting (TA) [1]; in this case it equals the material purchase costs. The denominator denotes the job intensity of the bottleneck (CCR) in variant i. The higher this rate, the better the variant. In the analyzed cases there appeared, in some variants, cases of no work at all in the bottleneck; such a variant is optimal from the point of view of the capacity constraint resource (CCR) in the given time range (Fig. 4).

Fig. 4. The rate of throughput per time of capacity constraint resources.

BOTTLENECK  ELEMENTS       ALTERNATIVE   P - TVC        T   (P - TVC)/T   QUANTITY
F02         2837-12120-4             1    32,995    1,750        18,854      5,000
F02         2837-12120-4             2    32,995    1,750        18,854      5,000
F03         0160 1063 00             1   600,717   58,000        10,357     24,000
F03         0160 1063 00             2   600,717   44,400        13,530     24,000
F03         0160 1063 00             3   600,717   44,400        13,530     24,000
F03         0322 2053 00             1   256,849   29,500         8,707     16,000
F03         0322 2053 00             2   256,849   17,200        14,933     16,000
F03         0863 1213 00             1   464,101    3,600       128,917      2,000
F03         0863 1213 00             2   464,101    3,600       128,917      2,000
F03         0863 1213 00             3   464,101    3,200       145,032      2,000
F03         0863 1213 00             4   464,101    2,800       165,751      2,000
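Rate (5.1) and the variant ranking it induces can be sketched as follows (hypothetical numbers and names; the infinite rate for a variant that skips the bottleneck mirrors the "no work in the bottleneck" case above):

```python
def throughput_per_ccr_time(price, tvc, ccr_time):
    """Rate (5.1): margin (price minus total variable cost, per Throughput
    Accounting) earned per hour of the capacity-constraint resource.
    A variant that does not touch the bottleneck (ccr_time == 0) ranks best."""
    if ccr_time == 0:
        return float("inf")
    return (price - tvc) / ccr_time

variants = {1: (100.0, 40.0, 2.0),   # (price, TVC_i, hours on the CCR)
            2: (100.0, 55.0, 0.5),
            3: (100.0, 60.0, 0.0)}   # skips the bottleneck entirely
best = max(variants, key=lambda i: throughput_per_ccr_time(*variants[i]))
```

Ordering candidate variants by this rate is exactly how the exchange list is ranked in the optimization procedure.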
5.4. The analysis of limitations on the exchange of alternatives for tasks - DP_APS_4 procedure
The DP_APS_4 procedure searches for limitations on the exchange of variants by analyzing the constraints connected with: the limited number of available alternatives, the limitation of material charging, and the limitation of the advancement of element performance for tasks. When a limitation appears for a task, the limitation status is calculated and written down in the alternatives exchange table D_APS_5.

5.5. Optimization of the choice of alternative exchange - DP_APS_5 procedure
The DP_APS_5 procedure searches through the set of possible solutions in D_APS_5, taking into account the limitations analyzed in procedure DP_APS_4, in connection with the demand for decreasing the overloads of the bottleneck from the set D_APS_2. The optimization consists in arranging the exchange variants according to the cost criterion of the rate of value-stream flow, for as long as the demand for decreasing job intensity after the exchange of variants remains > 0. Additionally, a further limitation is checked: the conformity of the material demand of the optimal variant with the variant used so far in the task.

5.6. Exchange of tasks for optimal variants - DP_APS_6 procedure
The DP_APS_6 procedure updates the schedule of task performance and exchanges, in the ERP system database, the task list for the optimal list according to the DP_APS_5 procedure and the outcomes saved in matrices D_APS_5 and D_APS_2. After conducting the DP_APS_6 procedure we perform another iteration, starting from scheduling with the backwards method. In practical conditions, after three iterations neither improvement nor further decrease of overloads was noticed.
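The overall iteration described in sections 5.1-5.6 can be summarized as a loop that stops when the total overload no longer improves. The sketch below is a skeleton only: the four callables stand in for the DP_APS procedures and are assumptions, not the actual implementation.

```python
# Skeleton of the iterative improvement loop: schedule backwards,
# measure bottleneck overload, exchange process variants, repeat
# until no further decrease of overload is observed.

def optimize(tasks, schedule_backwards, total_overload, exchange_variants,
             max_iters=10):
    best = float("inf")
    for _ in range(max_iters):
        plan = schedule_backwards(tasks)        # cf. DP_APS_1
        overload = total_overload(plan)         # cf. DP_APS_2
        if overload >= best:                    # no improvement: stop iterating
            break
        best = overload
        tasks = exchange_variants(tasks, plan)  # cf. DP_APS_3..DP_APS_6
    return tasks, best
```

In the experiments reported in section 6, improvement typically stalled after a few iterations, which is exactly the condition this loop detects.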
6. Experimental research
The research was done on 6 enterprises, marked A-G, with different production characteristics. Appropriate samples concerning the production systems were taken from the companies (Fig. 5). The input data come from the accumulated databases of the REKORD.ERP system.

6.1. Exemplary analysis of research outcomes - sample B2
The objective of the examination of the second sample was to define both the degree of the method's usefulness and the algorithm's usefulness.

Fig. 5. Input data for the research.

Fig. 6. Graph of the overloading layout after individual iterations in the period - in the B2 sample (F02, F04, F08 groups).
Fig. 7. Results of the optimization procedure operating in the B2 sample.

A further objective was to determine the necessary number of iterations and to confirm the outcomes of the sample 1 examination. To that end, 10 iterations were conducted in each sample. The phenomenon of moving bottlenecks was also confirmed in sample 2: in individual iterations, while exchanging variants to unload the overload in one place, i.e. in one group of machines, the system can move it to other machines. Consider the behaviour of the milling group. Presented below is the configuration of overloads on the weekly axis, with details of overloads after each key iteration (Fig. 6). Sample 2 was taken at a time of intensive growth of the order portfolio. Compared with B1 it has a greater normative total length of cycles, number of tasks and job intensity. This relationship results from the production workshop accepting a larger number of tasks relative to its realizations in the preceding period. The number of orders also increased, and the sum of overloads reached 4495.61 h. The number of machine groups taking part in the operation plan increased slightly (from 29 to 30). In the research 10 iterations were conducted. As a result of the procedures the number of orders changed, falling by 4; this is connected with choosing variants of full cooperation. The number of jobs changed similarly to sample B1, decreasing by 50 items. The number of tasks in individual iterations also tends to decrease, while stabilizing at a certain level: after a rapid jump in the first iteration (sending tasks to cooperation), it gradually decreases and stabilizes in subsequent passes. The characteristic of the rate that interests us most is shown above (Fig. 7); it behaves quite predictably and similarly to sample B1.
After a considerable initial decrease of the rate in the first three iterations, a further decrease was not observed until iteration 6. The number of available variants does not allow for further decreasing of the rate; in iterations 4 to 10 the rate merely oscillates around a certain value (2700 h). In this later phase, exchanging some variants for others only transfers the overload from one bottleneck to another, contributing nothing new. As can be seen, in this case running further iterations (after iteration 5) does not bring much profit and only lengthens the calculation time. The system starts to "vibrate", with an amplitude of around 100 h. In absolute numbers, the decrease of the sum of overloads amounted to 1811.64 h; in percentage terms it reached over 40%, with the maximum value observed in iteration 9. This result was reached by using the production-process alternatives to the full. As a result of the algorithm's operation, the total job intensity also decreased together with the total length of cycles, with oscillation around a certain value. Similarly to A1, A2 and A3, the graphs have similar characteristics. The rate of the company's preparation for working with alternative production processes also looks similar to B1: this rate takes relatively high absolute values, from around 1400 in the first pass to 250 in passes 6-10, and allows for a considerable reduction of overloads. Thanks to the great saturation with alternative variants, the overload improved by nearly 40%. It is worth noticing the growth in the number of group 1 elements.

6.2. Repeating of the experimental research
After a period of three quarters of a year the experiment was repeated in selected enterprises. The DP_APS_1 procedure was applied to three databases downloaded from the ERP system. A period of fifteen weeks was taken into consideration, from 01.12.2008 to 16.03.2009. After applying the procedure to all three databases (called F1, F2, F3), the resources being bottlenecks were identified. The results are presented in the graphs (Fig. 8, Fig. 9, Fig. 10).
In all three cases chronic bottlenecks appeared, i.e. resources overloaded throughout the entire analyzed period. Moving ("floating") bottlenecks were also observed. Thanks to the prepared graphs it was easy to observe how the critical resources "move": they turn up in one week, disappear in the next, and then return again a week later. In every database there were resources which turned out to be constraints in only one or two weeks. The only difference between databases F1, F2 and F3 is the size of the overload at some workstations; the structure of all the databases was similar.

6.3. Summary of the experimental research results
The summary of the experimental research is shown below (Fig. 11).
Fig. 8. Moving bottlenecks in F1 sample.
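The chronic/moving distinction observed above can be expressed as a simple classification over the weekly overload data. This is an illustrative sketch; the data structure and resource names are assumptions, not taken from the REKORD.ERP databases.

```python
# Classify resources by their weekly overload pattern: a resource
# overloaded in every analysed week is a "chronic" bottleneck, one
# overloaded only in some weeks is a "moving" bottleneck.

def classify(overload_weeks, n_weeks):
    # overload_weeks maps a resource to the set of week numbers
    # in which its load exceeded capacity
    chronic = {r for r, w in overload_weeks.items() if len(w) == n_weeks}
    moving = {r for r, w in overload_weeks.items() if 0 < len(w) < n_weeks}
    return chronic, moving

# Toy data: M1 overloaded every week, M2 only in weeks 2 and 4
data = {"M1": {1, 2, 3, 4}, "M2": {2, 4}, "M3": set()}
chronic, moving = classify(data, 4)
print(sorted(chronic), sorted(moving))
```

Resources falling into neither set (like M3 here) never constrain the plan in the analysed horizon.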
While examining the above enterprises it was found that: among the examined companies, the usefulness of the method is visible in those which have been using alternative processes for a long time; there, the attained decrease of overloads reaches 60%. The efficiency of the method does not cause problems in practical implementation. In companies with elaborate product structures the usefulness is particularly visible. In companies with no overloads (G1), using the method is groundless. Repeating the experiment confirmed the previous examinations. There exists a whole range of companies not prepared for using this method (e.g. sample F).
Fig. 9. Moving bottlenecks in F2 sample.

Fig. 10. Moving bottlenecks in F3 sample.

Next, the results from all three databases were compared with one another. It turned out that bottlenecks appeared in all the databases, and their distribution developed similarly. In each database it was possible to observe how the critical resources change over time.

7. Conclusion
In research papers there can be found descriptions of many test problems of task ordering. It is difficult, however, to find an example of a problem solved in real conditions with such a number of tasks and job resources. Therefore, the authors have presented an analysis of the task-ordering problem on real data across a broad spectrum of production companies. The aim is not to prove the superiority of this method over others; the task was to establish the usefulness of the method of process-alternative exchange in real conditions. The results refer to states before optimization and after its application. Providing these results helped to define the boundary conditions of companies in which the usefulness of this method is sufficient. Heuristic algorithms cannot be proven by mathematical methods; a number of tests on real data were carried out to validate this method. What is new in this approach? A considerable advantage of the method is automated data collection for simulation in conditions of unit and small-batch production. In such conditions data collection is an extremely time-consuming process, predominantly because the task is manually oriented; hence, automating the data-collection process is extremely advantageous. This paper presents one example of how simulation can use the ERP system as the simulation data source. It may be one of the steps towards creating a Digital Factory [9].
Fig. 11. Overall comparison of findings.

The concept of automatic data collection through an interface between the simulation model and an ERP-class system need not be a distant future. The above method can be called "on-line" simulation. The method has found application on an industrial scale as an extension of the ERP-class system. It is a problem demanding separate consideration and further research.
References
[1] Corbett T., Throughput Accounting, North River Press Publishing Corporation, 1999.
[2] Goldratt E., Cox J., The Goal - A Process of Ongoing Improvement, 2nd rev. ed., North River Press Publishing Corporation: Great Barrington, MA, 1992.
[3] Gupta M., Kohli A., "Enterprise resource planning systems and its implications for operations function", Technovation, no. 26, 2006, pp. 687-696.
[4] Jones T.C., Dugdale D., "Theory of Constraints: Transforming Ideas?", British Accounting Review, no. 30, 1998, pp. 73-91.
[5] Singiresu R.S., Engineering Optimization - Theory and Practice, 3rd ed., John Wiley & Sons, 1996.
[6] Robertson N., Perera T., "Automated data collection for simulation?", Simulation Practice and Theory, no. 9, 2002, pp. 349-364.
[7] Schmidt G., "Modelling production scheduling systems", International Journal of Production Economics, no. 46-47, 1996, pp. 109-118.
[8] Smutnicki C., Algorithms of Arranging Tasks, Akademicka Oficyna Wydawnicza EXIT: Warszawa, 2002 (in Polish).
[9] Zulch G., Stowasser S., "The Digital Factory: An instrument of the present and the future", Computers in Industry, no. 56, 2005, pp. 323-324.

AUTHORS
Józef Matuszek - University of Bielsko-Biała, Department of Industrial Engineering, Poland, tel. +4833827253. E-mail: jmatuszek@ath.bielsko.pl.
Janusz Mleczko* - University of Bielsko-Biała, Department of Industrial Engineering, Poland, tel. +4833827253. E-mail: jmleczko@ath.bielsko.pl.
* Corresponding author
REVIEW, CLASSIFICATION AND COMPARATIVE ANALYSIS OF MAINTENANCE MANAGEMENT MODELS
Mónica López Campos, Adolfo Crespo Márquez
Abstract:
The present article takes a chronological tour through some representative maintenance management models, describes them in a general way and classifies them, according to their functioning, as declarative models or process-oriented models. It also distinguishes the innovations proposed by each author and compares the elements appearing in each model with some of the points mentioned by the ISO 9001:2008 standard, as well as with other criteria considered suitable to the case. From this analysis, results are derived, among which some desirable characteristics for a modern and efficient maintenance management model are distinguished. In addition, the application of these models for supporting industrial needs, as well as their future challenges, are discussed.
Keywords: maintenance, management, maintenance process, maintenance model, maintenance tools.
1. Introduction
Maintenance is defined (EN 13306:2001) as the combination of all technical, administrative and managerial actions during the life cycle of an item intended to retain it in, or restore it to, a state in which it can perform the required function (or a combination of functions which are considered necessary to provide a given service). In the same standard, maintenance management is defined as all the activities of management that determine the maintenance objectives or priorities, strategies and responsibilities and implement them by means such as maintenance planning, maintenance control and supervision, and several improvement methods including economical aspects. Different authors have proposed models, frameworks or systems seeking to manage maintenance in the best way, using the most advanced techniques and proposing innovative concepts; every model proposed has strengths and weaknesses, which are the matter of study in this paper. We begin by briefly mentioning the importance of a maintenance management system, and later we describe the methodology followed in this research. The process consisted of an intensive search and compilation of the maintenance management models found in the literature from 1990 up to date, and their classification and comparative analysis following certain structured criteria. This global search of maintenance management models and systems is also presented chronologically, at least for the twenty models that were selected and then compared.
The comparison of models proceeds in several steps. In a first step, we divide the models into two types: declarative and process-oriented. Then the innovations proposed or introduced by each model are chronologically distinguished. Finally, each scheme is compared against specified criteria (based on the ISO 9001:2008 standard), and some results and conclusions are identified. From this analysis we finally give a brief description of the application of these models in industry and the future challenges in this domain.
2. The importance of a maintenance management system
Maintenance has been experiencing a slow but constant evolution across the years, from the former concept of a "necessary evil" to being considered an integral function of the company and a source of competitive advantage. Approximately three decades ago, companies realized that if they wanted to manage maintenance adequately it would be necessary to include it in the general scheme of the organization and to manage it in interaction with other functions [15]. The challenge is to integrate maintenance within the management system of the company. Several objectives are fulfilled by developing a maintenance management system in an organization: a system more comprehensible to all the people involved is created [24], and a structure that propitiates the provision of leadership, direction and support is formed [7]. According to Prasad et al. [19], some other benefits of having a maintenance management model are: achievement of high productivity, reduction of overall equipment emergencies, improvement in production efficiency, reduction of accidents, verification of the profit on investments, and development of a flexible, multi-skilled organization, among others. Then the ultimate reasons for maintenance management are fulfilled: to maximize the business profit and to offer competitive advantage ([7], p. 6). Therefore, the challenge of "designing" the ideal model to drive maintenance activities has become, as we will see later, a research topic and a fundamental question for accomplishing the effectiveness and efficiency of maintenance management and fulfilling the enterprise objectives [19].
3. The literature review The bibliographical search was done using the following electronic databases: Abi/Inform Global - ProQuest, Blackwell Synergy, Business Source Premier - EBSCOhost, Compendex (Engineering Village) - Elsevier Engineering Information, Current
Journal of Automation, Mobile Robotics & Intelligent Systems
Contents Connect ISI, ISI Web of Knowledge ISI, NTIS Ovid (SilverPlatter), Scopus - Elsevier, Springer Link, Wiley InterScience. th From this exploration, finished on February 18 2008, a whole of 14 articles were selected, these articles are: Pintelon and Van Wassenhove (1990)[16], Pintelon and Gelders (1992)[15], Vanneste and Wassenhove (1995) [24], Riis et al. (1997)[2], Hassanain et al. (2001)[10], Tsang (2002)[23], Waeyenbergh and Pintelon (2002) [25], Murthy et al. (2002)[14], Cholasuke et al. (2004)[7], Abudayyeh et al. (2005)[1], Pramod et al. (2006)[18], Prasad et al. (2006)[19], Tam et al. (2007)[22], and Söderholm et al. (2007)[21]. The criteria for the selection of the mentioned 14 articles were: 1. The article had to propose a global maintenance management model and it had not to be focused only on a particular management phase or maintenance tool, 2. The model proposed in the article had not to be a computer model or CMMS (Computerized Maintenance Management Systems), 3. The article had to be published only by indexed scientific journals, 4. The article had to present not only a review or an application, but also a new model proposal, 5. The model in the article had to be preferably represented using a graphical diagram. Besides the mentioned articles, a bibliographical search was carried out in which the following books were found and selected, due the models proposed by them fulfil the above mentioned criteria: Campbell (1995/ 2001)[4], Kelly and Harris (1997)[13], Wireman (1998) [26], Duffuaa (2000)[8], Kelly (2006)[12], Crespo (2007) [6]. In this way 20 contributions were selected, presenting the same number of maintenance management models that will be object of this study.
4. Declarative vs. process-oriented models
Once these 20 contributions were selected, and as the reader may guess, synthesizing the content of each and every one of them is very difficult. In order to do so, we used a table to concentrate the information gathered from each model. Based on this synthesis an initial classification is proposed, dividing the models into two types: declarative models (referenced from the concept "declarative language" found in the Encyclopedia Britannica (2008)[9]) and process-oriented models (from "business process orientation", a concept based upon the work of Porter (1985)[17], among others). What is the difference between these two types of models? Declarative models mention the components of a maintenance system, although they do not refer explicitly to the intercommunication or links between those components. In this type of model a clear information flow among the components is not distinguished, and therefore some functional, interrelational and synchronization aspects cannot be clearly appreciated. However, some of these models are very complete, including a great variety of aspects and tools related to maintenance. Process-oriented models normally offer a clear information flow between their components. In some of these models, inputs and outputs of the maintenance management system are identified; in others, a closed-loop sequence of steps is clearly represented. Though in many cases these models seem easier to apply in organizations than declarative models, they require a proper definition of the coordination among their elements in order to be effective, and this is sometimes missing. We can observe that a process-oriented model seems to impose a tidier system; certainly, the complexity of its implementation process is greater than for a declarative model, where it is possible to take only the elements that are suitable to add to the already operating organization, and thus to obtain fast innovations and benefits in maintenance management ([19], p. 163). It is undeniable that every type of model has its own pros and cons; therefore it is convenient to study and analyze all of them to be able to distinguish which one may be better applied to certain kinds of scenarios and conditions.

Table 1. Models classification.
Declarative models: Pintelon & Van Wassenhove (1990); Pintelon & Gelders (1992); Cholasuke et al. (2004); Prasad et al. (2006); Tam et al. (2007).
Process-oriented models: Vanneste & Wassenhove (1995); Campbell (1995); Kelly & Harris (1997); Riis et al. (1997); Wireman (1998); Duffuaa et al. (2000); Hassanain et al. (2001); Tsang (2002); Waeyenbergh & Pintelon (2002); Murthy et al. (2002); Abudayyeh et al. (2005); Pramod et al. (2006); Kelly (2006).
As can be appreciated in Table 1, the majority of the models are process oriented; however, some of the declarative models - like Prasad et al. (2006)[19] - are especially wide, and can surely serve as an implementation and operation guide for any maintenance management system.
5. Model contributions analysis in chronological order
Some important aspects of this study have to do with the chronological analysis of the different authors' contributions; Figure 1 represents the twenty models studied in this work arranged along a time line. In this figure we can observe that the interest in generating new proposals has remained constant during almost the last two decades.
Fig. 1. Time line for the maintenance management models.

In many books and articles about maintenance the existence of different generations or stages of maintenance management models is mentioned, but that evolution is not described explicitly in terms of the integration of new elements and techniques into the models. Since history lessons can be of great interest to us, we summarize in Table 2 the innovations identified in the maintenance management models studied, in chronological order. It is necessary to mention that the indicated innovations correspond to subjects detected as appearing for the first time inside a maintenance management model; it does not mean that they are new concepts outside the maintenance field as well. Discussing briefly the results presented in Table 2, we can summarize that maintenance management models have been acquiring new elements and trends through the years, such as: approach to processes; innovating propo-
Table 2. Innovations of maintenance management models in chronological order.

1990 - Propose a complete system of maintenance indicators. (Pintelon & Van Wassenhove)
1992 - Expose the necessity of linking maintenance with the other organizational functions; highlight the importance of quantitative techniques for maintenance management; glimpse the utilization of expert systems; mention TPM (Total Productive Maintenance) and RCM (Reliability Centred Maintenance). (Pintelon & Gelders)
1995 - Propose an analysis focused on the effectiveness and efficiency of maintenance; emphasize the importance of managerial leadership in maintenance management; introduce the concept of "maintenance reengineering". (Vanneste & Wassenhove; Campbell)
1997 - Propose an integrated modelling approach based on the concepts of situational management theory. (Riis et al.)
2000 - Propose the use of a great variety of Japanese concepts and tools for the statistical control of maintenance processes in a module called "feedback control". (Duffuaa et al.)
2001 - Orient the model towards computer use, expressed in the IDEF0 language (a standard for information exchange). (Hassanain et al.)
2002 - Glimpse the use of e-maintenance; propose a guide to analyze the convenience of outsourcing as an entry element to the maintenance framework; incorporate both tacit and explicit knowledge and integrate them in a computer database; give special value to knowledge management. (Tsang; Waeyenbergh & Pintelon)
2006 - Suggest the union of the tools QFD (Quality Function Deployment) and TPM into one model. (Pramod et al.)
2007 - Propose a process view in which maintenance contributes to the fulfilment of "external stakeholders'" requirements; propose a model with a clearly expressed application methodology, oriented to the improvement of operational reliability as well as the life-cycle cost of industrial assets. (Söderholm et al.; Crespo)
sals in technical aspects; use of standard languages for information exchange (to be used subsequently in CMMS and other computer applications); successive incorporation of quantitative techniques and computer tools (due to the increasing amount of maintenance, operational and financial data generated); constant evaluation and improvement of maintenance operations (for instance, using automated tools); analysis of the asset life cycle besides the evaluation of the maintenance function; integration of the asset strategy with the maintenance strategy, etc.
6. Comparative analysis of the maintenance management models
In order to compare and analyze the previously mentioned models, we constructed a checklist that captures the different important elements that should appear in a maintenance system oriented to the continuous improvement of its activities. A first group of checklist elements is inspired by the ISO 9001:2008 standard. This standard was chosen since it is the international reference for any quality management system, which makes it a generic guide for the operation of a process in which fulfilment of requirements should be demonstrated, as in the case of the maintenance function. The elements of this checklist are: related to the Quality Management System (process approach, sequence and interaction of the processes, description of the elements of each process, generation of documents or records); related to Management Responsibility (linkage with strategic targets of the organization, objectives definition, top management commitment, clear definition of responsibilities and authorities, suitable communication); related to Resource Management (human beings, materials, infrastructure); and related to Measurement, Analysis and Improvement (audits, studies of internal client satisfaction, information analysis, corrective and preventive actions, continuous improvement approach). A second group of checklist elements was elaborated considering the "support tools and techniques for maintenance management" mentioned in the studied models. Some of them are: techniques concerning economic or financial maintenance aspects, CMMS, techniques for human resources management, application of operations research or management sciences, life-cycle analysis, TPM, RCM, simulation, inventory models, reliability theory, expert systems, etc. Finally, the last observation made about every model in its analysis is: does the model have a methodology for its implementation? This is a key question.
As mentioned above, some models limit themselves to enumerating the elements that must make up a maintenance management system, without explaining the dynamics of the system in operation. Nevertheless, an organization that wishes to initiate the implementation of a maintenance system may not find enough information in that kind of model concerning the steps to follow for this purpose. There are relatively few models that define a clear methodology for implementing and operating a maintenance management system, which is why this criterion becomes a key consideration in this work.
Discussing briefly the results of the comparative analysis:
About the management system: among the models, the declarative ones do not feature an input-output process approach and do not consider a clear methodology for their implementation either. In general, these models also do not mention the advanced quantitative techniques in maintenance. On the other hand, all kinds of models generate documents and records as inputs for the decision-making process.
About management responsibility: all models define objectives for the maintenance function; however, not all of them link these goals with strategic company targets. Also, most of the models do not make a clear reference to principles of responsibility, authority and good communication. Maybe this is because these elements are considered an initial assumption.
About maintenance support: approximately half of the models incorporate support techniques such as operational research or management science techniques. TPM and RCM are the most mentioned, and they tend to appear together in management models. CMMS is also mentioned as an indispensable tool in the majority of the models. Recent models include other techniques such as e-maintenance, expert systems, etc.
About the management of resources: the majority of models mention something on the matter, though in several schemes this topic is omitted. An explanation could be that this subject is considered an assumption. For example, almost a third of the models do not mention techniques for inventory management and purchase control. Curiously, in earlier models a greater emphasis on aspects related to human resources management can be appreciated.
About measurement, analysis and improvement: all the models consider different phases for maintenance evaluation, analysis and improvement.
Although little more than half of them literally mention the concept of "continuous improvement", this trend has grown especially in the last years. About the methodology and the operation of the model: a very important attribute of some models is the inclusion of an application/implementation methodology, which stimulates continuous improvement. Few models clearly incorporate this feature.
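The checklist-style comparison described in this section can be mechanized. The sketch below is only an illustration of the scoring idea: the criteria names are a small paraphrased subset of the groups above, and the example feature set is invented.

```python
# Score a maintenance management model against checklist criteria
# grouped as in ISO 9001:2008 (a subset of the groups named above).

CRITERIA = {
    "quality_system": ["process approach", "documents and records"],
    "responsibility": ["objectives definition", "top management commitment"],
    "resources": ["human resources", "infrastructure"],
    "improvement": ["audits", "continuous improvement"],
}

def score(model_features):
    # model_features: the set of criteria a given model satisfies
    return {group: sum(c in model_features for c in items)
            for group, items in CRITERIA.items()}

example_model = {"documents and records", "objectives definition", "audits"}
print(score(example_model))
```

Summing such per-group scores across the twenty models yields the kind of comparative overview discussed in this section.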
7. Use of maintenance management models to support industrial needs and applications
As a consequence of the implementation of advanced manufacturing technologies and just-in-time production systems, the nature of the production environment has changed during the last two decades. This has allowed companies to mass-produce products in a customized way, but the increase in automation and the reduction of inventory buffers in the plants clearly put more pressure on the maintenance system [5]. Many maintenance management models have been created in order to reduce this pressure, that is, from a real necessity to apply them in industry. Some of them aim to provide an operational guide for reaching the maintenance and organizational goals, others have the final purpose of developing a computerized system, and still others pursue only the objective of evaluating the maintenance function. In all cases the main purpose of designing a maintenance management model is to continuously improve maintenance performance. The majority of the models analyzed have already been applied in a variety of industries. That is the case of Pramod (2006)[18]: the practical implementation feasibility of his model was checked in an automobile service station. The model of Söderholm (2007)[21] has been applied in several different industrial sectors. Waeyenbergh (2001)[25] applied his model in a company with several plants, each with different types of installations, using different technologies and of different ages. The implementation of the models reveals their practical validity, but in fact maintenance deals with highly diverse problems even in firms within the same productive sector, which makes it very difficult to design an operating methodology of general applicability. That is one reason to be attentive to future challenges and developments.
8. Conclusions and future challenges

After conducting the state-of-the-art review, the classification and the comparative analysis of some of the most important maintenance management models of the last twenty years, we could observe the latest trends in this field. The ISO 9001:2008 standard and the maintenance techniques proposed by the considered authors were used as references. From this study it is possible to identify some elements that seem to be important factors to consider when designing and applying a maintenance management model that seeks efficiency. These elements or characteristics are: an input-output process approach, a clear methodology of application, generation of documents and records, alignment with objectives, incorporation of support technologies (TPM, RCM, etc.), orientation to CMMS, flexibility against rapid structural changes, management of material, human and information resources, a focus on continuous improvement, evaluation, and, finally, cyclical operation. Nevertheless, whatever model an organization adopts, it has to keep evolving to remain useful in the face of the fast changes that occur in business, communications and industry. A key to achieving this could be the incorporation of the modern tools and platforms known as "next generation manufacturing practices" (NGMS). This implies the use of e-maintenance as a sub-concept of e-manufacturing and e-business. E-maintenance is defined by the Intelligent Maintenance Centre (IMC) [11] as "the ability to monitor plant floor assets, link the production and maintenance operations systems, collect feedback from remote customer sites, and integrate it into upper-level enterprise applications". A more general definition is "the maintenance management concept whereby assets are monitored and managed over the Internet" (Crespo Márquez & Iung, 2006) [5].

In this way, e-maintenance would have to be integrated into maintenance management models, looking for new ways of working that involve collaboration and the availability of knowledge and intelligence at any time and in any place, perhaps also changing the entire business process.

ACKNOWLEDGMENTS

This research was partially funded by the Spanish Ministry of Science and Education (Project DPI 2004-01843), the National Council of Science and Technology (CONACYT, México) and the FEDER funds.

AUTHORS

Mónica López Campos* - Department of Industrial Management, School of Engineering, University of Seville, Camino de los Descubrimientos s/n, 41092 Seville, Spain. E-mail: mlcinv@hotmail.es.
Adolfo Crespo Márquez - Department of Industrial Management, School of Engineering, University of Seville, Camino de los Descubrimientos s/n, 41092 Seville, Spain. Tel: +34 954 487215; Fax: +34 954 486112. E-mail: adolfo.crespo@esi.us.es.
* Corresponding author

References

[1] Abudayyeh O., Khan T., Yehia S., Randolph D., "The design and implementation of a maintenance information model for rural municipalities", Advances in Engineering Software, vol. 36, no. 8, 2005, pp. 540-548.
[2] AENOR, Norma UNE-EN 13306: Terminología de mantenimiento, 2002 (in Spanish).
[3] AENOR, Norma ISO 9001:2008: Sistemas de gestión de la calidad, 2008 (in Spanish).
[4] Campbell J.D., Organización y liderazgo del mantenimiento, Madrid: TGP Hoshin, 2001 (in Spanish).
[5] Crespo Márquez A., Iung B., "Special issue on e-maintenance", Computers in Industry, vol. 57, no. 1, 2006, pp. 473-475.
[6] Crespo Márquez A., The Maintenance Management Framework. Models and Methods for Complex Systems Maintenance, United Kingdom: Springer, 2007.
[7] Cholasuke C., Bhardwa R., Antony J., "The status of maintenance management in UK manufacturing organisations: results from a pilot survey", Journal of Quality in Maintenance Engineering, vol. 10, no. 1, 2004, p. 5.
[8] Duffuaa S., Raouf A., Dixon Campbell J., Sistemas de mantenimiento. Planeación y control, México: Limusa, 2000 (in Spanish).
[9] Encyclopedia Britannica Online. Consulted: 11 May 2008, http://0search.eb.com.fama.es.es:80/eb/article-248126.
[10] Hassanain M.A., Froese T.M., Vanier D.J., "Development of a maintenance management model based on IAI standards", Artificial Intelligence in Engineering, vol. 15, no. 1, 2001, pp. 177-193.
[11] Intelligent Maintenance Centre (IMC), http://www.imscenter.net/. Consulted: September 2005.
[12] Kelly A., "Maintenance and the industrial organization". In: Strategic Maintenance Planning, United Kingdom: Butterworth-Heinemann, 2006.
[13] Kelly A., Harris M.J., Gestión del mantenimiento industrial, Madrid: Publicaciones Fundación Repsol, 1997 (in Spanish).
[14] Murthy D.N.P., Atrens A., Eccleston J.A., "Strategic maintenance management", Journal of Quality in Maintenance Engineering, vol. 8, no. 4, 2002, pp. 287-305.
[15] Pintelon L.M., Gelders L.F., "Maintenance management decision making", European Journal of Operational Research, vol. 58, no. 3, 1992, pp. 301-317.
[16] Pintelon L., Van Wassenhove L., "A maintenance management tool", Omega, vol. 18, no. 1, 1990, pp. 59-70.
[17] Porter M.E., Competitive Advantage: Creating and Sustaining Superior Performance, New York: The Free Press, 1985.
[18] Pramod V.R., Devadasan S.R., Muthu S., Jagathyraj V.P., Dhakshina Moorthy G., "Integrating TPM and QFD for improving quality in maintenance engineering", Journal of Quality in Maintenance Engineering, vol. 12, no. 2, 2006, pp. 1355-2511.
[19] Prasad Mishra R., Anand D., Kodali R., "Development of a framework for world-class maintenance systems", Journal of Advanced Manufacturing Systems, vol. 5, no. 2, 2006, pp. 141-165.
[20] Riis J., Luxhoj J., Thorsteinsson U., "A situational maintenance model", International Journal of Quality and Reliability Management, vol. 14, no. 4, 1997, pp. 349-366.
[21] Söderholm P., Holmgren M., Klefsjö B., "A process view of maintenance and its stakeholders", Journal of Quality in Maintenance Engineering, vol. 13, no. 1, 2007, pp. 19-32.
[22] Tam A., Price J., Beveridge A., "A maintenance optimisation framework in application to optimise power station boiler pressure parts maintenance", Journal of Quality in Maintenance Engineering, vol. 13, no. 4, 2007, pp. 364-384.
[23] Tsang A., "Strategic dimensions of maintenance management", Journal of Quality in Maintenance Engineering, vol. 8, no. 1, 2002, pp. 7-39.
[24] Vanneste S.G., Van Wassenhove L.N., "An integrated and structured approach to improve maintenance", European Journal of Operational Research, vol. 82, no. 2, 1995, pp. 241-257.
[25] Waeyenbergh G., Pintelon L., "A framework for maintenance concept development", International Journal of Production Economics, vol. 77, no. 1, 2002, pp. 299-313.
[26] Wireman T., Developing Performance Indicators for Managing Maintenance, New York: Industrial Press, 1998.
Journal of Automation, Mobile Robotics & Intelligent Systems, Volume 3, N° 3, 2009
BENCHMARKING METHODOLOGY BASED ON ERP SYSTEM EVALUATION: CASE STUDY Sławomir Kłos, Justyna Patalas-Maliszewska
Abstract:
Enterprise resource planning (ERP) is a computer-based information system for enterprise integration. ERP integrates information from all functional areas of an enterprise to support management processes. The implementation of ERP is an expensive and time-consuming process, but for contemporary enterprises it is a necessity: effective enterprise management is impossible without on-line information about the business processes executed in all functional areas. ERP implementation involves a strategic decision that influences enterprise development for many years; thus, ERP systems should evolve together with enterprises. The paper proposes a benchmarking methodology that enables evaluating the implementation of an ERP system in three manufacturing companies focused on make-to-order production. The enterprises make small-series or single prototype production. The research is based on benchmarking of enterprises that implemented ERP systems in past years. It involves comparison of the most important financial ratios and statistical data extracted from the ERP system. The results of the data analysis are the basis for a proposed procedure of ERP evaluation. The procedure enables not only evaluating the ERP system implementation but also determining functional areas that require support, changes to business processes or changes to the functionality of ERP. The proposed benchmarking methodology shows how well the ERP system is adapted to enterprise requirements. The examples presented are based on a case study of Polish enterprises.

Keywords: enterprise resource planning, benchmarking, evaluation methodology, case study.
1. Introduction

The process of implementing an ERP system is often a shock therapy for most enterprises, especially for manufacturing companies that focus on prototype and variable production. Approximately 90 percent of enterprise resource planning (ERP) implementations are late or over budget [5], and 70 percent of ERP implementations fail to deliver anticipated benefits [1]. The implementation complexity depends crucially on the size of the enterprise [4], the type of production, and the scope of the ERP implementation (the functional areas included in the ERP project) [6]. Because ERP systems are complex and expensive, there is much research on measuring ERP success [12]. Some researchers investigate organizational adoption of ERP [2], [9], the methodology of ERP selection [10] and cultural issues in ERP [13]. Many researchers have investigated critical factors (e.g., top management support, sufficient training, proper project management, communication, etc.) for the success of ERP implementation [8].

The evaluation of ERP systems is not a trivial process because the implementation affects all functional areas of an enterprise. On the one hand, it is difficult to measure the benefits of an ERP system implementation because the measurement requires many ratios; on the other, the evaluation of the ERP system requires determining the implementation scope and time. The ERP implementation is a strategic decision for an enterprise, which means it cannot be evaluated on the basis of ratios collected a year after the productive start of the system; at least two or more periods should be taken into consideration. The paper proposes a methodology of ERP system evaluation based on a benchmarking technique. The methodology is used to evaluate real enterprises. The investigated enterprises belong to the same branch and all of them implemented the same ERP system. Firstly, the similarity of the enterprises was checked in terms of economic ratios. The proposed methodology requires data to be collected from several periods of time after ERP implementation in order to evaluate the long-term impact of ERP on the enterprise. The examples presented in the paper are based on real data extracted from ERP systems, balance sheets and profit and loss accounts. The issue discussed in the paper is formulated as follows: "Given is a number of similar enterprises that implemented ERP systems and a set of economic and technical data from at least three periods of time of ERP system operation. How to evaluate the ERP system implementation on the basis of a benchmarking methodology?" The next section presents the characteristics of three companies. The companies are compared as regards economic ratios and data rates extracted from their ERP systems, which enable evaluation of the repeatability of business processes supported by ERP.
2. Benchmarking of companies X, Y and Z

Benchmarking is a continuous search for, and adaptation of, significantly better practices that lead to superior performance by investigating the performance and practices of other organizations (benchmark partners). In addition, it can create a crisis to facilitate a process of change. Benchmarking (also "best practice benchmarking" or "process benchmarking") is a process used in management, in particular strategic management, in which organizations evaluate various aspects of their processes in relation to best practice, usually within their own sector. This then allows organizations to develop plans on how to adopt such best practice, usually with the aim of increasing some aspect of performance. Benchmarking may be a one-off action, but it is often treated as a continuous process in which organizations continually seek to challenge their practices [3], [7], [11]. There are several types of benchmarking [3]: product benchmarking, process benchmarking, functional benchmarking, financial benchmarking, performance benchmarking and strategic benchmarking.

The paper uses financial and strategic benchmarking to evaluate the ERP system implementation in manufacturing companies. The benchmarking is done for three Polish enterprises X, Y and Z, which have implemented the same ERP system. All the enterprises carry out engineer-to-order production. Companies X and Z manufacture machine tools and technological lines, and company Y manufactures parts for machine tool companies. All three companies manufacture prototype products for which mechanical engineering work in the area of construction and technology is critical. Table 1 presents the net income of the companies in 2002-2006. All the companies started operating the ERP system in 2003, which means that since 2004 all the firms have been working with productive ERP systems. Therefore, data are analyzed for the period 2002-2006, but the benchmarking is done only for the period 2004-2006.

Table 1. Income of companies X, Y, Z - ratio INCOME.

Net income   2002        2003        2004        2005        2006
Company X    16 122 247  20 146 185  24 722 777  26 620 102  37 595 665
Company Y    15 214 843  16 328 299  23 111 416  23 516 703  30 075 340
Company Z    57 535 039  70 321 448  86 688 575  87 189 582  104 712 076

The income structures of the enterprises presented in Fig. 1 are quite similar. Enterprises X and Y have almost the same net income, whereas enterprise Z achieves about three times higher results. The profit of the enterprises in 2002-2006 is given in Table 2 and in Fig. 2. Enterprises X and Y have a similar level of profit (about PLN 2 million), and company Z has made a profit four to eight times higher (PLN 6-16 million).

Table 2. Profit of companies X, Y, Z - ratio PROFIT.

Profit    2002       2003        2004        2005       2006
Firm X    775 544    781 398     2 581 251   2 827 501  8 327 078
Firm Y    1 674 952  2 013 436   2 665 157   1 270 123  2 483 880
Firm Z    8 768 851  15 110 390  14 570 127  6 148 159  16 067 266

It is difficult to conclude from Figures 1 and 2 that the implementation of ERP has an impact on the income or profit of the companies, but two or three years after the implementation all the companies increased their net income and profit. The next analysis shows the changes of inventories in the enterprises after ERP implementation. Table 3 and Figure 3 present the inventories of the enterprises.

Table 3. Inventory of companies X, Y, Z - ratio INVENT.

Inventory  2002        2003        2004        2005        2006
Firm X     4 984 301   4 850 111   5 595 899   7 164 281   8 899 214
Firm Y     2 149 269   3 963 364   3 286 486   2 877 036   3 416 115
Firm Z     14 914 622  17 149 945  18 543 308  16 673 952  12 477 788

Table 4. Cost of companies X, Y, Z - ratio COST.

Cost     2002        2003        2004        2005        2006
Firm X   8 927 782   12 825 880  14 153 834  14 861 173  18 998 440
Firm Y   12 636 602  13 935 076  20 122 227  19 750 557  24 635 191
Firm Z   40 636 458  44 612 068  60 098 533  69 003 845  73 612 411

Table 5. Productivity of companies X, Y, Z - ratio PRODUCTIVITY.

Productivity  2002  2003  2004  2005  2006
Firm X        1.81  1.57  1.75  1.79  1.98
Firm Y        1.20  1.17  1.15  1.19  1.22
Firm Z        1.42  1.58  1.44  1.26  1.42

Figure 1 shows that the production of the three enterprises steadily increased in the period concerned. Production growth
means that the companies have to order more materials and it results in inventory increase. Company X illustrates the typical situation where the inventory increases together with the production volume.
Fig. 1. Net income benchmarking of companies X, Y and Z.
Fig. 2. Profit benchmarking of companies X, Y and Z.
Fig. 3. Inventory benchmarking of companies X, Y and Z.

Better inventory management is achieved in company Y: despite the growing production, its inventory stays at the same level (about PLN 3.2 million). The best inventory management is in company Z: despite rapid production development, its inventory value dropped significantly in 2004-2006. Cost reduction is very important for every production enterprise. Figure 4 shows the cost incurred by the firms.

Fig. 4. Cost of companies X, Y, Z - ratio COST.

Despite similar values of income in enterprises X and Y, company X incurs lower cost. The income of company Z in 2004 and 2005 stays at roughly the same level of PLN 87 million, but the cost in the same period grows by about PLN 10 million. The best method to find the relation between the net income and cost of an enterprise is to analyze the enterprise's productivity (see Figure 5).

Fig. 5. Productivity of companies X, Y, Z - ratio PRODUCTIVITY.

Figure 5 shows that after ERP implementation the productivity of enterprise X increases, that of enterprise Y stays at the same level, and that of enterprise Z decreases (in 2006 reaching the same productivity as before ERP implementation).
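The productivity ratio in Table 5 is simply the net income divided by the cost of the same year (Tables 1 and 4). A minimal sketch that recomputes the firm X row (variable names are ours, for illustration only):

```python
# Productivity = net income / cost, computed for firm X from Tables 1 and 4.
income = {2002: 16_122_247, 2003: 20_146_185, 2004: 24_722_777,
          2005: 26_620_102, 2006: 37_595_665}
cost = {2002: 8_927_782, 2003: 12_825_880, 2004: 14_153_834,
        2005: 14_861_173, 2006: 18_998_440}

# Round to two decimals, as in Table 5.
productivity = {year: round(income[year] / cost[year], 2) for year in income}
print(productivity)
```

The recomputed values match the firm X row of Table 5 (1.81, 1.57, 1.75, 1.79, 1.98).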
The next section proposes a methodology of ERP system evaluation based on benchmarking. The methodology enables evaluating the improvement of the financial ratios of an enterprise after ERP system implementation.
3. The evaluation methodology of ERP system implementation

To evaluate the real impact of the ERP implementation on the financial ratios of a company, relative values of the ratios should be taken into consideration. If, for example, a0 is the value of a financial ratio before the ERP implementation (the reference value) and a1, a2, ..., aN are the values of the ratio in the succeeding periods, a simple evaluation index A can be calculated as the relative change of the post-implementation average against the reference:

    A = ( (a1 + a2 + ... + aN) / (a0 · N) − 1 ) × 100%

If the value from the last year before the ERP implementation is not acceptable as a reference, the reference value can be calculated as the average value over several periods before the ERP implementation. The five investigated ratios presented in the previous section are evaluated in this way in Table 6. An upward arrow in a column head means that the greatest value is best; a downward arrow means that the lowest value is best. Table 6 shows that, despite the high absolute value of its net income, company Z has, in relative terms, the worst income result of the ERP implementation. In relative terms, company X makes the best profit and company Y the worst. The best inventory management is in company Z and the worst in company Y. Cost is reduced most in company Z, and the same company achieves the best relative productivity. The comparison of the average productivity ratio shows that the implementation of ERP in companies X and Y has not improved their status. The summary evaluation depends on the priorities of the ERP implementation, determined by every enterprise individually. If, for example, the critical goal of the ERP implementation for all the enterprises was inventory reduction, the best result is achieved by company Z, etc. Of course, besides average values of different ratios, trend analysis is very important. An objective evaluation of ERP systems requires taking into consideration not only the results but also the labour intensity, referred to here as the utilization of the system and the annual repetitiveness of business processes. Average values describing the amount of data introduced annually into ERP by companies X, Y and Z are presented in Table 7 and in Fig. 6.
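As a sketch (function and variable names are ours), the index applied to the net-income series of firm X from Table 1, with the 2002 value as the reference a0, reproduces the relative income ratio of 69% reported for firm X in Table 6:

```python
def evaluation_index(a0, values):
    """Relative change (in %) of the average post-implementation
    value of a ratio against the reference value a0."""
    n = len(values)
    return (sum(values) / (a0 * n) - 1) * 100

# Net income of firm X (Table 1): reference year 2002, periods 2003-2006.
a0 = 16_122_247
post = [20_146_185, 24_722_777, 26_620_102, 37_595_665]
print(round(evaluation_index(a0, post)))  # 69, matching RRINC for firm X
```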
Table 7 shows that company Z generates the greatest number of indexes annually (a high ERP workload). Company Y generates the greatest number of sales offers and orders in ERP but has the worst profit of the three companies analyzed. The number of inventory documents is approximately 100,000 in each company, but company Z achieves better results in inventory reduction than the other two firms. The investigated factors depend on the individual enterprise strategy. The data presented in Table 7 represent the labour intensity related to the ERP system in different areas of the enterprises. To compare labour input in ERP with the results obtained by an enterprise, relations between different ratios should be calculated. For example, the influence of the business process of preparing sales offers on the enterprise profit can be estimated as the average profit in 2003-2006 divided by the number of sales offers:

Firm X: PLN 3,629,307 / 3,103 = PLN 1,170
Firm Y: PLN 2,108,149 / 12,624 = PLN 167
Firm Z: PLN 12,973,986 / 942 = PLN 13,773

Fig. 6. Average values of data quantity extracted from ERP for 2003-2006.

This means that enterprise Z makes the highest profit per sales offer. The calculation can be repeated to evaluate the inventory cost related to the number of inventory turnovers, the cost of a purchase order, etc. The data presented in Figure 6 are extracted from the ERP systems of the companies. Consequently, Table 8 shows the average profit per specific data set.
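The per-offer figures can be reproduced directly from the averages quoted in the text and the sales-offer counts of Table 7 (a small sketch; names are illustrative):

```python
# Average profit 2003-2006 (from the text) and average annual number of
# sales offers (Table 7, SALEOF), per firm.
avg_profit = {"X": 3_629_307, "Y": 2_108_149, "Z": 12_973_986}
sales_offers = {"X": 3_103, "Y": 12_624, "Z": 942}

# Profit per sales offer, rounded to whole PLN.
profit_per_offer = {f: round(avg_profit[f] / sales_offers[f]) for f in avg_profit}
print(profit_per_offer)  # {'X': 1170, 'Y': 167, 'Z': 13773}
```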
Table 6. Evaluation and benchmarking of ERP implementation in companies X, Y, Z (values in %).

Firm   Net income ↑   Profit ↑   Inventory ↓   Cost ↓    Productivity ↑
       (RRINC)        (RRPRO)    (RRINV)       (RRCOS)   (RRPRO)
X      69             368        33            70        -1.88
Y      53             26         58            55        -1.75
Z      52             48         9             52        0.73
Table 7. Average values of data quantity introduced into ERP in 2003-2006.

Firm   Indexes   Sales offers   Sale orders   Inventory turnover   Purchase orders
       (INDEX)   (SALEOF)       (SALEOR)      (INVTUR)             (PURORD)
X      18886     3103           4557          95381                6926
Y      14228     12624          33254         111129               31486
Z      22441     942            543           105329               18357
Table 8. Profit per data quantity in enterprises X, Y, Z.

Firm   Per index   Per sales offer   Per sale order   Per inventory turnover   Per purchase order
       (PPINX)     (PPSAOF)          (PPSAOR)         (PPINTU)                 (PPPUOR)
X      192         1170              796              38                       524
Y      148         167               63               19                       67
Z      578         13773             23904            123                      707
Table 9. Number of employees and ERP users in enterprises X, Y, Z - ratio USRRAT.

        2004                  2005                  2006
Firm    Employees  ERP users  Employees  ERP users  Employees  ERP users
X       216        18         218        60         237        65
Y       100        34         130        67         133        102
Z       201        146        244        172        251        191
Table 10. Average values of data quantity per ERP user.

Firm   Indexes    Sales offers   Sale orders   Inventory turnover   Purchase orders
       (INXUSR)   (SOFUSR)       (SORUSR)      (INVUSR)             (PUOUSR)
X      540        73             140           2832                 254
Y      270        239            618           2025                 602
Z      168        6              3             625                  124
Fig. 7. Number of employees and ERP users, 2004-2006.

Table 8 shows that enterprise Z reaches the best profit per selected data set extracted from the ERP system; the business processes of firm Y generate the lowest profit of the three companies. To evaluate the intensity of ERP use, the values should be related to the number of users. The number of employees and ERP users in the enterprises (data were available only for the period 2004-2006) is presented in Table 9 and Figure 7. Enterprises X and Z employ approximately the same number of workers, and enterprise Y about 50% fewer. In the investigated period all the enterprises increased their number of employees, and the number of ERP users increased in all the enterprises too. In 2006, the ratio between ERP users and the total number of employees is 27% for firm X, 77% for firm Y and 76% for firm Z. The fastest increase in ERP users is in firm Y (about 100% new ERP users every year). The largest number of ERP users (191) is in firm Z, which means that the highest costs of ERP licenses are borne by firm Z. A high ratio of ERP users to the total number of employees indicates a wide scope of ERP system implementation in the enterprise. The absolute ratios presented in Table 7 can be related to the number of ERP users and recalculated as relative ratios (Table 10 and Figure 8). The average number of data items generated by an ERP user represents the labour intensity of ERP exploitation in the firms concerned. For example, firm X is the most loaded in the functional area of construction and technology (number of indexes per ERP user).
Fig. 8. Average values of data quantity extracted from ERP per user for 2003-2006.
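The 2006 user-coverage percentages quoted above follow directly from Table 9; a quick sketch (names are ours):

```python
# Share of employees who are ERP users in 2006 (Table 9), in percent.
employees_2006 = {"X": 237, "Y": 133, "Z": 251}
erp_users_2006 = {"X": 65, "Y": 102, "Z": 191}

coverage = {f: round(100 * erp_users_2006[f] / employees_2006[f])
            for f in employees_2006}
print(coverage)  # {'X': 27, 'Y': 77, 'Z': 76}
```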
The discussion shows that the evaluation of an ERP system implementation requires defining priorities and critical business goals. Figure 9 presents the enterprise benchmarking methodology based on ERP system evaluation. The first step requires every enterprise to prepare itself before selecting and implementing ERP. The next step follows from the first one and requires a proper assignment of measurements to the business goals.
The methodology consists of the following steps:
1. Selection of critical business goals of the ERP system implementation, for example: sales improvement, inventory reduction, claims and liabilities reduction, productivity improvement, etc.
2. Determination of evaluation measurements, for example: net income, profit, inventory value, cost, productivity ratio, return on sales (ROS), return on assets (ROA), return on equity (ROE), quick ratio (QR), current ratio (CR), etc.
3. List of similar enterprises and benchmarking measurements.
4. Selection of the evaluation period (productive start of ERP for every enterprise).
5. Data analysis, interpretation of results and score evaluation.

Fig. 9. Methodology of enterprise benchmarking based on ERP system evaluation.

Table 11 presents the score evaluation for firms X, Y and Z. The values of the ratios belong to the set {0, 1, 2} (the best value being 2 and the worst 0).

Table 11. Score evaluation of firms X, Y and Z.

Ratio     Firm X   Firm Y   Firm Z
INCOME    1        1        2
PROFIT    2        0        1
INVENT    0        1        2
COST      1        1        0
PRODUC    2        1        1
RRINC     2        1        1
RRPRO     2        0        1
RRINV     1        0        2
RRCOS     0        1        1
RRPRO     0        0        2
INDEX     1        0        2
SALEOF    1        2        0
SALEOR    1        2        0
INVTUR    1        1        1
PURORD    0        2        1
PPINX     1        0        2
PPSAOF    1        0        2
PPSAOR    1        0        2
PPINTU    1        0        2
PPPUOR    1        0        2
USRRAT    2        0        2
INXUSR    2        1        0
SOFUSR    2        1        0
SORUSR    1        2        0
INVUSR    2        1        0
PUOUSR    1        2        0
Total     30       20       29
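The paper does not state the scoring rule explicitly; one plausible reconstruction consistent with most rows of Table 11 is to award 2 points to the best firm on a ratio, 0 to the worst, and 1 otherwise (firms tied at the best or worst value share that score). A sketch under this assumption:

```python
def score(values, higher_is_better=True):
    """Assign 2 to the best firm, 0 to the worst, 1 otherwise.
    Set higher_is_better=False for ratios such as inventory or cost."""
    best = max(values.values()) if higher_is_better else min(values.values())
    worst = min(values.values()) if higher_is_better else max(values.values())
    return {firm: 2 if v == best else (0 if v == worst else 1)
            for firm, v in values.items()}

# RRPRO, relative profit ratio from Table 6: X=368, Y=26, Z=48.
print(score({"X": 368, "Y": 26, "Z": 48}))          # {'X': 2, 'Y': 0, 'Z': 1}
# RRINV, relative inventory ratio (lower is better): X=33, Y=58, Z=9.
print(score({"X": 33, "Y": 58, "Z": 9}, higher_is_better=False))
```

Both calls reproduce the corresponding RRPRO and RRINV rows of Table 11.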
4. Conclusions

The paper proposes a benchmarking methodology based on ERP system evaluation. The research is based on the case study of three enterprises X, Y and Z that implemented the same ERP system in 2003. The benchmarking is based on financial ratios and data extracted from the ERP system. The enterprises are investigated against 26 different ratios. The selection of ratios depends on the critical goals of the ERP system implementation (inventory reduction, sales increase, etc.). The result of the benchmarking shows that the implementation of ERP in enterprises X and Z was very good and almost at the same level (about 30 points). The implementation of ERP in enterprise Y has not produced equally good effects. Benchmarking can also be repeated for selected functional areas of an enterprise only, such as sales, production and logistics. Future research will provide benchmarking for other ratios and a larger number of enterprises.

AUTHORS

Sławomir Kłos*, Justyna Patalas-Maliszewska - University of Zielona Góra, ul. Podgórna 50, 65-246 Zielona Góra, Poland. Tel. +48 68 329 2464. E-mail: s.klos@iizp.uz.zgora.pl.
* Corresponding author

References

[1] Al-Mashari M., "Constructs of process change management in ERP context: A focus on SAP R/3". In: The 6th Americas Conference on Information Systems, Long Beach, CA, 2000, pp. 977-980.
[2] Basoglu N., Daim T., Kerimoglu O., "Organizational adoption of enterprise resource planning systems: A conceptual framework", Journal of High Technology Management Research, vol. 18, 2007, pp. 73-97.
[3] Camp R.C., Benchmarking - The Search for Industry Best Practices that Lead to Superior Performance, ASQC Quality Press, 1989.
[4] Mabert V.A., Soni A., Venkataramanan M.A., "The impact of organization size on enterprise resource planning (ERP) implementations in the US manufacturing sector", Omega, The International Journal of Management Science, vol. 31, 2003, pp. 235-246.
[5] Martin M.H., "An ERP strategy", Fortune, February 1998, pp. 95-97.
[6] McGinnis T.C., Huang Z., "Rethinking ERP success: A new perspective from knowledge management and continuous improvement", Information & Management, vol. 44, 2007, pp. 626-634.
[7] Miller J.G., Meyer A., Nakane J., Benchmarking Global Manufacturing, Irwin, 1992.
[8] Motwani J., Subramanian R., Gopalakris P., "Critical factors for successful ERP implementation: Exploratory findings from four case studies", Computers in Industry, vol. 56, 2005, pp. 529-544.
[9] Wang E.T.G., Lin C-L.C., Jiang J.J., Klein G., "Improving enterprise resource planning (ERP) fit to organizational process through knowledge transfer", International Journal of Information Management, vol. 27, 2007, pp. 200-212.
[10] Wei Ch., Wang M.J., "A comprehensive framework for selecting an ERP system", International Journal of Project Management, vol. 22, 2004, pp. 161-169.
[11] Watson G., Strategic Benchmarking - How To Rate Your Company's Performance Against the World's Best, John Wiley and Sons, 1993.
[12] Wu J-H., Wang Y-M., "Measuring ERP success: The key-users' viewpoint of the ERP to produce a viable IS in the organization", Computers in Human Behavior, vol. 23, 2007, pp. 1582-1596.
[13] Xue Y., Liang H., Boulton W.R., Snyder Ch.A., "ERP implementation failures in China: Case studies with implications for ERP vendors", International Journal of Production Economics, vol. 97, 2005, pp. 279-295.
Journal of Automation, Mobile Robotics & Intelligent Systems, Volume 3, N° 3, 2009
DIGITAL FACTORY
Milan Gregor, Štefan Medvecký, Józef Matuszek, Andrej Štefánik
Abstract:
The paper presents the results of research and development of Digital Factory solutions in industry. The implementation of this technology in industry is described and discussed. The results of the research and development cover the design of an assembly system and its processes, simulation models, ergonomic analyses, etc. The paper presents solutions developed in the framework of co-operation with industrial partners such as Volkswagen Slovakia, Thyssen Krupp - PSL and Whirlpool. The paper also contains the results of research on 3D laser scanning and the digitization of large-size objects of current production systems. The developed and validated methodology shows the procedure for applying 3D laser scanning to the digitization of production halls, machine tools, equipment, etc. This procedure was tested and validated in chosen industrial companies. The paper presents the achieved benefits and future research goals as well.

Keywords: digital factory, reverse engineering, simulation, 3D laser scanning.
1. Introduction

The economic significance of an intense and sustainable production basis in Europe is well supported by the fact that production employed 27 million people in Europe during 2001 and produced added value of more than EUR 1,300 billion in 230,000 enterprises with 20 or more employees. More than 70% of this value was produced by six main spheres: automobiles, electric and optic devices, food, chemistry, materials and semi-finished goods, and mechanical engineering [6]. Underdeveloped technology is one of the most significant barriers impeding the rapid expansion of research and development in the Central European Region (CER). Technological approaches used daily by high-tech automotive and electronics factories are, because they are financially demanding, not readily available to CER researchers. Research and development in the automotive and electronics industries use completely new approaches to designing and testing new products and production processes. Progressive approaches utilizing the most advanced technologies of Rapid Prototyping, digitization, Virtual Reality and simulation are what CER design teams require. Virtual Reality can be used both in product development and in the design of production processes, workplaces, production systems, etc. The utilization of Virtual Reality and simulation in the design and optimisation of production processes and systems is often called the Digital Factory [4].
This progressive technology, which has already been accepted in the most developed European countries, provides ways to reduce the amount of time needed and thus to reduce development costs to 10 to 20% of the costs required by conventional technologies. The Digital Factory currently represents the most progressive paradigm change in both research and industry, covering the complex, integrated design of products, production processes and systems [5]. The results of recent years' research conducted in the framework of the international Intelligent Manufacturing Systems (IMS) research program showed that the future of manufacturing lies in new forms of manufacturing strategies. Global networks of self-organizing and autonomous units will create the basis for new production concepts. Modelling and simulation have become the decisive analytical tool of the 21st century. Global markets require a short time to market and high quality products with the lowest possible price. The Digital Factory seems to be a solution to these demanding requirements. Different types of software are linked in PLM solutions, which control different parts of the manufacturing cycle. Computer Aided Design (CAD) systems define what will be produced. Manufacturing Process Management (MPM) defines how it is to be built. Enterprise Resources Planning (ERP) answers when and where it is built. A Manufacturing Execution System (MES) provides shop floor control and, simultaneously, manufacturing feedback. Storing information digitally aids communication, but also removes human error from the design and manufacturing process. The European Union has launched a new project called ManuFuture, addressing the future development of technologies and production systems. Its main goal is to foster the growth of the EU's competitiveness in the production sphere.
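The division of responsibilities among the PLM layers described above (CAD, MPM, ERP, MES) can be sketched, purely illustratively, as a single product record passing through the chain. All class, field and value names below are hypothetical and serve only to show how each layer contributes its own view of the product:

```python
# Illustrative sketch of the PLM chain described above (CAD -> MPM -> ERP -> MES).
# All names and values are hypothetical; real PLM suites expose far richer models.

def cad_define(product):
    """CAD: defines WHAT will be produced (geometry, bill of materials)."""
    product["bom"] = ["housing", "shaft", "gear_set"]
    return product

def mpm_plan(product):
    """MPM: defines HOW it is built (process plan, routings)."""
    product["routing"] = ["cast", "machine", "assemble", "test"]
    return product

def erp_schedule(product):
    """ERP: answers WHEN and WHERE it is built (plant, due date)."""
    product["plant"], product["due"] = "Hall 3", "2009-06-30"
    return product

def mes_execute(product):
    """MES: shop floor control plus manufacturing feedback."""
    product["feedback"] = {"scrap_rate": 0.01, "cycle_time_s": 75}
    return product

order = {"id": "GBX-001"}
for stage in (cad_define, mpm_plan, erp_schedule, mes_execute):
    order = stage(order)
# the single digital record now carries all four PLM views of the product
```

The point of the sketch is the one made in the text: every layer reads and extends the same digital record, so nothing is re-entered by hand between design and shop floor.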
ManuFuture has published its Strategic Research Agenda (SRA) and a strategic document, ManuFuture – a Vision for 2020, which presents a vision of the future development of production in Europe. Vision 2020 covers the following spheres [12]: new products and services with new added value, new enterprise models, advanced industrial engineering, new production technologies, infrastructure and education, and the research and development system. Its practical steps are oriented towards swiftly completing the development of the newest progressive technologies, such as [13]: virtual design, the virtual enterprise, the adaptable enterprise, the digital factory, networked production, knowledge-based production, rapid prototyping, new materials, intelligent systems, security, reliability, etc. In the sphere of technologies, future development will mainly focus on so-called converging technologies (nano, bio, cognitive)
123
VOLUME 3,
Journal of Automation, Mobile Robotics & Intelligent Systems
and miniaturisation, such as multi-material micro engineering, which makes it possible to combine sensors, process their signals and react to them at the micro scale. The new generation of production systems is supposed to generate high added value. Newly designed, sophisticated and complex production systems are understood by current European scientists as final products, which can be sold like any other product. These new concepts are built on the principles of Advanced Industrial Engineering, which uses the Digital Factory concept and digitization as its main tools [11].
2. The main productivity drivers in the 21st century
The further development and prosperity of any country depends on the quality of its engineers responsible for innovation. Investment in education brings an almost 8 times higher increase in productivity than investment in capital assets. From 1900 to 1990, productivity and competitiveness improvements in the world were achieved mainly through mechanization and automation. The growth from 1990 to 2000 was achieved through IT applications. According to the world leaders in technology development, digital technologies will be the main driver of productivity and competitiveness improvement in the 21st century.
Fig. 1. Technological progress from the productivity point of view.
Digitization brought a new phenomenon: a fundamental shortening of the time to market. This was possible thanks to the fact that digitization enables virtual prototypes to be created and tested, which reduces or totally eliminates the need to create physical prototypes.
3. Production systems innovations
The future cannot exist without innovation of production processes and production systems, just as it cannot exist without the innovation of products. Competitive production systems require redesign as well: new machines and devices, transport systems, control systems, work organisation, etc. Such changes are introduced by teams of specialists, designers and planners. Production system innovations are realised by principal, revolutionary changes of production, organizational or control principles, which are conducted over long time periods. Small, continuous changes are conducted in between the stepped changes, sometimes described as evolutionary changes. They are realised over short time periods, practically with any change of the production system
Articles
N° 3
2009
or even of a production line or product mix. These changes are comparable to the well-known Kaizen, continuous process improvement. Any change, even the smallest one, carries a risk, and the change has to be realised by real people, who make mistakes as well. The quality and speed of changes can be supported by 3D digital models of production systems. The application of Digital Factory systems is currently undergoing dynamic development in companies doing business in the High-Tech sphere. Some years ago, the University of Žilina and the University of Bielsko-Biala started to build such a complex Digital Factory system [4]. The Digital Factory system utilises 3D digital models of real objects (DMU – Digital Mock-Up). DMUs first began to be used in the sphere of product design and analysis. They are now starting to be used for complex production systems as well, or even whole factories (for instance in the automotive industry). Such digital models are called FMUs – Factory Mock-Ups, i.e. digital models of factories. Designing a whole factory is an extremely complex and difficult problem, and the quality of the project determines the future long-term effectiveness of the factory. FMUs make it possible to greatly enhance communication among the design teams, to lower the risks of wrong decisions, and to speed up the innovation process and increase its efficiency. Mainly classical approaches are still being used for the digitalisation and geometric analysis of existing production systems: information about the real state of a complex production system is obtained using a measuring tape or laser distance meters. Such an approach makes digitalisation of the whole enterprise extremely time-consuming and expensive, and it is also a potential source of waste, inaccuracies and errors.
It is much faster, more effective and qualitatively better to create 3D models of existing production systems using the newest 3D laser scanners. These make it a matter of moments to transform the existing, real 3D world into an exact 3D digital copy, which correctly reproduces the geometry of the recorded space and can easily be used for any computer analyses. The 3D digital model obtained in this way (the so-called master model) can be used by all design professions, by analysts and by the factory's management. Using the Internet, such a model can be shared from anywhere in the world, and its accessibility makes it easier to eliminate errors. Designers from all over the world can simultaneously work on new projects without any need to travel to the spot and manually take all the measurements required before they start to design. Extensive research is currently underway all over the world in the sphere of utilising digital methods for the digitization, modelling, analysis, simulation, recording and presentation of real objects [1], [6], [7], [14].
4. Digital Factory
The term Digital Factory denotes a virtual picture of real production [11]. It represents an environment integrated by computer and information technologies, in which reality is replaced by virtual computer models [15]. Such virtual solutions make it possible to verify all conflict situations
before real implementation and to design optimised solutions. The Digital Factory supports the planning, analysis, simulation and optimisation of the production of complex products, and simultaneously both enables and requires team work [5]. Such a solution enables quick feedback among designers, technologists, production system designers and planners. The Digital Factory represents an integration chain between CAD systems and ERP solutions. One very important property of the Digital Factory is the vision of realizing process planning and product development in parallel, with the utilisation of common data. The Digital Factory principle is based on three parts [2]: the digital product, with its static and dynamic properties; digital production planning; and digital production, with the possibility of using the planning data to increase the effectiveness of enterprise processes. It is very important to acquire all required data only once and then to manage them under uniform data control, so that all software systems are able to use them. Integration is one of the main conditions for the implementation of the Digital Factory.
4.1. The application area of the Digital Factory
The Digital Factory is appropriate mainly as a support for the batch manufacturing of highly sophisticated products, and for their planning, simulation and optimisation. Its main current application areas are the automotive industry, the mechanical engineering industry, the aerospace and shipbuilding industries, and the electronics and consumer goods industries [2]. The 3D digital model of a product (DMU – Digital Mock-Up) is currently the basic object of work in the digital manufacturing environment [5]. Products, processes and production systems can be optimised already during the development phase with the utilisation of 3D visualisation and modelling techniques. Such a solution brings a reduction of time to market and significant cost reduction [4].
A system for the design of 3D shop floor layouts and the generation of 3D models of production halls is missing in current Digital Factory solutions [7]. It is possible to create the 3D model of a production hall directly in CAD systems. Such a solution is advantageous for new layouts or for new production system designs. In the majority of real cases, however, the production halls already exist. Under such conditions it is often more effective to create the 3D model of the production hall with the utilisation of Reverse Engineering technologies and 3D laser scanners [8]. Material flow simulation makes it possible to optimise the movement of material, to reduce inventories and to support value added activities in the internal logistics chain [9], [10]. The subsystems for effective ergonomics analysis utilise international standards such as those of the National Institute for Occupational Safety and Health (NIOSH), Rapid Upper Limb Assessment (RULA), etc., which enable correct planning and verification of man-machine interaction at individual workplaces [3].
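The NIOSH standard mentioned above is, at its core, the revised NIOSH lifting equation, which multiplies a 23 kg load constant by six discount factors describing the lifting posture. A minimal sketch is shown below; note that the frequency and coupling multipliers (`fm`, `cm`) are assumed constants here, whereas in practice they come from the NIOSH look-up tables, and the input values are invented for illustration:

```python
def niosh_rwl(h_cm, v_cm, d_cm, a_deg, fm=0.94, cm=1.0):
    """Recommended Weight Limit (kg) per the revised NIOSH lifting equation.

    h_cm: horizontal hand distance, v_cm: vertical hand height,
    d_cm: vertical travel distance, a_deg: asymmetry (twist) angle.
    fm (frequency) and cm (coupling) multipliers are assumed here;
    real values come from the NIOSH tables.
    """
    lc = 23.0                              # load constant, kg
    hm = 25.0 / h_cm                       # horizontal multiplier
    vm = 1.0 - 0.003 * abs(v_cm - 75.0)   # vertical multiplier
    dm = 0.82 + 4.5 / d_cm                 # distance multiplier
    am = 1.0 - 0.0032 * a_deg              # asymmetry multiplier
    return lc * hm * vm * dm * am * fm * cm

# invented workplace geometry for a manual assembly lift
rwl = niosh_rwl(h_cm=30.0, v_cm=60.0, d_cm=50.0, a_deg=30.0)
lifting_index = 12.0 / rwl   # actual load (kg) / RWL; values above 1.0 flag risk
```

A digital manikin (such as the Delmia V5 Human analyses discussed later in the paper) evaluates essentially this kind of index for every lift at a planned workplace before the workplace physically exists.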
The highest level of analysis is represented by computer simulation of production and robotic systems, which enables the optimisation of material, information, value and financial flows in the factory [5].
4.2. The advantages of Digital Factory solutions
Digital Factory implementation results directly in the improvement of economic as well as production indicators. Any slight saving realised in the design and planning phase can bring a huge cost reduction in the production operation phase. Thanks to this, the payback period of an investment in the Digital Factory is very short. The main advantages of the Digital Factory are [5]: reduction of entrepreneurial risk when introducing a new production, verification of processes before the start of production, the possibility of a virtual "visit" to production halls, validation of the designed production concept, optimisation of production equipment allocation, reduction of the required area, analysis of bottlenecks and collisions, fast changes, better utilization of existing resources, off-line programming of machines and equipment saving time resources, reduction or full elimination of prototypes, ergonomics analyses, etc. The Digital Factory makes it possible to test and reveal all possible production problems and shortages before the start of production.
Fig. 2. The Digital Factory advantages.
The highest potential for high quality and low product costs lies in the product development and production planning phases. Statistics show that product design and production planning influence about 80% of production costs [5]. The Digital Factory enables a product launch time reduction of 25 to 50%. Estimated cost savings range from 15 to 25%. According to some studies done in industry, using digital manufacturing techniques, twice the number of design iterations can be processed in 25 percent of the time. Current production equipment is often inflexible with respect to quick changes. That is why the designers of such equipment are looking for new solutions (automatic reconfiguration of production machines) with fully automated control systems, which will be able to find optimized production processes and parameters after the production
task definition. According to a CIMdata report (March 2003), the Digital Factory makes it possible to achieve the following financial savings: cost savings through asset reduction of about 10%, area savings through layout optimisation of about 25%, cost savings through better utilisation of resources of about 30%, cost savings through material flow optimisation of about 35%, a reduction in the number of machines, tools and workplaces of about 40%, a total cost reduction of about 13%, production volume growth of about 15%, and a time to market reduction of about 30%.
4.3. Digital Factory implementation methodology
A rough procedure of Digital Factory implementation is as follows [5]: (I) definition of overall standards and production principles for all planning operations, and creation of primitives and customer databases; (II) first data collection and organisation with the utilisation of a data management system, in which all responsible persons have direct access to the data and can add, inspect and change them; (III) in the third phase, the Digital Factory system improves the co-ordination and synchronisation of individual processes through their networking, supported by a workflow management system; (IV) in the fourth phase, the Digital Factory system automatically takes over some routine and checking activities, which are very time consuming in conventional systems. The implemented system ensures high quality of all outputs.
5. Digital Factory in research
The University of Žilina and the University of Bielsko-Biala belong among the universities using Digital Factory software solutions in education and research [4]. In co-operation with the Central European Institute of Technology, these universities started to build their own Digital Factory concept, the structure of which is shown in Fig. 3. The concept introduced above extends the borders of current Digital Factory solutions. It endeavours to integrate the activities conducted by designers, technologists, designers of manufacturing systems and planners, and simultaneously tries to extend the offer of the individual existing modules. The concept design builds on theoretical studies as well as on practical experience gained in industry (VW Slovakia, Whirlpool Slovakia, Thyssen Krupp PSL, Power Train, Farmet, etc.).

Fig. 3. Digital Factory concept [5].

6. Digital Factory in industry
The above-mentioned partners have conducted several research studies in industry focused on Digital Factory solutions. In the framework of co-operation with VW Slovakia, the DMU model of a real gearbox was developed using Reverse Engineering technology (3D laser scanning).
Fig. 4. Real versus virtual VW Gearbox [6].
Following the gearbox DMU, a set of DMUs of VW production workplaces and transportation equipment was developed.
Fig. 5. DMUs of assembly workplaces [6].

Fig. 6. VW Slovakia – real versus 3D digital model [6].

The design of the workplaces was additionally checked by an ergonomics analysis, for which the manikin concept of Delmia V5 Human was used (see Fig. 7).

Fig. 7. Ergonomics analysis of a manual workplace [6].

The static virtual model of the given gearbox assembly line was developed through the integration of the individual DMUs into the manufacturing system scene, as shown in Fig. 8.

Fig. 8. Static Digital Model of Assembly Line [6].

The dynamics of the production system was added in the 3D simulation environment Quest (see Fig. 9). A set of simulation experiments was conducted with the developed simulation model, which revealed the bottleneck stations and the possibilities for performance growth of the gearbox assembly line.

Fig. 9. 3D Simulation Model of Gearbox Assembly Line [6].

Afterwards, an FMU of the whole assembly line for gearbox assembly in VW Slovakia was developed. This FMU represents the complex digital model of the entire assembly line. The final solution is shown in Fig. 10.

Fig. 10. VW Slovakia – FMU of Gearbox Assembly Line [6].
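A bottleneck experiment of the kind run on the gearbox line can be approximated, very roughly, even without a commercial simulator such as Quest. The sketch below models a serial line with unlimited inter-station buffers (all cycle times are invented for illustration) and flags the station with the highest utilization:

```python
def simulate_line(cycle_times, n_parts):
    """Serial flow line with unlimited inter-station buffers.

    finish[s] holds the time at which station s completes its latest part;
    each part starts at a station when both the part and the station are free.
    """
    finish = [0.0] * len(cycle_times)   # station availability times
    busy = [0.0] * len(cycle_times)     # accumulated processing time
    for _ in range(n_parts):
        ready = 0.0                     # time the part leaves the previous station
        for s, ct in enumerate(cycle_times):
            start = max(ready, finish[s])
            finish[s] = start + ct
            busy[s] += ct
            ready = finish[s]
    makespan = finish[-1]
    util = [b / makespan for b in busy]
    return makespan, util

# invented cycle times (seconds) for four assembly stations
makespan, util = simulate_line([60, 75, 90, 70], n_parts=100)
bottleneck = util.index(max(util))   # the 90 s station dominates throughput
```

Even this toy model reproduces the qualitative result reported for the gearbox line: the slowest station saturates (utilization near 1.0) while its neighbours starve or block, so performance growth must start there.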
7. How to become digital?
The sphere of creating, modelling and storing 3D digitalised virtual models of real objects is one of the most significant spheres able to radically influence the effectiveness of producers. Research and development in this High-Tech sphere is technically and financially demanding. The most significant automotive and electronics companies are well aware of the constant need to innovate their products, which is why they release a new model every 2-3 months [5]. Innovation can only be successful if it is swiftly put on the market. Fulfilling the requirement to shorten the whole production cycle of a product, from its design to its delivery to the customer, while keeping costs as low as possible, is the most important prerequisite of success for every enterprise. The launch of a new product is always connected with initial chaos, which subsequently increases the realisation costs. A system for the creation of 3D production layouts and the generation of DMUs of production halls, or FMUs, is what Digital Factory solutions miss today. It is in principle possible to design the DMU of production halls and production layouts using a direct CAD system approach. Such a solution is convenient when designing new production systems. The more frequent case, however, is that the production halls already exist. That is the reason why it is often more efficient to create the production hall DMU using Reverse Engineering technologies (e.g. 3D laser scanning). Figure 11 shows the basic principle of 3D laser scanning.
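Phase-based scanners of the kind used for production halls infer distance from the phase shift of an amplitude-modulated laser beam. The relation below is the standard one for this class of instrument; the modulation frequency chosen here is purely illustrative and is not taken from any particular scanner's specification:

```python
import math

C = 299_792_458.0          # speed of light, m/s

def phase_shift_distance(delta_phi_rad, f_mod_hz):
    """Distance from the measured phase shift of a modulated laser beam.

    d = c * delta_phi / (4 * pi * f_mod); the result is unambiguous only
    up to half the modulation wavelength, c / (2 * f_mod).
    """
    return C * delta_phi_rad / (4.0 * math.pi * f_mod_hz)

f_mod = 1.0e6                                  # 1 MHz modulation (illustrative)
ambiguity = C / (2.0 * f_mod)                  # ~150 m unambiguous range
d = phase_shift_distance(math.pi / 2, f_mod)   # quarter-cycle shift
```

The trade-off visible in the formula drives scanner design: a higher modulation frequency improves resolution but shrinks the unambiguous range, which is why long-range hall scanners combine several modulation frequencies.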
Fig. 11. The principle of 3D Laser Scanning of Production Halls [7].

Reverse Engineering is the step needed to achieve high efficiency and accuracy of digitization, not only for the existing equipment but also for the production layouts themselves. It opens up new opportunities for realizing virtual design. The creation of 3D DMUs of large objects using 3D scanning is, at the moment, the joining link between virtual reality and real virtuality.
7.1. The main problems of digitizing large objects
Based on the experience of the authors as well as the conducted analyses, the problems can be summarized as follows [7]: (I) current approaches prioritize 3D digital models of halls; they are not focused on the creation of DMUs of machines or equipment; (II) DMUs of machines and equipment obtained from their designers (e.g. from Catia) have to be simplified before it is possible to use them in the DMU of a production hall; (III) many DMUs of existing large objects were created by extrusion of 2D models (pulling up a 2D model in a CAD system); such solutions do not ensure the required precision (deviations above 10 centimetres), in contrast to laser scanning, where the deviation is in millimetres; (IV) no methodology exists, and no approaches have been described, for the integration of machine DMUs with DMUs of production halls and the subsequent creation of an FMU (Factory Mock-Up); (V) up to now, no procedure has been developed for the cyclical actualisation of existing DMUs (cyclical scanning with automatic identification and comparison of changes), and no standards exist for FMU creation; (VI) no obligatory regulations exist that would instruct the designers of new objects to create a DMU in parallel with the real construction and, after the project is realized, to compare the real object with its DMU through scanning; (VII) no approach has so far been developed for integrating the DMUs of production halls obtained through laser scanning with the DMUs of production systems obtained from Digital Factory solutions (Delmia).
7.2. Practical procedure for the digitization of large objects
A practical and at the same time effective procedure for the scanning, digitization, modelling, analysis and storage of digital models of large objects does not currently exist; every workplace that works with 3D laser scanning uses its own approach. These approaches are characterised mainly by the following [7]: a procedure for an efficient way of realising 3D laser scanning of large objects, a procedure for creating 3D digital models from the obtained 3D scanned data, fulfilment of standards (e.g. technical standards for buildings, construction drawings, etc.), a way of storing, handling and managing changes to the created 3D digital models using a structured database system, a procedure for integrating DMUs of real objects with DMUs of production systems created in the Digital Factory environment (Delmia), a system of digital model presentation, and a means of Internet support for using the 3D models. The procedure developed at CEIT Žilina is as follows [7]: data about the digitized object are obtained through Reverse Engineering. It is based on a computer model of the object (DMU – Digital Mock-Up) obtained through the 3D scanning (digitization) of real, existing objects; it is used to obtain a computer model of a real object for which no drawings exist. Computed Tomography can also be used for the purpose of Reverse Engineering, meaning 2D cuts, which are integrated
into a complex 3D model of the object in the next phase. 3D laser scanning is used for building a 3D digital model of an existing layout, for the analysis of static constructions (production halls), etc. The basis of scanning is the creation of a reference raster with the support of reference points; software (e.g. FARO Clouds) is used for this step. It enables the integration of individual 3D scans into the future virtual model. The 3D digital model of the object is obtained from the 3D scans through modelling in a CAD system environment (e.g. AutoCAD, Microstation, Catia, etc.). The software systems supplied by the laser scanner producers (e.g. FARO Scene) are used for exporting data from the obtained scans. The created 3D digital model of the production hall is saved into a database of DMU models. A complex digital model of the production system (PPR – Product, Process, Resource) is created in the Digital Factory (DF) environment (Delmia) and integrated into the created 3D digital model of the production hall. After integration, the 3D digital model is used for detailed analysis of the complex production system (e.g. production process analysis, ergonomics analysis, etc.). Computer simulation supported by virtual reality (the Quest simulation system) is used for the dynamic analyses. The obtained 3D digital model of a real object is further used for the identification of potential collisions, for example in the environment of Navis Works or Walk Inside. The developed procedure is shown in Figure 12.

Fig. 12. Methodology of 3D Laser Scanning of Production Halls [7].

7.3. The means for 3D laser scanning and digitization
The Reverse Engineering laboratories in Žilina and Bielsko-Biala, which already run workplaces for acquiring 3D scanned data, utilise different equipment and software systems for scanning real objects. The mobile measuring arm FARO with a laser head is used for the measurement and scanning of objects with complicated shapes. It provides contact or contact-less digitization, supported by the PolyWorks software for 3D scanned data processing. The accuracy of scanning is 0.05 mm for contact measurement and 0.03 mm for contact-less laser measurement. The 3D measuring device MORA MS 10 is used for CNC digitization, providing contact measurement or contact-less scanning, supported by the INCA 3D software for 3D scanned data processing; its accuracy is 1.8 μm. A Minolta Vivid 900 is used to scan small objects of, say, 1 metre at a distance of about 1.5 metres; processing of the 3D scanned data is carried out in Geomagic Studio 8. The new 3D laser scanner FARO LS 880, with a reach of about 100 metres and an accuracy of 1 mm at 30 metres, is used to scan large objects (e.g. buildings, large machines and equipment, etc.). The Reverse Engineering laboratories have purchased licenses for various innovation, modelling, simulation and optimization programs. Program bundles from Invention Machine (Goldfire Innovator), MSC (Nastran, Patran, Marc, ADAMS, ...), PTC (PRO/Engineer, PRO/Mechanica, ...), Dassault (Catia, Delmia, Quest, ...), Ansys, Witness, Mantra 4D, Virtual Reality, AutoCad and others are available. Special software systems are used for processing 3D scanned data: FARO Clouds for the collection of data from 3D laser scanning, FARO Scene for the design of virtual sceneries, PolyWorks for the polygonization of 3D digital models obtained by laser scanning, Delmia as the comprehensive Digital Factory system, and Quest, the simulation system with virtual reality support and direct integration with Delmia. Current digitization technologies enable the 3D scanning of large objects with a precision of a few millimetres (creation of point clouds, their registration and the working out of a 3D digital model). These technologies also enable very precise measurement of object dimensions, capture of colours and spatial shapes, and their transformation into digital form. The digitization technologies make it possible to create digital documentation of complex digital models, which can later be used for object analysis, study, design, protection, maintenance, etc. These technologies enable the integrated processing of data and the use of existing data (e.g. 2D scans, photos, paintings, machine passports, construction projects, etc.). All obtained information will need to be saved and archived in databases of digital objects. Such databases have to be able to store alphanumerical as well as graphical information (2D, 3D, pixel and vector). The created digital models of objects enable the utilisation of modelling and simulation methods for testing object properties, the level of their damage, and the firmness and fatigue characteristics important for object safety (e.g. large buildings, halls, machines, equipment, etc.). The examples introduced below show the 3D laser scanning technologies used by the partners in research and industrial applications. Special, highly powerful 3D scanners are used for the digitization of large objects and the creation of virtual scenes. The 3D scanner FARO LS 880 (IQVOLUTION), equipped with the FARO Clouds and FARO Scene software, is used for the scanning of production halls; it enables spatial scanning up to a distance of 100 metres.
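The scan-registration step described above, merging individual scans via common reference points, is at its core a rigid-body fit. A minimal sketch of the standard Kabsch/SVD solution, independent of any particular scanner software, is shown below; the reference point coordinates are invented for the toy check:

```python
import numpy as np

def register_scans(src, dst):
    """Rigid registration (Kabsch/SVD): find R, t such that dst ~= src @ R.T + t.

    src, dst: (N, 3) arrays of corresponding reference points from two scans.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# toy check: five reference targets seen from two scanner positions
src = np.array([[0., 0, 0], [2, 0, 0], [0, 3, 0], [0, 0, 1], [1, 1, 1]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([5.0, -2.0, 3.0])
R, t = register_scans(src, dst)
aligned = src @ R.T + t                          # should coincide with dst
```

Commercial packages add target detection and global optimisation over many scan positions, but the pairwise alignment they bootstrap from is essentially this computation.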
Fig. 13. 3D Laser Scanner FARO LS 880.

Figures 14 and 15 show results from research and co-operation with industry.

Fig. 14. The building up of a transporter DMU – Thyssen Krupp, PSL [7].

Fig. 15. Machine tool and its DMU - Thyssen Krupp, PSL [8].

Fig. 16. Production Hall and Its 3D Model - Thyssen Krupp, PSL [8].

Fig. 17. 3D digital model of shop floor - Thyssen Krupp, PSL [8].

Fig. 18. Factory mock-up - Thyssen Krupp, PSL [8].
7.4. Economic benefits of digital technologies
3D laser scanning is the basis for the application of Digital Factory solutions [7], [8]. The authors of this paper estimate that, during the first phase of the transition to digital solutions alone, the High-Tech companies undertaking business in the CER will need to scan about 150 million m² of industrial area. The direct costs of scanning these surfaces will, at current prices, amount to a minimum of about €450 million. The economic benefit can be documented by the following example. Analyses of orders at the Asea Brown Boveri (ABB) company showed that about 20 data items from the customer in a simple order lead on average to: 200 data items by the optimization stage, 2 000 data items in the structure and documentation of the product (including results and calculations), 20 000 data items in the geometrical description, and 200 000 data items in the documentation for production, material, planning, NC control, scheduling, etc. If we consider that the company executes, for example, 100 orders in the course of a year, we receive a data volume of about 2×10⁷. If we further consider that in the car industry every car is an individual order, then for example VW Slovakia, which produces about 300 000 cars per year, faces a data volume of about 6×10¹².
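The order-of-magnitude arithmetic behind the ABB example can be checked directly; the roughly tenfold growth per stage is taken from the figures quoted above:

```python
# Data-volume growth per order, as reported in the ABB example above:
# each planning stage multiplies the item count roughly tenfold.
stages = [20, 200, 2_000, 20_000, 200_000]

per_order = sum(stages)   # total data items attached to one simple order
yearly = 100 * per_order  # 100 orders per year, matching the paper's ~2x10^7
```

The sum is dominated by the last stage, so the per-order total is about 2.2×10⁵ items and the yearly volume about 2.2×10⁷, consistent with the "about 2×10⁷" cited in the text.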
The following statistics are known from the project practice of big project companies: a €100 million investment incurred about €10 million in increased costs because of low transparency, and about €1 million in additional costs and time because of clashes, organizational problems and mistakes in proposals. Users of 3D laser scanners have achieved cost savings of €3-4 million through virtual reality [7]. According to the research, consistent application of the 3D factory can save 30-40% of additional project costs and time. Complex 3D data are the basis for detecting and eliminating the causes of clashes; up to 2% of the investment costs of a factory can be saved by such detection and elimination. Created complex 3D DMUs allow accurate, quick, easy and effective change management, in which time is the decisive factor. These planning and management systems are also marked as 3D CAD planning tools (also marked 4D). Automated scanning, based on a plan set in advance, allows fast acquisition of a 3D DMU, and the planning system allows changes to be realized in integrated form with one click, changes which in the past were solved by groups of specialists for months. Among the other benefits of the digitization of large objects belong [7]: direct access of researchers and industrialists to digital models of large objects, growth of the quality and availability of information about preserved objects, cost reduction of documentation and analysis, precision of processing and preserving information about objects, simplification of documents and storage of information about objects, growth of the degree of object protection, precise monitoring of object movement (e.g.
machines, equipment, etc.), the development of new scientific methods for the maintenance of objects, the growth of productivity and precision of digital models of spatial objects, cost reduction and effectiveness growth in the creation of databases of digital models of different objects, and support for the development of knowledge about 3D laser scanning, digitization, modelling and simulation, supported by virtual reality means and comprehensive databases of digital models.
8. Further research
Further research in the area of digital technologies advances the whole scientific and technological basis and opens possibilities for co-operation within the European Research Area and in international research. Further research in the area of the Digital Factory will focus mainly on the following: simplification of the introductory phase of implementation (e.g. digital product definition), integration of the production hall DMU with product and process DMUs, and simulation of such complex systems. 3D laser scanning is a powerful but expensive technology. It needs in-depth research not only into the development of new equipment for 3D laser scanning, but especially into the development of new, highly productive approaches, supported by user-friendly software systems for processing scanned data and creating 3D models from it. The research team will focus its further effort in laser scanning on the following: improving the productivity of 3D laser scanning of large real objects, improving the productivity of 3D modelling, developing new algorithms for compressing, saving, storing and transferring gathered data, and establishing and expanding libraries of 3D digital models with the possibility of Internet presentation.
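One of the research directions listed above, compressing ("squeezing") gathered scan data, can be illustrated with a common generic point-cloud reduction technique: voxel-grid downsampling, where all points falling into the same cubic cell are replaced by their centroid. This is a sketch of one standard approach, not the team's actual algorithm; the point count and voxel size are arbitrary.

```python
# Voxel-grid downsampling: a common way to reduce dense laser-scan
# point clouds. Points in the same cubic voxel are averaged into one.
import numpy as np

def voxel_downsample(points, voxel_size):
    """points: (N, 3) float array in metres; returns one centroid per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Map each point to its voxel, then average the points of each voxel.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

rng = np.random.default_rng(0)
cloud = rng.uniform(0, 1, size=(10_000, 3))      # stand-in for a dense scan
reduced = voxel_downsample(cloud, voxel_size=0.2)
print(len(cloud), "->", len(reduced))            # at most 5**3 = 125 centroids
```

The voxel size directly trades fidelity for size: larger voxels compress more but smooth away fine detail, which is exactly the kind of trade-off such algorithms must tune for large scanned objects.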
9. Conclusion
The future outlook shows that the next generation of products can benefit from digital manufacturing. All process elements are stored so that modifications made at any stage of product development propagate to the entire design and manufacturing process. The University of Žilina, in co-operation with the University of Bielsko-Biała, has long been investing its human and financial resources into acquiring and developing progressive technologies. The partners have gained extensive experience in the application of technologies such as digitalization, Reverse Engineering, 3D laser scanning, visual data processing, creation of 3D digital models of objects, modelling and simulation of the properties of real objects, and creating copies of real objects using additive technologies, Rapid Prototyping and Vacuum Casting.

ACKNOWLEDGMENTS
This paper was supported by the Agency for Support of Research and Development, based on agreement No. APVV-0597-07.
AUTHORS
Milan Gregor - Professor at the Central European Institute of Technology (CEIT), University of Žilina, Slovakia.
Štefan Medvecký - Professor at the Faculty of Mechanical Engineering, University of Žilina, Slovakia.
Jozef Matuszek - Professor at the Faculty of Mechanical Engineering and Computer Science, University of Bielsko-Biała, Poland.
Andrej Štefánik* - Central European Institute of Technology (CEIT), University of Žilina, Univerzitná 6, 010 08 Žilina, Slovakia. Tel.: +421 415 139 258, Fax: +421 415 139 201. E-mail: andrej.stefanik@ceit.eu.sk.
* Corresponding author
Journal of Automation, Mobile Robotics & Intelligent Systems, VOLUME 3, N° 3, 2009
IN FOCUS: THE SPOTLIGHT on new

■ Flying robo-penguin
The German company Festo has created a cybernetic penguin that can not only swim in water but also fly. The AquaPenguin was born from the Bionic Learning Network project. Although it has no feathers, it looks like a real penguin: covered with a special fabric, it has a beak, a white belly and an oval shape. The robot's frame is so flexible that it swims even more smoothly than the original. Festo's ambitious engineers went a step further than Mother Nature and also constructed an AirPenguin.
More information at http://www.festo.com/
■ Child-bodied robot can learn
Its constructors promise that CB2 will speak within the next two years. Over the last two years the robot has mastered walking. It is one of the most technically advanced Japanese robots, and it can learn from people. CB2 looks and behaves like a human child: it weighs about 33 kg and is 130 cm tall. The Child Robot with Biomimetic Body explores its environment by watching people and things, and learns like a real child. It can remember a facial expression and associate it with a person's mood. The scientists working on CB2 - engineers, psychologists and neurologists - want to mimic the mentality of a small child, who can assign its mother's facial expressions to one of a few categories: sadness or joy, for example. The aim of the research within which CB2 has been developed is to build intelligent machines with which people will be able to communicate just as they do with other, real people. The research is financed by the state Japan Science and Technology Agency. Professor Minoru Asada of Osaka University, Japan, who leads the CB2 project, explains: "Children have very limited software, but they also can learn." Currently CB2 can stand up almost independently and can walk using 51 actuators driven by compressed air. Its skin is made of grey silicone and is equipped with over 200 sensors that give it a sense of touch - thanks to them the machine knows when someone touches it. As yet the child robot cannot speak, but Professor Asada claims that in two years CB2 will speak simple sentences and will be as developed as a two-year-old human child. By 2050, the professor maintains, a robotic football team will win a match against a human team.
Source: http://www.jst.go.jp/EN
■ The new kind of traffic is coming?
In March, in the Orange area, N.Y., a flying car called the "Terrafugia Transition" passed its tests. According to one of its constructors, Carl Dietrich, the "Terrafugia Transition" means a revolution in personal transport. It is the first car with folding wings. The vehicle can fly a distance of 640 km on one tank of petrol, consuming 1 US gallon (3.8 l) of fuel per 50 km. In flight the car reaches a speed of 184 km/h. It has 4 wheels, so after landing and folding its wings it can join road traffic. According to the press, the vehicle made six flights in March. After verification, the "Terrafugia Transition" received certification from the FAA (Federal Aviation Administration). The current price is $194,000. Mass production is planned to start in 2011.
Source: http://www.terrafugia.com/

■ Japanese artificial legs
The artificial, computer-controlled legs developed by Honda are intended for elderly people, for patients requiring limb rehabilitation, and for those whose work would be eased by assisted walking - factory workers, for example. Recall that in 2002 Honda showed the world's first two-legged walking robot, ASIMO. "We used technology from ASIMO (Advanced Step in Innovative MObility) to construct the artificial legs," Masato Hirose, chief engineer in Honda's research and development department, told the AFP agency. The computer-controlled legs weigh 6.5 kg and consist of a saddle, leg-shaped frames and shoes. While the user sits on the saddle, just as on a bicycle, the movable frames bend in the rhythm of his steps. Motion is provided by two motors operated by sensors located in the shoes.
Source: http://world.honda.com/ASIMO/
■ Terminal that can recognise a person by vein system
A biometric terminal that can recognize a person by the vein system of the palm, which is unique to each individual, has been developed at the Institute of Mathematical Machines (IMM) in Warsaw. According to the scientists, the system can be used for work-time registration or as an access control device. Sampling biometric features is the most reliable and well-verified method for identifying people. Biometric systems usually use facial features, fingerprints, the image of the iris, or the geometry of a palm, and scientists are looking for new, more forgery-proof methods. According to Leon Rozbicki from IMM, the "veins and finger" biometric method is the most effective, because it is resistant to falsification and yields the largest number of characteristic points that can be compared with the original. The vein system is unchangeable and specific to each person from the fourth year of life. "To scan an image of the vein system, infrared is used. Infrared light penetrates the palm and reacts with the deoxygenated blood in the veins. Blood illuminated in this way appears dark in the image. The obtained image is processed to select the data most useful for identification and prepared for further digital analysis. The information about the vein system is processed and stored as a pattern, not an image," Mr Rozbicki explains. The biometric terminal is designed to hang on a wall and displays information messages for the user. It is estimated that in an average large company the cost of the device is recouped within 3-6 months.
Source: http://www.imm.org.pl/
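The verification step described above (compare a freshly scanned pattern against a stored one, accept if they are similar enough) can be sketched in a deliberately simplified form as a comparison of two binary masks. The representation, similarity measure and threshold below are illustrative assumptions; IMM's actual algorithm is not public.

```python
# Toy sketch of pattern-based biometric verification: the enrolled vein
# pattern and a fresh scan are compared as same-sized binary masks.
# Purely illustrative; not the IMM terminal's actual method.
import numpy as np

def vein_similarity(stored, scanned):
    """Fraction of pixels that agree between two binary masks."""
    return float(np.mean(stored == scanned))

def verify(stored, scanned, threshold=0.95):
    return vein_similarity(stored, scanned) >= threshold

rng = np.random.default_rng(1)
pattern = rng.integers(0, 2, size=(64, 64))   # enrolled pattern
noisy = pattern.copy()
noisy[:2, :] ^= 1                             # fresh scan: 2 of 64 rows corrupted

print(verify(pattern, noisy))        # True: ~96.9 % of pixels still match
print(verify(pattern, 1 - pattern))  # False: complete mismatch
```

Storing only such a derived pattern, rather than the raw image, is what lets the terminal keep a compact template that cannot be trivially reversed into a photograph of the palm.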
EVENTS SUMMER-AUTUMN 2009

August

08 – 11
ICMET 2009 – International Conference on Mechanical and Electrical Technology, Beijing, China. http://www.iccsit.org/icmet/index.htm
14 – 16
Conference NANOTECH INDIA 2009 – Kochi, Eranakulam, India. The Conference is the first of its kind in India, as it is wholly organized by the private sector for the private sector. http://www.nanotechindia.in
September

07 – 09

5th International ICST Mobile Multimedia Communications Conference, London (Kingston University), United Kingdom. http://www.mobimedia.org/
07 – 11
9th International DYMAT Conference on the Mechanical and Physical Behaviour of Materials under Dynamic Loading, Brussels, Belgium. http://www.dymat2009.org
08
Seminar on Manufacturing Methods of Composites, Institution of Mechanical Engineers, Manchester, United Kingdom. http://www.imeche.org/events/s1422
15 – 16
Workshop Self-X in Mechatronics and Other Engineering Applications, Paderborn, Germany http://wwwhni.uni-paderborn.de/self-x-in-engineering
18 – 20
WMCTA 2009 – 2nd International Workshop on M-Electronic Commerce Technology and Applications, Cairo, Egypt. http://www.iacsit.org/wmcta/index.htm
18 – 20
ICACTE 2009 – 2nd International Conference on Advanced Computer Theory and Engineering, Cairo, Egypt. http://www.icacte.org/
23 – 25
ICCIS 2009 – International Conference on Computer and Information Science, Amsterdam, Netherlands. http://www.iccs-meeting.org/iccs2009/cfp.html
October

09 – 11
IACSIT 2009 – Autumn Conference, Singapore. http://www.iacsit.org/2009ac/index.htm
28 – 30
ICCAM 2009 – International Conference on Computer and Applied Mathematics, Venice, Italy. http://www.waset.org/wcset09/venice/iccam/
28 – 30
ICCAT 2009 – International Conference on Computer and Automation Technology, Venice, Italy. http://www.waset.org/wcset09/venice/iccat/
28 – 30
ICBE 2009 – International Conference on Biomedical Engineering, Venice, Italy. http://www.waset.org/wcset09/venice/icbe/
28 – 30
ICICRA 2009 – International Conference on Intelligent Control, Robotics, and Automation, Venice, Italy. http://www.waset.org/wcset09/venice/icicra/