Global Perspectives on Artificial Intelligence (GPAI) Volume 3, 2015 doi: 10.14355/gpai.2015.03.002
www.seipub.org/gpai
A Robot Assisted Assembly System for Large and Heavy Components in Products Assembly

Zhang Lijian*1, Li Lun2, Yi Wangmin2, Yang Lin2

1 China Academy of Space Technology, No.104, Youyi Road, Beijing, China
2 Shenyang Institute of Automation, Chinese Academy of Sciences, No.114, Nanta Street, Shenyang City, China

*1 zljcast@163.com; 2 Lilun@sia.cn; 3 zlj008224@163.com; yanglin@sia.cn
Abstract

The purpose of this paper is to propose a robot-assisted assembly system for the installation of a variety of heavy components. The robot-assisted assembly system is composed of an industrial robot system, a robot path planning system, a visual positioning system and an additional robot motion platform. A stereo vision system is used to improve the accuracy of the robot positioning system. The probabilistic roadmap (PRM) method is used to plan the robot path in 3D space with obstacles. The PRM planner divides planning into two phases: the learning phase and the query phase. In the learning phase, a collision-free roadmap is built. In the query phase, the nodes of the roadmap are connected. We use the A-star algorithm to find a feasible path between the initial node and the goal node. Experimental results show that our method can efficiently generate a collision-free path in a workspace with obstacles.

Keywords

Assembly System; Computer Vision; Robot Path Planning
Introduction

In the product assembly field, such as aerospace product assembly, some technical components need to be fixed onto the product precisely. These components are characterized by heavy weight, and their installation accuracies are stringently required to guarantee product quality. For such heavy components, it is difficult for an operator to hold the component manually throughout the entire assembly process. In recent years, robot-assisted assembly systems, which combine automated positioning with precision measurement, have been developed to improve assembly quality and efficiency. With the advantage of high repeatability accuracy, the industrial robot is becoming an essential component in the product assembly field. Utilizing the high flexibility of the articulated-arm robot, the installation of a variety of large and heavy components can be assisted by a single industrial robot.

However, two problems must be solved. First, the absolute positioning accuracy of the industrial robot cannot meet the precision requirement in product assembly; the assembly accuracy of the industrial robot system is therefore improved with the aid of high-precision measurement technology, e.g. the stereo cameras in this paper. Second, in order to move the component to the designated position, the robot has to follow a collision-free path planned with the product exterior structure as the robot environment. In this paper, a robot path planning software system is developed to solve this problem.

The Principle of the Proposed System

The robot-assisted assembly system is composed of an industrial robot system, a robot path planning software system, a visual positioning system and an additional robot motion platform (FIG 1). The PC connects the robot and the binocular stereovision sensor simultaneously.
The industrial robot transports heavy components along the planned trajectory generated by the robot path planning system and re-aligns them according to the 3D installation information received from the visual positioning system. The end effector can be replaced via the fast-change flange to satisfy the needs of various components.
FIG.1 THE COMPOSITION OF ROBOT ASSISTED ASSEMBLY SYSTEM
The robot works in a specific environment containing fixed obstacles. The robot path planning system provides an off-line programmed collision-free path, including robot position and attitude, according to the CAD models. The CAD models include the heavy components and the installation environment. This paper applies the probabilistic roadmap (PRM) method to robot path planning in 3D space with obstacles. The PRM planner divides planning into two phases: the learning phase and the query phase. In the learning phase, a collision-free roadmap is built. In the query phase, the nodes of the roadmap are connected, and a feasible path is then searched for using a graph searching algorithm.

The visual positioning system provides accurate 3D location information of the mounting holes. The system is equipped with a binocular stereovision sensor. Measurement based on binocular stereovision is an important technology for locating an object's spatial position; it imitates the way human binocular vision perceives a scene. The distance from the binocular stereovision sensor to the target must first be brought within a reasonable range by moving the robot along the off-line planned path. The sensor can then compute the 3D scene information of the object by shooting two images of the same scene. In the end, the component can be mounted to the correct position by visual orientation.

The robot motion platform is designed to expand the working space of the assembly system. The robot and its controller are mounted on the motion platform. The platform is fixed to the ground to ensure the stability of the whole assembly system when the robot is working. When the assembly task in the designated working area is complete, the platform can be moved to the next one. The platform is also used to adjust the entire robot-assisted assembly system to the best location during the process.
Accuracy Improvement by Stereo Computer Vision

The basic principle of a stereo vision system is identical to human binocular vision: it uses two horizontally displaced cameras to obtain two differing views of a scene. Based on triangulation, the disparity obtained from the differing views is used to calculate the 3-D coordinates of selected points on the assembly product.

Stereo Computer Vision Model

Based on the pin-hole camera model, the projection from a 3-D point in the world coordinate system to the 2-D coordinate on the image plane can be expressed by:

$$Z_{ci} \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = M \begin{bmatrix} X_{wi} \\ Y_{wi} \\ Z_{wi} \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \end{bmatrix} \begin{bmatrix} X_{wi} \\ Y_{wi} \\ Z_{wi} \\ 1 \end{bmatrix} \quad (1)$$
The projection matrix M is composed of the interior and exterior parameters of the cameras. From equation (1), we can get the following relationships:

$$\begin{aligned} X_{wi} m_{11} + Y_{wi} m_{12} + Z_{wi} m_{13} + m_{14} - u_i X_{wi} m_{31} - u_i Y_{wi} m_{32} - u_i Z_{wi} m_{33} &= u_i m_{34} \\ X_{wi} m_{21} + Y_{wi} m_{22} + Z_{wi} m_{23} + m_{24} - v_i X_{wi} m_{31} - v_i Y_{wi} m_{32} - v_i Z_{wi} m_{33} &= v_i m_{34} \end{aligned} \quad (2)$$
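The two linear relationships in equation (2) give two equations per calibration point, so the projection matrix M can be estimated from several known world/image correspondences. The following sketch (assuming numpy, and fixing the scale by setting m34 = 1, which is one common normalization; the paper does not state which one is used) stacks the equations and solves them in a least-squares sense:

```python
import numpy as np

def estimate_projection_matrix(world_pts, image_pts):
    """Estimate the 3x4 projection matrix M of equation (1) by stacking
    the two linear equations of equation (2) for every calibration point
    and solving the over-determined system with least squares (m34 = 1)."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z]); b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z]); b.append(v)
    m, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(m, 1.0).reshape(3, 4)  # append the fixed m34 = 1
```

At least six non-coplanar points are needed for the eleven unknowns; using more points (as the calibration plate provides) over-determines the system and averages out image noise.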
If we know the coordinates of the points in the world coordinate system and the coordinates of their projections on the image plane, by utilizing the orthogonality characteristic of the transformation matrix, the coefficients of the projection matrix M can be calculated. In the stereo vision setup, two cameras are used to capture images of the calibration plate; the transformation matrices between the calibration plate and the two cameras can be calculated, and then the relative position and orientation between the two cameras can be calibrated. Besides this, the interior parameters of the two cameras can also be determined.

Robot Hand-Eye Calibration

As shown in FIG 2, the procedure of hand-eye calibration is as follows. Because the distances between the grid corners of the calibration plate are known, we can calculate the transformation matrix C between the calibration plate and the two cameras. The transformation matrix X between the end effector and the stereo cameras is the unknown parameter. The transformation matrix between the end effector and the robot base coordinate system is denoted by D. By reading the robot position data x, y, z, a, b, c from the robot controller, the RPY transformation can be calculated:

$$D = \begin{bmatrix} \cos a \cos b & \cos a \sin b \sin c - \sin a \cos c & \cos a \sin b \cos c + \sin a \sin c & x \\ \sin a \cos b & \sin a \sin b \sin c + \cos a \cos c & \sin a \sin b \cos c - \cos a \sin c & y \\ -\sin b & \cos b \sin c & \cos b \cos c & z \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (3)$$
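The rotation block of equation (3) is the ZYX (roll-pitch-yaw) composition Rz(a)·Ry(b)·Rx(c). A small sketch of building D from the controller pose, assuming the angles have already been converted to radians (KUKA controllers typically report them in degrees):

```python
import numpy as np

def pose_to_matrix(x, y, z, a, b, c):
    """Homogeneous transform D of equation (3) from the robot pose
    (x, y, z, a, b, c), with a, b, c the RPY angles in radians."""
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cc, sc = np.cos(c), np.sin(c)
    return np.array([
        [ca*cb, ca*sb*sc - sa*cc, ca*sb*cc + sa*sc, x],
        [sa*cb, sa*sb*sc + ca*cc, sa*sb*cc - ca*sc, y],
        [-sb,   cb*sc,            cb*cc,            z],
        [0.0,   0.0,              0.0,              1.0]])
```

The matrix can be cross-checked by composing the three elementary rotations explicitly, which is how the entries of equation (3) arise.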
The relative position and orientation between the calibration plate and the robot base coordinate system are unchanged, and the relative position between the robot end effector and the stereo vision system is also unchanged, i.e. matrix X is unchanged. We use $W_c$ to denote the calibration plate coordinate system and $W_b$ to denote the robot base coordinate system, and write ${}^{W_c}T_{W_b}$ for the fixed transformation between them. Based on the above descriptions, for two robot poses we get the following relationships:

$$C_1 X D_1 = {}^{W_c}T_{W_b} \quad (4)$$

$$C_2 X D_2 = {}^{W_c}T_{W_b} \quad (5)$$

$$C_2 X D_2 = C_1 X D_1 \quad (6)$$

$$C_1^{-1} C_2 X = X D_1 D_2^{-1} \quad (7)$$
The above equations can be written as:

$$C' X = X D' \quad (8)$$

where $C' = C_1^{-1} C_2$ and $D' = D_1 D_2^{-1}$.
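The equation $C'X = XD'$ is the classical AX = XB hand-eye form. One linear way to solve it, sketched below under the assumption of numpy (this is an illustrative sketch, not necessarily the authors' implementation, and it does not re-orthonormalize the rotation block of X), uses the identity vec(AXB) = (Bᵀ⊗A)vec(X) to turn the equation into a null-space problem:

```python
import numpy as np

def solve_hand_eye(C_list, D_list):
    """Linear solve of C'X = XD'. Vectorizing gives
    (I4 kron C' - D'^T kron I4) vec(X) = 0; stacking the equations from
    several pose pairs and taking the SVD null vector recovers vec(X)
    up to scale, fixed here so that X[3,3] = 1."""
    I4 = np.eye(4)
    K = np.vstack([np.kron(I4, C) - np.kron(D.T, I4)
                   for C, D in zip(C_list, D_list)])
    _, _, Vt = np.linalg.svd(K)
    X = Vt[-1].reshape(4, 4, order='F')  # column-major vec convention
    return X / X[3, 3]
```

With two or more pose pairs whose rotation axes differ, the null space is one-dimensional and X is recovered uniquely; stacking extra pose pairs is exactly the use of redundant equations to improve accuracy.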
In equation (8), only the matrix X is unknown. Redundant equations from multiple robot poses are used to improve the accuracy of the hand-eye calibration result.

3-D Reconstruction

As shown in FIG 3, assume the coordinate system $O-xyz$ of the left camera is at the origin of the world coordinate system, its image coordinate system is denoted by $O_l - X_l Y_l$, and its focal length is $f_l$. The coordinate system of the right camera is $O_r - x_r y_r z_r$, its image coordinate system is $O_r - X_r Y_r$, and its focal length is $f_r$. According to the perspective transformation model of the camera:
FIG. 2 ROBOT HAND-EYE CALIBRATION

FIG. 3 3-D RECONSTRUCTION
$$s_l \begin{bmatrix} X_l \\ Y_l \\ 1 \end{bmatrix} = \begin{bmatrix} f_l & 0 & 0 \\ 0 & f_l & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix}, \qquad s_r \begin{bmatrix} X_r \\ Y_r \\ 1 \end{bmatrix} = \begin{bmatrix} f_r & 0 & 0 \\ 0 & f_r & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_r \\ y_r \\ z_r \end{bmatrix} \quad (9)$$
The transformation between $O-xyz$ and $O_r - x_r y_r z_r$ can be expressed by a transformation matrix $M_{lr}$:

$$\begin{bmatrix} x_r \\ y_r \\ z_r \end{bmatrix} = M_{lr} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}, \qquad M_{lr} = \begin{bmatrix} r_1 & r_2 & r_3 & t_x \\ r_4 & r_5 & r_6 & t_y \\ r_7 & r_8 & r_9 & t_z \end{bmatrix} = [R \;\; T] \quad (10)$$
where $R$ is the rotation matrix between the left and right cameras and $T$ is the translation between the two origins; both can be obtained by camera calibration. For any point in the $O-xyz$ coordinate system, the relationship between its projections on the image planes of the two cameras is

$$\rho_r \begin{bmatrix} X_r \\ Y_r \\ 1 \end{bmatrix} = \begin{bmatrix} f_r r_1 & f_r r_2 & f_r r_3 & f_r t_x \\ f_r r_4 & f_r r_5 & f_r r_6 & f_r t_y \\ r_7 & r_8 & r_9 & t_z \end{bmatrix} \begin{bmatrix} z X_l / f_l \\ z Y_l / f_l \\ z \\ 1 \end{bmatrix} \quad (11)$$
So the coordinates of the point in space can be determined by:

$$\begin{aligned} x &= z X_l / f_l \\ y &= z Y_l / f_l \\ z &= \frac{f_l (f_r t_x - X_r t_z)}{X_r (r_7 X_l + r_8 Y_l + f_l r_9) - f_r (r_1 X_l + r_2 Y_l + f_l r_3)} = \frac{f_l (f_r t_y - Y_r t_z)}{Y_r (r_7 X_l + r_8 Y_l + f_l r_9) - f_r (r_4 X_l + r_5 Y_l + f_l r_6)} \end{aligned} \quad (12)$$

According to epipolar geometry, the epipolar line lies on the plane formed by the two camera centers and the feature point, and the epipoles lie on the line connecting the two camera optical centers. Based on these two constraints, the matching of points can be accomplished, and the 3-D reconstruction can be established.
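Equation (12) can be evaluated directly once the stereo calibration (R, T, and the two focal lengths) is known. A minimal sketch, assuming numpy and matched image points in the left/right image coordinate systems defined above:

```python
import numpy as np

def reconstruct_point(Xl, Yl, Xr, Yr, fl, fr, R, T):
    """Recover (x, y, z) in the left-camera frame from a matched pair of
    image points using equation (12). R (3x3) and T (3,) are the stereo
    calibration results, i.e. M_lr = [R T] of equation (10)."""
    r1, r2, r3 = R[0]
    r4, r5, r6 = R[1]
    r7, r8, r9 = R[2]
    tx, ty, tz = T
    # depth from the x-disparity form of equation (12)
    z = fl * (fr * tx - Xr * tz) / (
        Xr * (r7 * Xl + r8 * Yl + fl * r9) - fr * (r1 * Xl + r2 * Yl + fl * r3))
    return z * Xl / fl, z * Yl / fl, z
```

In practice the second (y-based) expression for z gives an independent estimate, and averaging the two, or a full least-squares triangulation, is more robust to matching noise.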
Robot Path Planning

Automatically finding a feasible path for a robot in a fixed environment with obstacles is an important problem in robotics, and motion planning is an essential part of this area. Motion planning is a difficult problem, and exact algorithms are rarely practical. However, there is a class of algorithms known as probabilistic roadmap (PRM) methods. A probabilistic roadmap planner divides planning into two phases. The first is a learning phase, during which a roadmap is constructed in the free configuration space C-free. The second is a query phase, in which user-defined query configurations are connected to the pre-computed roadmap. The nodes of the roadmap are configurations in C-free, and the edges of the roadmap correspond to free paths.

The Learning Phase

The learning phase can be considered a construction phase. Its objective is to obtain a reasonably connected graph consisting of a sufficient number of collision-free configurations and the local paths connecting them. Initially the graph G = (V, E) is empty. Then, repeatedly, a configuration is sampled from the C-space. If the configuration is collision-free, it is added to V. The process is repeated until enough collision-free configurations have been sampled. For every new node q, a number of closest neighbor nodes from V are selected, and the local planner tries to connect the new node to them. If the planner succeeds in finding a feasible path between q and a selected node q', the edge (q, q') is added to E. Several components of this algorithm are still unspecified: how random configurations are created, how the closest neighbors are computed, how the distance function D is chosen, and how local paths are generated.

The Query Phase

The query phase needs to find a feasible path between arbitrary input configurations qinit and qgoal using the roadmap constructed in the learning phase.

Assume for the moment that C-free is connected and that the roadmap consists of a single connected component. The main question is how to connect qinit and qgoal to the roadmap. We try to use the nodes in set V to connect qinit and qgoal by local paths Pinit and Pgoal. A feasible path from qinit to qgoal is eventually constructed by concatenating Pinit, the continuous path corresponding to the roadmap path P, and Pgoal. To connect qinit and qgoal to set V, our method considers the nodes in order of increasing distance (according to D) and tries to connect qinit and qgoal to each node with the local planner until one connection succeeds. Then a graph searching algorithm is used to find the roadmap path P. It should be noted that the learning phase needs to be executed only once, since the same roadmap can be used to solve many different queries. Our work focuses on finding a feasible collision-free path for an industrial robot in a reasonably cluttered environment. We take the KUKA KR30-3 robot as the 6-DOF robot model.
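The two phases can be sketched compactly on a 2-D point robot among circular obstacles (a stand-in for the 6-DOF arm; the obstacle model, sampling range, and parameter values here are illustrative assumptions, not the paper's settings). The local planner is a straight-line segment checked at a fixed step, and the query uses A-star with a Euclidean heuristic, as in the abstract:

```python
import math, heapq, random

OBSTACLES = [((5.0, 5.0), 2.0)]  # (center, radius) -- assumed workspace

def collision_free(p):
    return all(math.dist(p, c) > r for c, r in OBSTACLES)

def segment_free(p, q, step=0.05):
    # straight-line local planner: sample the segment at a fixed step
    n = max(1, int(math.dist(p, q) / step))
    return all(collision_free((p[0] + (q[0]-p[0])*t/n, p[1] + (q[1]-p[1])*t/n))
               for t in range(n + 1))

def build_roadmap(n_samples=500, k=15, seed=1):
    """Learning phase: sample C-free, connect each node to its k nearest."""
    rng = random.Random(seed)
    V = []
    while len(V) < n_samples:
        p = (rng.uniform(0, 10), rng.uniform(0, 10))
        if collision_free(p):
            V.append(p)
    E = {i: [] for i in range(len(V))}
    for i, q in enumerate(V):
        for j in sorted(range(len(V)), key=lambda j: math.dist(q, V[j]))[1:k+1]:
            if segment_free(q, V[j]):
                E[i].append(j); E[j].append(i)
    return V, E

def query(V, E, qinit, qgoal):
    """Query phase: attach qinit/qgoal to the roadmap, then run A-star."""
    V = V + [qinit, qgoal]; s, g = len(V) - 2, len(V) - 1
    E = {i: list(n) for i, n in E.items()}; E[s] = []; E[g] = []
    for node in (s, g):  # nearest reachable roadmap node first
        for j in sorted(range(len(V) - 2), key=lambda j: math.dist(V[node], V[j])):
            if segment_free(V[node], V[j]):
                E[node].append(j); E[j].append(node); break
    open_set = [(math.dist(V[s], V[g]), 0.0, s, [s])]
    best = {}
    while open_set:
        _, cost, u, path = heapq.heappop(open_set)
        if u == g:
            return [V[i] for i in path]
        if best.get(u, float('inf')) <= cost:
            continue
        best[u] = cost
        for v in E[u]:
            c2 = cost + math.dist(V[u], V[v])
            heapq.heappush(open_set, (c2 + math.dist(V[v], V[g]), c2, v, path + [v]))
    return None
```

The roadmap is built once and reused across queries, which is the main practical advantage of PRM noted above.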
FIG. 4 PROBABILISTIC ROADMAP

FIG. 5 FINDING A FEASIBLE PATH
The approach is applied in the discretized C-space of the arm. We convert the workspace from Cartesian coordinates to configuration space: the robot workspace is mapped into joint space. Each revolute joint $J_i (i = 1, \ldots, 6)$ has defined internal limits, denoted by $low_i$ and $up_i$, with $low_i < up_i$. Since the robot base is fixed, we represent the C-space of such a 6-link articulated robot by $[low_1, up_1] \times \ldots \times [low_6, up_6]$. Suppose that $(k, m) \in C \times C$, and let $J_i(k), i = 1, \ldots, 6$, denote the position of joint $J_i$ in the workspace when the robot is at configuration k. The distance between configuration m and configuration k can be defined by the function D:

$$D(m, k) = \sum_{i=1}^{6} \| J_i(m) - J_i(k) \|^2 \quad (13)$$

where $\| J_i(m) - J_i(k) \|$ is the Euclidean distance between $J_i(m)$ and $J_i(k)$.

In the discretized configuration space, we need to set a reasonable resolution for the six joints. We have adopted a heuristic method instead of a uniform resolution along each configuration coordinate. For the i-th coordinate $J_i$ of the configuration space:

$$N_i = \left\lfloor \frac{J_i^{max} - J_i^{min}}{\Delta\theta_i} \right\rfloor \quad (14)$$

$$\Delta\theta_i = 2\arcsin\left( \frac{MaxMove}{2 l_i} \right) \quad (15)$$

where $J_i^{max}$ and $J_i^{min}$ are the limits of the joint motion, $l_i$ is the distance from the center of joint i to the farthest point the end can reach, and MaxMove is a pre-set distance the robot moves along the coordinate in one step. Using this method to discretize the resolution of the joints, joint i of the robot moves $\Delta\theta_i$ at each movement step, where

$$(\Delta\theta_1, \Delta\theta_2, \Delta\theta_3, \Delta\theta_4, \Delta\theta_5, \Delta\theta_6) = (1.56, 1.56, 2.72, 5.33, 5.33, 8.56) \quad (16)$$
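Equations (14) and (15) translate directly into code. The sketch below assumes the "arc" in equation (15) is arcsin (the chord of a circle of radius $l_i$ subtending angle $\Delta\theta_i$ has length $2 l_i \sin(\Delta\theta_i / 2)$, so this choice makes MaxMove the exact chord length); the joint limits and MaxMove value used here are hypothetical, not the KR30-3 values:

```python
import math

def joint_resolution(max_move, l_i):
    """Delta-theta_i from equation (15): the joint step whose chord at
    radius l_i equals the pre-set Cartesian step MaxMove (assuming
    'arc' denotes arcsin). Result is in radians."""
    return 2.0 * math.asin(max_move / (2.0 * l_i))

def steps_per_joint(j_max, j_min, dtheta):
    """N_i from equation (14): number of discrete steps within the
    joint limits, truncated to an integer as in the paper."""
    return int((j_max - j_min) / dtheta)
```

For small angles $2\arcsin(MaxMove / 2 l_i) \approx MaxMove / l_i$, so joints with a longer reach $l_i$ get a finer angular resolution, which is the heuristic's intent.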
After setting the resolution of the six joints, we try to connect each node to the H closest nodes in the configuration space. A larger H increases the construction cost, while too small an H reduces the success rate of finding a collision-free path. In our experiments, H = 20 is sufficient.
FIG 6 ROBOT COLLISION-FREE PATH PLAN
Collision checking for 3D manipulators can be implemented using the discretized configuration space for each link of the robot. Each joint of the robot is free to translate and rotate within its limits. When testing a configuration for collision, we take its six-dimensional configuration vector, compute the pose of each link, and check each link against the obstacles.
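A per-link check can be sketched by modeling each link as a line segment and each obstacle as a sphere (an illustrative simplification; the paper checks links against the CAD obstacle models, which would use mesh-based distance queries instead):

```python
import math

def segment_sphere_collision(p, q, center, radius):
    """One robot link, modeled as the segment p-q, against a spherical
    obstacle: collides if the closest point of the segment to the
    sphere center lies within the radius."""
    px, py, pz = p
    qx, qy, qz = q
    d = (qx - px, qy - py, qz - pz)            # segment direction
    f = (center[0] - px, center[1] - py, center[2] - pz)
    dd = d[0]**2 + d[1]**2 + d[2]**2
    # parameter of the closest point, clamped to the segment
    t = 0.0 if dd == 0 else max(0.0, min(1.0, (f[0]*d[0] + f[1]*d[1] + f[2]*d[2]) / dd))
    closest = (px + t*d[0], py + t*d[1], pz + t*d[2])
    return math.dist(closest, center) <= radius

def config_collides(link_segments, obstacles):
    """A configuration collides if any link segment hits any obstacle."""
    return any(segment_sphere_collision(p, q, c, r)
               for p, q in link_segments for c, r in obstacles)
```

The link segments themselves come from forward kinematics evaluated at the six joint angles of the configuration being tested.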
We start finding a feasible collision-free path after the collision checking (as shown in FIG 6). The initial node qinit and the goal node qgoal are set first, and then we employ the graph searching algorithm to connect the needed nodes; the connected nodes and the local paths connecting adjacent nodes constitute the feasible path from the initial node to the goal node.

Conclusions

In this paper, we propose a method for robot-assisted assembly. We employ the probabilistic roadmap (PRM) method to solve the robot motion planning problem in a fixed environment with obstacles. A binocular stereovision sensor is designed to detect the installation location information. With the visual positioning system, the assembly accuracy of the heavy component is greatly improved, and the robot can automatically and efficiently transport the heavy component to the designated location.