www.as-se.org/ccse | Communications in Control Science and Engineering (CCSE) Volume 3, 2015

Distributed Coordination Control of Multi-Robot with Electronic Tags

Binbin Cen 1,a, Xianzhong Zhou *2,a,b

a Department of Control and Systems Engineering, SME, Nanjing University
b Research Center for Novel Technology of Intelligent Equipments at Nanjing University, Nanjing, China

1 477636880@qq.com; *2 zhouxz@nju.edu.cn
Abstract

Distributed coordination control has been an active research field in the multi-robot community. In this paper, we present an approach for the coordination and tracking of a multi-robot system. The approach relies on inter-robot communication and an RFID (Radio Frequency Identification)-based location method. The RFID module provides a relatively accurate location, which is shared among the robots. Our goal is to design a control algorithm that lets each robot keep a desired formation shape while simultaneously tracking a single target. The approach balances the formation shape against the tracking error, and the experimental results demonstrate the success of the proposed approach with respect to the performance indicators.

Keywords

Distributed Coordination; Formation; Tracking; RFID; Multi-robot
Introduction

Coordination is one of the most interesting fields in Artificial Intelligence and Robotics. In some respects, a single robot has difficulty accomplishing tasks; robots working together can not only complete complex tasks but also improve efficiency. The formation and tracking of multiple mobile robots have been studied extensively in recent decades. In [1, 2, 3], potential function approaches were proposed for formation control; the basis of these approaches is to create an energy function of the distance between any two robots. In [4], behavior-based formation control was used for multi-robot teams. In [5, 6], decentralized formation control with collision and obstacle avoidance for multiple robots was presented. In [7, 8], communication and localization were considered in order to build an optimal environment for coordination. In [9], a passive proximity binary sensor-based multiple-target tracking system was investigated, which achieves self-organized tracking without the intervention of human operators. In [10], a dynamic distributed algorithm was proposed for tracking objects that move fast in a sensor network. However, these studies either did not consider real robot control or imposed too many constraints.

In this paper, we propose a distributed control method for the coordination of multiple mobile robots. Unlike a centralized control method, which integrates all robots' information and decides every robot's next action, in our approach each robot decides its own behavior according to its knowledge of the environment and its neighbors' status. The main contribution of this paper is that our control algorithm makes it possible to coordinate behavior while tracking a single target; the group trajectory changes dynamically, because the target's behavior and goal are not known to the group in advance.

The paper is organized as follows. In the next section, we give the system architecture and the coordination control algorithm design. Then we present the experiment and simulation results. Finally, we draw some conclusions.

System Architecture

The system architecture for the multi-robot team is shown in Fig. 1. The modules of each robot are organized into three layers: the sensing and actuator layer, the planning and decision-making layer, and the communication layer. Every robot perceives the environment and exchanges data with the others at the same time, in order to integrate information that helps it make an optimal decision. Each robot regards the other robots' status and the environment information as its knowledge. Except for emergency events on which the robot acts immediately, such as avoiding a collision, the robot decides its behavior based on its knowledge and its task. The task is issued as a command from the upper computer over a Zigbee network, which provides a stable and low-power transmission environment. A distributed coordination method is implemented on every robot so that each robot acts according to the team goal, as we describe in the next section.
FIG. 1 SYSTEM ARCHITECTURE (blocks: Sensor, Actuator, Information integration, Behavior planning, Decision, Knowledge, Task, Communication, Other robots, Environment)
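As an illustration of the data exchanged through the communication layer, the following is a minimal sketch of a per-robot status message. It is our own illustration; the paper does not specify a message format, so the field names are assumptions based on the information the robots share (position, heading, and the self-assessment introduced later).

from dataclasses import dataclass

@dataclass
class RobotStatus:
    """Status broadcast by each robot over the Zigbee network (illustrative only)."""
    robot_id: int        # formation slot number assigned by the upper computer
    x: float             # estimated position, cm
    y: float
    theta: float         # heading, rad
    assessment: float    # self-assessment of the current step (see the Coordination section)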
Distributed Coordination Design

Our goal is to design a distributed control algorithm with which the robots coordinate their behavior so as to maintain a desired formation while tracking a target. We consider the motion model of the mobile robot and study the relative positions of the robots for formation control.

Problem Formulation

We describe the problem as cooperation and tracking; the expectation is:

$$x_{k,t} = f(x_{target,t}), \quad y_{k,t} = f(y_{target,t}), \quad k = 1, \ldots, n \qquad (1)$$

where $x_{k,t}, y_{k,t}$ is the k-th robot's desired position at time t, $x_{target,t}, y_{target,t}$ is the target's position at time t, and the function $f(\cdot)$ generates the desired position. Our robot, shown in Fig. 2, has two drive wheels and one follower (caster) wheel.

We define two basic behaviors: rectilinear motion and rotation. The robot's pose at time t is $P_t = (x_t, y_t, \theta_t)^T$, v is the average velocity, and $\varepsilon(t)$ is the external disturbance. The rectilinear motion over one cycle T is:

$$P_{t+T} = \begin{pmatrix} x_t + vT\cos\theta_t \\ y_t + vT\sin\theta_t \\ \theta_t \end{pmatrix} + \varepsilon(t) \qquad (2)$$
FIG. 2 MOBILE ROBOT
The rotation is:

$$P_{t+T} = \begin{pmatrix} x_t \\ y_t \\ \theta_t + \alpha \end{pmatrix} + \varepsilon(t) \qquad (3)$$

where $\alpha$ is the change of heading angle; rotating for the whole period gives

$$\alpha = \frac{2vT}{L} \qquad (4)$$

with L the distance between the two drive wheels. So within a sampling period T, the pose changes as follows, where the rotation by $\alpha$ occupies the first $t_1 = L\alpha/(2v)$ of the period and straight motion the remainder:

$$P_t = \begin{pmatrix} x_t \\ y_t \\ \theta_t \end{pmatrix} \qquad (5)$$

$$P_{t+t_1} = \begin{pmatrix} x_t \\ y_t \\ \theta_t + \alpha \end{pmatrix} + \varepsilon(t), \quad t_1 < T \qquad (6)$$

$$P_{t+T} = \begin{pmatrix} x_t + \left(vT - \frac{L\alpha}{2}\right)\cos\theta_{t+t_1} \\ y_t + \left(vT - \frac{L\alpha}{2}\right)\sin\theta_{t+t_1} \\ \theta_{t+t_1} \end{pmatrix} + \varepsilon(t) + \varepsilon(t+t_1) \qquad (7)$$
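For illustration, here is a minimal Python sketch of this rotate-then-translate update, ignoring the disturbance $\varepsilon(t)$. The function name and the numbers in the example call are our own, not from the paper.

import math

def pose_update(pose, v, T, L, alpha):
    """One sampling period T: rotate by alpha first, then move straight.

    Implements (6)-(7) without the disturbance eps(t).
    pose  : (x, y, theta) at time t
    v     : average wheel speed
    T     : sampling period; |alpha| should not exceed 2*v*T/L, eq. (4)
    L     : distance between the two drive wheels
    alpha : commanded change of heading during this period
    """
    x, y, theta = pose
    theta1 = theta + alpha                    # pose after the rotation phase, eq. (6)
    d = v * T - L * abs(alpha) / 2.0          # distance left for straight motion, eq. (7)
    return (x + d * math.cos(theta1),
            y + d * math.sin(theta1),
            theta1)

# example: rotate 0.3 rad and then translate, with v = 0.2 m/s, T = 1 s, L = 0.25 m
print(pose_update((0.0, 0.0, 0.0), 0.2, 1.0, 0.25, 0.3))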
As time goes on, the robot's state is seriously affected by accumulated error, yet the robot itself cannot detect or estimate its state without accounting for this error. We therefore use RFID (Radio Frequency Identification) to rectify the position. The tag ID numbers are stored in each robot in advance. When the robot passes over one of the electronic tags dispersed on the ground, it recognizes the ID and obtains a relatively accurate position. Fig. 3 shows the card reader and the electronic tags.
FIG. 3 CARD READER AND ELECTRONIC TAGS
The pose is rectified as:

$$P_t = \begin{pmatrix} x_{id} + \rho\cos\varphi \\ y_{id} + \rho\sin\varphi \\ \theta_t \end{pmatrix}, \quad \rho \in [0, r], \; \varphi \in [0, 2\pi] \qquad (8)$$

where $(x_{id}, y_{id})$ is the centre of the electronic tag, r is the radius of the induction area, and $\rho$ is the location error.
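In practice this amounts to snapping the estimated position to the centre of the detected tag, with the residual error bounded by r. A small sketch of our own follows; the tag map TAG_POSITIONS and the function name are hypothetical.

# hypothetical tag map: tag ID -> (x, y) of the tag centre, stored in advance (cm)
TAG_POSITIONS = {17: (120.0, 40.0), 18: (160.0, 40.0)}

def rectify_pose(pose, tag_id, tag_map=TAG_POSITIONS):
    """Snap the estimated (x, y) to the centre of the detected tag, keeping theta.

    Per eq. (8) the true position lies within the tag's induction radius r,
    so the residual error after rectification is bounded by r.
    """
    x_id, y_id = tag_map[tag_id]
    _, _, theta = pose
    return (x_id, y_id, theta)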
The location accuracy is thus ensured, so we can study the relative positions of the robots for formation and coordination.

Formation Control

The robots have a number of predefined formations to adopt. Four typical formations are used: line, column, wedge, and diamond. Fig. 4 shows these formations while tracking.
FIG. 4 FOUR FORMATIONS
Each robot acquires the target robot's position over the Zigbee-based wireless network, then calculates its own desired position according to the formation information:

$$x_{desired} = x_{target} + d\cos(\theta_{target} + \beta), \quad y_{desired} = y_{target} + d\sin(\theta_{target} + \beta) \qquad (9)$$
where d and $\beta$ are the distance and angle between the robot and the target. The numbers 2, 3, 4, 5 in a formation are assigned by the upper computer when all the robots initialize their poses, and once assigned they never change dynamically. The position numbers are arranged as in Fig. 4 in order to reduce the displacement of the group when changing formation. From the current position $(x_t, y_t)$ and $(x_{desired}, y_{desired})$ we calculate the orientation $\theta_{desired}$, so that $\alpha = \theta_{desired} - \theta_t$. Combining (9) with (6) and (7), and ignoring the disturbance,

$$\begin{cases} x_t + \left(vT - \frac{L\alpha}{2}\right)\cos(\theta_t + \alpha) = x_{desired} \\ y_t + \left(vT - \frac{L\alpha}{2}\right)\sin(\theta_t + \alpha) = y_{desired} \end{cases} \qquad (10)$$
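A brief sketch of this calculation follows. It is our own illustration: the function names are assumptions, and the maximum-velocity cap mentioned below is omitted.

import math

def desired_slot(target_pose, d, beta):
    """Desired position of a robot in the formation, eq. (9).

    target_pose : (x, y, theta) of the target
    d, beta     : distance and angle of this robot's slot relative to the target
    """
    xt, yt, tht = target_pose
    return (xt + d * math.cos(tht + beta),
            yt + d * math.sin(tht + beta))

def rotate_and_speed(pose, desired, T, L):
    """Heading change alpha and speed v that reach `desired` in one period.

    Follows the combination of (9) with (6)-(7): rotate towards the desired
    point first, then translate the remaining distance; solving (10) for v
    gives v = (dist + L*|alpha|/2) / T.
    """
    x, y, theta = pose
    xd, yd = desired
    theta_d = math.atan2(yd - y, xd - x)
    alpha = math.atan2(math.sin(theta_d - theta), math.cos(theta_d - theta))  # wrap to [-pi, pi]
    dist = math.hypot(xd - x, yd - y)
    v = (dist + L * abs(alpha) / 2.0) / T
    return alpha, v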
From (10) we can easily obtain the robot's velocity v. The procedure the robot follows is to rotate first to change its orientation, and then move straight to the desired position.

Coordination

The coordination method is based on a common protocol applied on every robot and used to coordinate behaviors so as to keep the desired formation and track the target. The protocol contains inter-robot communication, a self-assessment process, and rules for collision and obstacle avoidance, so that each robot can execute its task without guidance. The approach can be described as self-assessment based on an acquaintance net. Each robot evaluates its performance in the current step, namely its proximity to the expected position, and transmits the assessed value and its (x, y) to its acquaintance net. The acquaintance net consists of acquainted neighbors that can communicate with each other, but the relationships change when the formation changes. A robot compares its assessment with its neighbors', and treats this comparison as one of the factors influencing its coordinated behavior: it uses the data to decide whether it should move faster or slower to reach the desired formation, of course without exceeding the maximum velocity. The velocity variation caused by the acquainted neighbors is
$$\Delta v_j = \sum_{i=1}^{n} f(e_i, e_j) \qquad (11)$$
where e is the assessment, n is the number of neighbors, and the function f compares $e_i$ with $e_j$ and calculates the influence that robot i exerts on robot j.
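The paper leaves f unspecified. As a sketch of one possible choice (our own assumption), take the assessment e to be the distance to the desired position, so that a smaller value is better, and let f be a simple proportional comparison with an illustrative gain and bound.

def velocity_adjustment(e_self, neighbour_assessments, gain=0.1, dv_max=0.05):
    """Velocity change of a robot from its acquaintance net, eq. (11).

    e_self                : this robot's assessment (here: distance to its desired position)
    neighbour_assessments : assessments e_i received from acquainted neighbours
    The comparison f is not specified in the paper; here f(e_i, e_j) = gain*(e_j - e_i),
    so a robot that lags behind its neighbours (larger error) speeds up,
    and one that is ahead slows down.  gain and dv_max are placeholders.
    """
    dv = sum(gain * (e_self - e_i) for e_i in neighbour_assessments)
    return max(-dv_max, min(dv_max, dv))   # keep the change bounded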
Now let us discuss when coordination should occur. If the robot received data and coordinated in every single cycle T, coordination would be too frequent and would not account for the actual communication delay. Alternatively, since the robot stores its total translational distance, it could coordinate every time this distance exceeds 10 centimeters; the error is a problem here, however, because every step length is uncertain and the accumulated distance is inexact. Since we use RFID to calibrate the position, a third option is for the robot to coordinate whenever it recognizes an electronic tag and obtains a relatively accurate position. This third method shows its superiority: it is distributed, and it takes advantage of the relatively accurate position, which helps achieve good coordination performance. To ensure that the robots avoid colliding with each other, we adopt rules such as a potential field method implemented on every robot. Fig. 5 illustrates the distance between two robots.
FIG. 5 ROBOT DISTANCE
We define the safe distance between any two robots as $d_{safe}$ and the repulsive force as $\vec{F}_{i,j}$:

$$\left|\vec{F}_{i,j}\right| = \begin{cases} 0, & distance \ge d_{safe} \\ \dfrac{k}{distance}, & distance < d_{safe} \end{cases} \qquad (12)$$

where $k \in \mathbb{R}$ is a gain constant. The total force on robot i is the vector sum of the $\vec{F}_{i,j}$:

$$\vec{F}_i = \sum_{j=1, j\ne i}^{n} \vec{F}_{i,j} \qquad (13)$$

where n is the number of robots. The orientation of $\vec{F}_i$ produces a variation of $\theta_t$, and according to (10) the velocity v changes as well.
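A sketch of this repulsion rule in Python follows (our own illustration; the values of d_safe and k are placeholders, not taken from the paper).

import math

def repulsive_force(pos_i, pos_j, d_safe=30.0, k=50.0):
    """Repulsive force exerted on robot i by robot j, eq. (12).

    Positions in cm; returns an (Fx, Fy) vector pointing away from robot j.
    d_safe and k are illustrative values.
    """
    dx, dy = pos_i[0] - pos_j[0], pos_i[1] - pos_j[1]
    dist = math.hypot(dx, dy)
    if dist >= d_safe or dist == 0.0:
        return (0.0, 0.0)
    mag = k / dist                                 # magnitude, eq. (12)
    return (mag * dx / dist, mag * dy / dist)

def total_force(pos_i, other_positions):
    """Vector sum of the repulsive forces from all other robots, eq. (13)."""
    fx = sum(repulsive_force(pos_i, p)[0] for p in other_positions)
    fy = sum(repulsive_force(pos_i, p)[1] for p in other_positions)
    return (fx, fy)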
The positions of the obstacles are known in advance, and obstacle avoidance is treated simply, as illustrated in Fig. 6.
FIG. 6 OBSTACLE AVOIDANCE
When the robot moves into the area bounded by the dashed line, it is in danger of crashing into the obstacle. The safe distance is again defined as $d_{safe}$. If the robot's orientation is blocked by an obstacle, it is adjusted as follows: first, an orientation along the edge of the obstacle is chosen; then, if the distance to the obstacle falls below $d_{safe}/2$, the orientation $\theta_t$ is slanted outward by a small angle. Once the robot has chosen which edge to follow, it does not switch to the other side, so as to avoid oscillation. A brief sketch of this rule follows.
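In the sketch below, the edge direction, the 10-degree outward slant, and the d_safe value are our own assumptions; the paper does not give numeric values.

import math

def avoid_obstacle(theta, edge_direction, dist_to_obstacle, d_safe=30.0,
                   slant=math.radians(10)):
    """Adjust the heading near a known obstacle (illustrative interpretation).

    theta            : current heading (rad)
    edge_direction   : heading that follows the chosen edge of the obstacle (rad);
                       the caller keeps the same edge afterwards to avoid oscillation
    dist_to_obstacle : current distance to the obstacle (cm)
    The sign of `slant` should point away from the obstacle; positive here for illustration.
    """
    if dist_to_obstacle >= d_safe:
        return theta                      # outside the danger area: keep the heading
    if dist_to_obstacle < d_safe / 2.0:
        return edge_direction + slant     # too close: slant outward by a small angle
    return edge_direction                 # otherwise follow the edge of the obstacle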
Overall, the distributed control algorithm running on each robot is:

1) Initialize the pose and acquire the task.

2) Calculate the desired position and decide $\theta_t$ and v.
3) If an RFID signal arrives, correct the pose estimated from the encoders, and use the knowledge received through communication in every cycle T to coordinate v.
4) If other robots are too near, adjust $\theta_t$, and make v a little faster.

5) If obstacles are around, adjust $\theta_t$, and change v as in 4).

6) If the task is completed, stop; otherwise go back to 2).

A minimal sketch of this loop is given below.
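The sketch reuses the functions defined earlier; the helper methods on `robot` and `target` (pose, read_rfid, nearby_robots, nearby_obstacle, neighbour_assessments, execute, task_done) are hypothetical names, not an API from the paper, and the steering heuristics are crude interpretations.

import math

def control_loop(robot, target, formation_slot, T, L):
    """Distributed control loop for one robot (illustrative sketch only).

    `robot.nearby_robots()` is assumed to return the neighbours' (x, y) positions,
    and `robot.nearby_obstacle()` an (edge_direction, distance) pair or None.
    """
    pose = robot.pose                                     # 1) initial pose; task already assigned
    d, beta = formation_slot                              # slot parameters from the upper computer
    while not robot.task_done():                          # 6) stop when the task is completed
        desired = desired_slot(target.pose, d, beta)      # 2) desired position, eq. (9)
        alpha, v = rotate_and_speed(pose, desired, T, L)  # 2) heading change and speed, eq. (10)

        tag_id = robot.read_rfid()
        if tag_id is not None:                            # 3) RFID correction and coordination
            pose = rectify_pose(pose, tag_id)
            e_self = math.hypot(desired[0] - pose[0], desired[1] - pose[1])
            v += velocity_adjustment(e_self, robot.neighbour_assessments())

        neighbours = robot.nearby_robots()
        if neighbours:                                    # 4) other robots too near
            fx, fy = total_force((pose[0], pose[1]), neighbours)
            alpha = math.atan2(fy, fx) - pose[2]          # crude: steer along the repulsive force
            v *= 1.1                                      # "a little faster"
        elif robot.nearby_obstacle():                     # 5) obstacle nearby
            edge_dir, dist = robot.nearby_obstacle()
            alpha = avoid_obstacle(pose[2], edge_dir, dist) - pose[2]
            v *= 1.1

        robot.execute(alpha, v, T)                        # send the motion command
        pose = pose_update(pose, v, T, L, alpha)          # predicted pose for the next cycle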
Simulation and Experiment

In this section we describe the process and tools used to verify the approach proposed above and show the results of our simulation and experiment. The setup contains one real robot with a coordinator, and four virtual robots created on the upper computer, which uses Microsoft Visual Studio 2010 as the development environment. The upper computer uses serial communication to exchange information with the coordinator, which has created the Zigbee wireless network. The real robot, regarded as the target, joins the network and exchanges information over the wireless network. The robot is equipped with two wheel encoders, a card reader for RFID, and infrared sensors. When the command center initializes all the robots' positions and assigns tasks, each robot begins to execute its own task.

In our simulation, we set up a narrow road and one obstacle, both known to all robots. Fig. 7 shows the interface designed for task announcement and for displaying the robots' status. We give the initial location (120, 288) (units in centimeters, the same below) and the final location (180, 5) of the target robot, which finds this position gradually. The trajectory is unknown to the group, but the target's position can be observed by the other robots. The other robots' initial locations are (80, 240), (200, 280), (50, 270), and (260, 250). Fig. 7 also presents the real-time results on the map and the status data in the bottom right corner.

At first, the formation is a line. When any robot finds itself close to the narrow road, it informs the others to change to the column formation. When the last one has almost passed the road, the distance to the target has become so large that it informs the others to change to the wedge formation in order to keep up faster. We are satisfied with the results: the robots perform well while maintaining the formation and simultaneously tracking the target. The other picture shows our experiment environment.
FIG. 7 THE INTERFACE WITH RESULTS AND EXPERIMENT ENVIRONMENT
We use Matlab R2014a to analyze the data obtained from the experiment, as shown in Fig. 8.

FIG. 8 THE ACTUAL AND DESIRED POSITION CONTRAST (four x-y panels: Robot2, Robot3, Robot4, Robot5; each shows the actual and desired trajectories)
The red line is the desired trajectory, and the green line is the actual trajectory (not the robot's own estimate). The trajectory in every panel consists of three segments corresponding to the three formations. To approximate the disturbance of a real robot, we set the random error at every step to ε ∈ [0, 0.5], which amounts to at most a 4% error, yet the robots still form the desired formation quickly and stay close to their desired positions. We define the distance between the actual and desired positions as the performance indicator. Measured from the moment the formation first forms, and taking a distance of less than 8 cm as acceptable, the four robots achieve 96.3%, 88.9%, 72.7%, and 68.2% in sequence; with a 10 cm threshold, the figures become 100%, 92.6%, 83.3%, and 81.8%. The results also show that robots 2 and 3 perform better than robots 4 and 5, because they are closer to the target and relatively insensitive to sudden changes in the target's orientation.

Conclusions

In this paper, we have described an approach to the distributed coordination control and tracking of a group of mobile robots. We give the system architecture and the formation shapes, and mainly describe the coordination method, which includes communication and rules of action. The results and analysis show that the robots perform well in our simulation and experiment. In future work, more real robots will join the formation and tracking tasks in place of the virtual robots, and methods for improving the robustness of the group performance deserve further investigation.

REFERENCES
[1] Olfati-Saber, R., Murray, R.M. Distributed cooperative control of multiple vehicle formations using structural potential functions. In: The 15th IFAC World Congress, Barcelona, 21-26 July 2002.
[2] Zou, Y., Pagilla, P.R., Misawa, E.A. Formation of a group of vehicles with full information using constraint forces. ASME J. Dyn. Syst. Meas. Control 129, 654-661, 2007.
[3] Leonard, N.E., Fiorelli, E. Virtual leaders, artificial potentials and coordinated control of groups. In: Proceedings of the 40th IEEE Conference on Decision and Control, 2968-2973, 2001.
[4] Balch, T., Arkin, R.C. Behavior-based formation control for multi-robot teams. IEEE Transactions on Robotics and Automation 14(6), 1998.
[5] Dimarogonas, D.V., Loizou, S.G., Kyriakopoulos, K.J., Zavlanos, M.M. A feedback stabilization and collision avoidance scheme for multiple independent non-point agents. Automatica 42(2), 229-243, 2006.
[6] Liang, Y., Lee, H.H. Decentralized formation control and obstacle avoidance for multiple robots with nonholonomic constraints. In: Proceedings of the American Control Conference, Minneapolis, 5596-5601, 14-16 June 2006.
[7] Jung, D., Zelinsky, A. Grounded symbolic communication between heterogeneous cooperating robots. Autonomous Robots 8(3), 2000.
[8] Gutmann, J.-S., Weigel, T., Nebel, B. Fast, accurate, and robust self-localization in the RoboCup environment. In: RoboCup-99: Robot Soccer World Cup III, 304-317, 1999.
[9] Jiang, F., Hu, J. Cooperative multi-target tracking in passive sensor-based networks. In: 2013 IEEE Wireless Communications and Networking Conference (WCNC), 4340-4345, 2013.
[10] Alaybeyoglu, A., Erciyes, K., Kantarci, A., Dagdeviren, O. Tracking fast moving targets in wireless sensor networks. IETE Technical Review 27(1), 46-53, 2010.