Journal of Automation, Mobile Robotics and Intelligent Systems, vol. 13, no. 2 (2019)


VOLUME 13, N° 2, 2019
www.jamris.org
pISSN 1897-8649 (PRINT) / eISSN 2080-2145 (ONLINE)
Indexed in SCOPUS

Publisher: ŁUKASIEWICZ Research Network – Industrial Research Institute for Automation and Measurements PIAP


Journal of Automation, Mobile Robotics and Intelligent Systems
A peer-reviewed quarterly focusing on new achievements in the following fields: fundamentals of automation and robotics • applied automatics • mobile robots control • distributed systems • navigation • mechatronic systems in robotics • sensors and actuators • data transmission • biomechatronics • mobile computing

Editor-in-Chief
Janusz Kacprzyk (Polish Academy of Sciences, PIAP, Poland)

Advisory Board
Dimitar Filev (Research & Advanced Engineering, Ford Motor Company, USA)
Kaoru Hirota (Japan Society for the Promotion of Science, Beijing Office)
Witold Pedrycz (ECERF, University of Alberta, Canada)

Co-Editors
Roman Szewczyk (PIAP, Warsaw University of Technology)
Oscar Castillo (Tijuana Institute of Technology, Mexico)
Marek Zaremba (University of Quebec, Canada)

Executive Editor
Katarzyna Rzeplińska-Rykała (PIAP, Poland), e-mail: office@jamris.org

Associate Editor
Piotr Skrzypczyński (Poznań University of Technology, Poland)

Statistical Editor
Małgorzata Kaliczyńska (PIAP, Poland)

Typesetting
PanDawer, www.pandawer.pl

Webmaster
Piotr Ryszawa (PIAP, Poland)

Editorial Office
ŁUKASIEWICZ Research Network – Industrial Research Institute for Automation and Measurements PIAP
Al. Jerozolimskie 202, 02-486 Warsaw, Poland
tel. +48-22-8740109, e-mail: office@jamris.org

The reference version of the journal is the e-version. Printed in 100 copies.
Articles are reviewed, excluding advertisements and descriptions of products.
If in doubt about the proper edition of contributions, for copyright and reprint permissions please contact the Executive Editor.

Publishing of the “Journal of Automation, Mobile Robotics and Intelligent Systems” – the task financed under contract 907/P-DUN/2019 from funds of the Ministry of Science and Higher Education of the Republic of Poland allocated to science dissemination activities.

Editorial Board:

Chairman – Janusz Kacprzyk (Polish Academy of Sciences, PIAP, Poland) Plamen Angelov (Lancaster University, UK) Adam Borkowski (Polish Academy of Sciences, Poland) Wolfgang Borutzky (Fachhochschule Bonn-Rhein-Sieg, Germany) Bice Cavallo (University of Naples Federico II, Napoli, Italy) Chin Chen Chang (Feng Chia University, Taiwan) Jorge Manuel Miranda Dias (University of Coimbra, Portugal) Andries Engelbrecht (University of Pretoria, Republic of South Africa) Pablo Estévez (University of Chile) Bogdan Gabrys (Bournemouth University, UK) Fernando Gomide (University of Campinas, São Paulo, Brazil) Aboul Ella Hassanien (Cairo University, Egypt) Joachim Hertzberg (Osnabrück University, Germany) Evangelos V. Hristoforou (National Technical University of Athens, Greece) Ryszard Jachowicz (Warsaw University of Technology, Poland) Tadeusz Kaczorek (Bialystok University of Technology, Poland) Nikola Kasabov (Auckland University of Technology, New Zealand) Marian P. Kaźmierkowski (Warsaw University of Technology, Poland) Laszlo T. Kóczy (Szechenyi Istvan University, Gyor and Budapest University of Technology and Economics, Hungary) Józef Korbicz (University of Zielona Góra, Poland) Krzysztof Kozłowski (Poznań University of Technology, Poland) Eckart Kramer (Fachhochschule Eberswalde, Germany) Rudolf Kruse (Otto-von-Guericke-Universität, Magdeburg, Germany) Ching-Teng Lin (National Chiao-Tung University, Taiwan) Piotr Kulczycki (AGH University of Science and Technology, Cracow, Poland) Andrew Kusiak (University of Iowa, USA)

Mark Last (Ben-Gurion University, Israel) Anthony Maciejewski (Colorado State University, USA) Krzysztof Malinowski (Warsaw University of Technology, Poland) Andrzej Masłowski (Warsaw University of Technology, Poland) Patricia Melin (Tijuana Institute of Technology, Mexico) Fazel Naghdy (University of Wollongong, Australia) Zbigniew Nahorski (Polish Academy of Sciences, Poland) Nadia Nedjah (State University of Rio de Janeiro, Brazil) Dmitry A. Novikov (Institute of Control Sciences, Russian Academy of Sciences, Moscow, Russia) Duc Truong Pham (Birmingham University, UK) Lech Polkowski (University of Warmia and Mazury, Olsztyn, Poland) Alain Pruski (University of Metz, France) Rita Ribeiro (UNINOVA, Instituto de Desenvolvimento de Novas Tecnologias, Caparica, Portugal) Imre Rudas (Óbuda University, Hungary) Leszek Rutkowski (Czestochowa University of Technology, Poland) Alessandro Saffiotti (Örebro University, Sweden) Klaus Schilling (Julius-Maximilians-University Wuerzburg, Germany) Vassil Sgurev (Bulgarian Academy of Sciences, Department of Intelligent Systems, Bulgaria) Helena Szczerbicka (Leibniz Universität, Hannover, Germany) Ryszard Tadeusiewicz (AGH University of Science and Technology in Cracow, Poland) Stanisław Tarasiewicz (University of Laval, Canada) Piotr Tatjewski (Warsaw University of Technology, Poland) Rene Wamkeue (University of Quebec, Canada) Janusz Zalewski (Florida Gulf Coast University, USA) Teresa Zielińska (Warsaw University of Technology, Poland)

Publisher: ŁUKASIEWICZ Research Network – Industrial Research Institute for Automation and Measurements PIAP. All rights reserved. ©



Journal of Automation, Mobile Robotics and Intelligent Systems, Volume 13, N° 2, 2019
DOI: 10.14313/JAMRIS/2-2019

Contents

Preface to Special Issue on Robot Perception and Control
Cezary Zieliński, Krzysztof Tchoń
DOI: 10.14313/JAMRIS/2-2019/12

On the Efficiency of Population-Based Optimization in Finding Best Parameters for RGB-D Visual Odometry
Aleksander Kostusiak, Piotr Skrzypczyński
DOI: 10.14313/JAMRIS/2-2019/13

Overhead Vision System for Testing Swarms and Groups of Wheeled Robots
Jakub Wiech, Zenon Hendzel
DOI: 10.14313/JAMRIS/2-2019/14

Estimation of Orientation and Position of the Manipulator Using the Extended Kalman Filter Based on Measurements From the Accelerometers
Agnieszka Kobierska, Piotr Rakowski, Leszek Podsędkowski
DOI: 10.14313/JAMRIS/2-2019/15

Evaluation of Simple Microphone-based Mechanomyography (MMG) Probe Sets for Hand Stiffness Classification
Igor Zubrycki, Grzegorz Granosik
DOI: 10.14313/JAMRIS/2-2019/16

Determination of the Relationship Between EMG Signals and Hand Grips Using a Commercial Myo Armband
Michał Błędowski, Andrzej Wołczowski
DOI: 10.14313/JAMRIS/2-2019/17

Manipulator Control System for Remote USG Examination
Adam Kurnicki, Bartłomiej Stańczyk
DOI: 10.14313/JAMRIS/2-2019/18

Modified Position-Force Control for a Manipulator Geometrically Constrained by Round Obstacles
Alicja Mazur, Mirela Kaczmarek, Joanna Ratajczak, Wojciech Domski
DOI: 10.14313/JAMRIS/2-2019/19

Using LabVIEW and ROS for Planning and Coordination of Robot Missions, the Example of ERL Emergency Robots and University Rover Challenge Competitions
Agnieszka Węgierska, Kacper Andrzejczak, Mateusz Kujawiński, Grzegorz Granosik
DOI: 10.14313/JAMRIS/2-2019/20


Preface to Special Issue on Robot Perception and Control

DOI: 10.14313/JAMRIS/2-2019/12

Since 2011, once every two years, the Journal of Automation, Mobile Robotics and Intelligent Systems has published a regular issue devoted to a particular aspect of robotics research conducted in Poland. The papers composing each such issue have been selected by the Program Committee of the National Conference on Robotics organized by the Department of Fundamental Cybernetics and Robotics, Electronics Faculty of Wrocław University of Technology. The selection has been based on the excellence of the papers published in the conference proceedings in Polish, and necessarily presented and discussed during these conferences. These papers contain the most insightful achievements attained during the two years preceding each conference. It should be stressed that the papers put together in those issues of the Journal of Automation, Mobile Robotics and Intelligent Systems have been by no means a simple translation into English of the conference papers. The results reported have been updated, described more comprehensively and in a wider context. The new papers have been subjected to the regular Journal of Automation, Mobile Robotics and Intelligent Systems review procedure. Gratitude should be expressed to all of the reviewers who provided in-depth comments enabling many clarifications and overall improvements of the papers. The papers published in this issue of the Journal of Automation, Mobile Robotics and Intelligent Systems are the results of research presented at the 15th National Conference on Robotics held in Polanica Zdrój from 5 to 9 September 2018. As the title of this editorial suggests, the recent robotics research in Poland has mainly concentrated on robot perception and control. Perception is understood in robotics as the process of acquiring, processing, and interpreting data obtained from sensors in order to understand the situation existing in the environment, for the purpose of making decisions. Those decisions pertain both to the control of the robot, treated as a device, and to the task that it has to execute. Herein a broad view of robotics is taken, hence it encompasses all kinds of robots, including telemanipulation systems as well as prostheses. The fifth issue of the Journal of Automation, Mobile Robotics and Intelligent Systems stemming from the National Conference on Robotics is thus devoted to diverse aspects of robot perception and control. The following overview briefly characterizes the selected papers. The first three papers of this selection are devoted to the organization of the perception process. The paper “On the efficiency of population-based optimization in finding best parameters for RGB-D visual odometry”, authored by A. Kostusiak and P. Skrzypczyński, focuses on mobile robot odometry based on the visual stream obtained from RGB-D cameras. The presented approach avoids the computationally intensive stage of data processing as a result of which the map of the environment is produced. Instead, it is based on the direct extraction of geometric transformations from the data stream to reproduce the camera motion trajectory. The paper describes the particle swarm and evolutionary algorithm optimization process leading to the evaluation of the visual odometry parameters. The article “Overhead vision system for testing swarms and groups of wheeled robots”, written by Z. Hendzel and J. Wiech, deals with the localization of mobile robots forming a swarm.
For that purpose each of the robots is equipped with a binary marker that is detected by a global camera overhanging the environment in which the robots act. The paper describes the calibration of the system and the marker detection algorithm resulting in the position and orientation of each robot, whose accuracy depends on the resolution of the camera. A. Kobierska, P. Rakowski and L. Podsędkowski wrote the paper entitled “Estimation of orientation and position of the manipulator using the extended Kalman filter based on measurements from the accelerometers”, which concentrates on proprioception in medical mini-manipulators used for measuring lengths of femoral bones subjected to surgery. Cumbersome and relatively expensive encoders or resolvers used in large manipulators were substituted by light-weight and cheap accelerometers able to produce the solutions to the direct kinematics problem for the mini-manipulator. As the signals from the accelerometers contain significant noise they had to be further processed by an extended Kalman filter. The next three papers concentrate on the interaction between an operator and a robotic system or a prosthesis. All of them highlight perception and the use of its results for the purpose of control of the device. The work “Evaluation of simple microphone-based mechanomyography (MMG) probe sets for hand stiffness classification”, authored by I. Zubrycki and G. Granosik, is devoted to the design of mechanomyographic sensors based on electret microphones, for use by operators controlling the stiffness of telemanipulator grippers. The data




collected from several subjects was processed by three alternative machine learning algorithms classifying the grasp stiffness that the device operator will be able to produce remotely. The paper “Determination of the relationship between EMG signals and hand grips using a commercial Myo armband”, delivered by M. Błędowski and A. Wołczowski, describes the study of the relationship between signals produced by skeletal muscles and grasping movements of the palm. Electromyography is the technique of recording those signals. The studied relationship can be used in the process of control of a bioprosthesis or a telemanipulator equipped with a gripper. The readings obtained from myosensors located in an armband were extended by measurements received from bend and pressure sensors located in a sensory glove. As a result of processing the thus acquired data, the state of the operator’s hand can be classified. “Manipulator control system for remote USG examination” reports on the work conducted by A. Kurnicki and B. Stańczyk. It presents a telemanipulation system used for remote acquisition of ultrasonographic data about the state of internal organs of a patient. The system is composed of a haptic interface generating the motion of the USG head and reflecting the forces exerted by the head onto the patient. The remote robot is equipped with a camera stereopair enabling the doctor to locate the sensor head in the required place. The paper focuses on the control aspects of this master-slave system. The last two papers solve two different control problems. The first deals with position-force control of a manipulator, and the second provides insight into the design of software enabling mission coordination and planning for heterogeneous robots executing separate tasks. The authors of the article “Modified position-force control for a manipulator geometrically constrained by round obstacles”, A. Mazur, M. Kaczmarek, J. Ratajczak and W. Domski, look at the problem of manipulator position-force control when round obstacles exist in the environment. The method of obtaining joint-space orthogonalisation between force and motion vectors is presented, together with a mathematical proof that the resulting control algorithm causes the tracking errors to converge to zero. The paper “Using LabVIEW and ROS for planning and coordination of robot mission, the example of ERL Emergency Robots and University Rover Challenge competitions”, authored by A. Węgierska, K. Andrzejczak, M. Kujawiński and G. Granosik, delves into the problem of creating mission coordination and planning software for robots taking part in robotic challenges. The software enables heterogeneous groups of robots to be teleoperated or to act partially autonomously. It also monitors the mission execution, thus providing data for future analysis. The paper also describes the experience gained in using the created software during robotics challenges. All of the subjects of the papers composing this issue of the Journal of Automation, Mobile Robotics and Intelligent Systems, briefly characterized above, are the topics of current deliberations of the robotics community conducting research concerned with robot perception and control. Each of the papers gives a thorough insight into a particular problem, providing its formulation and background, and deriving its solution. We hope that this selection of papers will be found useful both by researchers and practitioners involved in diverse aspects of robotics.
Editors: Cezary Zieliński, Warsaw University of Technology, Faculty of Electronics and Information Technology


Krzysztof Tchoń, Wrocław University of Technology, Faculty of Electronics


On the Efficiency of Population-Based Optimization in Finding Best Parameters for RGB-D Visual Odometry

Submitted: 4th February 2019; accepted: 15th April 2019

Aleksander Kostusiak, Piotr Skrzypczyński

DOI: 10.14313/JAMRIS/2-2019/13

Abstract: Visual odometry estimates the transformations between consecutive frames of a video stream in order to recover the camera’s trajectory. As this approach does not require building a map of the observed environment, it is fast and simple to implement. In the last decade RGB-D cameras proliferated in robotics, being also the sensors of choice for many practical visual odometry systems. Although RGB-D cameras provide readily available depth images that greatly simplify the computation of frame-to-frame transformations, the number of numerical parameters that have to be set properly in a visual odometry system to obtain an accurate trajectory estimate remains high. Whereas setting them by hand is certainly possible, it is a tedious trial-and-error task. Therefore, in this article we make an assessment of two population-based approaches to parameter optimization, which have long been applied in various areas of robotics, as a means to find the best parameters of a simple RGB-D visual odometry system. The optimization algorithms investigated here are particle swarm optimization and an evolutionary algorithm variant. We focus on the optimization methods themselves, rather than on the visual odometry algorithm, seeking an efficient procedure to find parameters that minimize the estimated trajectory errors. From the experimental results we draw conclusions as to both the efficiency of the optimization methods and the role of particular parameters in the visual odometry system.

Keywords: Particle Swarm Optimization, Evolutionary Algorithm, Visual Odometry, RGB-D

1. Introduction

The new generation of RGB-D cameras allowed for the development of new solutions for Visual Odometry (VO) and Simultaneous Localization and Mapping (SLAM) [24]. Sensors available on the market since 2010, when the Kinect v1 was introduced, are inexpensive but yield accurate measurements, sufficient for indoor localization [21]. A VO algorithm computes camera motion between the consecutive keyframes, and estimates the trajectory (consecutive camera poses) without building any explicit representation of the environment [23]. Visual odometry is conceptually and practically simpler than a full SLAM system. In VO the estimated trajectory of the camera is not guaranteed to be optimal in the light of all collected data. Usually the estimated trajectory drifts with the distance traveled.

However, an implementation of the VO method can be adopted as a “front-end” for a full SLAM system, which is then combined with an optimization-based “back-end” for post-processing of the obtained camera trajectory. This trajectory is represented as a pose-graph whose edges are constraints between the camera poses, and is optimized to obtain the best explanation of all the collected frame-to-frame transformations [5]. In the classic approach to VO monocular images are often used [23], which requires sophisticated algorithms to estimate the spatial transformations between the neighboring frames from the correspondences between sets of 2D points that are determined in the image space [19]. The problem becomes much easier if RGB-D frames are used, as they provide readily available depth information that can be used to turn the 2D visual features into 3D points, keeping however the correspondences established between the 2D features using local point descriptors. Then, estimation of the transformation (rototranslation) between the two sets of 3D points can be accomplished applying closed formulas [15]. In spite of their simplicity, RGB-D VO algorithms are very useful in several areas of robotics. A VO system can be applied as a stand-alone localization method [20], used as a front-end in pose-based SLAM [5], exploited to pre-process RGB-D data in global localization [26], or combined with other methods of estimating the robot’s pose [12]. There are many different solutions to the problem of visual odometry using RGB-D frames. An early implementation employing sparse point features has been presented in [4], while a feature-less, dense approach was proposed in [16]. The work of Endres et al. [10] is often considered a standard implementation of the feature-based RGB-D VO matched to an optimization back-end, finally solving the full SLAM problem. It is difficult to say exactly what blocks an RGB-D VO system should be composed of, and according to which principles one should choose parameters in these blocks. In [5] we tackled this problem at the level of the building blocks, showing that poor performance of the VO front-end hardly can be compensated by a more developed optimization back-end. This result makes it clear that parameters of the VO algorithm are of pivotal importance for any RGB-D visual navigation system. On the other hand, in some of the best known RGB-D-based localization systems, e.g. the system developed by Endres et al. [9, 10], numeric parameters were selected for a given RGB-D sequence by exhaustive search in the parameter space.



Those parameters governed the feature detector settings, feature matching and outlier detection. While such a strategy made it possible to obtain impressive results in terms of the trajectory estimation accuracy on benchmark RGB-D sequences [25], it is inefficient due to the time required by the exhaustive search when the objective function is the minimal discrepancy between the estimated and the reference (i.e. ground truth) trajectory. Moreover, the found parameters are appropriate only for the given experiment, and it is unclear if they are transferable to any other environment or sequence of RGB-D frames. Therefore, we try to find the best parameters for a simple RGB-D VO system using a more efficient approach, which also gives some promise of generalization to other environments, not seen before by the localization system. This approach is population-based optimization, which evaluates a population of possible solutions (called individuals or particles) in order to find the best one, but at the same time applies some heuristic strategy to create an offspring that should inherit properties from the best solutions found so far, allowing for further improvements. This very general scheme may be implemented in many different ways, leading to Genetic Algorithms [14], Evolutionary Algorithms [3], Particle Swarm Optimization [7] or the recent Cuttlefish Algorithm [8]. Algorithms from this broad family are considered for our parameter search problem due to their global optimization ability, and because they can handle complex, non-differentiable search spaces, still showing good explorative properties. Although classic genetic algorithms have proven to be useful search methods in many robotics-related applications, it is not clear how a genetic algorithm should be configured and parametrized for our problem, as the results largely depend on the population size and the strategy of creating new individuals. Therefore, we apply two algorithms that require a minimal setup with respect to the meta-parameters (i.e. parameters of the optimization algorithm itself, not the VO method). These are the Particle Swarm Optimization [7] and an Evolutionary Algorithm in the ecology-inspired variant proposed by Annunziato and Pizzuti [2]. These two methods can deal with very difficult optimization problems [22] and they have proven their usefulness in our earlier research, for instance in the development of a dynamics model for simulation of a hexapod robot [6]. This article is an improved and extended version of the conference paper [17] published in Polish; it provides more quantitative results (e.g. RPE plots that were omitted in the conference paper), but introduces also a novel on-line optimization procedure for the point feature detector.

2. Visual Odometry Architecture

2.1. System Structure

The RGB-D VO system used in this research employs the popular OpenCV library for most of the RGB-D data processing tasks. The VO structure (Fig. 1) is embedded into the investigated population-based optimization method, serving as the main part of the fitness evaluation block.


Fig. 1. Block scheme of the simple RGB-D visual odometry system

During the initialization process, the program sets the few meta-parameters of the population and particles/individuals in the investigated algorithm, then loads the first RGB image from the assigned sequence, and extracts keypoints of the salient visual features. Only keypoints with corresponding depth information are harvested for further processing, while those located in the areas of degraded depth data are discarded. This process is repeated for all incoming RGB-D data frames. Then, point features from the newest frame are matched to the ones from the previous frame, with cross-checking of the matchings to clear out spurious associations as soon as possible [18]. We use the RANSAC [13] procedure twice to remove the remaining bad matches while transformations between the consecutive frames are computed [18]. In each of the RANSAC instances, we draw three point pairs, which is the minimal number of points required by the Kabsch algorithm [15] to determine the rototranslation between two successive frames. Once a candidate transformation is computed, we apply this transformation to the whole set of keypoints from the current frame and check how far these points are located from their matching counterparts from the previous frame. Feature points lying too far from their counterparts, according to a given Euclidean distance threshold, are considered as outliers. If the ratio between the number of outliers and inliers is not satisfactory (i.e. we have too few inliers), then we discard the candidate transformation and draw another three points. However, if too many draws were needed, then we increase the distance threshold. All point pairs that passed through the RANSAC filtration are then used to calculate the rotation and translation between the considered RGB-D frames. Finally, the camera pose is updated by concatenating the newly computed rototranslation with the existing camera pose. Motivated by the results of previous experiments [18], we use the AKAZE algorithm [1] for feature detection and description.



This detector-descriptor allows us to achieve point feature detection and matching results that are very similar to the performance of the popular SURF detector-descriptor, but with the use of a binary descriptor, which requires less computation power in the matching process. Moreover, the AKAZE license allows for free usage even in commercial applications, while SURF is patented. The AKAZE detector searches for maxima of the normalized and approximated determinant of the image Hessian, which are then compared to the threshold value τA to find keypoints at different scale levels. The scale space is built in a similar manner as in SIFT, but with the use of the Fast Explicit Diffusion algorithm instead of a Gaussian kernel. To detect an interest point, the determinant of the Hessian is calculated for each image in the nonlinear scale space and weighted accordingly. The local maxima are picked as salient point candidates and compared with other candidate points. The AKAZE descriptor is very similar to the BRIEF descriptor, with the difference that instead of particular pixel values it compares the average intensity of regions and the average horizontal and vertical derivatives of the image intensity function. The computation of the AKAZE descriptor starts with estimating the orientation by using a histogram method, and the pattern is rotated accordingly. Finally, the descriptor vector is generated by performing binary tests of average areas and the mean of the horizontal and vertical derivatives in the areas.
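As an illustration of the processing steps just described, the sketch below shows one frame-to-frame estimation step implemented with OpenCV’s Python bindings and NumPy. It is only a simplified reading of the pipeline, not the authors’ code: the function names, the handling of the camera intrinsics, the single RANSAC pass with a fixed distance threshold (the text above uses two passes with adaptive thresholds and inlier-ratio checks), and the default parameter values are all illustrative assumptions.

```python
# Illustrative sketch (not the authors' implementation) of one frame-to-frame step of a
# feature-based RGB-D VO pipeline: AKAZE keypoints, cross-checked matching of binary
# descriptors, back-projection to 3D, and a Kabsch estimate inside a simple RANSAC loop.
import cv2
import numpy as np

def keypoints_to_3d(kps, depth, fx, fy, cx, cy):
    """Back-project 2D keypoints to 3D camera coordinates using the depth image [m]."""
    pts = []
    for kp in kps:
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        z = float(depth[v, u])
        pts.append([(u - cx) * z / fx, (v - cy) * z / fy, z] if z > 0 else [np.nan] * 3)
    return np.array(pts)

def kabsch(p, q):
    """Closed-form rigid transform (R, t) mapping point set p onto q [15]."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def frame_to_frame(rgb0, depth0, rgb1, depth1, intrinsics, tau_A=0.001, d_E=0.04, iters=200):
    """One VO step: AKAZE features, cross-checked matching, RANSAC + Kabsch."""
    fx, fy, cx, cy = intrinsics
    akaze = cv2.AKAZE_create(threshold=tau_A)                 # detection threshold tau_A
    kp0, des0 = akaze.detectAndCompute(rgb0, None)
    kp1, des1 = akaze.detectAndCompute(rgb1, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des0, des1)                       # cross-checked associations
    p0 = keypoints_to_3d([kp0[m.queryIdx] for m in matches], depth0, fx, fy, cx, cy)
    p1 = keypoints_to_3d([kp1[m.trainIdx] for m in matches], depth1, fx, fy, cx, cy)
    valid = ~np.isnan(p0[:, 0]) & ~np.isnan(p1[:, 0])         # keep features with valid depth
    p0, p1 = p0[valid], p1[valid]
    rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(p0), dtype=bool)
    for _ in range(iters):                                    # RANSAC over minimal 3-point samples
        idx = rng.choice(len(p0), 3, replace=False)
        R, t = kabsch(p1[idx], p0[idx])
        inliers = np.linalg.norm((R @ p1.T).T + t - p0, axis=1) < d_E
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    R, t = kabsch(p1[best_inliers], p0[best_inliers])         # refine on all inliers
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T, int(best_inliers.sum())                         # pose increment and inlier count
```

In a full run, the returned pose increments would be concatenated frame after frame to recover the camera trajectory, as described above.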


2.2. System Parameters

Both the efficiency of keypoint matching between the consecutive frames and the small residual distance errors between the sets of corresponding features are of great importance for achieving good quality of the estimated camera trajectory. On the other hand, striving for small residual errors can lead to rejections of many matched pairs in the first RANSAC loop, which further can lead to computation of the rototranslation using a relatively small number of keypoints, which in turn decreases the quality of the final solution. Thus, for achieving a good quality trajectory the selection of RANSAC parameters is crucial: the Euclidean distance thresholds dE,1 and dE,2 (respectively for the first and the second RANSAC loop), and the inliers to outliers ratios Γo,1 and Γo,2 (also for the first and the second RANSAC loop). The dE,1 and dE,2 parameters are the initial thresholds in the respective RANSAC procedures. The VO system needs also to set the AKAZE detection threshold τA adequately to the scene, because the number of detected features directly depends on this parameter. In turn, the number and quality of the point features influences the quality of the recovered trajectory. Eventually, the five parameters described above constitute the vector θ = [dE,1, dE,2, Γo,1, Γo,2, τA], and are optimized by population-based methods. Two strategies are applied to the problem of finding the best parameters, as sketched after this list:
- all parameters are optimized together in a five-dimensional search space,
- parameters are optimized hierarchically – after fixing the best RANSAC thresholds, we only seek the best AKAZE detector threshold, assuming that it is more environment-specific.
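The two strategies differ only in how the parameter vector seen by the optimizer is mapped onto a VO run. A possible sketch of this mapping is given below; the factory functions, the injected `run_vo` callable and the `error_metric` callback are illustrative assumptions, not part of the authors’ software.

```python
import numpy as np

def make_fitness_joint(run_vo, reference, error_metric):
    """Strategy 1: all five parameters form one individual/particle in a 5-D space."""
    def fitness(theta):
        d_e1, d_e2, gamma_o1, gamma_o2, tau_a = theta
        return error_metric(reference, run_vo(d_e1, d_e2, gamma_o1, gamma_o2, tau_a))
    return fitness

def make_fitness_detector_only(run_vo, reference, error_metric, ransac_fixed):
    """Strategy 2: RANSAC thresholds are fixed; only the AKAZE threshold is searched."""
    d_e1, d_e2, gamma_o1, gamma_o2 = ransac_fixed
    def fitness(theta):
        (tau_a,) = np.atleast_1d(theta)          # 1-D search space
        return error_metric(reference, run_vo(d_e1, d_e2, gamma_o1, gamma_o2, tau_a))
    return fitness
```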

3. Population-based Optimization Methods

3.1. Objective Functions

In the optimization process, it is important to choose a suitable function that shows how well the particles/individuals fit to the environment. In the investigated case, the VO system is the environment, and the fitness function has to be related to the quality of the estimated trajectory. For this reason, we chose the Absolute Trajectory Error (ATE) and the Relative Pose Error (RPE) as the objective functions. They were proposed in [25] and are commonly used in robotics research for SLAM and VO evaluation. The ATE specifies the translational errors between the estimated camera poses along the trajectory and the ground truth trajectory poses. The ground truth trajectory is obtained from an external motion capture system [21]. Because ATE compares absolute distances between poses taken from two synchronized trajectories, these trajectories have to be aligned prior to ATE computation. This is implemented by finding a rotation between these two rigid sets of points that minimizes the distance between them, assuming a common starting point [25]. Conversely, the RPE calculates the difference in transformation (that is why it has rotational and translational parts) which would exist after following the estimated trajectory and the ground truth trajectory independently for a given number of frames (or amount of time, in our case 1 s), and then computing the rototranslation between the estimated trajectory pose and its counterpart from the ground truth one. In order to determine the errors we need the estimated trajectory $\mathbf{T} = \{\mathbf{T}_1, \mathbf{T}_2, \ldots, \mathbf{T}_k\} \in SE(3)$ and the ground truth trajectory $\mathbf{T}^{gt} = \{\mathbf{T}^{gt}_1, \mathbf{T}^{gt}_2, \ldots, \mathbf{T}^{gt}_k\} \in SE(3)$, where $\mathbf{T}_i$ and $\mathbf{T}^{gt}_i$ are camera poses for the $i$-th frame in the sequence expressed by $4 \times 4$ homogeneous matrices, and $k$ is the number of camera poses in the trajectory. The ATE for the $i$-th frame is computed as:

$$\mathbf{E}^{ATE}_i = \left(\mathbf{T}^{gt}_i\right)^{-1}\mathbf{T}_i \tag{1}$$

and the ATE RMSE value for the whole trajectory is computed as the Root Mean Square Error of (1) for all nodes of $\mathbf{T}$ and $\mathbf{T}^{gt}$:

$$\mathrm{ATE}^{RMSE} = \sqrt{\frac{1}{k}\sum_{i=1}^{k}\left\|\mathbf{E}^{ATE}_i\right\|^2}. \tag{2}$$

The RPE for the $i$-th frame is given by the equation:

$$\mathbf{E}^{RPE}_i = \left(\left(\mathbf{T}^{gt}_i\right)^{-1}\mathbf{T}^{gt}_{i+1}\right)^{-1}\left(\mathbf{T}^{-1}_i\mathbf{T}_{i+1}\right). \tag{3}$$

Then, we can obtain the relative translational RPEt(i) or rotational RPEr(i) error at the i-th frame by taking the translational or rotational part of $\mathbf{E}^{RPE}_i$ and computing the Euclidean norm or Euler angle, respectively. Similarly to ATE, the RPE RMSE for the whole trajectory is obtained by calculating the RMSE over the respective per-frame errors for the whole trajectories:

$$\mathrm{RPE}^{RMSE}_r = \sqrt{\frac{1}{k}\sum_{i=1}^{k}\mathrm{Rot}\left(\mathbf{E}^{RPE}_i\right)^2}, \tag{4}$$

$$\mathrm{RPE}^{RMSE}_t = \sqrt{\frac{1}{k}\sum_{i=1}^{k}\mathrm{Trans}\left(\mathbf{E}^{RPE}_i\right)^2}, \tag{5}$$

where $\mathrm{RPE}^{RMSE}_r$ and $\mathrm{RPE}^{RMSE}_t$ stand for the rotational and translational RPE RMSE, respectively, while Rot(.) is the operator that extracts the rotational part of the transformation as the Euler angle, and Trans(.) is the operator that extracts the translational part of the same transformation and calculates its Euclidean norm. Having the ground truth trajectory $\mathbf{T}^{gt}$, the estimated trajectory $\mathbf{T}(\theta)$ that depends on the VO parameters θ, and the ATE and RPE given by (1) and (3), respectively, we define the optimization problem of the VO system parameters with the use of ATE:

$$\arg\min_{\theta} F_{ATE} = \sum_{i=1}^{k}\left\|\left(\mathbf{T}^{gt}_i\right)^{-1}\mathbf{T}_i(\theta)\right\|. \tag{6}$$

Similarly, for the translational part of RPE we minimize the following form:

$$\arg\min_{\theta} F_{RPEt} = \sum_{i=1}^{k}\mathrm{Trans}\left(\left(\left(\mathbf{T}^{gt}_i\right)^{-1}\mathbf{T}^{gt}_{i+1}\right)^{-1}\mathbf{T}^{-1}_i(\theta)\,\mathbf{T}_{i+1}(\theta)\right). \tag{7}$$
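As an illustration of how these error metrics can be evaluated in practice, the following sketch computes the ATE RMSE of (1)–(2) and the translational RPE RMSE of (3) and (5) for two synchronized trajectories stored as lists of 4×4 NumPy matrices. It assumes the trajectories have already been aligned as described above; the function names are ours and are not part of the TUM benchmark tools.

```python
import numpy as np

def ate_rmse(T_gt, T_est):
    """ATE RMSE, Eqs. (1)-(2), for two synchronized, pre-aligned trajectories."""
    errors = []
    for Tg, Te in zip(T_gt, T_est):                    # poses as 4x4 homogeneous matrices
        E = np.linalg.inv(Tg) @ Te                     # per-frame error, Eq. (1)
        errors.append(np.linalg.norm(E[:3, 3]))        # translational magnitude, as in [25]
    return float(np.sqrt(np.mean(np.square(errors))))  # RMSE over the whole trajectory, Eq. (2)

def rpe_trans_rmse(T_gt, T_est, delta=1):
    """Translational RPE RMSE, Eqs. (3) and (5), over pose pairs `delta` frames apart."""
    errors = []
    for i in range(len(T_gt) - delta):
        dg = np.linalg.inv(T_gt[i]) @ T_gt[i + delta]    # ground-truth relative motion
        de = np.linalg.inv(T_est[i]) @ T_est[i + delta]  # estimated relative motion
        E = np.linalg.inv(dg) @ de                       # relative pose error, Eq. (3)
        errors.append(np.linalg.norm(E[:3, 3]))          # Trans(.): Euclidean norm of translation
    return float(np.sqrt(np.mean(np.square(errors))))    # Eq. (5)
```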

3.2. Particle Swarm Optimization

This approach simulates a flock of birds flying around in search of a corn field [7]. The block scheme in Fig. 2 shows the general structure of this method. During the initialization stage, we draw a population of parameters that are represented by m particles. Each particle has n parameters (in our case n=5 or n=1, depending on the considered optimization strategy) that are the particle’s position in the n-dimensional space, and the corresponding number of velocities. The velocities have no direct physical interpretation in the VO algorithm, but they control exploration of the search space – the higher the velocity, the bigger the parameter changes between the consecutive iterations. Each particle is evaluated to determine how well the parameter set it defines in the search space fits to the “environment”, which in our case is the VO problem. Thus, the fitness measure is defined by the VO performance. Namely, the fitness value depends on the translational RPE RMSE or ATE RMSE value, according to the chosen optimization variant. For the m-th particle, we keep the best parameter vector found up to the current iteration of the algorithm, as well as the globally best parameter vector found by any particle.

Fig. 2. Block scheme of the PSO algorithm

From these data we compute the velocities and positions of each particle in the parameter space for the next iteration, as given by the formulas:

$$\mathbf{v}^{i+1}_m = \mathbf{v}^{i}_m + c_1 \cdot \mathrm{rand}() \cdot \left(\mathbf{l}_{best} - \mathbf{p}^{i}_m\right) + c_2 \cdot \mathrm{rand}() \cdot \left(\mathbf{g}_{best} - \mathbf{p}^{i}_m\right),$$
$$\mathbf{p}^{i+1}_m = \mathbf{p}^{i}_m + \mathbf{v}^{i+1}_m, \tag{8}$$

where $\mathbf{v}^{i}_m$ is the velocity of the m-th particle $\mathbf{p}^{i}_m$, $c_1$ and $c_2$ are constant values, $\mathbf{l}_{best}$ is the best parameter vector found by the given particle, $\mathbf{g}_{best}$ is the globally best parameter vector, while $\mathbf{p}^{i+1}_m$ and $\mathbf{v}^{i+1}_m$ are the new positions and velocities of the m-th particle. We perform these operations until one of the stop criteria is reached. These stop criteria are: 1) a satisfying fitness value has been achieved, 2) no fitness improvement has been noticed for several consecutive iterations, 3) the maximum allowed number of iterations has been exceeded.

The implemented PSO is the simplest variant of the algorithm, which is characterized by fast computation of the particle updates and high speed of convergence. Unfortunately, fast convergence can lead to the loss of diversity among the particles, resulting in premature convergence. A number of solutions to this problem have been proposed in the literature, such as the perturbed PSO algorithm [27], which introduces an additional perturbation to the global best solution in order to maintain diversity. However, more complicated PSO variants are more time consuming, because they perform additional computations for each particle update. Hence, we stick with the canonical PSO variant, observing however the behavior of the particles (see Fig. 7). The visualizations suggest that sufficient diversity is preserved among the particles till the final iteration, even though the stop criterion is the lack of fitness improvement for several iterations. The PSO meta-parameters have been chosen in such a way that results should be obtained in a reasonable amount of time (several hours). We have set c1 = c2 = 2 and limited the maximum allowed velocity for particles to the range from -0.005 to 0.005. The velocities are initialized as random values close to zero, as suggested in [11]. We have used 40 particles, to effectively use available threads for parallel computation, for a maximum of 20 iterations. In all experiments the PSO optimization ended by detecting no improvement for several iterations, and never reached the maximum iterations limit.
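A minimal sketch of this canonical PSO variant is given below, with the velocity/position update of Eq. (8), the velocity limit, and the three stop criteria listed above. The fitness callback is assumed to wrap a complete VO run returning an ATE or RPE RMSE value; parallel evaluation of the particles and other implementation details are omitted, and all names and defaults are illustrative.

```python
import numpy as np

def pso(fitness, bounds, n_particles=40, max_iter=20, c1=2.0, c2=2.0,
        v_max=0.005, patience=3, target=None, seed=0):
    """Canonical PSO minimizing `fitness` over a box-bounded parameter space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]                        # bounds: (n_params, 2) array
    p = rng.uniform(lo, hi, size=(n_particles, len(lo)))       # particle positions
    v = rng.uniform(-1e-4, 1e-4, size=p.shape)                 # velocities start near zero [11]
    f = np.array([fitness(x) for x in p])
    lbest, lbest_f = p.copy(), f.copy()                        # per-particle best so far
    g = int(np.argmin(f))
    gbest, gbest_f = p[g].copy(), f[g]                         # globally best so far
    stall = 0
    for _ in range(max_iter):                                  # stop criterion 3: iteration limit
        r1, r2 = rng.random(p.shape), rng.random(p.shape)
        v = v + c1 * r1 * (lbest - p) + c2 * r2 * (gbest - p)  # velocity update, Eq. (8)
        v = np.clip(v, -v_max, v_max)                          # velocity limited to +/- v_max
        p = np.clip(p + v, lo, hi)                             # position update, Eq. (8)
        f = np.array([fitness(x) for x in p])
        better = f < lbest_f
        lbest[better], lbest_f[better] = p[better], f[better]
        g = int(np.argmin(lbest_f))
        if lbest_f[g] < gbest_f:
            gbest, gbest_f, stall = lbest[g].copy(), lbest_f[g], 0
        else:
            stall += 1
        if target is not None and gbest_f <= target:           # stop criterion 1: fitness reached
            break
        if stall >= patience:                                   # stop criterion 2: no improvement
            break
    return gbest, gbest_f
```

In the experiments reported here, each fitness evaluation amounts to processing a whole RGB-D sequence, so the per-particle evaluations are the natural place for thread-level parallelism (the authors used 12 CPU threads).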



Fig. 3. Block scheme of the evolutionary algorithm

3.3. Evolutionary Algorithm

As an alternative to the PSO algorithm we have chosen a variant of the Evolutionary Algorithm (EA), a self-adapting algorithm inspired by natural ecosystems [2]. Contrary to typical Genetic Algorithms, it requires setting a minimal number of parameters. Few changes have been made in order to adapt the algorithm described in [2] to our problem. We use real numbers (of float type) instead of binary strings to code the genome. Initially, we specify the maximum size of the population r and the number of individuals in the first generation, for which we draw the initial parameters (genomes). Successive generations are created and evaluated in a loop until one of the stop criteria is met. The stop criteria are exactly the same as used for PSO. In the loop we draw the individuals that interact with other individuals, while the remaining individuals undergo a mutation process. The individuals are drawn with the probability Pi (where i is the iteration number), which is equal to the ratio between the population size in the i-th generation and the maximum allowed population size. Among the individuals that have to interact with the others, we draw with the same probability Pi those that will fight with others, while the remaining ones can reproduce. In the fight interaction the stronger individual (having a better fitness value) always wins, while the weaker one disappears. The reproduction is accomplished by randomly exchanging parts of the genomes in a single-point crossover operation. The crossover point is drawn from a normal distribution. Mutation is implemented as initiation of a new individual with one gene randomly changed with respect to its “parent” individual, which in this action is also preserved. If any of these actions would lead to exceeding the maximum population size limit, then the weakest individual from the population disappears. For a fair comparison we set the parameters of the EA to operate on a population of approximately the same size as in PSO. In our experiments, the initial number of individuals equals 10, and their maximal number is 40. We set the maximum number of iterations to 20.
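The following sketch illustrates one possible reading of this ecology-inspired loop on real-valued genomes: interaction and fight are drawn with the population-dependent probability Pi, reproduction uses single-point crossover with a normally distributed cut point, and mutation redraws a single gene while preserving the parent. The helper names, the bounds handling, and the reduction of the stop criteria to an iteration limit are our assumptions, not details taken from [2] or from the authors’ code.

```python
import numpy as np

def ecology_ea(fitness, bounds, init_size=10, max_pop=40, max_iter=20, seed=0):
    """Ecology-inspired EA sketch: minimizes `fitness` over box-bounded genomes."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    n = len(lo)
    pop = [rng.uniform(lo, hi) for _ in range(init_size)]   # real-valued genomes
    fit = [fitness(g) for g in pop]
    for _ in range(max_iter):
        p_i = len(pop) / max_pop           # P_i: current population size / max allowed size
        dead, children, child_fit = set(), [], []
        for j in range(len(pop)):
            if j in dead:
                continue
            if rng.random() < p_i:         # this individual interacts with another one
                k = int(rng.integers(len(pop)))
                if k == j:
                    continue               # skip degenerate self-interaction
                if rng.random() < p_i:     # fight: the weaker (worse fitness) disappears
                    dead.add(j if fit[j] > fit[k] else k)
                else:                      # reproduction: single-point crossover
                    cut = int(np.clip(round(rng.normal(n / 2, 1)), 1, n - 1))
                    child = np.concatenate([pop[j][:cut], pop[k][cut:]])
                    children.append(child)
                    child_fit.append(fitness(child))
            else:                          # mutation: new individual with one gene redrawn
                child = pop[j].copy()
                g = int(rng.integers(n))
                child[g] = rng.uniform(lo[g], hi[g])
                children.append(child)
                child_fit.append(fitness(child))
        keep = [i for i in range(len(pop)) if i not in dead]
        pop = [pop[i] for i in keep] + children
        fit = [fit[i] for i in keep] + child_fit
        while len(pop) > max_pop:          # population cap: remove the weakest individual
            w = int(np.argmax(fit))
            pop.pop(w)
            fit.pop(w)
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```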

4. Off-line Optimization Results

As an important aim of our research was to demonstrate that the VO parameters optimized using the proposed approach are transferable between different environments, we have used two different sequences of RGB-D frames, fr1_desk and fr1_room from the popular TUM RGB-D Benchmark [25], as the learning sequences. Throughout the whole paper the parameter set obtained using the fr1_desk sequence is denoted OP1, while the one obtained using the fr1_room sequence is denoted OP2. For verification of the results we used the putkk_Dataset_1_Kin_1 sequence from an entirely different publicly available dataset [21], which was never used for optimization.

Tab. 1. Number of point features per frame detected by the VO with the AKAZE detection threshold τA optimized by the Evolutionary Algorithm with ATE-based fitness function

RGB-D data sequence   | min. num. of points | aver. num. of points | max. num. of points
fr1_desk              | 105                 | 659                  | 1435
fr1_room              | 12                  | 509                  | 1301
putkk_Dataset_1_Kin_1 | 42                  | 328                  | 795

The three RGB-D sequences used in our research are characterized by a different size of the scene, different dynamics of the moving camera, and different objects being observed. All that results in different numbers of feature points that are detected in a single RGB-D frame (Tab. 1). In the TUM RGB-D Benchmark sequences the Kinect sensor was moved by hand, slowly in fr1_desk and much faster in fr1_room, with the trajectory spanning a much larger volume of space in the latter case. Contrarily, in the PUTKK dataset the Asus Xtion sensor was attached to a wheeled robot, which resulted in fast, but smoother motion. Thus, the minimal number of features extracted from a frame is much smaller for fr1_room, mostly due to fast rotations of the camera that are present in this sequence. The number of features never drops to such a small value for putkk_Dataset_1_Kin_1, but the average and maximum numbers of features are smaller than for both TUM RGB-D Benchmark sequences, due to the larger room size and the limited range of depth perception in the RGB-D sensor. Example visualizations of the environments used in the research are depicted in Fig. 4. Our experiments began with optimization of all five system parameters: the AKAZE detection threshold τA and the RANSAC parameters. We used the PSO algorithm configured with the ATE-based objective function.



Fig. 4. Visualization of the colored point clouds registered with the ground truth trajectories from the fr1_desk sequence (a), and the putkk_Dataset_1_Kin_1 sequence (b)

The optimization session did not exceed nine iterations. Table 2 contains the optimal parameter values found in this experiment. The ATE RMSE and RPE RMSE scores for these parameter sets, and their manually entered counterparts, are collected in Tab. 3.

Tab. 2. VO parameters that got optimized and their best values

Method   | dE,1  | Γo,1  | dE,2  | Γo,2  | τA
OP1      | 0.059 | 0.812 | 0.042 | 0.836 | 0.0021
OP2      | 0.042 | 0.916 | 0.012 | 0.855 | 0.0013
Manually | 0.03  | 0.80  | 0.003 | 0.80  | 0.002

Tab. 3. ATE RMSE and RPE RMSE comparison for two sequences

fr1_room:
Error metric        | manually | OP1    | OP2
ATE RMSE [m]        | 1.573    | 1.728  | 0.288
Trans. RPE RMSE [m] | 0.348    | 0.358  | 0.115
Rot. RPE RMSE [°]   | 22.632   | 26.201 | 2.529

Dataset1_Kin1:
Error metric        | manually | OP1    | OP2
ATE RMSE [m]        | 0.829    | 0.912  | 0.697
Trans. RPE RMSE [m] | 0.019    | 0.020  | 0.013
Rot. RPE RMSE [°]   | 0.374    | 0.407  | 0.248

Differences in the quality of the frame-to-frame transformation estimation for the different parameter vectors are clearly visible in the plots of the RPE RMSE (Fig. 5), which shows the “local” quality of the trajectory estimation, i.e. it neglects the influence the inaccuracy of the previous parts of the trajectory has on the current camera pose. It is apparent from the comparison of these plots that the parameters optimized on a challenging enough sequence (i.e. OP2, which involved much faster motion of the Kinect and more diversified objects in the field of view) make it possible to achieve relative translations without large errors. To complete the quantitative results for the PSO experiment we show also the qualitative results in the form of trajectories with the ATE visualized on them. Figure 6 presents the ATE plots for the fr1_desk sequence in the first row, the fr1_room sequence in the second row, and for the verification sequence putkk_Dataset_1_Kin_1 in the third row.


Fig. 5. Plots of translational RPE measured on putkk_Dataset_1_Kin_1 for the VO parameters set manually (a), and optimized using fr1_desk (b) or fr1_room (c)

To demonstrate how the particles behave in the multi-dimensional search space, we show in Fig. 7 the evolution of particles during the OP2 experiment. Red dots represent initial positions of particles, blue dots transitional positions, and green dots the final positions achieved when the stop criteria occurred. The bigger black dot represents the best particle, i.e. the optimal parameter vector. We decided to reuse the obtained RANSAC filtration parameters in the next tests, because the parameters obtained in the OP2 optimization process on the fr1_room sequence allowed us to achieve the best results and changed insignificantly for different environments. Hence, in further experiments we have only optimized the AKAZE detector τA parameter, but this time with the use of two different approaches inspired by nature. Table 4 shows the best values of the τA parameter obtained after optimization with the use of the PSO algorithm with the alternative ATE-based and RPE-based objective functions, and with the use of the EA with only the RPE-based objective function. Table 5 collects error values of the VO system while using these parameters. The obtained detection threshold τA is significantly lower in case of using ATE RMSE instead of RPE RMSE in optimization, and thus allows taking into account more points during trajectory estimation, but in exchange for slightly larger temporary RPE errors.

Tab. 4. The AKAZE detector parameter τA and its optimal values according to different optimization variants

Method              | PSO ATE  | PSO RPE  | EA ATE   | EA RPE
Detection threshold | 0.000623 | 0.001367 | 0.000703 | 0.001412

However, during verification, this does not allow achieving significantly better ATE RMSE results. For this reason, it is better to use the detection threshold obtained with the use of the RPE-based objective function, because this accelerates the calculations. In Tab. 5 we have also included results for the VO system with the SURF detector-descriptor, for which the same optimization procedure was applied to the detection threshold. This demonstrates that our approach can be transferred to visual odometry systems of different parameters. Figure 8 presents the comparison of the recovered trajectories with the ATE visualized. In general, all sets of parameters used in the experiment allowed for correct trajectory estimation. It follows that both investigated optimization methods are suitable for automatic search of the best parameter values for an RGB-D VO system.



Fig. 6. ATE error plots (black lines represent the ground truth trajectory, blue lines the estimated trajectory, and red segments the Euclidean errors). Subfigures (a), (b), (c) are obtained for the fr1_desk sequence, (d), (e), (f) for the fr1_room sequence, and (g), (h), (i) for the putkk_Dataset_1_Kin_1 sequence. The parameters were either set manually in (a), (d) and (g), or obtained by PSO optimization in the OP1 experiment (b), (e) and (h), or in the OP2 experiment in (c), (f) and (i)

Tab. 5. Trajectory estimation results for τA obtained through optimization

                    | fr1_room                                          | putkk_Dataset1_Kin1
Error metric        | PSO ATE | PSO RPE | EA ATE | EA RPE | SURF EA RPE | PSO ATE | PSO RPE | EA ATE | EA RPE
ATE RMSE [m]        | 0.290   | 0.292   | 0.295  | 0.312  | 0.369       | 0.662   | 0.614   | 0.624  | 0.628
Trans. RPE RMSE [m] | 0.120   | 0.114   | 0.119  | 0.115  | 0.122       | 0.009   | 0.012   | 0.009  | 0.013
Rot. RPE RMSE [°]   | 2.403   | 2.585   | 2.382  | 2.632  | 2.615       | 0.172   | 0.232   | 0.175  | 0.239

However, there are some important differences. The PSO algorithm searched through a significantly larger area in the parameter space. In all experiments the optimization with PSO was about fivefold longer than with the use of the EA (58 h versus 11 h for the ATE-based objective function). Moreover, configurations with the RPE-based objective function required about 10% less time than those employing the ATE-based objective function. In all these cases the program running under Linux used all 12 threads available on the i5 CPU desktop PC, with nearly 100% CPU load. Having in mind the computation time and the similar results, we recommend the EA with the RPE-based objective function as a handy software tool to quickly find reasonable parameters of a VO system. However, for a more thorough optimization (e.g. for a particular environment that does not change much in time) the use of PSO is recommended, due to its better exploratory properties. It seems that the use of the ATE-based objective function does not have any significant positive effect on the final results, thus the RPE-based objective function is recommended for both optimization methods, as it allows for faster computation of the fitness values.

5. On-line Optimization of VO Parameters

The results obtained in the off-line experiments encouraged us to go one step further and to try tuning the parameters of the AKAZE detector on-line, which should make it possible to adapt the VO system to changing environment properties, such as the amount of texture, which directly influences the number of point features, or changing lighting. In this case we were also optimizing the AKAZE percentile parameter τP, besides the τA detection threshold. From the initial PSO experiments we have noticed that the obtained estimations were much worse in scenarios where fewer than 250 keypoints remained after RANSAC filtration. Therefore, we tried to use the PSO method to find parameters allowing for retrieving a satisfying number of keypoints in those demanding frames. For the sake of speed, the on-line optimization was initialized only when we had fewer than 50 points during the VO operation. In the experiment only RPE was used as the fitness value, as it reflects the local changes to the accuracy of the estimated trajectory. This time we also used 40 particles, but the first one was reserved for the already best parameter vector, found during the off-line optimization. The next 11 particles (the used PC can run 12 threads simultaneously) constituted a set of previously best parameters, and they were calculated only once during the PSO optimization. In Tab. 6 we present results obtained for the same three sequences we already used in the off-line experiments, while Fig. 9 presents the corresponding ATE and RPE plots.

Fig. 7. Evolution of particles during optimization on the fr1_room sequence. In (a) the x-axis shows Γo,1 and the y-axis Γo,2, while in (b) the x-axis is for dE,1 and the y-axis for dE,2. In both subfigures the z-axis shows the τA parameter. See main text for the meaning of the color dots

Fig. 8. Comparison of ATE results: fr1_room in (a), (c), (e), and putkk_Dataset1_Kin1 in (b), (d), (f). The first row (a,b) shows PSO/ATE results, the second row (c,d) PSO/RPE results, and the third row (e,f) GA/RPE results

Tab. 6. Trajectory estimation results for the AKAZE parameters obtained through on-line optimization

sequence  | ATE [m] | Trans. RPE [m] | Rot. RPE [°]
fr_desk   | 0.1225  | 0.0758         | 2.5094
fr_room   | 0.3233  | 0.1212         | 2.5794
Dataset_1 | 0.5000  | 0.0097         | 0.1739

Fig. 9. Results of the on-line optimization experiments on three sequences: fr1_desk (a,b), fr1_room (c,d), and putkk_Dataset1_Kin1 (e,f)
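The triggering logic of this on-line procedure can be sketched as follows. The sketch only illustrates the control flow suggested above (re-optimization is launched when too few keypoints survive filtration, with the swarm seeded from the best off-line parameters); the injected callables, their signatures, and the way recent frames would be used in the fitness evaluation are our assumptions, not the authors’ implementation.

```python
import numpy as np

def vo_with_online_tuning(frames, params, frame_to_frame, run_pso, min_keypoints=50):
    """Run VO over an RGB-D sequence, re-tuning detector parameters on demanding frames.

    `frame_to_frame(prev, curr, params)` is assumed to return the 4x4 pose increment and
    the number of inliers; `run_pso(seed)` returns a new detector parameter vector.
    """
    poses = [np.eye(4)]
    for prev, curr in zip(frames, frames[1:]):
        T, n_inliers = frame_to_frame(prev, curr, params)
        if n_inliers < min_keypoints:
            # Too few keypoints survived RANSAC filtration: re-optimize the detector
            # thresholds, seeding the swarm with the current best parameter vector.
            params = run_pso(seed=params)
            T, n_inliers = frame_to_frame(prev, curr, params)
        poses.append(poses[-1] @ T)        # concatenate the frame-to-frame rototranslation
    return poses, params
```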

Although the AKAZE threshold was changed on-line by the PSO procedure, the final ATE RMSE and RPE RMSE results were in general not improved as compared to the best off-line results. Only for the putkk_Dataset1_Kin1 sequence did the results improve slightly. This is the sequence with a relatively smooth camera motion and constant lighting conditions, which suggests that the population-based optimization method is too slow to adapt on-line to sudden changes in the incoming data.

6. Conclusion

We demonstrated how simple-to-implement population-based optimization methods can be applied to the task of finding the best parameters for an RGB-D visual odometry system.



The proposed procedure can replace the exhaustive search that is sometimes used to find parameters that guarantee the best results on the popular benchmark sequences. Although its main advantage is the improved speed (hours instead of several days), we have shown that the computed parameters generalize to other sequences, as long as they involve similar environments and camera dynamics. We demonstrated also that the parameters of RANSAC, which is widely used in VO and SLAM, are of great importance to provide a proper trade-off between the number of point features processed for frame-to-frame transformations and the matching accuracy. We presented also a novel approach for on-line optimization of selected VO parameters that are expected to change along with the changes in the observed environment. Whereas the preliminary results of on-line optimization show only a small improvement, there is still room for further research with this approach using faster parameter search methods. The on-line performance could also be improved by a massively parallel implementation of the standard PSO algorithm on a GPU, but we do not consider this a feasible approach for VO on-board a mobile robot, due to the energy consumption constraints.

AUTHORS

Aleksander Kostusiak∗ – Poznań University of Technology, Institute of Control, Robotics and Information Engineering, ul. Piotrowo 3A, 60-965 Poznań, Poland, e-mail: aleksander.kostusiak@doctorate.put.poznan.pl.

Piotr Skrzypczyński – Poznań University of Technology, Institute of Control, Robotics and Information Engineering, ul. Piotrowo 3A, 60-965 Poznań, Poland, e-mail: piotr.skrzypczynski@put.poznan.pl.

∗Corresponding author

REFERENCES

[1] P. Alcantarilla, J. Nuevo, and A. Bartoli, “Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces”. In: Proceedings of the British Machine Vision Conference 2013, Bristol, 2013, 13.1–13.11, 10.5244/C.27.13.

[2] M. Annunziato and S. Pizzuti, “Adaptive parameterization of evolutionary algorithms driven by reproduction and competition”. In: Proceedings of the European Symposium on Intelligent Techniques (ESIT 2000), 2000, 246–256.

[3] J. Arabas, Lectures in evolutionary algorithms (in Polish: Wykłady z algorytmów ewolucyjnych), WNT: Warsaw, 2001.

[4] A. Bachrach, S. Prentice, R. He, P. Henry, A. S. Huang, M. Krainin, D. Maturana, D. Fox, and N. Roy, “Estimation, planning, and mapping for autonomous flight using an RGB-D camera in GPS-denied environments”, The International Journal of Robotics Research, vol. 31, no. 11, 2012, 1320–1343, 10.1177/0278364912455256.


[5] D. Belter, M. Nowicki, and P. Skrzypczyński, “On the Performance of Pose-Based RGB-D Visual Navigation Systems”. In: D. Cremers, I. Reid, H. Saito, and M.-H. Yang, eds., Computer Vision – ACCV 2014, 2015, 407–423, 10.1007/978-3-319-16808-1_28.

[6] D. Belter and P. Skrzypczyński, “Population-based Methods for Identification and Optimization of a Walking Robot Model”. In: K. R. Kozłowski, ed., Robot Motion and Control 2009, 2009, 185–195, 10.1007/978-1-84882-985-5_18.

[7] R. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory”. In: MHS’95. Proceedings of the Sixth International Symposium on Micro Machine and Human Science, 1995, 39–43, 10.1109/MHS.1995.494215.

[8] A. S. Eesa, Z. Orman, and A. M. A. Brifcani, “A novel feature-selection approach based on the cuttlefish optimization algorithm for intrusion detection systems”, Expert Systems with Applications, vol. 42, no. 5, 2015, 2670–2679, 10.1016/j.eswa.2014.11.009.

[9] F. Endres, J. Hess, N. Engelhard, J. Sturm, D. Cremers, and W. Burgard, “An evaluation of the RGB-D SLAM system”. In: 2012 IEEE International Conference on Robotics and Automation, 2012, 1691–1696, 10.1109/ICRA.2012.6225199.

[10] F. Endres, J. Hess, J. Sturm, D. Cremers, and W. Burgard, “3-D Mapping With an RGB-D Camera”, IEEE Transactions on Robotics, vol. 30, no. 1, 2014, 177–187, 10.1109/TRO.2013.2279412.

[11] A. Engelbrecht, “Particle swarm optimization: Velocity initialization”. In: 2012 IEEE Congress on Evolutionary Computation, 2012, 1–8, 10.1109/CEC.2012.6256112.

[12] J. M. Falquez, M. Kasper, and G. Sibley, “Inertial aided dense & semi-dense methods for robust direct visual odometry”. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016, 3601–3607, 10.1109/IROS.2016.7759530.

[13] M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography”, Communications of the ACM, vol. 24, no. 6, 1981, 381–395, 10.1145/358669.358692.

[14] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 1989.

[15] W. Kabsch, “A solution for the best rotation to relate two sets of vectors”, Acta Crystallographica Section A: Crystal Physics, Diffraction, Theoretical and General Crystallography, vol. 32, no. 5, 1976, 922–923, 10.1107/S0567739476001873.

[16] C. Kerl, J. Sturm, and D. Cremers, “Robust odometry estimation for RGB-D cameras”. In: 2013 IEEE International Conference on Robotics and Automation, 2013, 3748–3754, 10.1109/ICRA.2013.6631104.


Journal of Automation, Mobile Robotics and Intelligent Systems

Robotics and Automation, 2013, 3748–3754, 10.1109/ICRA.2013.6631104. [17] A. Kostusiak and P. Skrzypczyń ski, “Populationbased methods for optimization of parameters in a rgb-d visual odometry system (in Polish: Optymalizacja parametró w systemu odometrii wizyjnej rgb-d metodami populacyjnymi)”, Prace Naukowe Politechniki Warszawskiej. Elektronika, vol. 196, 2018, 461–470. [18] A. Kostusiak, “The Comparison of Keypoint Detectors and Descriptors for Registration of RGBD Data”. In: R. Szewczyk, C. Zieliń ski, and M. Kaliczyń ska, eds., Challenges in Automation, Robotics and Measurement Techniques, 2016, 609– 622, 10.1007/978-3-319-29357-8_53. [19] A. Kostusiak, “Frame-to-Frame Visual Odometry: The Importance of Local Transformations”. In: M. Kurzynski, M. Wozniak, and R. Burduk, eds., Proceedings of the 10th International Conference on Computer Recognition Systems CORES 2017, 2018, 357–366, 10.1007/978-3-31959162-9_37. [20] A. Kostusiak, M. Nowicki, and P. Skrzypczyń ski, “On the Application of RGB-D SLAM Systems for Practical Localization of the Mobile Robots”, Journal of Automation, Mobile Robotics and Intelligent Systems, vol. 11, no. 2, 2017, 57–66, 10.14313/JAMRIS_2-2017/17. [21] M. Kraft, M. Nowicki, A. Schmidt, M. Fularz, and P. Skrzypczyń ski, “Toward evaluation of visual navigation algorithms on RGB-D data from the irst- and second-generation Kinect”, Machine Vision and Applications, vol. 28, no. 1, 2017, 61–74, 10.1007/s00138-016-0802-6. [22] S.-K. Oh, H.-J. Jang, and W. Pedrycz, “A comparative experimental study of type-1/type-2 fuzzy cascade controller based on genetic algorithms and particle swarm optimization”, Expert Systems with Applications, vol. 38, no. 9, 2011, 11217–11229, 10.1016/j.eswa.2011.02.169. [23] D. Scaramuzza and F. Fraundorfer, “Visual Odometry: Part I the First 30 Years and Fundamentals”, IEEE Robotics Automation Magazine, vol. 18, no. 4, 2011, 80–92, 10.1109/MRA.2011.943233. [24] P. Skrzypczyń ski, “Mobile Robot Localization: Where We Are and What Are the Challenges?”. In: R. Szewczyk, C. Zieliń ski, and M. Kaliczyń ska, eds., Automation 2017, 2017, 249–267, 10.1007/978-3-319-54042-9_23. [25] J. Sturm, W. Burgard, and D. A. Cremers, “Evaluating Egomotion and Structure-from-Motion Approaches Using the TUM RGB-D Benchmark”. In: Proc. of the Workshop on Color-Depth Camera Fusion in Robotics at the IEEE/RJS International Conference on Intelligent Robot Systems (IROS), 2012. [26] J. Wietrzykowski and P. Skrzypczyń ski, “PlaneLoc: Probabilistic global localization in 3-D 14

VOLUME 13,

N° 2

2019

using local planar features”, Robotics and Autonomous Systems, vol. 113, 2019, 160–173, 10.1016/j.robot.2019.01.008. [27] Z. Xinchao, “A perturbed particle swarm algorithm for numerical optimization”, Applied Soft Computing, vol. 10, no. 1, 2010, 119–124, 10.1016/j.asoc.2009.06.010.


Journal of Journal of Automation, Automation,Mobile MobileRobotics Roboticsand andIntelligent IntelligentSystems Systems

VOLUME 2019 VOLUME 13,13, N°N°2 2 2019

Overhead Vision System for Testing Swarms and Groups of Wheeled Robots

Submitted: 16th January 2019; accepted: 16th May 2019

Jakub Wiech, Zenon Hendzel

DOI: 10.14313/JAMRIS/2-2019/14

Abstract: The paper describes how to use ArUco markers to determine the position and orientation of wheeled robots in 2D space. It is preceded by a general description of the testbed and a detailed description of the marker detection algorithm along with the camera calibration using the ChArUco markers. The camera has been described and calibrated using the pinhole camera model, taking into account distortion of the lens. The second part of the article describes the wheeled robots with their mechanical construction.

Keywords: swarm robotics, vision system, wheeled robots

1. Introduction


In mobile robotics one of the biggest challenges is determination of the robot's position and orientation in space. It can be achieved locally, using on-board sensors, or globally, using an external positioning system. In the case of algorithm testing for groups and swarms of robots, the best choice is an external positioning system. Depending on the requirements, the positioning systems are based either on passive or on active markers. The most precise, as well as the most expensive, systems are multi-camera motion capture systems, for example Vicon Vantage V5, OptiTrack Prime 17W or PhaseSpace X2E Impulse. These systems are characterized by real-time tracking with low latency for 6-DoF tracking of ground and aerial robots. Other, cheaper ways of determining position and orientation are systems based on fiducial binary markers (ArUco, ARTag, RUNE-Tag), which are commonly used in virtual reality applications [2,4,11]. This approach is less precise but cheaper than the motion capture system. In the case of robotic swarms, the local interaction between robots is important, i.e. robots locate and communicate, for example, only with their nearest neighbors. This is a necessary requirement to be able to distinguish a swarm from a multi-robot system [1]. In practice, this requirement stems from the limited range of the sensors and communication modules with which all robots have to be equipped. In order to verify the control algorithm, it is possible to replace the robot sensors with limited information on the nearest neighbors and obstacles provided by the external vision system. The information refers to all obstacles and robots that are at a distance R simulating the maximum range of the robot's sensors.

The paper presents a description of a vision system for determination of robot position and orientation in space and for testing control algorithms, with an example of leader following. For performance evaluation of the proposed vision system, the paper ends with four experiments and a discussion of the results.

2. Robots

Objects used in experimental research are nonholonomic two-wheeled robots. Due to the targeted use of robots in multi-robot systems, robot groups or swarms, the small size of the robot (in this case 160 mm in diameter) is important, as well as a simple mathematical model describing the dynamics of the robot.

Fig. 1. Wheeled robot: a) – …, b) – …

In Fig. 1, $\alpha = [\alpha_1, \alpha_2]^T$ are the wheel rotation angles used as generalized coordinates, and $M = [M_1, M_2]^T$ are the driving torques. The two-wheeled robot dynamics model is described by the matrix equations given in [6] and [7].

2.1. Control Algorithm

The robot control algorithm is a follow-up control: both the leader and the follower track a desired trajectory. The important difference is that the trajectory of the robot following the leader is generated online, based on their relative position; the control scheme diagram is shown in (Fig. 2). The follower maintains the desired distance from the leader, acting as if it were connected to it by a virtual damped spring (Fig. 1b). The spring parameters are selected experimentally. The robots are controlled by a PD controller.
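The control law itself is not detailed in the paper; the following Python sketch illustrates one possible reading of the virtual spring–damper follow-up rule, with the gains and the desired separation being purely illustrative assumptions.

```python
import math

# Illustrative gains and desired separation; the paper selects the spring
# parameters experimentally, so these values are placeholders.
K_P, K_D, DESIRED_DIST = 2.0, 0.5, 0.20  # [1/s], [-], [m]

def follower_command(leader_xy, follower_xy, prev_error, dt):
    """Return (linear_velocity, distance_error) for the follower.

    The follower behaves as if coupled to the leader by a virtual damped
    spring: a PD term acts on the elongation of that spring.
    """
    dx = leader_xy[0] - follower_xy[0]
    dy = leader_xy[1] - follower_xy[1]
    distance = math.hypot(dx, dy)
    error = distance - DESIRED_DIST          # spring elongation
    error_rate = (error - prev_error) / dt   # damping term
    v = K_P * error + K_D * error_rate       # PD on the elongation
    return v, error
```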

Fig. 2. Control scheme diagram

2.2. Robot Design

The mobile robots used in the experimental research are symmetric two-wheeled robots with two ball casters (Fig. 3a). The robot is equipped with a WiFi module for communication with the external vision system, which transmits the distance between the geometric center of the robot and the centers of its nearest neighboring robots, along with their angular position relative to the i-th robot. The robot PCB also serves as its frame, to which the motors and the support wheels are mounted (Fig. 3b). The robot is controlled by the popular 8-bit Atmega2560 microcontroller, which can be programmed in several languages: Assembler, C and the Arduino language; by using custom libraries it is also possible to program the robot directly through the Arduino add-on in the MathWorks Matlab/Simulink package. The last solution significantly speeds up the process of design and verification of the control algorithm. With a large number of robots it is important to be able to easily charge and reprogram them; this can be achieved by adding an additional wireless communication module and adapting the robots to use a docking station or contact charging.

Fig. 3. Wheeled robot: a) – with marker, b) – without marker

3. Robotic Testbed

The robotic testbed is used for experimental research and for testing control algorithms of a swarm or a group of wheeled robots. It consists primarily of two systems: a vision system and a wireless communication system (Fig. 4). The vision system is intended to determine the location of a given robot in space and its position relative to other robots. The camera of the vision system is located above the surface on which the robot movement is tested, at a distance enabling the desired accuracy of the position and orientation of the robots in the group to be obtained. The wireless communication system is based on the WiFi standard and enables the reading of measurement data from the robot, such as the angular velocities of the driving wheels. The vision system and the wireless communication system are controlled from a computer application that allows the user to view the camera image, send commands, and receive and display data from the mobile robots. The tested robots are two-wheeled mobile robots. Two-wheeled robots are often used to verify the control algorithms of groups and swarms of wheeled robots due to their simplicity and low construction cost. In the case of swarms this is a big advantage, considering the requirements of swarm existence, i.e. a large number of robots and possibly the simplest mechanical structure of the robot, which translates into low costs of series production.

3.1. The Vision System Algorithm

Identification and determination of the robot position and orientation in space is possible by using ArUco markers. ArUco fiducial markers are square binary-coded tags with an identification number and control bits. The image processing algorithm, based on the four corners of the marker, finds the x, y coordinates of the center of the marker. The binary identification number of the robot is compared with a previously prepared database of robot IDs. Based on the identification number and the marker orientation, the algorithm determines the orientation of all robots in the field of view of the camera. Information about the orientation and location of a robot is recorded and, depending on the control algorithm used, can be sent to any robot. Another type of ArUco marker is the ChArUco checkerboard. The ChArUco marker is a chessboard with ArUco markers inside the white chessboard boxes (Fig. 5). It is usually used to calibrate the camera, allowing calibration with partially covered markers, which is not possible with an ordinary black and white chessboard.
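As an illustration of this step, the sketch below computes a planar pose for each detected marker and looks it up in a robot ID database; the ID values, the heading convention and the helper names are assumptions, not taken from the paper.

```python
import math
import numpy as np

# Hypothetical database of marker IDs assigned to robots.
ROBOT_IDS = {7: "robot_1", 11: "robot_2", 23: "robot_3"}

def marker_to_pose(marker_id, corners):
    """Return (name, x, y, heading) for one detected ArUco marker.

    `corners` is the 4x2 array of marker corners in image coordinates
    (top-left, top-right, bottom-right, bottom-left) as returned by the
    ArUco detector; the heading is taken from the direction of the
    marker's top edge, which is one simple convention.
    """
    if marker_id not in ROBOT_IDS:
        return None
    corners = np.asarray(corners, dtype=float).reshape(4, 2)
    center = corners.mean(axis=0)                   # marker center x, y
    top_edge = corners[1] - corners[0]              # top-left -> top-right
    heading = math.atan2(top_edge[1], top_edge[0])  # orientation in the image plane
    return ROBOT_IDS[marker_id], center[0], center[1], heading
```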


Fig. 4. Research testbed: 1 – camera, 2 – router, 3 – computer, 4, 5, 6 – mobile robots: a) – schematic diagram, b) – research testbed

Fig. 5. Marker examples: a) – ArUco marker, b) – ChArUco chessboard

3.2. Camera Calibration

Before using the camera, it should be calibrated. Calibration is carried out once, when a new camera is used or after its construction changes, for example after changing the lens. For camera calibration, black and white checkerboards or ChArUco markers are used in various angular positions with respect to the camera (Fig. 6). Knowing a few or a dozen positions of the chessboard, and using the method described in [14], we are able to determine the internal parameters of the camera.

Fig. 6. Calibration process

The camera calibration process is, in other words, the determination of the matrix of internal camera parameters, the lens distortion parameters, and the rotation and translation matrix describing the transformation between the global coordinate system and the local camera coordinate system. The matrix of internal parameters together with the image distortion parameters is used for the projection of the x, y, z coordinates onto the plane of the camera image. The internal parameters of the camera are of the form

$$A = \begin{bmatrix} f_1 & \gamma & u_0 \\ 0 & f_2 & v_0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad (1)$$

where $f_1$, $f_2$ – focal lengths in pixels, $u_0$, $v_0$ – coordinates of the image center, $\gamma$ – axis scale factor. The rotation and translation matrix is written in a combined form, i.e.

$$[R|t] = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix}, \qquad (2)$$

where $r_{ij}$ – element of the rotation matrix, $t_i$ – element of the translation vector. The matrices of internal camera parameters as well as rotation and translation are used in the simplest camera model, i.e. the pinhole camera model. A pinhole camera is a camera without a lens, with a small aperture in the shape of a small hole. The light passing through the hole casts the inverted image of the object on the camera sensor. In such a simplified model, the relationship between the coordinates of a point in the three-dimensional global system and the coordinates of the projection of this point on the camera image is expressed by the formula

$$s \begin{bmatrix} U_i \\ V_i \\ 1 \end{bmatrix} = A[R|t] \begin{bmatrix} x_i \\ y_i \\ z_i \\ 1 \end{bmatrix}, \qquad A[R|t] = \begin{bmatrix} f_1 & \gamma & u_0 \\ 0 & f_2 & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix}, \qquad (3)$$

where $U_i$, $V_i$ – coordinates of the point on the image plane in the camera reference system according to the pinhole camera model, $s$ – scale factor.

If the camera is equipped with a lens, the pinhole camera model is insufficient to properly transform the coordinates of the image. The presence of the lens introduces image distortion and thus complicates the description of the camera model. Two types of distortion can be distinguished. Radial distortion causes image distortion symmetric with respect to the lens radius; examples are barrel and pincushion distortion. The second type is tangential distortion, resulting from non-parallelism of the lens and the camera optical sensor (Fig. 7).

Fig. 7. Examples of distortion of the image: a) – radial pincushion, b) – radial barrel, c) – tangential

The correct camera model for calibration is derived by combining the pinhole camera model with the correction of radial and tangential distortions. The model is described by the matrix equation

$$\begin{bmatrix} u_i \\ v_i \end{bmatrix} = \begin{bmatrix} D_u\left(s_u \tilde{u}_i + \delta u_i^{(r)} + \delta u_i^{(t)}\right) \\ D_v\left(\tilde{v}_i + \delta v_i^{(r)} + \delta v_i^{(t)}\right) \end{bmatrix} + \begin{bmatrix} u_0 \\ v_0 \end{bmatrix}, \qquad (4)$$

where $u_i$, $v_i$ – coordinates of the point on the image plane in the camera's reference frame after taking the image distortion into account, $\tilde{u}_i$, $\tilde{v}_i$ – projection of the coordinates of a point on the image plane, $\delta u_i^{(r)}$, $\delta v_i^{(r)}$ – radial shifts of the point coordinates in the image plane, $\delta u_i^{(t)}$, $\delta v_i^{(t)}$ – shifts of the point coordinates due to tangential distortion, $D_u$, $D_v$ – conversion factors from millimetres to pixels, $s_u$ – scale factor. The coordinates of a point $P_i(x_i, y_i, z_i)$ on the image plane follow from the pinhole camera model:

$$\begin{bmatrix} \tilde{u}_i \\ \tilde{v}_i \end{bmatrix} = \frac{f}{z_i} \begin{bmatrix} x_i \\ y_i \end{bmatrix}. \qquad (5)$$

Radial distortion $\delta u_i^{(r)}$, $\delta v_i^{(r)}$ and tangential distortion $\delta u_i^{(t)}$, $\delta v_i^{(t)}$ are expressed by the equations

$$\begin{bmatrix} \delta u_i^{(r)} \\ \delta v_i^{(r)} \end{bmatrix} = \begin{bmatrix} \tilde{u}_i \left(1 + k_1 r_i^2 + k_2 r_i^4 + k_3 r_i^6\right) \\ \tilde{v}_i \left(1 + k_1 r_i^2 + k_2 r_i^4 + k_3 r_i^6\right) \end{bmatrix}, \qquad (6)$$

$$\begin{bmatrix} \delta u_i^{(t)} \\ \delta v_i^{(t)} \end{bmatrix} = \begin{bmatrix} 2 p_1 \tilde{u}_i \tilde{v}_i + p_2 \left(r_i^2 + 2\tilde{u}_i^2\right) \\ 2 p_2 \tilde{u}_i \tilde{v}_i + p_1 \left(r_i^2 + 2\tilde{v}_i^2\right) \end{bmatrix}, \qquad (7)$$

where $k_1$, $k_2$, $k_3$ – radial distortion coefficients, $p_1$, $p_2$ – tangential distortion coefficients, and $r_i = \sqrt{\tilde{u}_i^2 + \tilde{v}_i^2}$.

The camera used on the research testbed has a resolution of 1920×1080 px at 120 FPS, with an OmniVision OV4689 optical sensor. After performing the calibration algorithm, the following camera parameters were obtained:

$$A = \begin{bmatrix} 1262 & 0 & 371 \\ 0 & 1284 & 637 \\ 0 & 0 & 1 \end{bmatrix}, \qquad \begin{bmatrix} k_1 \\ k_2 \\ k_3 \end{bmatrix} = \begin{bmatrix} -0.26 \pm 0.03 \\ -1.36 \pm 0.03 \\ 2.86 \pm 0.03 \end{bmatrix}, \qquad \begin{bmatrix} p_1 \\ p_2 \end{bmatrix} = \begin{bmatrix} 0.02 \pm 0.01 \\ 0.05 \pm 0.01 \end{bmatrix}. \qquad (8)$$
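A minimal sketch of ChArUco-based calibration with OpenCV's aruco module (opencv-contrib-python, pre-4.7 API) is given below; the board geometry, the dictionary and the file paths are assumptions, while the returned camera matrix and distortion coefficients correspond to A in (1) and k1, k2, k3, p1, p2 in (6)–(7).

```python
import cv2
import glob

# Assumed ChArUco board: 5x7 squares, 40 mm squares, 30 mm markers.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
board = cv2.aruco.CharucoBoard_create(5, 7, 0.04, 0.03, dictionary)

all_corners, all_ids, image_size = [], [], None
for path in glob.glob("calibration_images/*.png"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        continue
    # Interpolate chessboard corners even when part of the board is covered.
    n, ch_corners, ch_ids = cv2.aruco.interpolateCornersCharuco(corners, ids, gray, board)
    if n is not None and n > 3:
        all_corners.append(ch_corners)
        all_ids.append(ch_ids)

# Returns the matrix of internal parameters and the distortion coefficients
# in OpenCV order (k1, k2, p1, p2, k3).
ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.aruco.calibrateCameraCharuco(
    all_corners, all_ids, board, image_size, None, None)
print(camera_matrix, dist_coeffs.ravel())
```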

3.3. Marker Detection Algorithm

Detection of markers is carried out on the basis of the algorithm developed in [5]. The process of ArUco marker detection and binary code extraction is shown in (Fig. 8).

Fig. 8. Marker detection algorithm

The marker detection algorithm can be divided into 5 stages:

1) Image segmentation – extraction of the most distinctive contours from the camera image converted to shades of gray, using local adaptive thresholding.

2) Extraction of contours and image filtration – the image with highlighted contours is processed with the algorithm from [3], giving a set of contours from which rectangular contours resembling ArUco markers emerge.

3) Extraction of the marker binary code – the first step is the change from perspective projection to orthogonal projection using a homographic matrix, followed by thresholding with the method from [9]. The resulting binary image is divided by a rectangular grid, and each cell is assigned a value of 1 or 0 depending on the color of the cell.

4) Marker identification and error correction – identification numbers are determined based on the binary value inside the marker border for all 4 possible marker rotations. The obtained values are compared with the dictionary of possible marker identification numbers.


5) Corner position determination and pose estimation of the marker – knowledge of the position of the marker's corners is required to determine the marker pose. In the case of ArUco, the corners of the markers are determined by linear regression of the pixels on the edges of the marker and by finding the intersections of the fitted lines.

The ArUco library available in the OpenCV repository for the Python language allows for marker detection using the command cv2.aruco.detectMarkers. For the marker's pose estimation the command cv2.aruco.estimatePoseSingleMarkers can be used.
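A minimal per-frame sketch using these two calls is shown below; the dictionary, marker size and video source are assumptions, and the intrinsic parameters from (8) are used only as example values.

```python
import cv2
import numpy as np

MARKER_SIZE_M = 0.05  # assumed physical marker size
camera_matrix = np.array([[1262.0, 0.0, 371.0],
                          [0.0, 1284.0, 637.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.26, -1.36, 0.02, 0.05, 2.86])  # k1, k2, p1, p2, k3

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
cap = cv2.VideoCapture(0)

for _ in range(300):  # process a fixed number of frames in this sketch
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
    if ids is not None:
        # One rotation and one translation vector per detected marker.
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, MARKER_SIZE_M, camera_matrix, dist_coeffs)
        for marker_id, tvec in zip(ids.ravel(), tvecs):
            print(marker_id, tvec.ravel())
cap.release()
```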

4. Experiments

To determine the marker localization error, a radial positioning error map was experimentally obtained. The robot arena was filled with markers spaced 68 mm from each other, forming a grid (black dots). The ex and ey positioning errors of each marker were used to calculate the radial error $e_r = \sqrt{e_x^2 + e_y^2}$, which is shown in color on the error map (Fig. 9).
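The computation behind the error map can be sketched as follows; the reference grid and the simulated measurements below are placeholders standing in for the recorded data.

```python
import numpy as np

# Reference grid with 68 mm spacing (as in the experiment); the "measured"
# positions are simulated here instead of being read from the vision system.
reference_xy = np.array([[x, y] for x in np.arange(-0.5, 0.51, 0.068)
                                 for y in np.arange(-0.5, 0.51, 0.068)])
measured_xy = reference_xy + np.random.normal(0.0, 0.003, reference_xy.shape)

errors = measured_xy - reference_xy
e_r = np.hypot(errors[:, 0], errors[:, 1])   # e_r = sqrt(ex^2 + ey^2)
print("max radial error [m]:", e_r.max())
```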

Fig. 9. Vision system error map

Fig. 10. Change of FPS with increasing number of robots

As shown on the error map, the radial error increases with the distance from the center point (0,0), and its highest value reaches 15 mm in the corners of the camera field of vision. To evaluate the usefulness of the proposed vision system in testing swarms and groups of robots, the relation between the camera FPS and the number of robots was determined (Fig. 10). After the necessary calculations for marker detection and pose estimation, the camera frame rate drops from 120 fps to 22 fps as new robots are added (Fig. 10). In the case of 13 markers the frame rate drops to 60 fps, and for 24 markers it drops below 30 fps. The experiment shows that the vision system is well suited for groups and swarms of up to 24 robots. For 30 fps the minimum step size in the robot control algorithm is 0.03 s; a bigger step size would result in unacceptable numerical errors in the control algorithm. For performance evaluation of the proposed vision system in the context of robotics, two experiments were performed. The vision system determines the position and orientation of markers. In the first experiment a marker is placed on a programmable slider moving a set distance. In the second experiment the vision system is used in an example of leader following. In both cases the values of the marker's position and orientation are compared with values from encoders.

4.1. Programmable Slider

The ArUco marker is placed on the moving programmable slider, traversing the desired distance with a given velocity profile. The marker rotates about its axis with a set angle and angular velocity. The slider is moved by a toothed belt connected to a stepper motor with an encoder. The marker is mounted on the shaft of a servo motor located on the moving cart (Fig. 11) [12].

Fig. 11. The slider

The slider was positioned along the x axis of the camera. The slider's set travel distance was 0.81 m from the first limit switch to the next, and the marker's set rotation angle was 4·π/2. The resulting slider traverse distance and marker rotation angle are depicted in (Fig. 12c) and (Fig. 12a). The maximum positioning error in the x axis was 2 mm (Fig. 12d), whereas the maximum orientation angle error was 0.02 rad (Fig. 12b).





Fig. 12. Plots of the marker orientation and position values from the vision system: a) – the angle of orientation β of the marker, b) – the angle of orientation error based on odometry and camera data, c) – marker position, d) – marker position error based on odometry and camera data

4.2. Following the Leader

The leader, marked as 1, follows the set trajectory, while the following robot 2 has to follow the leader keeping a distance of 20 cm from the center of the leader.

Fig. 13. Leader following: a) – paths of the leader's and follower's geometric centers (with specified positions every 1 s), b) – the distance error of maintaining the desired distance between robots from the vision system



The paths were obtained by approximating the measuring points from the vision system. The average difference in distance at the individual time moments marked on the movement path (Fig. 13) is 20 cm. The maximum distance error is 0.042 m, while the average error is 0.041 m. To assess the position and orientation determined by the camera, odometry was used.






The change of the distance error between the robots is shown in (Fig. 13b), while the difference in the angle β of the leader frame based on the odometry and camera data is shown in (Fig. 14c). Based on the chart of the slope of the leader's frame (Fig. 14b), it can be noticed that at 15.5 s there was a wheel slip, which is also visible on the leader's path in (Fig. 13a). The leader had to move along a straight line, make one loop and continue along the straight line. The shape of the obtained trajectory is affected by the blurring of the image resulting from the movement of the robots and by their initial placement. Due to the low speed, we assume that the image of the markers is sharp. In order to correctly assess the error of determining the position and orientation of the robots, it is necessary to compare the results with an additional vision system with known camera parameters. To determine the estimated values of the position and orientation error, we compare the measurements with the known position and orientation of the marker. The uncertainty of determining the position of the markers is ∆x = 2 mm, ∆y = 3 mm, while for the orientation ∆β = 0.02 rad.

Fig. 14. Plots of the β angle of the leader frame and robot frame orientation error: a) – the angle of orientation β of the leader frame determined by the vision system, b) – the angle of orientation β of the leader frame determined by the odometry, c) – the difference of the leader's orientation angle based on odometry data and camera data from 2 s to 14 s

5. Results Discussion

To evaluate whether the results we gathered are correct, we compare them with known publications on using ArUco markers with position and orientation accuracy estimation. According to our research, the radial error increases with the distance from the center of the camera field of vision (point (0,0)). Similar results were obtained in [8], where ArUco markers were tested in virtual experiments. The radial error depends on the distance between a marker and the camera: "The further a marker is from the camera, the bigger the error in the 3D estimations. Under 4m the distance error keep below 5cm". The same is true in our experiment. The biggest difference is in the case of the orientation error. The authors have shown that the orientation error is below 0.02° under 4 m of camera–marker distance; in our case the error is 0.02 rad. The discrepancy may be the result of the "inaccuracy of method used to estimate the real true pose", as the authors concluded. In publication [13] the authors used ArUco markers for visual SLAM and concluded that the mean positioning error is less than 2% of the camera–marker distance. At a distance equal to 2.5 m the positioning error is less than 5 cm, which is the same as in the previous publication. It is worth adding that when the marker is directly below the camera the positioning error drops below 1 mm (Fig. 9). In publication [10] the authors have shown that it is possible to achieve a positioning accuracy better than 0.075 mm along the X and Y axes and 0.3 mm along the Z axis, which explains the high accuracy in our experiments near point (0,0).

6. Summary

The above article shows how, thanks to the previously calibrated camera, the vision system allows registering the position and orientation of up to 24 mobile robots in space at 30 fps, or 13 robots at 60 fps. The ArUco marker, located in the geometric center of the robot, gives the opportunity to track the actual trajectory of the robot. The accuracy of the position




measurement depends on the resolution and accuracy of the camera calibration as well as on the distance from the center of the camera field of vision. The longer the distance, the higher the positioning error. The simplicity of the robot construction, the ease of programming and the availability of programming libraries such as OpenCV and ArUco enable easy implementation and verification of robot control algorithms.

AUTHORS

Jakub Wiech∗ – Rzeszow University of Technology, Faculty of Mechanical Engineering and Aeronautics, e-mail: j.wiech@prz.edu.pl.

Zenon Hendzel – Rzeszow University of Technology, Faculty of Mechanical Engineering and Aeronautics, e-mail: zenhen@prz.edu.pl.

∗Corresponding author

REFERENCES

[1] G. Beni, "From Swarm Intelligence to Swarm Robotics". In: E. Şahin and W. M. Spears, eds., Swarm Robotics, 2005, 1–9, 10.1007/978-3-540-30552-1_1.

[2] F. Bergamasco, A. Albarelli, E. Rodolà, and A. Torsello, "RUNE-Tag: A high accuracy fiducial marker with strong occlusion resilience". In: CVPR 2011, 2011, 113–120, 10.1109/CVPR.2011.5995544.

[3] D. H. Douglas and T. K. Peucker, "Algorithms for the Reduction of the Number of Points Required to Represent a Digitized Line or its Caricature", Cartographica: Int. J. Geogr. Inf. Geovis., vol. 10, no. 2, 1973, 112–122, 10.3138/FM57-6770-U75U-7727.

[4] M. Fiala, "ARTag, a fiducial marker system using digital techniques". In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 2, 2005, 590–596, 10.1109/CVPR.2005.74.

[5] S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez, "Automatic generation and detection of highly reliable fiducial markers under occlusion", Pattern Recognition, vol. 47, no. 6, 2014, 2280–2292, 10.1016/j.patcog.2014.01.005.

[6] J. Giergiel and W. Żylski, "Description of motion of a mobile robot by Maggie's equations", Journal of Theoretical and Applied Mechanics, vol. 43, no. 3, 2005, 511–521.

[7] M. Giergiel, Z. Hendzel, and W. Żylski, Modelowanie i sterowanie mobilnych robotów kołowych, Wydawnictwo Naukowe PWN: Warszawa, 2013.

[8] A. Lopez-Ceron and J. M. Canas, "Accuracy Analysis of Marker-Based 3D Visual Localization". In: XXXVII Jornadas de Automatica Workshop, 2016.

[9] N. Otsu, "A Threshold Selection Method from Gray-Level Histograms", IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, 1979, 62–66, 10.1109/TSMC.1979.4310076.

[10] D. C. Popescu, M. O. Cernaianu, P. Ghenuche, and I. Dumitrache, "An assessment on the accuracy of high precision 3D positioning using planar fiducial markers". In: 21st International Conference on System Theory, Control and Computing (ICSTCC), 2017, 471–476, 10.1109/ICSTCC.2017.8107079.

[11] P. C. Santos, A. Stork, A. Buaes, C. E. Pereira, and J. Jorge, "A real-time low-cost marker-based multiple camera tracking solution for virtual reality applications", Journal of Real-Time Image Processing, vol. 5, no. 2, 2010, 121–128, 10.1007/s11554-009-0138-9.

[12] J. S. Tutak and J. Wiech, "Horizontal Automated Storage and Retrieval System", Advances in Science and Technology Research Journal, vol. 11, no. 1, 2017, 82–95, 10.12913/22998624/68470.

[13] R. S. Xavier, B. M. F. d. Silva, and L. M. G. Gonzalves, "Accuracy Analysis of Augmented Reality Markers for Visual Mapping and Localization". In: 2017 Workshop of Computer Vision (WVC), 2017, 73–77, 10.1109/WVC.2017.00020.

[14] Z. Zhang, "A flexible new technique for camera calibration", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, 2000, 1330–1334, 10.1109/34.888718.








Estimation of Orientation and Position of the Manipulator Using the Extended Kalman Filter Based on Measurements From the Accelerometers

Submitted: 6th February 2019; accepted: 15th April 2019

Agnieszka Kobierska, Piotr Rakowski, Leszek Podsędkowski

DOI: 10.14313/JAMRIS/2-2019/15 Abstract: Authors present the kinematic structure of measurement arm along with its construction for efficient estimation of orientation and position of the manipulator using extended Kalman filter. The major innovation of the arm is that it only uses accelerometers as gravity sensors for determining relative positions of the links. This article presents the problem of position estimation based on measurements with high noise and the use of the extended Kalman filter to limit the impact of noise on the measurement. Repeatability tests were performed using custom made test stand. Keywords: EKF, accelerometers, measuring arm

1. Introduction Large industrial manipulators make use of optical encoders or resolvers to measure the angular position of the joints. The dimensions of these encoders or resolvers do not affect the operating of the manipulator. These sensors also do not restrict the movement of manipulator arm during operation. However, for special mini manipulators, finding a cost-efficient sensor of required dimension is a challenge. The manipulator used in this study is a disposable medical device used for measuring the change in length of the femoral bone before and after the hip replacement surgery. Hence, a low-cost sensor is required for accurate measurement. Recently, accelerometers are being employed as cheaper alternatives to standard sensors for angular measurement [3][4][5][7]. In addition, the use of accelerometers reduces the overall components in the assembly, which makes the sterilization process easier. Our research is focused on obtaining accurate measurements. However, one of the major disadvantages of using accelerometers, in this application, is the lack of arbitrary positioning of the joint whose angular position needs to be determined. When the joint rotates around the vertical axis, the accelerometers are found to be incapable of detecting changes in the joint angle. However, rotation around the vertical axis can be avoided by developing an appropriate kinematic construction of the manipulator arm. This ensures continuity of angular measurement and the correct operating of the manipulator [2].

2. Position Estimation

The manipulator (measurement arm) with six degrees of freedom is based on the kinematics described by the Denavit–Hartenberg (D-H) notation. Table 1 defines the position of the tip (Link 6) of the arm with respect to the base of the arm (Link 0). The manipulator arm has no driving elements and is used for measurement in a static position. The individual parts of the manipulator (Fig. 1) are connected by rotary joints with one degree of freedom.

Tab. 1. D-H notation of the manipulator (columns: Link name, Nr, θi (θ0), di [mm], ai [mm], αi, for Links 1–6)

Fig. 1. The manipulator links connected by rotary joints

To determine the coordinates of the gravity vector in each link coordinate system and the angles of rotation of the individual joints, accelerometers (type FXLS8471Q) are attached to each of the links (from 0 to 6).





The method used for measurement of angles using accelerometers is presented in [1]. The study also presents the possibility of achieving accurate measurements and indicates the directions for further improvements. This study presents the problem of position estimation based on measurements with noise and the use of extended Kalman filter (EKF) to limit the impact of noise on the measurement. The state vector of the object (process) is called the minimum set of all internal variables whose knowledge at a given point of time together with the knowledge of future waveforms of input variables (when omitting nonmeasurable disturbance variables) allows unambiguous determination of future time courses of output variables [6]. In this study, the number of parameters (variables) describing the position of the tip in the base system is 6: (3 coordinates of positions: x, y, z and 3 coordinates of orientation: Roll (z-axis), Pitch (y-axis), Yaw (x-axis)). The dependence of the position of the tip relative to the base is determined by the vector L = [Lx, Ly, Lz]: from the center of the coordinate system of base up to the characteristic point on Link 6 (6L = [Lx6, Ly6, Lz6]T). The orientation parameters are determined by the rotation matrix R, describing the orientation of the tip layout relative to the base. To completely describe the object’s state in the global coordinate system, we still need to determine the location of its base, which similarly has 6 parameters: (3 coordinates of position: x, y, z and 3 coordinates of orientation: Roll (z-axis), Pitch (y-axis), Yaw (x-axis)). In case of the manipulator used in this study, only the two global parameters concerning the orientation are possible to determine: Roll (z-axis) and Pitch (y-axis). Representation of the L vector in the base system (Link 0):

 Lx 6  L     0 6 6 6 ; (1) L = A0 L L =  y 6   Lz 6     1  where A, matrix of transformations, according to the D-H notation, is equal to: Ai = Rot ( Z ,θi )Trans( Z ,di )Trans( X ,ai ) Rot ( X ,αi )

cθi − sθi cα i  sθ cθi cα i Ai =  i  0 sα i   0 0

sθi sα i −cθi sα i cα i 0

ai cα i  ai sα i   di  1 

(2) (3)

where cθi or cαi – means cosine of the angle, sθi or sαi – means sine of the angle. Matrix of transformation of the arm takes the following form:

24

A06 = A01 A12 A23 A34 A45 A56 (4) and depends on the angles θi in the manipulator joints: A = f(θ) with θ=[θ1,…,θ6]. (5) The coordinate system XG, YG, ZG was established in such a way that the axis ZG of the gravity system was directed downwards, and the XG axis coincided Articles

N° 2

2019

with the projection of the axis of rotation of the joint to the level. Two angles βy and βz have been defined with the rotation of the system 0 in relation to the G system, where βy determines the angle between the axes ZG and Z0, and βz determines the angle between the axes YG and Y0. The condition for the correct adoption of the gravity system is the implementation of the transformation of the gravitational acceleration version in the D-H system associated with the arm link meets in the form: 0 RG0 ( β yβz ) E0 = 0 1

where:

R = RY ,β y RZ ,β z 0 G

 cβ ycβ z  =  sβ z  − sβ yc β z 

−c β y s β z cβ z sβ y sβ z

(6) sβ y   0  c β y 

(7)

is the rotation matrix of the transition from the “0” system to the gravity system. The static data obtained from the measurements are used to determine the estimated parameters of the system model. The data in the registered sample are only the measure of the real parameters with some errors. During data processing, the state of the system is estimated taking into account the measurement probability parameters (measurement noise, measurement system errors). The probability parameters of the estimation is also estimated. With regards to the described manipulator, the estimation of the position depends on the accuracy of the accelerometers in measuring the individual variables θi and angles βy and βz. For this manipulator: – The state vector is presented as: T

(8) ( 8 *1 ) – The exit vector as a measure of the acceleration direction: (9) Z = [ E0 ; E1 ; E2 ; E3 ; E 4 ; E5 ; E6 ](21*1) X = θ1θ2θ3θ 4θ5θ6 β y β z 

where E i = [exi e yi ezi ]T determines versors of the gravitational acceleration in a D-H coordinate frame coupled with the i-th link of the arm. The dependence between the Ei versor and the readings from the accelerometer sensors has the form:  axi    a yi  (10) Ei = K i  , a z  i  1   

where: Ki is the calibration matrix determined as in [1] taking into account the transition from the sensor system to the D-H system of the arm.


Journal of Automation, Mobile Robotics and Intelligent Systems

VOLUME 13,

The output function h estimating the output vector based on the state variables is set as:

Zˆ = h( X ) = Eˆ 0 ; Eˆ 1 ; Eˆ 2 ; Eˆ 3 ; Eˆ 4 ; Eˆ 5 ; Eˆ 6  0 RGi Eˆ i = 0 , 1

(11) (12)

T

Eˆ i =  RGi 3,1 RGi 3,2 RGi 3,3 

(13)

i i 0 1 where: RG = RG R0 …Ri −1 is determined from the D-H notation for individual joints based on ai, θi. For the presented manipulator, the measurement equations (8–13) are nonlinear. Since the measurement data from accelerometers are characterized by high noise, it is necessary to use a filter that limits the impact of noise on the accuracy of determining the position of the manipulator tip. An EKF [8, 9] can be used for this purpose, which also allows for the optimal data fusion from various sources. The described system is characterized by high data redundancy (21 measurement signals with 8 state variables). The EKF is a two-phase recursive algorithm: in the first phase based on the state from the previous step Xˆ k −1|k −1 is determined the estimated value of the current state Xˆ k|k −1 from the moment k on the basis of measurements to the moment k–1.

Xˆ k|k −1 = AXˆ k −1|k −1

(14) For the presented arm with static measurements, the matrix A relating the current state with the previous one is in the form of an identity matrix: A = I ( 8 x 8), because it is rewritten from the previous state value (record was saved in (14) to keep the standard EFK appearance). As this is a static measurement, there are no drives that induce movement of the manipulator links. As a consequence, in the EKF prediction equation, the value associated with control vector in equation (14) is assumed to be zero. The next step in the prediction phase is to determine the covariance matrix P for the vector Xˆ k|k −1 in the form:

Pk|k −1 = APk −1|k −1 AT + Qk −1 = Pk −1|k −1

(15)

where: Pk −1|k −1 – covariance matrix P at time k−1; Qk−1 – the process noise covariance matrix. It is assumed Qk−1 = 0 because the system is static and the process noise wk–1 has value 0. Next, the second phase (correction stage) helps determine the measurement estimate for the state vector xˆ k in the form: Zˆ k = h Xˆ k (16) After calculating the expected value of the output vector, the residual of the measurement vector is determined in the form of the difference between the real measurement of and estimated on the basis of the model at time k:

N° 2

rk = Z k − Zˆ k

2019

(17)

The covariance matrix of the estimate of the residual measurement vector is calculated from the formula:

Sk = HPk|k −1 H T + R (18) where the matrix H is determined on the basis of partial derivatives of the output vector with respect to the state vector variables:

∂h (19) ∂X and R is the covariance of the measurement noise co2 variance R = I (21*21)σ acc , where σ acc is the standard deviation of the accelerometer noise specified by the sensor manufacturer. To determine how much the estimate should be corrected, to get closer to the actual state, we determine Kalman’s gain in the form: H=

K k = Pk|k −1 H kT Sk−1 (20) using the individual components of the formula calculated in equations (15), (18), (19). In the last step of the correction phase, the corrected state vector and the corrected covariance matrix of the estimate are calculated in the form:

Xˆ k|k = Xˆ k|k −1 + K k rk

Pk|k = ( I − K k H k ) Pk|k−1

(21)

(22) The magnitudes determined in the equations (21) and (22) are then used as the input values for subsequent measurements carried out at time k + 1. The measurement stand (Fig. 2) consists of the measurement arm made using 3D printing technology and electronics mounted on the device in the form of a central module (32-bit microprocessor ARM Cortex-M3) with display and seven miniature circuit boards of the accelerometers mounted on other links of the arm. The software responsible for data logging and communication with the user has been written in [10]. The data obtained consisted of 30 measurements of the L position of the manipulator tip (Link 6) with respect to the base system (Link 0). The arm position was kept unchanged and the final data were analyzed in Excel.

( )

Fig. 2. Measurement arm made in 3D printing technology Articles

25


-25,10 -25,15

Journal of Automation, Mobile Robotics and Intelligent Systems

Each measurement consisted of accelerometer readings repeated 400 times and converted using the EKF. The EKF was initialized as described in [1]. The results of the position measurements are presented in Figs. 3, 4, 5 and the results of the orientation meassample number urements are presented in Figs. 6, 7, 8. Both are in 0,00 sample number comparison results 5 7 the 9 11 13 15 of 17 estimation 19 21 23 25 based 27 29 on 0,00 1 3 with the average -0,20 1 3 values 5 7 described 9 11 13 15in17[1]. 19 21 23 25 27 29 -0,20 -0,40

sample number -0,40 0,00 sample number -0,60 0,00 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 -0,60 -0,20 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 -0,80 -0,20 -0,80 -0,40 -1,00 -0,40 Lx ( EFK) Lx (Average) Lx [mm] -1,00 -0,60 Lx ( EFK)

Lx [mm]

-0,60 -0,80

Lx (Average)

sample number

-0,80 -183,56 sample number -1,00 -183,56 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 Lx ( EFK) Lx (Average) Lx [mm] -183,58 -1,00 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 Lx ( EFK) Lx (Average) Lx [mm] -183,58 Fig. 3. Position measurements in Lx direction -183,60 -183,60 -183,62 sample number -183,56 -183,62 sample number -183,64 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 -183,56 -183,58 -183,64 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 -183,66 -183,58 -183,60 -183,66 -183,68 -183,60 -183,62 Ly (EKF) Ly (Average) Ly [mm] -183,68 -183,62 Ly (EKF) Ly (Average) Ly [mm] -183,64

Lz (EKF) Lz (EKF)

Lz [mm] -25,15 Lz [mm]

-183,66 -183,68

Ly (EKF) Ly (EKF)

Ly [mm] -183,68 Ly [mm]

Ly (Average) Ly (Average)

Fig. 4. Position measurements in Ly direction

3,18 3,2 3,16 3,18 3,14 3,16 3,12 3,14 3,1 3,12 γ [°] 3,08 3,2 3,1 γ [°] 3,06 3,18 3,08 3,2 3,04 3,16 3,06 3,18 3,02 3,14 3,04 sample number 3,16 3 3,12 3,02 3,14 sample number 3,1 3 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 3,12 Roll (Average) Roll (EKF) 3,08 3,1 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 Roll (Average) Roll (EKF) 3,06 3,08 Fig. 3,046. Orientation measurements of γ angle (Roll) 3,06 3,02 3,04 sample number β [°] number 3,023 1 3 5 7 9 11 13 15 17 19 21sample 23 25 number 27 29 sample 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 β [°] 3 sample number -1,40 1 3 5Roll 7 (Average) 9 11 13 15 17 19 21 25 27 29 Roll23(EKF) 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 -1,40 -1,45

Roll (Average)

-1,45 -1,50 -1,50 -1,55

β [°]

-1,55 -1,60 1 β [°] -1,40 -1,60 -1,65 1 -1,40 -1,45 -1,65 -1,70

sample number

-24,85

1

3

5

7

9 11 13 15 17 19 21 23 25 27 29

-24,90

-25,00 -25,05 -25,10

Lz [mm]

Lz (EKF)

Lz (Average)

Fig. 5. Position measurements in Lz direction

26

Based on the position measurements, the standard deviations for position measurements using EKF were: σx = 0.04 mm, σy = 0.004 mm, σz = 0.01 mm, while based on the calculated average the standard deviation values were: σx = 0.04 mm, σy = 0.01 mm, σz = 0.02 mm. γArticles [°]

3,2 3,18

sample number sample number 21 23 25 27 29

5

7

9 11 13 15 17 19 21 23 25 27 29

3

5

7

9 11 13 15 17 19

-1,45 -1,50 -1,70 -1,75 -1,50 -1,55 -1,75 -1,80

Pitch (Average) Pitch (Average)

-1,55 -1,60 -1,80 -1,60 -1,65

Pitch (EFK) Pitch (EFK)

-1,65 -1,70

Fig. 7. Orientation measurements of β angle (Pitch) -1,70 -1,75 -1,75 -1,80

Pitch (Average) Pitch (Average)

Pitch (EFK) Pitch (EFK)

3,2 3,15 3,1

sample number

3 1

3

5

7

9 11 13 15 17 19 21 23 25 27 29

Yaw (Average)

Yaw (EKF)

Fig. 8. Orientation measurements of α angle (Yaw)

-24,95

-25,15

Roll (EKF)

3

3,05 -24,80

2019

γ [°] 3,2 γ [°]

-1,80 α [°] 3,25

-183,64 -183,66

Lz (Average) N° 2 Lz (Average)

VOLUME 13,

Based on the orientation measurements, the standard deviations for orientation measurements using EKF were: σγ = 0.004°, σα = 0.018°, σβ=0.002°. While for orientation measurements based on the calculated average the standard deviation values were: σγ = 0.009°, σα = 0.014°, σβ = 0.010°.

3. Discussion In the figures above, a significant difference can be observed between the mean values obtained using both methods. According to our analysis, the results obtained using EKF are more accurate as this method takes into account all measurement data for estab-



lishing a single state variable, while the conventional method, presented in [1], does not include all the measurement data. Therefore, using EKF limits the effects of noise on the measurement and provides accurate results even with the static nature of the system.

4. Conclusions The presented prototype manipulator is used for determining the difference between the primary measurement before joint replacement and the secondary measurement after joint replacement; therefore, it is not important to determine the value of the L vector in the global system. Although the obtained results meet the requirements of these types of devices, the influence of factors such as deviation of the axis of the articulated joints from the vertical position, external forces, accuracy, and repeatability of arm fixing on markers is unknown. However, further work in this area will allow to estimate the total impact of all the factors associated with each link of the measurement arm.

AUTHORS

Agnieszka Kobierska* – Lodz University of Technology, Institute of Machine Tools and Production Engineering, 1/15 B. Stefanowskiego Street, 90-924 Lodz, e-mail: agnieszka.kobierska@p.lodz.pl.

Piotr Rakowski – Lodz University of Technology, Institute of Machine Tools and Production Engineering, 1/15 B. Stefanowskiego Street, 90-924 Lodz, e-mail: piotr.rakowski@edu.p.lodz.pl.

Leszek Podsędkowski – Lodz University of Technology, Institute of Machine Tools and Production Engineering, 1/15 B. Stefanowskiego Street, 90-924 Lodz, e-mail: leszek.podsedkowski@p.lodz.pl. *Corresponding author

VOLUME 13,

N° 2

2019

[4] M. El-Gohary and J. McNames, “Human Joint Angle Estimation with Inertial Sensors and Validation with A Robot Arm”, IEEE Transactions on Biomedical Engineering, vol. 62, no. 7, 2015, 1759–1767, 10.1109/TBME.2015.2403368. [5] M. El-Gohary and J. McNames, “Shoulder and Elbow Joint Angle Tracking With Inertial Sensors”, IEEE Transactions on Biomedical Engineering, vol. 59, no. 9, 2012, 2635–2641, 10.1109/ TBME.2012.2208750.

[6] E. Jezierski, Dynamika Robotów (Robot Dynamics), Wydawnictwa Naukowo-Techniczne, 2006 (In Polish).

[7] M. Quigley, R. Brewer, S. P. Soundararaj, V. Pra­ deep, Q. Le, and A. Y. Ng, “Low-cost accelerometers for robotic manipulator perception”. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010, 6168–6174, 10.1109/ IROS. 2010.5649804.

[8] R. Manish, “Linear and Non-linear Estimation Techniques: Theory and Comparison”, 2014, arXiv:1406.5556. [9] P. Żak, “Master Manipulator Orientation Determination Method Using Extended Kalman Filter”. In: 18th International Conference on Mechatronics – Mechatronika, Brno, 2018, 276–280.

[10] P. Poryzała, A. Kobierska, L. Podsędkowski, and P. Rakowski, “Prototyp urządzenia wspomagającego śródoperacyjną kontrolę długości kończyn dolnych podczas zabiegu endoprotezoplastyki stawu biodrowego (Prototype of a device for intraoperative measurements of limb length changes during total hip arthroplasty)”, Przegląd Elektrotechniczny, vol. 93, no. 8, 2017, 129–132, 10.15199/48.2017.08.34.

REFERENCES

[1] P. Rakowski, A. Kobierska, L. Podsędkowski, and P. Poryzała, “Accuracy and Repeatability Tests on 6D Measurement Arm”, Mechanics and Mechanical Engineering, vol. 21, no. 2, 2017, 425–436. [2] A. Kobierska, L. Podsędkowski, P. Rakowski, S. Rejmonczyk, J. Kaszowski, K. Olejniczak, B. Golasiński, and K. Kaczmarek, “A Methodology of Orthopaedic Measurement Arm Workspace Determination”, Mechanics and Mechanical Engineering, vol. 21, no. 2, 2017, 185–191.

[3] Peng Cheng and B. Oelmann, “Joint-Angle Measurement Using Accelerometers and Gyroscopes – A Survey”, IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 2, 2010, 404–414, 10.1109/TIM.2009. 2024367.

Articles

27




Evaluation of Simple Microphone-Based Mechanomyography Using Probe Sets for Hand Stiffness Classification

Submitted: 0�th February 2019; accepted: 20th May 2019

Igor Zubrycki, Grzegorz Granosik

DOI: 10.14313/JAMRIS/2-2019/16

Abstract: We describe simple to build mechanomyography sensors, with one or two channels, based on electret microphones. We evaluate their application as a source of information about the operator's hand stiffness, which can be used for changing a robot's gripper stiffness during teleoperation. We explain a data acquisition procedure for further employment of machine learning. Finally, we present the results of three experiments and various machine learning algorithms. Support vector classification, random forests, and neural-network architectures (fully-connected artificial neural networks, recurrent, convolutional) were compared in two experiments. In the first and second, two probes were used with a single participant, with probes displaced during learning and testing to evaluate the influence of probe placement on classification. In the third experiment, a dataset was collected using two probes and seven participants. As a result of the single-probe tests, we achieved a (binary) classification accuracy of …. During the multi-probe tests, large cross-participant differences in classification accuracy were noted, even when normalizing per-participant.

Keywords: MMG, acoustic myography, teleoperation, mechanomyography

1. Introduction



In this paper, we evaluate how simple mechanomyography probes can be used for hand stiffness classification. We are particularly interested in the use of such microphone-based probes to complement our teleoperation system for robotic gripper teleoperation based on a wearable sensor/haptic glove and a vision system. The teleoperation environment provides unique challenges for teleoperation devices: the need for real-time data acquisition and processing, ease of use even during stressful situations, robustness and low cost. Robots may be teleoperated in an environment where the operator has limited perception capability and therefore has to proceed cautiously. In such situations, a person would change the stiffness of his hand to limit contact forces during unexpected collisions or when interacting with unknown objects [10]. In this paper, we present the design and the results of research using a mechanomyography probe for estimating the operator's hand stiffness. Various grippers can change their own stiffness using a designated mechanical structure [5] or through a sensor-based control mechanism (impedance control)

[16]. The control mechanism could automatically select such value, but in an unknown environment (such as rescue or exploratory robotics), where teleoperation is the dominating mode of the control, such value should be somehow connected to the actions of the operator. However, the cognitive load of the operator overwhelmed by the number of controls of the teleoperation interface can negatively in�luence his effectiveness [13]. Instead of ”classic teleoperation,” immersive teleoperation (where the operator’s body behavior controls various parameters of the teleoperated robot) can lessen the cognitive load and be more ”intuitive.” Myosignals could be, in such a scenario, used for setting the stiffness parameters of the gripper. Myosignals have been already used to change the stiffness of multi-�inger grippers, to improve their robustness to crashes [3]. Also, such signals were used to change the impedance of an industrial manipulator in teleoperation mode [4]. They were also used for prosthetics control. Such prosthetics could have one or more degrees of freedom through using one or multiple myosignal probes [19]. Myosignals could trigger prede�ined motions [19] or, after calibration, be used to control the movement of each joint or closure of grip [17]. Wołczowski et al. lead a multi-year effort on creating a biosignal acquisition for the control of a multifunctional hand prosthesis, based on probes capable of both MMG and EMG measurement [32]. Various types of myosignals could be used, but electromyography is the most widely used, due to a long history and very advanced probing schemes achieved. It depends on measuring very small voltages; therefore, changes in sensor placement and skin impedance can lead to large differences in amplitude, shape, and signal delay [22]. Mechanomyography, which depends on measuring mechanical vibrations of muscles is less dependant on such issues [36]. While less prevalent than electromyography, there were several applications of mechanomyography probes for medical and rehabilitation uses [8]. As there are several challenges regarding how mechanomyographic signals could be acquired, there is no one standard method for signal acquisition and analysis. A review of mechanomyography sensor development is presented in [15, 20]. Signals were read using accelerometers, piezometers or microphones, or combinations of such. Similarly, a signal analysis could be done using multiple methods; in time or frequency domains. Our goal was to design an inexpensive but usable



setup, based on electret microphone connected to a standard PC for classifying hand stiffness based on the mechanomyographic signal. Mechanomyography has several features that make it an interesting alternative to electromyography. Mainly, the simplicity of setup where probes do not need to be as precisely placed as in electromyography (while still position and relative sensor orientation can have an impact as described in [18]), as well as, robustness to changes in air or skin moisture or occurrence of strong radio / EM signal [9]. This is because of the nature of low-frequency audio signals which are sensed in mechanomyography [36]. The design described in this paper is strongly in�luenced by the results of Silva et al. from PRISM laboratory, from their long-term project on using mechanomyography for prosthetics control [29]. The team created a two-sensor (accelerometer, microphone) per probe set in which accelerometer was used to estimate and possibly eliminate the in�luence of motion on the sensor readouts. Motion artifacts are important factors for mechanomyography, with a more signi�icant effect on accelerometer based MMG. Posatskiy and Chau presented such a comparative study of microphones and accelerometers and also found that for both types of sensors non-vanishing motion artifact harmonics were present [26]. Especially for telemanipulation, where the operator moves his limbs, such in�luence is important, and in this paper, we investigate the effect of dynamic motions on hand-stiffness classi�ication accuracy. The apparent ease of setup makes it convenient to use for a teleoperation system, as operators installing the probes would not need prolonged training. Also in hectic situations as the ones occurring in rescue situations, possible probe displacement should not result in misclassi�ications. In Section 2 we present the design of two types mechanomyographic probes. We describe their design goals and properties as well as fabrication details. In subsection 2.2 we describe how the data acquired and processed. In Section 3 we evaluate the machine learning algorithms for classifying hand stiffness based on the data acquired using the probes described in Section 2. Particularly, we present details of three experiments two of which evaluated the use of plastic and silicone probes and classi�ier robustness to sensor placement. In third, we evaluated multi-subject stiffness classi�ication using two-probe set. In Section 4 we present how the classi�ication pipeline can be integrated into a Robot Operating System network, particularly for the teleoperation using operator’s arm and hand movement. We �inish the paper with summary and conclusions.

2. Data Acquisition Using Mechanomyographic Probe

2.1. The Design of the Mechanomyographic Probe

For the acquisition of the mechanomyographic signal (MMG), we manufactured three sets of probes, each consisting of microphones in a harness.


(a) Draft design of the mechanomyographic probe

(b) Photograph of mechanomyographic probe

Fig. 1. Mechanomyographic probe

We used a capacitive, electret microphone, which did not require phantom power. The Marnsnaska DZ0289 microphones have a (declared) 2.2 kOhm impedance and a sensitivity of -52 dB, with a bandwidth of 30 Hz - 15 kHz. The lower value is declared by the manufacturer and comes from measuring the microphone in the hearing range; other works reported lower actual values [23]. Watakabe evaluated the properties of a condenser microphone (Panasonic/Matsushita WM-034B) attached to a cylinder with an air chamber, finding that with the correct combination of length and diameter of the cylinder, the cut-off frequency could be reduced to 2-4 Hz, for cylinder diameters over 10 mm [31]. Because an electret microphone is used, it can be directly connected to a PC's audio card (in our case, the built-in motherboard audio card, Asus Z97-K). To maximize the simplicity of the setup, we did not use additional amplifiers or preamplifiers. A dedicated Python application with a GUI recorded the audio signal (see Fig. 3). The first version of the mechanomyographic probe uses a 3D FDM-printed harness (Fig. 1a) (PLA material); in the second one, the harness is made from rubber silicone (DragonSkin, shore 30A), with an additional plastic harness for mounting the rubber part. The mold for the rubber silicone was also made using the FDM process, and its design is available on the project's website.
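Since the probe's microphone is read through an ordinary PC audio input, the recording step can be reproduced with PyAudio alone. The sketch below is a minimal illustration under assumed settings (default input device, 16384 samples per second, one 1 s window); it is not the project's GUI application itself.

```python
# A minimal sketch of recording the probe's microphone through the PC
# audio input with PyAudio; rate, window length and device are
# illustrative assumptions, not the exact settings of the project GUI.
import numpy as np
import pyaudio

RATE = 16384      # samples per second, as used in the paper
SECONDS = 1.0     # length of one recording window
CHUNK = 1024      # frames read per call

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=CHUNK)

frames = []
for _ in range(int(RATE / CHUNK * SECONDS)):
    frames.append(stream.read(CHUNK))

stream.stop_stream()
stream.close()
pa.terminate()

# Convert the raw bytes to a numpy array for further processing.
signal = np.frombuffer(b"".join(frames), dtype=np.int16)
print(signal.shape)
```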


(a) Silicone-rubber based mechanomyographic probe: microphone plastic mount on a silicone rubber base

(b) Photograph of silicone-based mechanomyographic probe

Fig. 2. Silicone-rubber based mechanomyographic probe


We designed the probe to have a cylinder-shaped air column between the microphone membrane and the skin. In the case of the plastic probe, the air column is closed by the seal formed by the skin, while in the case of the silicone-rubber probe, we closed the air column with an additional silicone membrane on the bottom of the device. Such a closed air column is needed to register the low frequencies of the MMG signal and to enable free movement of the microphone's membrane. The air column has a diameter of 13 mm and a height of 2 mm, which is a combination with a good signal-to-noise ratio [28]. The motivation behind designing two probes was to evaluate whether a simplistic, easy-to-make plastic probe could deliver adequate results. The main issue with the plastic probe is the lack of dampening: in the silicone probe, the relative softness of the silicone forms a low-pass filter, attenuating high frequencies, while the closed air column provides near-optimal signal amplification parameters [27]. The probes do not include an additional accelerometer as in [27]. The authors of [27] motivated the addition of such a device by the need to filter the influence of limb movement on the signal, but did not provide an algorithm to do so; such a setup would also require an additional real-time board capable of recording two streams of data in parallel, which would greatly complicate the setup. Both probes are mounted using elastic harnesses and placed over a selected muscle group. No further preparation is needed. Additionally, we designed a simple two-probe set, with two probes of the second version attached to a piece of cloth (linen), 13 cm apart (Fig. 4). Since the harness is printed from PLA, the material can easily be melted to weld it to the cloth. A rubber harness was attached between the two probes. The two-probe harness was designed to enable easy setup for the operator, as he/she only needs to slide the harness onto the forearm. However, due to the constant distance between the probes, there is a risk that the probes cannot be placed accurately over a particular muscle group. Cooperation of this setup with multiple users is investigated in one of the experiments. Even with two probes, they can still be connected to a single (stereo) audio input, either by soldering the GND lines together and soldering the positive rails to the respective contacts of a stereo audio jack, or by using a Y-type microphone adapter that combines the two microphones into the two channels of one input. Even with this setup, no additional circuitry is used.

Fig. 3. A graphical user interface for recording mechanomyography data


2.2. Data Acquisition and Preparation

The audio signal received by the probe's microphone is analyzed online using a dedicated Python program with the PyAudio, PyWavelets and SciPy libraries (the program is available in the project repository [35]). In the standard setting, the audio data are acquired at a 16 kHz frequency (16384 samples per second). We use either the Discrete Fourier Transform or Wavelet Packet Decomposition to form feature vectors for classification. As the interesting signal is in the range between 5 and 50 Hz [20], and to reduce the computational cost, we can downsample the signal. The signal is twice downsampled using the SciPy decimate function [2], which consists of low-pass filtering with an FIR filter with a Hamming window and decimation (i.e., keeping every 8th sample) down to the selected frequency (256 samples per second). On a computer with an i7-4790 processor, the computation time for decimating a 1 s one-channel recording registered at 16 kHz is around 750 µs.
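A minimal sketch of the two-stage decimation described above is shown below, assuming a 1 s window already captured at 16384 samples per second; the FIR option of SciPy's decimate corresponds to the low-pass filtering with a Hamming window mentioned in the text.

```python
# A sketch of the two-stage decimation: a 1 s window sampled at 16384 Hz
# is reduced to 256 samples/s by decimating twice by a factor of 8
# (FIR low-pass filtering, then keeping every 8th sample).
import numpy as np
from scipy.signal import decimate

fs = 16384
window = np.random.randn(fs)  # placeholder for one second of MMG audio

step1 = decimate(window, 8, ftype="fir")  # 16384 -> 2048 samples/s
step2 = decimate(step1, 8, ftype="fir")   # 2048 -> 256 samples/s
print(step2.shape)  # (256,)
```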



(a) An example of mechanomyographic signal for a relaxed hand

(b) An example of mechanomyographic signal for stiffened hand

Fig. 4. A two-probe set using the silicone probes

Fig. 5. Examples of mechanomyographic signal

In the case of using the magnitudes of the Fourier coefficients, calculating the coefficients for the original signal is faster than downsampling and calculating the coefficients from the downsampled signal (157 µs ± 1 µs vs 798 µs ± 4 µs). However, in the case of using features from the Wavelet Packet Decomposition, without downsampling a deeper level of decomposition would be needed (12th instead of 6th) to acquire the desired resolution of frequency sampling. For a 1 s recording, decimation and wavelet packet decomposition at the 6th level take around 1.8 ms ± 5.35 µs, compared to 76.3 ms ± 811 µs for a 12th-level wavelet packet decomposition. An example of a downsampled (decimated) signal for a relaxed and a stiffened hand is shown in Fig. 5a and Fig. 5b. For the experiments described below, a signal window consists of signals of varying length, from 0.25 to 1 s.

3. Evaluation of Machine Learning Algorithms for Classifying Hand Stiffness

The assumed use of the system is teleoperation. What is important in practice is that, during the interaction with the teleoperation system, an operator will typically grasp virtual (non-existing) objects, with some light feedback provided by a haptic feedback subsystem. Therefore, we can assume that the operator never produces a significant force in the arm that would not occur at the same time as a counter-force – there will not be a significant force output on the environment by the operator's hands.

In that case, a single mechanomyographic probe may be enough, while in the case of a hand coming into contact with the environment, the signals from pairs of muscle groups would need to be registered [14]. However, the operator will move the hand during teleoperation, so the system needs to recognize the stiffness even when the hand is moving. There are several issues in classifying hand stiffness using mechanomyography, all resulting from the multiple, non-stationary signal sources registered in the audio signal of the probe. There is stiffness information in the myographic signal [14], but it also carries information about the motion of the limb, which is difficult to decouple [27]. Also, the mechanomyographic signal is non-stationary – stiffening the hand results in a momentary peak in the signal (see the peak at 1 s in Fig. 5b), followed by a signal with slightly different (compared to a relaxed hand) amplitude and frequency. Muscle fatigue also influences the vibrations of the muscles. As the mechanomyographic signal is, in this case, de facto an audio signal, background noises (from, for example, movement of the probe cables or the operator's clothes) are also registered. Conversely, because the signal is registered by a microphone, motion artifacts have less influence on the recording than in the case of using an accelerometer. Also, although less than for the electromyographic signal, the positioning of the probe can influence the registered signal. We assume that the probe is mounted by the operator, who may not be a specialist in good signal acquisition or anatomy. Therefore, we cannot guarantee that the probe will always be put in the same place, only that it will be in some area around the particular muscle group.


All the previously described issues make simple signal filtering infeasible for obtaining the stiffness classification, and most previous approaches are data-driven and machine-learning based, where new data are classified using a model created from previously acquired and labeled data (supervised learning). Instead of calibrating the sensor, a machine learning algorithm captures the patterns in the data, facilitating classification.

3.1. Acquiring Data for Supervised Learning


Labeled data for supervised learning were acquired during two experiments using the previously described mechanomyographic probes. In both experiments, the probe was placed in 7 different parts of the arm, in an area of around 35 mm in diameter, over the flexor carpi radialis and flexor carpi ulnaris muscles. The exact placement of the probes is illustrated in Fig. 6. In both experiments, the operator was asked to move the hand using 20 gestures typical for normal hand movement, a list of which is presented in Tab. 1. The gestures were chosen so that there were both static poses and dynamic actions – possibly making it more difficult to classify the stiffness due to motion artifacts. We did not try to classify the gestures, but in the third experiment they were recorded to enable better interpretation of the results. Actions were performed twice, once with a relaxed hand and the second time with it stiffened. Five of the positions were used in training (1, 3, 4, 5, 6, see Fig. 6), while points 2 and 7 were used in tests. In all three experiments, the goal was to classify the mechanomyographic signal with a binary label – whether the hand was stiff or relaxed. Participants were instructed to stiffen their hand so that the fingers would not (significantly) move under an external force, and to have nearly no resistance to force (just compensating gravity) when keeping the hand relaxed. We did not measure the actual mechanical stiffness of the hand. In the first experiment, a plastic probe was used, and data were acquired with an effective acquisition frequency of 1024 samples per second and a frame length of 0.5 s. For each vector of samples, the magnitudes of the discrete Fourier coefficients were calculated and registered (a vector of length 512). The aim of the experiment was to assess the feasibility of using a simple sensor with a computationally fast data processing pipeline (even on microcontrollers or DSPs) to acquire a usable signal. In the second experiment, data were collected using the silicone-rubber probe, with the audio signal registered at a frequency of 16 kHz and labeled with a frequency of 2 Hz (i.e., every half-second). A single audio recording was registered for one action. Each hand gesture motion recording, for each person, was split into 0.5 s labeled recordings. In the first experiment, as the data were labeled every 500 ms, 3220 half-second training and 689 (around 20%) testing label-recording pairs were acquired, which adds up to about 30 minutes of the registered signal.
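The splitting of each labeled gesture recording into fixed half-second windows can be illustrated with the short sketch below; the recording length and label values are placeholders, not data from the experiments.

```python
# A sketch of splitting a labeled gesture recording into fixed 0.5 s
# windows, as described above; `recording` and `label` are placeholders.
import numpy as np

fs = 256                    # samples per second after decimation
window_len = int(0.5 * fs)  # 0.5 s windows

recording = np.random.randn(10 * fs)  # placeholder: one 10 s gesture recording
label = 1                              # 1 = stiff hand, 0 = relaxed hand

windows = [recording[i:i + window_len]
           for i in range(0, len(recording) - window_len + 1, window_len)]
labeled_pairs = [(w, label) for w in windows]
print(len(labeled_pairs))
```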


Tab. 1. Hand motions used in data acquisition for mechanomyography experiments

1. flat hand
2. flat hand rotated 90 degrees
3. closed fist
4. closed fist rotated 90 degrees
5. spherical grasp
6. 3-finger grip
7. wedge grip
8. pointing
9. 4 straightened fingers
10. 2 straightened fingers (freedom gesture)
11. a cylindrical grasp
12. radial palmar grasp
13. finger wave motion
14. tapping fingers
15. a "scissor" gesture
16. flexing the hand in the wrist
17. a hand lying on the table
18. grabbing a box (transverse volar grip)
19. grabbing a bottle (cylindrical palmar prehension)
20. using a keyboard

Fig. 6. Sensor placement during the experiments

In the second experiment, 6944 half-second training and 2962 (about 27%) testing label-recording pairs were acquired, adding up to around 80 minutes of signal.

3.2. Experiment 1

For the first experiment, three classes of algorithms were investigated: Support Vector Machines (SVM) with a Linear Kernel, Extreme Random Trees (Extra Trees), and Neural Networks. For the SVM and Extra Trees we used the Scikit-Learn implementations, and Keras for the Neural Networks [1, 25]. For the fully-connected Neural Network we used a three-layer network, with one hidden layer, batch normalization, and dropout during training, see Fig. 9. In all cases, a backpropagation learning algorithm was used. Similar algorithms were used in other papers for electro- or mechanomyography analysis [6]. The feature vector was constructed from the magnitudes of the Fourier coefficients corresponding to the 0-50 Hz frequency band. Also, a recursive feature elimination (RFE) method was used to select the subset of features with the highest importance. The method trains the classifier using an initial set of features, computes the feature importance (a ranking criterion) for the classifier, and removes the feature with the smallest ranking criterion.



The procedure is repeated until the desired (reduced) number of features is achieved [12]. For each classification method and each type of feature vector, hyperparameter optimization was used, as all the selected algorithms are highly dependent on the selected hyperparameters (the C parameter for the SVM with a linear kernel; the number of estimators, minimum sample split and maximum number of features for the extra random trees; the number of neurons per layer and the regularization parameters for the neural network). A grid search with cross-validation (5 splits) was used for the optimization. For each algorithm and each hyperparameter we created a list of values to evaluate, and then took the Cartesian product of these possible values. Then, for each hyperparameter combination, the machine learning algorithm was trained on 4/5ths of the training data and tested on the remaining 1/5th, repeating five times on different splits. Subsequently, the mean precision value from evaluating the trained models on the respective validation splits was used as the score for that hyperparameter combination. A model with the best parameters was then tested on the test set.
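As an illustration of the procedure described above, the sketch below combines RFE feature selection with a 5-fold grid search over the SVM regularization parameter in scikit-learn; the data shapes, the number of selected features and the parameter grid are assumptions, not the exact values used in the experiments.

```python
# A minimal sketch of RFE feature selection followed by a grid search
# with 5-fold cross-validation, assuming X_train/y_train already hold
# the FFT-magnitude features and the stiff/relaxed labels.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import GridSearchCV

# Placeholder data; in the experiments the vectors were magnitudes of
# FFT coefficients in the 0-50 Hz band.
X_train = np.random.rand(3220, 51)
y_train = np.random.randint(0, 2, 3220)

# RFE with a linear SVM as the ranking estimator: repeatedly drop the
# feature with the smallest weight until 20 features remain.
selector = RFE(estimator=SVC(kernel="linear", C=1.0), n_features_to_select=20)
X_sel = selector.fit_transform(X_train, y_train)

# Grid search with 5-fold cross-validation over the SVM C parameter,
# using mean precision as the score.
grid = GridSearchCV(
    estimator=SVC(kernel="linear"),
    param_grid={"C": [0.1, 1, 10, 100]},
    scoring="precision",
    cv=5,
)
grid.fit(X_sel, y_train)
print(grid.best_params_, grid.best_score_)
```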


3.3. Experiment 2

In the second experiment, in addition to the same features as in Experiment 1 (constructed from the magnitudes of the FFT coefficients and reduced using RFE feature selection), we evaluated other signal transforms. Namely, we used spectral analysis based on Wavelet Packet decomposition, and we used the decimated signal directly in the time domain. The spectral analysis and the direct method were motivated by the previously reported non-stationarity of the signal [7]. For the analysis using Wavelet Packets, we used the methodology described in [30] for the analysis of EEG signals, with the same parameters, that is, the Daubechies 4 (db4) wavelet family and the 6th level of wavelet packet decomposition. For the calculations, the PyWavelets library for Python was used [21]. Wavelet packet decomposition, for a signal of length 256 and the 6th level of decomposition, produces 64 vectors of length 7. Sorted by frequency, the first 25 of these represent frequencies between 0 and 50 Hz. Several combinations of features were investigated, each set forming a feature vector; the same types of feature vectors were also selected for a shorter, 0.25 s time window:

- a feature vector constructed from a concatenation of the first 25 (frequency-sorted) vectors from the 6th level of Wavelet Packet decomposition (length 25*7) – all decomposition coefficients,

- a feature vector constructed from a concatenation of a vector whose elements are the means of the elements of the Wavelet Packet decomposition vectors – the average coefficients from [30] – with a vector whose elements are the norms of the decomposition vectors; the first 25 vectors were used (length 50) – the subband energy vector [30],

- a subset of features constructed from the concatenation of the subband energy vector and the average coefficients, reduced by Recursive Feature Elimination (RFE); the scikit-learn implementation of the RFE algorithm was used [25].

Additionally, other neural network architectures were used, namely convolutional (see Fig. 10b) and recurrent (see Fig. 10c) neural networks, which required different input data formats:

- in the case of the convolutional neural networks, the input was a matrix (64x7) in which each row was one of the vectors of the Wavelet Packet decomposition, forming an "image" of the spectrogram (see Fig. 7) – the matrix of decomposition coefficients. This approach is similar to the representation used by Wołczowski and Zdunek in [33] – each measurement can be represented as a 3D tensor y ∈ R^(I1×I2×I3), where I1 is the scale dimensionality, I2 is the time-shift dimensionality, and I3 is the number of sensors (1 or 2),

- for the recurrent network, a raw decimated time series vector of length 256 was used.
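A minimal sketch of computing the wavelet-packet features listed above with PyWavelets is given below; the exact coefficient-vector lengths depend on the signal length and padding mode, so the dimensions here are illustrative.

```python
# A sketch of wavelet packet feature extraction (db4, 6th level),
# assuming `signal` is one decimated 1 s MMG window (256 samples).
import numpy as np
import pywt

signal = np.random.randn(256)  # placeholder decimated MMG window

wp = pywt.WaveletPacket(data=signal, wavelet="db4", maxlevel=6)
# Terminal nodes of the 6th level, ordered by frequency (64 nodes).
nodes = wp.get_level(6, order="freq")
coeffs = [np.asarray(node.data) for node in nodes]

# Keep the first 25 frequency-ordered vectors (roughly the 0-50 Hz band).
low_band = coeffs[:25]

all_coeffs = np.concatenate(low_band)                            # "all decomposition coefficients"
avg_coeffs = np.array([c.mean() for c in low_band])              # "average coefficients"
subband_energy = np.array([np.linalg.norm(c) for c in low_band]) # "subband energy vector"

feature_vector = np.concatenate([avg_coeffs, subband_energy])    # length 50
print(feature_vector.shape)
```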

(a) Relaxed hand

(b) Stiffened hand

Fig. 7. Image of the matrix made from the vectors of the Wavelet Packet Decomposition. The corresponding time series are presented in Fig. 5

Similarly to the first experiment, for each method and feature set combination, a hyperparameter optimization based on cross-validation (5-fold) was used. The model with the best set of hyperparameters was evaluated on the test set. Also, for all algorithms the speed of execution was measured. This was done by classifying a random vector with the selected method, using the timeit Python function, which repeats the execution of the function several times to estimate the mean execution time and its standard deviation.
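For the convolutional network operating on the 64x7 coefficient matrix, a Keras model along the following lines could be used; the exact architecture employed in the paper is the one shown in Fig. 10b, so the layer sizes below are illustrative assumptions.

```python
# An illustrative Keras model for the "spectrogram image" input
# (64x7 matrix of wavelet packet coefficients, one channel).
# The exact architecture used in the paper is shown in Fig. 10b.
from keras.models import Sequential
from keras.layers import (Conv2D, MaxPooling2D, Flatten, Dense,
                          BatchNormalization, Dropout)

model = Sequential([
    Conv2D(16, (3, 3), padding="same", activation="relu", input_shape=(64, 7, 1)),
    BatchNormalization(),
    MaxPooling2D(pool_size=(2, 1)),
    Conv2D(16, (3, 3), padding="same", activation="relu"),
    Flatten(),
    Dense(32, activation="relu"),
    Dropout(0.5),
    Dense(1, activation="sigmoid"),  # binary output: stiff vs relaxed
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```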

3.4. Multi-Subject Stiffness Classification Using the Two-Probe Set

In this experiment, subjects were equipped with two probes using the two-probe set (see Fig. 4). Seven healthy subjects (4 male, 3 female, 5 right- and 2 left-handed, age 22-33) performed an experimental protocol.


Fig. 8. A two-probe set placement for the multi-participant experiment

For all subjects, the right hand was used, and there were no changes to the setup of the probe. The whole procedure lasted between 10 and 30 minutes per participant. All subjects participated voluntarily and gave written consent to the procedures. The experimenters placed the probes over the flexor digitorum superficialis and extensor digitorum muscles (if this was not possible due to differences in the subject's forearm diameter, the probes were placed so they would be on the two sides of the brachioradialis muscle), see Fig. 8. All subjects repeated the same set of actions as in the previous experiments (see Tab. 1), which were recorded. Overall, 664 gesture recordings were collected, consisting of 7445 seconds of data. The data were processed through the same pipeline as in the previous experiments, separately for both channels (i.e., low-pass filtering and decimation, and 6th-level wavelet packet features). Two machine learning architectures – convolutional neural networks and the extra trees classifier – were used. For each participant, five conditions were evaluated. In the first, learning was done using single-channel recordings of the other participants, and the test was done on this participant. In the second, two channels were used. In the third, two channels were also used, but for each participant the data were scaled using the Scikit-Learn robust scaler, i.e., the median was removed and the data were scaled to the interquartile (1st to 3rd quartile) range per channel. In the fourth condition, only same-subject recordings were used, with 5-fold data splitting, i.e., the data were split 5 times into test-train sets, and the result is the mean precision value. In the fifth condition, only same-subject recordings were used (with 5-fold data splitting), but with only one mechanomyography channel (the sensor on the anterior side of the arm).
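The per-channel scaling used in the third condition can be expressed directly with scikit-learn's RobustScaler, as in the sketch below; the array shapes are placeholders.

```python
# A sketch of the per-channel robust scaling used in the third condition:
# the median is removed and values are scaled to the interquartile range.
import numpy as np
from sklearn.preprocessing import RobustScaler

# Placeholder: feature matrices for the two MMG channels of one participant.
channel_a = np.random.randn(200, 50)
channel_b = np.random.randn(200, 50)

scaler_a = RobustScaler(quantile_range=(25.0, 75.0))
scaler_b = RobustScaler(quantile_range=(25.0, 75.0))

scaled = np.hstack([
    scaler_a.fit_transform(channel_a),
    scaler_b.fit_transform(channel_b),
])
print(scaled.shape)
```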


3.5. Results and Discussion

The results of Experiment 1 (plastic probe) and the comparison to the results for the same features but for the silicone probe (from Experiment 2) are presented in Table 2. The rest of the results for Experiment 2 (single silicone probe, features constructed from the Wavelet Packet Decomposition or the raw time vector) are presented in Table 3. The results of the third experiment (multi-participant experiment) are presented in Fig. 11. Table 2 shows that nearly every method gave significantly better results for the silicone probe than for the plastic one.


Still, an 89% classification accuracy can be achieved with a simple plastic probe, using Support Vector Classification (with a Linear Kernel) with a feature vector consisting of all magnitudes of the FFT coefficients. This feature set is also the best for the silicone probe, with 91% accuracy. Importantly, this method is much faster than the Extra Trees Classifier and the Neural Network, with the time for a single classification being around 30 µs for the SVM and around 2 ms for the Extra Trees Classifier and the Neural Network. The preparation of the signal, that is, the calculation of the magnitudes of the Fourier coefficients, takes approximately 916 ± 5 ns (for a sampling frequency of 1 kHz). We also assume that the 94% classification result for the Extra Trees Classifier is probably erroneously good, as the mean result on cross-validated data was only 74%. Table 3 shows that the overall best result is achieved when a convolutional neural network is used on a 1 s time frame, with each wavelet packet decomposition vector forming one row of the input matrix, achieving a classification result of 94.5%. Using a longer time frame gives better results, but for a 0.25 s time frame, 90% accuracy is still possible. Interestingly, using a raw time frame and a recurrent neural network results in a classification accuracy of 89%, but with significantly longer calculation times – 18 ms, compared with classification times of less than 2 ms and transformation times (for signal decimation and wavelet packet decomposition) of 4 ms. In the third experiment, models based on the two-probe input had better results (i.e., two-channel cross-participant results were better than one-channel cross-participant results, and two-channel same-subject results were better than one-channel same-subject results), but the multi-person experiment with the two-probe set also showed significant differences in results between participants. The best results were achieved when models were trained using same-subject data, even when using only one of the channels. This is similar to the results of other groups. In particular, Youn and Kim showed that, for estimating elbow flexion forces from the MMG signal, same-subject validation tests were significantly better than cross-subject validation tests [34]. Per-participant data normalization did not significantly improve the cross-participant results. The comparatively lower performance in all these cases can be explained by the fact that the current design of the two-probe harness does not allow accurate placement of the two probes depending on the forearm size, and that the participants were not trained. Also, there were fewer recordings per participant than in the previous experiments, which can explain the worse performance of the neural-network-based algorithms, which tend to require large amounts of data and overfit otherwise.

4. Integration with a ROS-based System


Fig. 9. Fully-connected Artificial Neural Network architecture used in Experiment 1

Fig. 10. Artificial Neural Network architectures used in the Mechanomyography Experiment 2: (a) Fully-connected Artificial Neural Network, (b) Convolutional Neural Network, (c) Recurrent Neural Network (with Long Short-Term Memory unit [11])


The mechanomyography-based stiffness classification device can be integrated with a robotic system using ROS (the Robot Operating System). The program performs online data processing and classification, the results of which are available as ROS topics. The ROS node publishes the recognized category, the estimated probability of the classification (if applicable: for the SVM with Linear Kernel and the Neural Network), and a decomposed (wavelet packet, FFT) or raw signal. In the actual system, the short 0.25 s time frame is used. The proposed integration as part of a teleoperation system is illustrated in Fig. 12. The operator's hand pose and position are tracked by a vision system and a sensor glove. From these data, the desired gripper pose is calculated. Simultaneously, the mechanomyographic probe, placed on the operator's arm, measures the muscle signals. The data processing pipeline described in the previous sections is used to classify the stiffness. The stiffness class is then used by the gripper controller, working in a stiffness control mode, to switch the stiffness to either hard or soft. After calibration, we plan to use the probability of the label to further adjust the stiffness of the gripper. Also, other ROS nodes can subscribe to the stiffness, the probability of the label, as well as the raw or processed data topics.

Fig. 12. Proposed integration of the stiffness estimation node in the teleoperation system
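A minimal sketch of how such a node could publish its outputs with rospy is shown below; the topic names, message types and publication rate are illustrative assumptions rather than the exact interface of the released package [35].

```python
#!/usr/bin/env python
# A sketch of a ROS node publishing the stiffness classification results.
# Topic names and message types are illustrative assumptions.
import rospy
from std_msgs.msg import Bool, Float32

def main():
    rospy.init_node("mmg_stiffness_classifier")
    stiffness_pub = rospy.Publisher("hand_stiffness", Bool, queue_size=10)
    probability_pub = rospy.Publisher("stiffness_probability", Float32, queue_size=10)

    rate = rospy.Rate(4)  # 4 Hz, matching the 0.25 s classification window
    while not rospy.is_shutdown():
        # In the real pipeline these values come from the classifier;
        # here they are placeholders.
        is_stiff, probability = False, 0.5
        stiffness_pub.publish(Bool(data=is_stiff))
        probability_pub.publish(Float32(data=probability))
        rate.sleep()

if __name__ == "__main__":
    main()
```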


Fig. 11. Comparison of precision results for multiple participants while training on different datasets

5. Conclusions

In this paper, we presented a concept of a simple and inexpensive device to measure the mechanomyographic signal for estimating the hand's stiffness using machine learning methods. The device, with the best classification method, was robust to probe displacement, showing over 90% classification accuracy at a 4 Hz rate and for a 15 mm displacement of the probe. The system is capable of achieving 94% accuracy if the algorithms run on a graphics card at a 1 Hz classification frequency and when calibrated for a particular person. The results achieved are comparable to those of other authors, with 90% achieved for mechanomyography (1 s acquisition window) [15] and 95% for electromyography [24]. The prototyping of the device can be extremely low-cost, with a material cost of around 10 PLN (less than 3 USD) for plastic probes and 20 PLN (5 USD) for silicone probes, and requires only a simple FDM 3D printer. The described device is limited by its use of only one or two channels, but the signal acquisition is very simple and can run on a standard PC with a built-in audio card. We put the software code and CAD files for the devices in an open-source repository [35]. The device software is prepared as a ROS package to enable re-use and integration with ROS-based systems. As further steps, we plan to use the mechanomyographic signals to enable regression of stiffness, similarly as Höppner et al. did for the electromyographic signal [14]. Also, we plan to improve the two-probe setup to enable a better fit to a particular muscle-group pair. This, however, needs to be followed by an easy-to-use protocol for teleoperators so they can fix the sensor themselves.

ACKNOWLEDGEMENTS

The authors would like to thank Malgorzata Pruszkiewicz and Hubert Kowalczyk for their help in data collection.

AUTHORS

Igor Zubrycki∗ – Lodz University of Technology, Stefanowskiego 18/22, 90-924 Lodz, e-mail: igor.zubrycki@p.lodz.pl, www: www.robotyka.p.lodz.pl. Grzegorz Granosik – Lodz University of Technology, Stefanowskiego 18/22, 90-924 Lodz, e-mail: granosik@p.lodz.pl, www: www.robotyka.p.lodz.pl.

∗ Corresponding author

REFERENCES





[1] "Home – Keras documentation". https://keras.io, 2015.

[2] "scipy.signal.decimate — SciPy v1.1.0 Reference Guide". https://docs.scipy.org/doc/scipy-1.1.0/reference/generated/scipy.signal.decimate.html, 2019.

[3] A. Ajoudani, S. B. Godfrey, M. Bianchi, M. G. Catalano, G. Grioli, N. Tsagarakis, and A. Bicchi, "Exploring Teleimpedance and Tactile Feedback for Intuitive Control of the Pisa/IIT SoftHand", IEEE Transactions on Haptics, vol. 7, no. 2, 2014, 203–215, 10.1109/TOH.2014.2309142.

[4] A. Ajoudani, N. G. Tsagarakis, and A. Bicchi, "Tele-Impedance: Preliminary results on measuring and replicating human arm impedance in tele operated robots". In: 2011 IEEE International Conference on Robotics and Biomimetics, 2011, 216–222, 10.1109/ROBIO.2011.6181288.

[5] L. A. Al Abeach, S. Nefti-Meziani, and S. Davis, “Design of a Variable Stiffness Soft Dexterous Gripper”, Soft Robotics, vol. 4, no. 3, 2017, 274– 284, 10.1089/soro.2016.0044.



Tab. 2. Classification results for Experiment 1. Results for probe one (plastic probe); in brackets, results for the second probe (silicone-rubber based)

- SVM with Linear Kernel, RFE-selected features from magnitudes of FFT coefficients: C=1 (C=0.1), accuracy 0.77 (0.88), ex. time 31.3 ±3.09 µs
- SVM with Linear Kernel, all (0-50 Hz) magnitudes of FFT coefficients: C=10 (C=0.1), accuracy 0.8996 (0.91), ex. time 30.2 ±2.02 µs
- Extra Trees Classifier, RFE-selected features from magnitudes of FFT coefficients: n of estimators 40, min samples split 5, max features 5 (n of estimators 40, min samples split 5, max features 6), accuracy 0.81 (0.88), ex. time 2.29 ms ±8.06 µs
- Extra Trees Classifier, all (0-50 Hz) magnitudes of FFT coefficients: n of estimators 40, min samples split 5, max features 16 (n of estimators 40, min samples split 2, max features 16), accuracy 0.94 (0.88), ex. time 2.29 ms ±11.6 µs
- Fully-connected Artificial Neural Network, RFE-selected features: 3 layers, 10 neurons per layer, accuracy 0.69 (0.81), ex. time 2.11 ms ±13.2 µs
- Fully-connected Artificial Neural Network, all (0-50 Hz) magnitudes of FFT coefficients: 3 layers, 10 neurons per layer, accuracy 0.73 (0.85), ex. time 2.14 ms ±37.7 µs

(The mean cross-validated accuracy for the Extra Trees Classifier was only 0.74; see the discussion in Section 3.5.)

Tab. 3. Results of classification for Experiment 2. Results for a 1 s timeframe and, in brackets, results for a timeframe of 0.25 s

- SVM with Linear Kernel, RFE-selected features: C=10 (C=10), accuracy 0.901 (0.877), ex. time 33.2 ±1 µs
- SVM with Linear Kernel, average of coefficients + subband energy vector: C=10 (C=1), accuracy 0.8996 (0.86), ex. time 29.2 ±2.65 µs
- SVM with Linear Kernel, all decomposition coefficients: C=10 (C=100), accuracy 0.56 (0.59), ex. time 27.2 ±2.0 µs
- Extra Trees Classifier, RFE-selected features: n of estimators 40, min samples split 5, max features 5 (n of estimators 40, max features 5, min samples split 10), accuracy 0.901 (0.88), ex. time 2.2 ms ±0.03 ms
- Extra Trees Classifier, average coefficients + subband energy vector: n of estimators 40, min samples split 5, max features 7 (n of estimators 40, max features 7, min samples split 10), accuracy 0.904 (0.88), ex. time 2.17 ms ±0.02 ms
- Extra Trees Classifier, all decomposition coefficients: n of estimators 40, min samples split 5, max features 12 (n of estimators 40, max features 12, min samples split 5), accuracy 0.88 (0.85), ex. time 2.23 ms ±0.01 ms
- Fully-connected ANN, RFE-selected features: accuracy 0.91 (0.89), ex. time 1.87 ms ±0.3 ms
- Fully-connected ANN, average of coefficients + subband energy vector: accuracy 0.914 (0.89), ex. time 1.9 ms ±0.3 ms
- Fully-connected ANN, all decomposition coefficients: accuracy 0.88 (0.88), ex. time 1.81 ms ±0.1 ms
- Convolutional NN, matrix of decomposition coefficients: see Fig. 10b, accuracy 0.945 (0.90), ex. time 1.54 ms ±0.2 ms
- Recurrent NN, downsampled (256/s) time signal: see Fig. 10c, accuracy 0.892 (0.88), ex. time 18.2 ms ±3.9 ms

[6] A. H. Al-Timemy, G. Bugmann, J. Escudero, and N. Outram, "Classification of Finger Movements for the Dexterous Hand Prosthesis Control With Surface Electromyography", IEEE Journal of Biomedical and Health Informatics, vol. 17, no. 3, 2013, 608–618, 10.1109/JBHI.2013.2249590.

[7] T. W. Beck, J. M. DeFreitas, M. S. Stock, and


M. A. Dillon, "An examination of mechanomyographic signal stationarity during concentric isokinetic, eccentric isokinetic and isometric muscle actions", Physiological Measurement, vol. 31, no. 3, 2010, 339–361, 10.1088/0967-3334/31/3/005.

[8] T. W. Beck, T. J. Housh, J. T. Cramer, J. P. Weir, G. O.



Johnson, J. W. Coburn, M. H. Malek, and M. Mielke, "Mechanomyographic amplitude and frequency responses during dynamic muscle actions: a comprehensive review", BioMedical Engineering OnLine, vol. 4, no. 1, 2005, 67, 10.1186/1475-925X-4-67.

[9] S. Day, “Important Factors in surface EMG measurement”, Bortec Biomedical Ltd publishers, 2002, 1–17.

[10] D. W. Franklin, G. Liaw, T. E. Milner, R. Osu, E. Burdet, and M. Kawato, “Endpoint Stiffness of the Arm Is Directionally Tuned to Instability in the Environment”, Journal of Neuroscience, vol. 27, no. 29, 2007, 7705–7716, 10.1523/JNEUROSCI.0968-07.2007. [11] F. A. Gers, J. Schmidhuber, and F. Cummins, “Learning to Forget: Continual Prediction with LSTM”, Neural Computation, vol. 12, no. 10, 2000, 2451– 2471, 10.1162/089976600300015015.

[12] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, "Gene Selection for Cancer Classification using Support Vector Machines", Machine Learning, vol. 46, no. 1, 2002, 389–422, 10.1023/A:1012487302797.

[13] S. G. Hart and L. E. Staveland. "Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research". In: P. A. Hancock and N. Meshkati, eds., Advances in Psychology, volume 52 of Human Mental Workload, 139–183. North-Holland, January 1988.


[14] H. Höppner, M. Große-Dunker, G. Stillfried, J. Bayer, and P. van der Smagt, "Key Insights into

Hand Biomechanics: Human Grip Stiffness Can Be Decoupled from Force by Cocontraction and Predicted from Electromyography”, Frontiers in Neurorobotics, vol. 11, 2017, 17, 10.3389/fnbot.2017.00017.

[15] M. A. Islam, K. Sundaraj, R. B. Ahmad, N. U. Ahamed, and M. A. Ali, “Mechanomyography Sensor Development, Related Signal Processing, and Applications: A Systematic Review”, IEEE Sensors Journal, vol. 13, no. 7, 2013, 2499–2516, 10.1109/JSEN.2013.2255982.

[16] E. Jezierski, "Low cost impedance controller for robotic gripper drive with DC motor". In: 2015 20th International Conference on Methods and Models in Automation and Robotics (MMAR), 2015, 806–811, 10.1109/MMAR.2015.7283979.

[17] N. Jiang, K. B. Englehart, and P. A. Parker, "Extracting Simultaneous and Proportional Neural Control Information for Multiple-DOF Prostheses From the Surface Electromyographic Signal", IEEE Transactions on Biomedical Engineering, vol. 56, no. 4, 2009, 1070–1080, 10.1109/TBME.2008.2007967.

[18] P. Kaczmarek, T. Mańkowski, and J. Tomczyński, "Towards sensor position-invariant hand gesture recognition using a mechanomyographic interface". In: 2017 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), 2017, 53–58, 10.23919/SPA.2017.8166837.

[19] K. Kiguchi and Y. Hayashi, “An EMG-Based Control for an Upper-Limb Power-Assist



Exoskeleton Robot”, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 42, no. 4, 2012, 1064–1071, 10.1109/TSMCB.2012.2185843.

[20] E. Krueger, E. M. Scheeren, G. N. Nogueira-Neto, V. L. da Silveira Nantes Button, and P. Nohama, “Advances and perspectives of mechanomyography”, Revista Brasileira de Engenharia Biomédica, vol. 30, no. 4, 2014, 384–401, 10.1590/1517-3151.0541.


[21] G. Lee, F. Wasilewski, R. Gommers, K. Wohlfahrt, A. O'Leary, and H. Nahrstaedt. "PyWavelets - wavelet transforms in Python", 2006.

[22] R. Lopez and T. C. Davies, "The effect of surface electromyography placement on muscle activation amplitudes and timing". In: 2016 IEEE EMBS International Student Conference (ISC), 2016, 1–4, 10.1109/EMBSISC.2016.7508618.

[23] M. Ma. MMG sensor for muscle activity detection: low cost design, implementation and experimentation: a thesis presented in fulfilment of the requirements for the degree of Masters of Engineering in Mechatronics, Massey University, Auckland, New Zealand. Thesis, Massey University, 2010.

[24] M. A. Oskoei and H. Hu, "Evaluation of Support Vector Machines in Upper Limb Motion Classification Using Myoelectric Signal". In: 14th International Conference on Biomedical Engineering: ICBME 2008, 2008.

[25] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, "Scikit-learn: Machine learning in Python", Journal of Machine Learning Research, vol. 12, 2011, 2825–2830.

[26] A. O. Posatskiy and T. Chau, "The effects of motion artifact on mechanomyography: A comparative study of microphones and accelerometers", Journal of Electromyography and Kinesiology, vol. 22, no. 2, 2012, 320–324, 10.1016/j.jelekin.2011.09.004.

[27] J. Silva and T. Chau, "Coupled microphone-accelerometer sensor pair for dynamic noise reduction in MMG signal recording", Electronics Letters, vol. 39, no. 21, 2003, 1496, 10.1049/el:20031003.

[28] J. Silva, T. Chau, S. Naumann, W. Helm, and A. A. Goldenberg, "Optimization of the signal-to-noise ratio of silicon-embedded microphones for mechanomyography". In: CCECE 2003 - Canadian Conference on Electrical and Computer Engineering. Toward a Caring and Humane Technology, vol. 3, 2003, 1493–1496, 10.1109/CCECE.2003.1226187.

[29] J. Silva, W. Heim, and T. Chau, "A Self-Contained, Mechanomyography-Driven Externally Powered Prosthesis", Archives of Physical Medicine and Rehabilitation, vol. 86, no. 10, 2005, 2066–2070, 10.1016/j.apmr.2005.03.034.

[30] W. Ting, Y. Guo-zheng, Y. Bang-hua, and S. Hong, "EEG feature extraction based on wavelet packet decomposition for brain computer interface", Measurement, vol. 41, no. 6, 2008, 618–625, 10.1016/j.measurement.2007.07.007.

[31] M. Watakabe, K. Mita, K. Akataki, and Y. Itoh, "Mechanical behaviour of condenser microphone in mechanomyography", Medical and Biological Engineering and Computing, vol. 39, no. 2, 2001, 195–201, 10.1007/BF02344804.

[32] A. Wołczowski, M. Błędowski, and J. Witkowski, "The System for EMG and MMG Signals Recording for the Bioprosthetic Hand Control", Journal of Automation, Mobile Robotics and Intelligent Systems, vol. 11, no. 3, 2017, 22–29, 10.14313/JAMRIS_3-2017/25.

[33] A. Wołczowski and R. Zdunek, "Electromyography and mechanomyography signal recognition: Experimental analysis using multi-way array decomposition methods", Biocybernetics and Biomedical Engineering, vol. 37, no. 1, 2017, 103–113, 10.1016/j.bbe.2016.09.004.

[34] W. Youn and J. Kim, "Feasibility of using an artificial neural network model to estimate the elbow flexion force from mechanomyography", Journal of Neuroscience Methods, vol. 194, no. 2, 2011, 386–393, 10.1016/j.jneumeth.2010.11.003.

[35] I. Zubrycki. "A ROS node for aquisition, learning and sharing mechanomyografy data from audio signal: AdoHaha/mechanomiography_node", April 2018.

[36] J. M. Zuniga, T. J. Housh, C. L. Camic, C. Russell Hendrix, H. C. Bergstrom, R. J. Schmidt, and G. O. Johnson, "The effects of skinfold thicknesses and innervation zone on the mechanomyographic signal during cycle ergometry", Journal of Electromyography and Kinesiology, vol. 21, no. 5, 2011, 789–794, 10.1016/j.jelekin.2011.05.009.




Determination of the Relationship Between EMG Signals and Hand Grips Using a Commercial Myo Armband

Submitted: 31st January 2019; accepted: 30th April 2019

Michał Błędowski, Andrzej Wołczowski

DOI: 10.14313/JAMRIS/2-2019/17

Abstract: The work discusses the construction of a measurement system for determining the relationship between EMG signals and hand grip movements. This relationship is necessary for the synthesis of the control of a hand bioprosthesis. The measurement system is based on the commercial Myo armband with EMG signal sensors and a sensory glove with bend and pressure sensors. The possibilities, advantages and disadvantages of such an approach are presented.

Keywords: EMG signals, electromyography, pattern recognition, hand bioprosthesis control

1. Introduction


The progress of medicine is inseparably linked with the development of technology, especially in such fields as mechatronics, computer science and robotics. A good example is advanced hand bioprostheses, whose control is based on recognizing the patient's intent of guiding the prosthesis by registering biosignals from his limb stump and recognizing the patterns contained in them. The result of such recognition is the decision controlling the movement of the bioprosthesis. The idea of such control is illustrated in Figure 2, included together with its description in Chapter 2. The basis for the synthesis of the recognition system is the knowledge of the relationship between the measured biosignals and the patient's intention regarding the movement of the prosthesis. Such a relationship can be determined (with good approximation) on the basis of the examination of a healthy person, by recording the biosignals generated in the muscles of the forearm during the performance of specific manipulative-grasping movements (movements from specific classes). The desired relationship can then be expressed by labeling the recorded biosignals with the classes of the performed grips (types of finger movements) and their parameters (movement speed, strength, and points of contact of the object with the hand). The labeling can be performed automatically by the measuring system, based on information on the type of motion provided by the vision system or sensory glove, or by the operator participating in the experiment [12, 13]. The paper presents the latter approach. The quality of bioprosthesis control is characterized essentially by two parameters:

- the size of the repertoire of manipulative-grasping movements, and
- the reliability of recognition of the intentions of these movements' realization.

Due to the disturbances accompanying the process of biosignal registration, which reduce the information contained in the biosignals, the recognition error is usually greater than zero. Additionally, this error naturally increases with the number of prosthesis movements. Hence, when determining the relationship between the biosignals and the intent of prosthesis movement (and also later, while controlling), the key issue is the quality of biosignal registration, which depends directly on the measuring system and the registration procedure applied. The paper presents the course and the results of measurement experiments conducted with the commercial measuring band called Myo (equipped with electromyographic sensors [6, 7, 15] and an accelerometer) and the sensory glove developed by the authors (equipped with bend and pressure sensors, Fig. 5). As the quality criterion of the obtained measurement data, the level of reliability of class recognition of the recorded signals has been adopted (or, alternatively, the recognition error). To verify the quality of the acquired data, tests to recognize the recorded signals were carried out. These tests, together with the applied algorithms for feature extraction and classification, are described in Chapter 4. The quality of the recorded signals was compared with the Biosignal Measurement System developed in the Mobile Robots & BioControl Laboratory, Faculty of Electronics, Wroclaw University of Science and Technology [14]. The comparison was made on the basis of the signal recognition results.

Fig. 1. The set of gestures proposed by the Myo armband's manufacturer [1]

In contrast to similar studies using the Myo armband [3, 5, 8], the authors introduce a new element, i.e. the registration of hand and object interaction based on the use of pressure sensors to identify the moment of contact with the object being grabbed, as well



as the measurement of the existing pressure forces (end of the motion). Also, the repertoire of 11 gripping movements chosen for the study definitely exceeds the modest set of movement classes proposed by the manufacturer of the measuring band (Fig. 1) [1]. The adopted repertoire of movements is described in Chapter 4.

2. Recognition Process with Feedback

As it has been mentioned, registered bio-signals, after recognition of the patterns contained in them, can be used to control a machine, for example, a biomanipulator or a bio-prosthesis of the hand. The scheme of the recognition process is shown in Figure 2. It includes the following stages:

signal acquisition – The process of collecting the raw measurement data and saving them in digital form.

The information obtained at this stage may be redundant and contain interference. In a multi-channel measuring system, an improper location of the measuring electrodes may lead to the repetition of some information in different signals. In turn, the most common sources of interference are, among others, the intrinsic noise of the measuring electronics and overlapping external noise from power devices. These disturbances can be substantially reduced by constructing the measuring circuits appropriately (differential measurement, a measuring amplifier directly at the electrodes, etc.). Another source of noise, at the stage of determining the relationship between the biosignals and the intention of prosthesis movement, is the poor repeatability of the movements performed (for each class). This source of interference can be significantly reduced by applying an appropriate registration procedure.

extraction – The measurement data processing stage. The registered signal is subjected to the process of feature extraction.

By means of appropriate extraction algorithms, the features characterizing the information in the signal that is of interest for the recognition process are determined. Extraction algorithms can operate in the time domain (determining such parameters as rms, mav, the number of zero crossings, etc.) [2], in the frequency domain (Fourier Transform, STFT, etc.) [9, 10], or in the time-frequency domain (for example, the Wavelet Transform [4]); features obtained by different methods can also be combined (a short illustrative sketch of such time-domain features is given after this list of stages). After combining the features of the signals from all measuring channels (for a single measurement), a feature vector is created that uniquely characterizes a given class of signals.

selection – Stage of selecting a set of features best describing a given movement.

VOLUME 2019 VOLUME 13,13, N°N°2 2 2019

At this stage, redundant data as well as data related to interference are deleted. This is done by experimentally removing superfluous features from the feature vector or by manipulating the components of the vector to minimize the intra-class dispersion and at the same time maximize the inter-class dispersion. Examples of algorithms are SBS and SFS [12].

classification – The last step of the classic recognition scheme. At this stage, the feature vector representing the recognized signal is assigned to the best matching class. Examples of algorithms are neural networks (NN) [16], decision trees [11], or the K nearest neighbors (KNN) algorithm applied in this work [17].

control command – The step of translating the classi�ication result into a control signal.

The recognized signal class can be treated as a signal initiating a speci�ic movement of the prosthesis.

feedback – An additional step in the classic recognition process.

It provides information about the course of the signal generation process. In the case under consideration, it is a sensory signal from the measuring glove informing about hand posture changes (bend sensors) and pressure forces of individual hand elements (resistive sensors).

Bio-signal recognition is a flexible process in terms of the type of data. It is possible to use all the biological signals mentioned at the outset after bringing them to a digital representation in the form of a feature vector of finite size [2].
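As an illustration of the extraction and classification stages described above, the sketch below computes simple time-domain features (rms, mav, zero crossings) per channel and feeds them to a KNN classifier with k = 3, as used later in this work; the array shapes and labels are placeholders, not the recorded Myo data.

```python
# A sketch of time-domain feature extraction and KNN classification;
# data shapes and labels are placeholders, not the recorded Myo data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def time_features(window):
    """RMS, mean absolute value and zero-crossing count for one channel."""
    rms = np.sqrt(np.mean(window ** 2))
    mav = np.mean(np.abs(window))
    zero_crossings = np.sum(np.diff(np.signbit(window)) != 0)
    return np.array([rms, mav, zero_crossings])

def feature_vector(channels):
    """Concatenate the features of all measuring channels (e.g., 8 Myo channels)."""
    return np.concatenate([time_features(ch) for ch in channels])

# Placeholder training data: 100 recordings, 8 channels x 400 samples each.
X = np.array([feature_vector(np.random.randn(8, 400)) for _ in range(100)])
y = np.random.randint(0, 11, 100)  # 11 movement classes

clf = KNeighborsClassifier(n_neighbors=3)  # k = 3, as used in this work
clf.fit(X, y)
print(clf.predict(X[:5]))
```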


Fig. 2. Diagram of the recognition process


3. Measurement Stand

Muscle movements are accompanied by changes in the electrical potential appearing on the membranes of their cells (myocytes / muscle fibers). These changes propagate into the surrounding tissues, reaching the skin surface. On the surface of the skin, the changes in the potentials of the various currently active muscles create a superposition that can be recorded as a surface EMG signal. Therefore, EMG signals recorded on the skin of the forearm, above the muscles moving the fingers of the hand, represent these movements. For obvious reasons, the highest contribution to the measured EMG signal comes from the potentials of the closest muscle. Therefore, for the efficiency of recognizing the movement class, the individual measurement points should be located over the muscles of interest. This arrangement is facilitated by the construction of the Myo armband by Thalmic Labs. The proposed measuring stand consists of the Myo measuring armband (Fig. 4) and the measuring glove (Fig. 5). The band, due to its construction, allows the signal to be measured only in eight places evenly distributed on the ring surrounding the forearm. The sensory glove, on the other hand, registers such parameters as the pressure force on individual areas of the anterior of the hand and the flexion of the fingers. Diagram 3 shows the entire measurement system. The master device is in this case the central unit connecting all peripheral devices. The measuring band connects to the computer via the Bluetooth 4.0 protocol. Unmodified, raw data are sent at 1 Mb/s. The limitation in this case is the microcontroller located directly on the band, which samples at 200 Hz [1]. The sensor glove, in turn, uses a 64-channel Advantech PCI-1747U measuring card that samples at a frequency of up to 250 kHz. In order to ensure a uniform measuring data frame, the frequency has been limited to 200 Hz.


Fig. 3. Diagram of the measurement system

Fig. 4. The Myo armband

3.1. Measurement Band


The Myo measuring band (Figure 4) is a tool that is widely used in many areas, from the simple control of computer applications to the control of complex prosthetic systems. The distribution of the electrodes mounted on the band ensures repeatability of the measurement conditions. Assuming that the band is always put on at the same height of the forearm, the position of the electrodes in subsequent tests does not differ significantly. The plates with the sensors are connected by flexible joints which expand evenly with the extension of the band, and the whole is a stable construction. Apart from the positive impact on the repeatability of measurements between subsequent tests, the construction significantly limits the measuring area to only a small part of the forearm. Covering additional muscles with EMG sensors requires combining more bands working in parallel. Unfortunately, there is no possibility of re-using rejected channels in the case of omission of individual channels in the process of feature selection.

3.2. Sensor Glove

The applied sensor glove (Fig. 6) is an alternative and, at the same time, an extended version of the commercial Motion Capture Data Glove solution. It was built as part of a didactic project at the Wroclaw University of Science and Technology. The elaborated solution consists of 24 independently operating sensors, including:

- 18 pressure sensors located on the anterior of the hand palm, successively on the fingertips, the proximal finger segments and the metacarpal. These sensors were built in a technology using a polymer film whose resistance varies under the influence of pressure and bending forces. They give information about the pressure on individual parts of the hand;

- 6 standard 2.2-inch bend sensors, one sensor for each finger and one additional thumb sensor to examine the bending of its base (Fig. 6). They were made in the ink technology and inform about the posture of the fingers.



Fig. 5. Sensor glove – posterior

Fig. 7. Voltage characteristics of the force sensor

Fig. 6. Sensor glove – anterior

The glove communicates with the system via a measuring card with an A/D converter.

3.3. Force Sensor

In accordance with the sensor's characteristic (Tab. 1 and Fig. 7), in the initial phase of the movement, when there was no contact with any object from the environment, the voltage measured on the sensor is in the range of 0 V to 0.5 V. In the second phase of the movement, contact occurs and the voltage increases sharply to around 2 V. Then it gently falls and grows again, stabilizing in the range between 1.5 V and 1.75 V. The data coming from the pressure sensors not only inform about the existence of contact with the obstacle, but also about the force exerted on the object during the grip itself. For example, it can be read from the graph that the sensor registers a force of about 0.981 N, in accordance with the formula:

F = ma (1)

where m is the mass value read from the characteristic in Fig. 7 for the instantaneous voltage of 2 V, and a is the gravitational acceleration, assumed to be 9.81 m/s²:

F = (1/10) × 9.81 = 0.981 [N] (2)

Table 1 shows the voltage values measured on selected pressure sensors for five weight values [g]: 0, 50, 100, 500 and 1000. The test was carried out at a 5 V supply voltage.

Tab. 1. Voltage characteristics of selected force sensors (on the fingertips); voltage value [V] at sensor load of 0 / 50 / 100 / 500 / 1000 g
- Thumb: 0.2 / 1.4 / 2.6 / 3.9 / 4.3
- Index: 0.5 / 1 / 1.5 / 2.6 / 3
- Middle: 0.6 / 2 / 2.8 / 4.1 / 4.5
- Ring: 0.4 / 1.4 / 2.2 / 3.5 / 4
- Pinky: 0.7 / 1 / 1.7 / 3 / 3.6

As expected, with the increase of the pressure force on the sensor, its resistance decreases. It leads to a reduction in voltage drop. An example signal recorded on a force sensor is presented in Figure 7. In accordance with the sensors

(1)

(2)
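To make the voltage-to-force conversion concrete, the short sketch below interpolates the thumb calibration from Tab. 1 and applies formula (1). The function name and the use of linear interpolation between calibration points are illustrative assumptions, not part of the original measurement software.

```python
import numpy as np

# Calibration of the thumb force sensor taken from Tab. 1:
# supply 5 V, loads in grams vs. measured voltage in volts.
LOAD_G = np.array([0.0, 50.0, 100.0, 500.0, 1000.0])
VOLT_V = np.array([0.2, 1.4, 2.6, 3.9, 4.3])
G = 9.81  # gravitational acceleration [m/s^2]

def force_from_voltage(u: float) -> float:
    """Estimate the contact force [N] from the sensor voltage [V]
    by linear interpolation of the Tab. 1 calibration and F = m*a."""
    mass_g = np.interp(u, VOLT_V, LOAD_G)   # equivalent load in grams
    return (mass_g / 1000.0) * G            # F = m * g, as in eq. (1)

if __name__ == "__main__":
    # About 2.6 V on the thumb corresponds to ~100 g, i.e. ~0.98 N,
    # matching the worked example in eq. (2).
    print(round(force_from_voltage(2.6), 3))
```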

Fig. 8. Example signal of force sensor

3.4. Bend Sensor

The properties of standard bend sensors are presented in Table 2. According to the presented data, along with the progressive bending of the sensor from 0° to 90°, its resistance increases and the voltage decreases.




The averaged values for all three positions are presented in Fig. 9. Analyzing the data contained therein, it can be stated that in the examined range the voltage characteristics of the sensor are linear.

Tab. 2. Voltage characteristics of bend sensors

Position   Voltage value [V] at sensor bend [°]
           0      45     90
Thumb      3      2.5    2
Index      2.6    2.1    1.8
Middle     3.3    2.9    2.6
Ring       2.4    1.9    1.5
Pinky      3.1    2.5    2
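The linearity claim can be checked, and the characteristic inverted, with a simple straight-line fit of the Tab. 2 data. The sketch below is an illustrative assumption of how a bend angle could be recovered from a voltage reading; it is not the glove's acquisition software.

```python
import numpy as np

# Tab. 2: bend angles [deg] and the corresponding voltages [V] per finger.
ANGLES = np.array([0.0, 45.0, 90.0])
VOLTAGE = {
    "thumb":  np.array([3.0, 2.5, 2.0]),
    "index":  np.array([2.6, 2.1, 1.8]),
    "middle": np.array([3.3, 2.9, 2.6]),
    "ring":   np.array([2.4, 1.9, 1.5]),
    "pinky":  np.array([3.1, 2.5, 2.0]),
}

def bend_angle(finger: str, u: float) -> float:
    """Estimate the bend angle [deg] from the measured voltage [V]
    using a straight-line fit of the Tab. 2 characteristic."""
    slope, offset = np.polyfit(VOLTAGE[finger], ANGLES, deg=1)
    return slope * u + offset

if __name__ == "__main__":
    # 2.5 V on the thumb sensor lies halfway along its characteristic,
    # so the estimated bend is close to 45 degrees.
    print(round(bend_angle("thumb", 2.5), 1))
```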

Fig. 11. Example classes of movements (a–d)

Fig. 9. Voltage characteristics of the bend sensor

The data contained in Fig. 10 show the electrical signal from the bend sensor. As in the previous case, the registration took 2 seconds. Comparing the data with the voltage characteristic of the sensor (Fig. 9), it can be stated that in the initial phase of movement the finger with the sensor was bent by 45°. Then it was straightened and bent again, as in the first second of movement.

Fig. 10. Example signal of bend sensor

4. Tests


The research carried out aimed at illustrating the possibilities of the measuring stand in terms of the interpretation of electromyographic data from the Myo

measuring band for the purpose of recognizing the intention of making a movement. The authors' intention was also to maximize the number of correct identifications (input data → movement class) and to completely eliminate false matches for three different feature-extraction algorithms: standard deviation, wavelet form and energy of the signal. The kNN algorithm was used as the classifier, with the number of neighbors set to 3. According to the studies, this is the optimal parameter value for the problem of classifying the time features of electromyographic signals [12].
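A minimal sketch of such a classification step is shown below, assuming the recorded measurements have already been reduced to one feature value per EMG channel. The array names, the synthetic placeholder data, the use of scikit-learn's KNeighborsClassifier and the 100/100 train/test split are illustrative assumptions rather than the authors' original implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Placeholder data: 11 movement classes x 200 measurements x 8 Myo channels,
# already reduced to one feature value (SD, WL or En) per channel.
n_classes, n_per_class, n_channels = 11, 200, 8
labels = np.repeat(np.arange(1, n_classes + 1), n_per_class)
features = labels[:, None] * 0.5 + rng.normal(size=(labels.size, n_channels))

# 100 measurements per class teach the classifier, the remaining 100 assess it.
train = np.tile(np.arange(n_per_class) < 100, n_classes)
clf = KNeighborsClassifier(n_neighbors=3)   # k = 3, as in the experiments
clf.fit(features[train], labels[train])
print(f"correct classifications: {clf.score(features[~train], labels[~train]):.0%}")
```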

4.1. The Run of the Experiments

The selected set of movement classes consists of 11 different grasps. This set has been presented in the paper on an innovative measurement system developed at the Wroclaw University of Science and Technology [14]. It consists of movements during which everyday objects are grasped: a pen (Fig. 11a), a screw, a potentiometer, a glass without a handle (Fig. 11b), a glass with a handle (Fig. 11c), a kettle, an ATM card (Fig. 11d), a mobile phone with a rotary movement, a computer mouse, a mobile phone without a rotary movement, and a suitcase. The process of data acquisition took place in two stages. First, the proper movement was performed for 2 seconds. Then, again for 2 seconds, a rest period took place, during which the user had time to put away the object and return to the starting position. The measuring application has been equipped with two progress bars that fill in alternately and inform about the remaining time. For each of the 11 classes, 200 measurements were taken. Then, for each movement, a 100-element subset was selected for teaching the classifier. The rest of the measurements were used to assess the quality of classification. As a result of subsequent tests, truth tables



and a percentage of recognition of the tested measurements are presented. The succeeding rows of the truth table contain information about the single, examined classes of movements. For each of them, the numbers in the following columns describe the percentage recognition of a given movement as another class. The numbers lying on the diagonal are the values of the correct recognition of the given movement by the classifier. All other values greater than 0 indicate an erroneous assignment of the classed measurement.
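The two quantities read from such a truth table can be expressed in two short helper functions; the function names below are assumptions made for illustration, following exactly the reading rule described above.

```python
import numpy as np

def overall_recognition(truth_table: np.ndarray) -> float:
    """Sum of the diagonal (correct recognitions) divided by the total number
    of classed measurements, as used for the scores in eqs. (4), (7) and (9)."""
    return np.trace(truth_table) / truth_table.sum()

def per_class_recognition(truth_table: np.ndarray) -> np.ndarray:
    """Diagonal entry of each row normalised by that row's total, i.e. how often
    every movement class was recognised as itself."""
    return np.diag(truth_table) / truth_table.sum(axis=1)
```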

4.2. Results

Standard Deviation (SD)

The standard deviation algorithm belongs to the group of classical, time-domain algorithms of feature extraction:

X_SD = sqrt( (1/N) Σ_{i=1}^{N} (x_i − x̄)^2 ).   (3)

The calculations were carried out for the standard deviation and the kNN classifier.

Tab. 3. Truth table for Standard Deviation

       1.  2.  3.  4.  5.  6.  7.  8.  9.  10. 11.

1. 92 0 2 0 2 1 6 0 0 0 0

2. 2 89 3 0 8 0 0 0 0 0 1

3. 1 2 92 1 5 0 0 0 1 3 0

4. 1 0 0 99 1 0 0 0 0 0 0

5. 3 7 2 0 83 0 1 0 1 0 0

6. 0 0 0 0 0 99 0 0 0 0 0

7. 1 2 1 0 1 0 91 0 0 0 0

8. 0 0 0 0 0 0 0 93 0 5 0

9. 0 0 0 0 0 0 2 0 98 0 1

10. 0 0 0 0 0 0 0 7 0 91 0

11. 0 0 0 0 0 0 0 0 0 1 98

Table 3 shows the results of the classification for the set of 11 grasps. The selected row 5 contains elements from columns 1 to 11; the values contained in them indicate successively the number of classifications as classes 1 to 11. In the example shown, class 5 was recognized 83 times correctly and 17 times incorrectly: 3 times as class 1, 7 times as class 2, twice as class 3 and once as class 7. The sum of the values on the diagonal of the truth table in relation to the total number of classed measurements gives the percentage value of the classification of the movements studied:

X_SD : 93%   (4)


Wavelet Form (WL)

The wavelet form algorithm also belongs to the group of time-domain algorithms. The value of the feature determined by this method is calculated as the sum of the absolute values of the differences of the subsequent amplitudes of the signal:

X_W = Σ_{k=2}^{N} |Δx_k|,   (5)

Δx_k = x_k − x_{k−1}.   (6)


The calculations were carried out for the wavelet form and the kNN classifier.

Tab. 4. Truth table for Wavelet Form

       1.  2.  3.  4.  5.  6.  7.  8.  9.  10. 11.

1. 93 1 5 0 0 0 0 0 0 0 0

2. 1 91 0 0 6 0 0 0 0 0 0

3. 1 0 91 0 5 0 2 0 1 0 0

4. 1 0 0 99 0 0 1 0 0 0 2

5. 4 5 3 0 89 0 5 0 1 0 0

6. 0 0 0 1 0 99 0 0 0 0 0

7. 0 2 0 0 0 1 91 0 2 0 1

8. 0 0 0 0 0 0 0 93 0 1 0

9. 0 0 0 0 0 0 1 0 96 0 0

10. 0 0 1 0 0 0 0 7 0 98 0

11. 0 1 0 0 0 0 0 0 0 1 97


X_W : 94%   (7)

From Table 4 one can read the class that was most often incorrectly recognized: class 5 has been mistakenly identified 11 times.

Energy (En)

Another classic feature-extraction algorithm is the calculation of the energy of the signal:

X_En = Σ_{k=1}^{N} |x_k|^2.   (8)
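The three features used in this study can be written down in a few lines of code; the sketch below assumes the conventional (population) standard deviation for eq. (3), and the function names and the example window are illustrative only.

```python
import numpy as np

def feature_sd(x: np.ndarray) -> float:
    """Standard deviation of the signal window, eq. (3)."""
    return float(np.std(x))

def feature_wl(x: np.ndarray) -> float:
    """Wavelet form, eqs. (5)-(6): sum of absolute differences
    between subsequent amplitudes of the signal."""
    return float(np.sum(np.abs(np.diff(x))))

def feature_en(x: np.ndarray) -> float:
    """Energy of the signal, eq. (8): sum of squared amplitudes."""
    return float(np.sum(np.abs(x) ** 2))

if __name__ == "__main__":
    # Example: a 400-sample EMG window reduced to the three scalar features
    # fed to the kNN classifier.
    window = np.sin(np.linspace(0.0, 20.0, 400))
    print(feature_sd(window), feature_wl(window), feature_en(window))
```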

The calculations were carried out for the energy of the signal and the kNN classifier.

Tab. 5. Truth table for Energy

       1.  2.  3.  4.  5.  6.  7.  8.  9.  10. 11.

1.   86  2   7   4   2   2   8   0   0   1   1
2.   2   80  7   0   14  0   0   0   0   1   2
3.   2   4   81  0   6   0   3   0   2   6   0
4.   1   0   0   89  0   0   0   0   0   5   0
5.   6   13  1   0   78  0   3   0   2   2   0
6.   0   0   0   2   0   98  0   0   0   0   0
7.   3   1   2   2   0   0   83  0   2   0   1
8.   0   0   2   0   0   0   0   89  1   10  1
9.   0   0   0   0   0   0   3   0   93  0   0
10.  0   0   0   3   0   0   0   11  0   74  2
11.  0   0   0   0   0   0   0   0   0   1   93

X_En : 84%   (9)

Again analyzing truth table 5, the class that is most often wrongly recognized can be easily found. In this case, it is class 10 with 26% of the samples incorrectly recognized.

4.3. The Comparison of Measurement Systems

In the next step of the research, the results of the experiments were compiled and the measurement data obtained with the Myo armband and a professional measuring system [14] developed at the Wroclaw University of Science and Technology were compared. In both cases the tests (both data acquisition and digital processing) followed the same measurement methodology. In the case of the measurement system from the Wroclaw University of Science and Technology,




data acquisition took place on 16 channels: 8 of them carried electromyographic signals and the remaining 8 mechanomyographic signals. The results of the experiments are presented in Table 6 and in Figure 12.


Tab. 6. Accurate classifications for the Myo and Lab 06 systems

                    The algorithm used to extract features
System              SD      WL      En
The Myo             93%     94%     84%
Lab 06 EMG          82%     86%     74%
Lab 06 MMG          67%     76%     64%
Lab 06 EMG + MMG    89%     93%     77%

Fig. 12. Comparison of the Myo and Lab 06 systems

The average value of correct classifications was in each case the highest for the system using the Myo.

5. Conclusions

The paper presents the concept of a system for the simultaneous measurement of the bioelectric signal and the pressure force on the prosthesis's fingers during motion. When the system was compared to a professional measuring system, the parameters of the commercial measuring band appeared to be sufficient in terms of the quality of the received signal. In addition, the commercial system also allows the registration of the signal from the mounted accelerometer and gyroscope, which extends the existing capabilities of traditional measurement systems [14]. The use of a measuring glove with bend and pressure sensors gives additional information about the hand posture (bend sensors) during motion as well as about the interaction with the surrounding world (touch sensors). Thanks to this information, it is possible to accurately describe the condition of the hand and forearm at any time during the movement. Additionally, thanks to the bend and pressure signals, it is possible to control the prosthesis in such a way that the interaction with the surrounding world is as non-invasive and realistic as possible, so that the manipulated objects and the prosthesis itself will not be damaged during the movement. The presented concept of the measurement system gives new possibilities for the study of algorithms for the recognition of biological signals. In contrast to the current classical approach, the input vector contains signals from four different sources, ranging from electrical to mechanical signals, which gives the ability to recognize intentions based on different signal sources, and thus on different information. The results of the experiments in Section 4 show that the quality of the recorded signals is satisfactory. The use of simple algorithms for the extraction of features, operating on the temporal properties of the signals, is sufficient to achieve high recognition values. These results can be the starting point for further research on algorithms for recognizing biological patterns. The next stage of work should be the expansion of the controller by an event-controller part reacting to contact with an obstacle from the environment.

AUTHORS

Michał Błędowski∗ – Chair of Cybernetics and Robotics, Faculty of Electronics, Wroclaw University of Science and Technology, ul. Janiszewskiego 11/17, Wrocław, Poland, e-mail: michal.bledowski@pwr.edu.pl.

Andrzej Wołczowski – Chair of Cybernetics and Robotics, Faculty of Electronics, Wroclaw University of Science and Technology, ul. Janiszewskiego 11/17, Wrocław, Poland, e-mail: andrzej.wolczowski@pwr.edu.pl.

∗Corresponding author

REFERENCES

[1] "MYO armband tech specs". https://support.getmyo.com/hc/en-us/articles/202647853-What-gestures-does-the-Myo-armband-recognize-. Accessed on: 28.08.2019.

[2] H. M. Al-Angari, G. Kanitz, S. Tarantino, and C. Cipriani, "Distance and mutual information methods for EMG feature and channel subset selection for classification of hand movements", Biomedical Signal Processing and Control, vol. 27, 2016, 24–31, 10.1016/j.bspc.2016.01.011.

[3] M. Geryes, J. Charara, A. Skaiky, A. Mcheick, and J. Girault, "A novel biomedical application for the Myo gesture control armband". In: 2017 29th International Conference on Microelectronics (ICM), 2017, 1–4, 10.1109/ICM.2017.8268823.

[4] K. H. Ghazali, M. F. Mansor, M. M. Mustafa, and A. Hussain, "Feature Extraction Technique using Discrete Wavelet Transform for Image Classification". In: 2007 5th Student Conference on Research and Development, 2007, 1–4, 10.1109/SCORED.2007.4451366.

[5] A. A. Hidayat, Z. Arief, and D. C. Happyanto, "LOVETT Scaling with Flex Sensor and MYO Armband for Monitoring Finger Muscles Therapy of Post-Stroke People", EMITTER International Journal of Engineering Technology, vol. 3, no. 2, 2016, 60–76, 10.24003/emitter.v3i2.45.

[6] M. Kurzynski and A. Wolczowski, "Multiple Classifier System Applied to the Control of Bioprosthetic Hand Based on Recognition of Multimodal Biosignals". In: J. Goh, ed., The 15th International Conference on Biomedical Engineering, 2014, 577–580.

[7] M. Kurzynski, M. Krysmann, P. Trajdos, and A. Wolczowski, "Multiclassifier system with hybrid learning applied to the control of bioprosthetic hand", Computers in Biology and Medicine, vol. 69, 2016, 286–297, 10.1016/j.compbiomed.2015.04.023.

[8] I. Mendez, B. W. Hansen, C. M. Grabow, E. J. L. Smedegaard, N. B. Skogberg, X. J. Uth, A. Bruhn, B. Geng, and E. N. Kamavuako, "Evaluation of the Myo armband for the classification of hand motions". In: 2017 International Conference on Rehabilitation Robotics (ICORR), 2017, 1211–1214, 10.1109/ICORR.2017.8009414.

[9] M. Mironovova and J. Bíla, "Fast Fourier transform for feature extraction and neural network for classification of electrocardiogram signals". In: 2015 Fourth International Conference on Future Generation Communication Technology (FGCT), 2015, 1–6, 10.1109/FGCT.2015.7300244.

[10] E. Sejdić, I. Djurović, and J. Jiang, "Time-frequency feature representation using energy concentration: An overview of recent advances", Digital Signal Processing, vol. 19, no. 1, 2009, 153–183, 10.1016/j.dsp.2007.12.004.

[11] P. Tu and J. Chung, "A new decision-tree classification algorithm for machine learning". In: Proceedings Fourth International Conference on Tools with Artificial Intelligence TAI '92, 1992, 370–377, 10.1109/TAI.1992.246431.

[12] A. Wołczowski and K. Krysztoforski, "Artificial hand control via EMG signal classification – experimental investigation of algorithms". In: K. Tchoń, ed., Progress in Robotics, Wydawnictwa Komunikacji i Łączności, Warszawa, 2008, 97–122.

[13] A. Wołczowski and S. Myśliński, "Identifying the relation between finger motion and EMG signals for bioprosthesis control". In: Proceedings of the 12th IEEE International Conference on Methods and Models in Automation and Robotics (MMAR 2006), 2006, 817–822.

[14] A. Wołczowski, M. Błędowski, and J. Witkowski, "System do rejestracji sygnałów EMG i MMG dla sterowania bioprotezą dłoni" [A system for recording EMG and MMG signals for the control of a hand bioprosthesis], Prace Naukowe Politechniki Warszawskiej. Elektronika, vol. z. 195, t. 1, 2016.

[15] A. Wolczowski and M. Kurzynski, "Control of hand prosthesis using fusion of information from bio-signals and from prosthesis sensors". In: 2014 Asia-Pacific Conference on Computer Aided System Engineering (APCASE), 2014, 19–24, 10.1109/APCASE.2014.6924465.

[16] G. P. Zhang, "Neural networks for classification: a survey", IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 30, no. 4, 2000, 451–462, 10.1109/5326.897072.

[17] S. Zhang, X. Li, M. Zong, X. Zhu, and R. Wang, "Efficient kNN Classification With Different Numbers of Nearest Neighbors", IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 5, 2018, 1774–1785, 10.1109/TNNLS.2017.2673241.




Manipulator Control System for Remote USG Examination

Submitted: 12th January 2019; accepted: 13th April 2019

Adam Kurnicki, Bartłomiej Stańczyk

DOI: 10.14313/JAMRIS/2-2019/18

Abstract: The article presents a control algorithm of a robotic manipulator used as the main component of a system for a remote noninvasive medical ultrasound examination. This algorithm has been developed within the ReMeDi (Remote Medical Diagnostician) project. At the beginning of the article, the manipulator kinematics, its mechanical construction and the control system structure as a part of the telemanipulation system are shown. The essential components of the control system are discussed in detail. Then problems and solutions connected with the generation and conversion of the position and orientation reference signals are presented. Finally, the results of the evaluations with users are discussed.

Keywords: remote medical examination, telemedicine, medical robot, manipulator, control system

1. Introduction


The demographic development of modern societies, characterised by an increase in the number of elderly people, drives up the demand for specialised medical care and for advanced diagnostic facilities. Unfortunately, primary healthcare centres and provincial hospitals in most countries, especially developing ones, usually lack such facilities and do not always provide specialised services because of a lack of physicians. A shortage of qualified specialists often leads to misdiagnosis, which can cause further deterioration in a patient's health, or even lead to death. Such circumstances provide an incentive for the development of many types of telemedicine-related services, ranging from telerehabilitation [22] and teledentistry [10] to telesurgery [28]. Medical robots are successfully employed in numerous telemedicine systems. The most technologically advanced and best known are surgical robots [19, 30]. Successful medical treatment normally depends on a timely and correct diagnosis, which is crucial in typical emergency situations. A medical specialist needs some time to get to a patient from home or another hospital. If a doctor could perform the diagnosis remotely and make a decision about a surgical intervention, the hospital staff could use the time during which the doctor is travelling to prepare the patient. In order to help in such situations, a number of telerobotic systems were developed. Currently researchers work on newer, more advanced and universal solutions. Some of them have reached the level of commercial products, e.g. VGO [4] used for medical teleconsultation,

MEDIROB [2] for echocardiography examination and MELODY [1] for abdominal ultrasonography. To the best knowledge of the authors, there are currently no devices allowing a complete remote medical examination (i.e. interview, auscultation, palpation and USG examinations) and diagnosis based on contemporary medical standards, apart from the ReMeDi [3] system. The Remote Medical Diagnostician (ReMeDi) is a multifunctional robotic system which allows a real remote physical and ultrasonographic (USG) examination to be performed [6, 21, 27]. The first version of this system was designed and made during the ReMeDi project [3] funded by the European Union's Research and Innovation 7th Framework Programme (EU FP7). Nowadays the system is being developed within the project "Research and innovation", activity 1.2 RPO WL 2014-2020, funded by the Lublin Enterprise Support Agency. The ReMeDi system is a typical teleoperation system, whose basic principle of operation is illustrated in Fig. 1. It can be seen as two subsystems, physically spaced apart. The first one, called the ReMeDi Robot, is located on the patient's site (in a provincial hospital or medical centre) and the second, called DiagUI (Diagnostician User Interface), is on the doctor's site (in another hospital or the doctor's home). The doctor plays the role of the operator and the system enables him to carry out a remote medical examination of the patient in a way that is essentially similar to the traditional examination. During the examination the doctor moves and rotates a dummy USG probe held in the palm. The dummy probe constitutes the top part of a haptic interface (see the left part of Fig. 1). It allows position and orientation demands to be generated and sent for the real USG probe, which examines the patient. In order to obtain the right ultrasound image, the doctor has to properly position the real probe on the patient's body. This is possible because the real probe is the end effector of the manipulator fixed to the mobile ReMeDi robot (see the right part of Fig. 1) and follows the required movements. At the same time, the forces acting on the ultrasound probe should be conveyed to the dummy probe via the haptic interface so that the doctor can feel the stiffness and the geometry of the patient's body. The use of a fairly advanced vision system allows the doctor to observe the patient's body and the real probe online. The medical interview with the patient is performed by the doctor thanks to the integration of the teleconferencing system on both sites of the ReMeDi system. All measurement, control and audio-video signals are transmitted via the Internet.



Fig. 1. ReMeDi system overview

Ultrasound examination is one of the most frequently used diagnostic methods, and it provides the most specialist diagnostic information during medical emergencies. From the teleoperation point of view it is the most complex and complicated part of a remote medical examination. The key to the implementation and realisation of this examination is the use of a robot with a manipulator. The robotic solutions for tele-echography proposed in the literature can be divided into two groups. The first group [1, 5, 9, 13] comprises relatively small and portable manipulators, usually intended for a dedicated type of USG examination. They allow the doctor to perform only small translations and rotations of the USG probe in the region where the assistant has placed and holds them. In the second group [11, 12, 16, 18, 24], independent robots are equipped with manipulators (usually anthropomorphic) which allow the physician full control of the probe position and orientation. The ReMeDi arm mechanics and control system were designed to give the physician the operational ability addressed by the second group. Due to the fact that it performs manipulations in direct contact with the patient's body, it should be safe, which is facilitated by reliability, stability, low stiffness and low mass of the manipulator. Unfortunately, the requirements of the user (physician) are slightly different. He would like the required position and orientation of the probe to be accurately reproduced (without inertia and drift). In addition, he would like to be able to feel the touch and rigidity of the patient's body as much as possible (high stiffness of the manipulator is required). Therefore, while designing the ReMeDi robot and its control system, compromise solutions were used which took into account the above-mentioned issues. There is a quite wide variety of control system architectures dedicated to manipulators used in tele-echography research. Most of them are based on the classical manipulator control architecture with an inner position [17] or velocity [16, 31] control loop (usually with a PID controller) at the joint level. Joint references are computed in an outer loop, which performs Cartesian

control through velocity-based (less often position-based) inverse kinematics. Mathiassen et al. [16] propose to control the manipulator by implementing a compliance force control algorithm before the velocity-based inverse kinematics. The robotic tele-echography system developed by Koizumi et al. [12] uses impedance control at the Cartesian position control level, while the orientation is controlled directly with the use of a continuous path controller [11]. A more advanced and complicated control architecture dedicated to a robotic-assisted tele-echography manipulator was developed by Luís Santos and Rui Cortesão. In [23] they applied indirect force control (admittance control) in the Cartesian motion control loop to establish the contact dynamics between the echographic probe and the patient. The orientation is controlled without the admittance control loop. Additionally, the motion controller has a velocity model-reference adaptive control (AOB) at the joint space level, driven by task space posture errors. A two-task hierarchy architecture with posture optimization is proposed in [24], where Cartesian force control is the primary task, orientation control is the secondary task, and posture optimization is performed in the null space of all prioritized tasks. This paper focuses on the challenges related to the design of the control system of the ReMeDi manipulator, which allows a doctor to carry out a remote medical examination, including auscultation and different modes of an ultrasonographic examination. In section 2 a general overview of the ReMeDi telemanipulation system is presented. This section outlines the construction and kinematics of the manipulator as the most important parts of the telemanipulation system. In section 3 the manipulator control system structure and its components are discussed in detail. Algorithms that generate position and orientation reference signals based on the demands received from the Master side are shown in section 4. Finally, some results of the evaluation with users are discussed in section 5.




2. ReMeDi Teleoperation System


The ReMeDi system employs a bilateral teleoperation architecture, whose general scheme is shown in Fig. 2. It has a classical structure [14, 20] with such components as: Doctor, Master, Communication channel, Slave and Environment – mainly the patient's body.

Fig. 2. Structure of the ReMeDi bilateral teleoperation system


The main system components – the Master (the haptic interface) and the Slave (the ReMeDi robot manipulator) – are presented in Fig. 3. Since these components are separated in their location, they have to exchange information (i.e. the actual master input device position x_m ∈ R^3, velocity ẋ_m ∈ R^3 and orientation angles φ_m ∈ R^3, and the forces f_s ∈ R^3 measured on the slave side) in real time, over the Communication channel (Internet), at quite considerable distances. The system features such undesired phenomena as variable communication latencies, which are discussed in [7, 14]. In order to cope with this issue, the communication channel was equipped with wave variable algorithms. This solution is described in more detail in [14]. The transmitted signals f_s, x_m, ẋ_m and φ_m can be disturbed in the communication channel and therefore they are represented on the opposite side by the following symbols: f*_s, x*_m, ẋ*_m and φ*_m ∈ R^3.

2.1. Master – Haptic Interface

The haptic interface (see Fig. 3a) is the core of the Master component. It consists of a pure 3D translational manipulator built as a motorised parallel robot with delta-type kinematics [8]. It is used as a translation input device. A passive 3-DoF mechanism with a dummy USG probe is mounted at the top of the delta robot. Its range of motion corresponds to the hand movement pivoting at the wrist and it plays the role of an orientation input device. The haptic interface is controlled by a force controller, designed and deployed by PERCRO Lab [29]. The force controller computes the values of the torques for each actuated joint of the delta robot as the sum of the torques corresponding to the gravity and friction compensation of the haptic interface and the torques corresponding to the force f_s, which acts on the real probe as a reaction to the touch of the patient's body. It is a realisation of force feedback and allows true-to-life touch sensations to be produced on the doctor's hand. As an effect of the exertion of the force f_h ∈ R^3 by the doctor on the translation input device, the dummy probe moves over a distance represented by the change of the position signal x_m. The actual values of the signal x_m are calculated according to the rules described in [8], based on the values of the delta robot joint angles measured by encoders. They are used as reference values for the position control of the real probe. The values of the joint angles φ_m read from the encoders mounted on each joint of the orientation input device are used as reference values for the control of the orientation of the ultrasound probe. All of the measured vectors of Cartesian positions x_m, velocities ẋ_m and Euler angles φ_m of the tip of the dummy probe are expressed in the master base coordinate frame O_mo − x_mo y_mo z_mo, shown in Fig. 3a.

Fig. 3. ReMeDi teleoperation system components: a – haptic interface, b – ReMeDi robot with manipulator

2.2. Slave – ReMeDi Manipulator

The Slave component is based on a 7-DoF human-sized manipulator mounted on the left side of the ReMeDi robot (see Fig. 3b). Several requirements were stated for the development of the ReMeDi arm in [15, 21, 27]. From the kinematics point of view, the most important is that the arm should allow all the typical echocardiography and abdominal USG examinations to be performed. To achieve this goal, the manipulator placed on the patient's site has to reveal physical characteristics similar to those of the human arm, so that the doctor can drive it as he would move his own arm during a medical examination. According to [26], analysis of human arms reveals that the minimum number of DoFs needed for their reproduction is 7. The resulting design of the ReMeDi manipulator, shown in Fig. 4a, was performed by the ACCREA Engineering company. The manipulator consists of: two rotational joints, the first at the torso (turntable) and the second at the elbow, two spherical joints with 2 DoFs at the shoulder and the wrist, and one additional rotational joint at the probe (end-effector).



The Denavit-Hartenberg parameters for the manipulator are listed in Table 1; the corresponding set of frames is shown in Fig. 4b.

Fig. 4. ReMeDi 7 DoF manipulator: a – physical construction, b – kinematics structure and link coordinate systems

Tab. 1. Manipulator Denavit-Hartenberg parameters

Joint no   Link offset a_i [m]   Link twist α_i [rad]   Link length d_i [m]   Joint angle q_i [rad]
1          0.0                   π/2                    0.35                  q_1
2          0.0                   −π/2                   -0.26                 q_2
3          0.4                   π/2                    0.0                   q_3
4          0.0                   π/2                    0.0                   q_4 + π/2
5          0.0                   −π/2                   0.32                  q_5
6          -0.115                π/2                    0.0                   q_6
7          0.0                   0.0                    0.03                  q_7
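A quick way to sanity-check such a table is to compose the corresponding homogeneous transforms. The sketch below does this for the parameters of Tab. 1 under the classic Denavit-Hartenberg convention; the convention choice and the function names are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

# Tab. 1: (a_i [m], alpha_i [rad], d_i [m], theta_offset_i [rad]) per joint.
PI = np.pi
DH = [
    (0.0,    PI / 2, 0.35,  0.0),
    (0.0,   -PI / 2, -0.26, 0.0),
    (0.4,    PI / 2, 0.0,   0.0),
    (0.0,    PI / 2, 0.0,   PI / 2),   # q4 enters the table as q4 + pi/2
    (0.0,   -PI / 2, 0.32,  0.0),
    (-0.115, PI / 2, 0.0,   0.0),
    (0.0,    0.0,    0.03,  0.0),
]

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform of one link in the classic DH convention."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(q):
    """Pose of the probe frame for a joint vector q (7 values, radians)."""
    T = np.eye(4)
    for (a, alpha, d, offset), qi in zip(DH, q):
        T = T @ dh_transform(a, alpha, d, qi + offset)
    return T

if __name__ == "__main__":
    print(forward_kinematics(np.zeros(7))[:3, 3])  # end-effector position at q = 0
```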

The ReMeDi arm is redundant and reaches 1 m in length. Its kinematic structure is optimal in the sense of the size/workspace relation, so that the probe is capable of reproducing every doctor's movement performed during all the required USG examinations. Relative to the patient's body, the manipulator is rigid. It is built with the use of aluminium/steel construction elements and typical electro-mechanical components such as DC motors with harmonic gears and incremental encoders mounted on the shaft. The maximal contact force which can be generated at the probe tip is 40 N. The hardware layer of the ReMeDi arm control system is based on a PC computer with a real-time operating system and a specialised digital unit, called JointsController, manufactured by ACCREA Engineering. Thus the deployment of the control system (arm control algorithms) is distributed between the two already mentioned hardware components. In general, this control system realises a position control strategy. The ReMeDi arm control strategy and its particular control system components are presented in detail in section 3.


In order to provide force feedback to the haptic interface, the arm is equipped with a dedicated custom-built six-axis force/torque sensor (fts) mounted in series between the last joint of the arm and the probe mounting mechanism. The end-effector pose ξ_s (i.e. the Cartesian position x_s ∈ R^3 and the orientation matrix R_s ∈ R^{3x3}) and the forces f_s and torques µ_s measured by the fts and transformed to the tip of the real probe (see Fig. 4a) are expressed in the slave base coordinate frame O_so − x_so y_so z_so, shown in Fig. 3b.

3. Manipulator Control System

The architecture of the ReMeDi arm control system is an evolution of the previous work [26]. The general block diagram of this architecture is presented in Fig. 5. From the point of view of the position and orientation control strategy, it has a cascaded structure composed of two parts. The internal loop, formed by the two blocks ReMeDi Arm and Position Controller, is used to control the position and orientation of the arm in the joint space, i.e. it tracks the reference trajectory q_sd ∈ R^7 with the vector of the actual joint position values q_s ∈ R^7. The external loop serves the realisation of the control algorithms in the task space. This part consists of the Demanded Velocity Vector Calculation and Admittance Control block, an integrator and the IK – Inverse Kinematics block. The FK – Forward Kinematics block is used to calculate the actual ξ_s and demanded ξ_sd end-effector poses based on the measured q_s and desired q_sd position values of the arm joints. There is one additional, very important block which manages the ReMeDi arm controller – the Arm Control System Manager. The main part of the Position Controller was physically implemented in the JointsController (see Fig. 6). The algorithms which realise the functionalities of all the other blocks are executed under the real-time system of a PC-based controller. The communication between the PC and the JointsController is realised through an Ethernet connection with the UDP protocol.

Fig. 5. ReMeDi arm control system architecture

3.1. Position Controller

The Position Controller implements seven independent joint position control systems. Each of these systems has the form of a position-velocity-torque cascade control structure, presented in Fig. 7. Every control loop shown there is a PID-based control algorithm that causes the tracking of the demanded position value q_isd by the actual position value q_is of the i-th joint. The torque control loop is performed by the DC motor servo controllers (current controllers – parts of the DC drives). The velocity and position control loops are implemented in the main motion controller (MMC) – a part of the JointsController (see the hardware architecture shown in Fig. 6). The JointsController is a hierarchical, multichip controller which supports all seven DC drives, i.e. it counts impulses from the incremental encoders with quadrature encoder interfaces (built on the basis of seven STM32F1 microcontrollers) and sends demands to the current controllers from the MMC via an RS485 interface. The MMC is an STM32F407VG microcontroller loaded with a program which performs four basic tasks:

- communication with the higher-level controller (PC-based controller) through the Ethernet,

- communication with the DC drive power units through UART,

- acquisition of data from the encoder interface modules through SPI,

- execution (at 1 kHz) of the above-mentioned position-velocity control loops together with the gravity and static friction compensation algorithms.

The gravity compensation algorithm results in a correction of the torque signal τ_isc by the torque τ_isg (see Fig. 7) obtained from the gravity model of the manipulator. Since harmonic gears are parts of the manipulator drives, the control quality is significantly impacted by their static friction. In order to eliminate this issue, a static friction compensation algorithm was used. This algorithm is based on the principle of a relay with hysteresis. The sign of the compensation torque τ_isf depends on the sign of the demanded velocity signal for the velocity control loop. The change of the sign of the τ_isf signal is not affected by the change of the demanded velocity value if the demanded velocity value is less than an arbitrarily chosen velocity threshold. The value of τ_isf equals the experimentally identified value of the drive static friction.

Fig. 6. Simplified hardware architecture of the ReMeDi arm control system

Fig. 7. Position-Velocity-Torque cascade control structure – part of the position controller
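The friction compensation rule described above can be summarised in a few lines. The sketch below is a simplified illustration with assumed names, units and an assumed velocity threshold; it is not the firmware running on the MMC.

```python
def friction_compensation(v_demand: float, tau_prev: float,
                          tau_static: float, v_threshold: float) -> float:
    """Relay with hysteresis: the compensation torque takes the sign of the
    demanded velocity and only switches once |v_demand| exceeds the threshold."""
    if abs(v_demand) < v_threshold:
        return tau_prev                      # inside the hysteresis band: keep the sign
    return tau_static if v_demand > 0 else -tau_static

# Example: the compensation torque does not chatter around zero velocity.
tau = 0.0
for v in [0.2, 0.05, -0.05, -0.2]:           # demanded velocities, illustrative values
    tau = friction_compensation(v, tau, tau_static=0.8, v_threshold=0.1)
    print(v, tau)
```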


3.2. Inverse Kinematics

The values of the desired joint positions q_sd for the position control loop (see Fig. 5) result from the integration of the reference joint velocities q̇_sd ∈ R^7. The velocity vector q̇_sd is calculated by the Inverse Kinematics block from the desired velocity vector of the arm end-effector Ẋ_sd ∈ R^6. The vector Ẋ_sd = [ẋ_sd, ω_sd] consists of two vectors: the desired linear ẋ_sd ∈ R^3 and the desired angular ω_sd ∈ R^3 velocities, expressed with respect to the base frame O_so − x_so y_so z_so presented in Fig. 3b. Since the manipulator forms a redundant kinematic chain, a pseudoinverse control algorithm [26] is used to solve the inverse kinematics problem:


q̇_sd = J_s^# Ẋ_sd + [I − J_s^# J_s] q̇_sd0,   (1)

where J_s^# is the Moore–Penrose pseudoinverse of the arm Jacobian J_s ∈ R^{6x7}, and q̇_sd0 ∈ R^7 is an arbitrary value of the joint velocity vector.
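A minimal numerical sketch of eq. (1) is given below; the Jacobian values are random placeholders and the function name is an assumption, but the null-space projection follows the formula above.

```python
import numpy as np

def redundant_ik_velocity(J: np.ndarray, Xdot_d: np.ndarray,
                          qdot_0: np.ndarray) -> np.ndarray:
    """Joint velocity reference of eq. (1): pseudoinverse solution plus an
    arbitrary joint velocity projected onto the null space of the Jacobian."""
    J_pinv = np.linalg.pinv(J)                       # Moore-Penrose pseudoinverse
    null_projector = np.eye(J.shape[1]) - J_pinv @ J
    return J_pinv @ Xdot_d + null_projector @ qdot_0

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    J = rng.normal(size=(6, 7))                      # placeholder 6x7 Jacobian of the 7-DoF arm
    qdot = redundant_ik_velocity(J, Xdot_d=rng.normal(size=6),
                                 qdot_0=rng.normal(size=7))
    print(np.round(J @ qdot, 6))                     # reproduces the demanded twist
```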

3.3. Demanded Velocity Vector Calculation and Admittance Control

In order to improve the contact stability and to avoid large contact forces, the arm should be compliant. Since the arm is designed as a mechanically stiff structure, it is necessary to apply a compliant control algorithm – admittance control. The implemented admittance control algorithm is based on the force-torque measurement h_s = [f_s, µ_s] ∈ R^6 recalculated to the end-effector tip.


Following the algorithms presented in [23] and [24], not only the translational motion is modified by the admittance controller but the rotational (orientation) motion as well. The desired position or velocity vector is corrected by its displacement or velocity, respectively, calculated from the impedance equations [26] for translational and rotational motion:

f_s = M ẍ_dc + D ẋ_dc + K x_dc,   (2)

µ_s = M_o ω̇_dc + D_o ω_dc + K_o ε_dc,   (3)

where ẍ_dc, ẋ_dc, x_dc ∈ R^3 are the translational acceleration, velocity and displacement, ω̇_dc, ω_dc, ε_dc ∈ R^3 are the angular acceleration, velocity and displacement, and M, M_o, D, D_o, K, K_o are positive scalar inertia, damping and stiffness parameters chosen experimentally (M = 4 kg, D = 63 Ns/m, K = 500 N/m, M_o = 0.04 kgm^2, D_o = 1.3 Nms/rad, K_o = 10 Nm/rad). This algorithm establishes a virtual mass-spring-damper system at the end-effector, so that the arm becomes compliant. Together with the admittance control, 2nd order low-pass filters for translational and rotational motion:

(1/(2πf_c)^2) ẍ_sdf + (1/(πf_c)) ẋ_sdf + x_sdf = x_d,   (4)

(1/(2πf_co)^2) ω̇_sdf + (1/(πf_co)) ω_sdf + ε_sdf = ε_d,   (5)

were implemented. The filters with cut-off frequencies f_c and f_co protect the arm against rapid changes of the desired position x_d ∈ R^3 and orientation angles ε_d ∈ R^3 commanded by the master device. Finally, the desired position x_sd ∈ R^3 and orientation angles ε_sd ∈ R^3 are given as:

x_sd = x_sdf + x_dc,   (6)

ε_sd = ε_sdf + ε_dc.   (7)


Taking into account equations (2)–(7) and assuming that 1/(2πf_c)^2 = M/K, 1/(πf_c) = D/K, 1/(2πf_co)^2 = M_o/K_o and 1/(πf_co) = D_o/K_o, the desired velocity vector Ẋ_sd used by the inverse kinematics algorithm (1) can be calculated from the following equations:

M ẍ_sd + D ẋ_sd = f_s + K (x_d − x_sd),   (8)

M_o ω̇_sd + D_o ω_sd = µ_s + K_o (ε_d − ε_sd).   (9)

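To make the translational part of eq. (8) concrete, the sketch below integrates it with a simple explicit Euler step and the parameters quoted above; the discretisation, the step size and the function name are assumptions made for illustration only.

```python
import numpy as np

M, D, K = 4.0, 63.0, 500.0      # inertia [kg], damping [Ns/m], stiffness [N/m]
DT = 0.001                      # integration step [s], a 1 kHz loop assumed

def admittance_step(x_sd, xdot_sd, f_s, x_d):
    """One explicit Euler step of eq. (8):
    M*xdd_sd + D*xd_sd = f_s + K*(x_d - x_sd)."""
    xddot_sd = (f_s + K * (x_d - x_sd) - D * xdot_sd) / M
    xdot_sd = xdot_sd + DT * xddot_sd
    x_sd = x_sd + DT * xdot_sd
    return x_sd, xdot_sd

if __name__ == "__main__":
    x_sd, xdot_sd = np.zeros(3), np.zeros(3)
    x_d = np.array([0.0, 0.0, 0.05])            # 5 cm step of the commanded position
    f_s = np.array([0.0, 0.0, -10.0])           # constant reaction force on the probe [N]
    for _ in range(2000):                       # 2 s of simulated motion
        x_sd, xdot_sd = admittance_step(x_sd, xdot_sd, f_s, x_d)
    print(np.round(x_sd, 4))                    # settles near x_d + f_s / K
```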

The desired pose ξ_sd = [x_sd, ε_sd] (see Fig. 5) is calculated by forward kinematics from the desired joint positions q_sd. The desired orientation angles ε_d and ε_sd

are represented in the implementation of the control algorithm by quaternions obtained by means of the transformations described in [26]. The reference pose for the slave ξ_d (i.e. the desired position x_d ∈ R^3 and the quaternion which represents the desired orientation matrix R_d ∈ R^{3x3}, or ε_d ∈ R^3 in equation (9)) is calculated based on the signals x*_m and φ*_m received from the haptic interface, according to the algorithms presented in section 4.

3.4. Arm Control System Manager

The Arm Control System Manager is composed of three functional elements: the arm decision maker (ADM), the trajectory generators and the arm monitoring system. The arm monitoring system analyses the signals coming from the Position Controller (including the hw signal carrying information about the state of the hardware), which leads to system failure detection and isolation (a safety feature). On the basis of this information and the control commands received from the ReMeDi Robot Central Control System (RCCS_CMD – see more detailed information about the RCCS in publication [25]), the ADM generates control commands: c_ts for the task space control system part (i.e. for the Demanded Velocity Vector Calculation block) and c_js for the joint space controller (i.e. the Position Controller). The ADM also supervises the trajectory generators. It switches the sources (algorithms) of the programmed arm movement trajectories (i.e. the demanded velocity vector Ẋ*_sd ∈ R^6 values) according to the RCCS commands and, consequently, to the ADM commands. The Arm Decision Maker was designed in the form of a finite state machine, presented in Fig. 8. It operates on four main states: Startup, Active, Shutdown and Failure. Just after the power is activated, the system goes directly to the Startup state, where the low-level control components are initialised and run. When all the lower-level components have been run properly (i.e. the signal Startup_Done has been set to "1") and the RCCS has initiated the startup procedure for the whole ReMeDi system (by setting RCCS_CMD to the STARTUP value [25]), the system leaves the Startup state and goes to the Active state. In case an emergency button is pressed, or any component indicates a failure state, or the RCCS sends the FAILURE command (i.e. RCCS_CMD == FAILURE), the system goes into the Failure state.


Fig. 8. Arm Decision Maker – finite state machine

Then the ADM sends information via the c_js signal to the Position Controller that it should run the operations required in the failure state, i.e. disable the drives, brake the joints, etc. In order to switch off the ReMeDi arm in a controlled manner, the RCCS sends the SHUTDOWN command to the ADM, which starts the shutdown procedure. The high-level controller (RCCS) is informed about the current status of the arm control system by the Arm Controller Manager output signal ACS_STAT (see Fig. 5), which is in fact the output signal from the ADM and indicates its actual state. The arm decision maker includes more elements than the four states presented in Fig. 8. Each of these states is represented by several substates. The most important and complex one, from the medical examination point of view, is the Active state (see Fig. 9). The Active state begins in one of four states:

- Homing – the procedure of initialisation/reset of the arm joint encoders, executed when the homing procedure has not yet been done (Homing_DONE is "0"),

- Wait for Arm Unsecure – the state preceding Homing, activated in case the arm has not been mechanically unlocked, i.e. the sensor signal ArmSecured is "1",

- Rest Mode – the joint drives are disabled (relaxed when there is no need to keep the arm powered), activated when homing has already been done and the arm is mechanically locked,


- GoTo Init Pose – the arm goes to the initial position, in which the force-torque sensor is initialised and the arm is ready for the preparation of the examination.

After the GoTo Init Pose procedure is completed, the algorithm goes to the Hold Pose state. This state indicates that all the moving parts of the arm should be stopped. It is the starting point for the other commonly used actions. The Hand Lead and Tool Change states are used during the arm preparation for an examination. The former helps the assistant to manually position the arm close


to the examined body part of the patient. In this case the demanded arm pose is modified by an algorithm which maps the measurements received from the force/torque sensor to the demanded velocity vector Ẋ*_sd. The Tool Change state allows the assistant to change the probe. When the Remote Examination state is active, the doctor remotely operates the arm. After the examination is finished, and if the RCCS requires the parking procedure (RCCS_CMD == PARKING), the ADM goes to the Parking state. The trajectory generator leads the arm to the parking position. Finally, just after the arm has been secured (ArmSecured is set to "1"), the ADM activates the Rest Mode.
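The sketch below reproduces the top-level behaviour described in this section as a small state machine; the enum values and the simplified transition conditions are assumptions for illustration, not the actual RCCS/ADM interface.

```python
from enum import Enum, auto

class ArmState(Enum):
    STARTUP = auto()
    ACTIVE = auto()
    SHUTDOWN = auto()
    FAILURE = auto()

def adm_step(state: ArmState, startup_done: bool, rccs_cmd: str,
             emergency: bool, hw_fault: bool) -> ArmState:
    """One evaluation of the main Arm Decision Maker transitions of Fig. 8 (simplified)."""
    if emergency or hw_fault or rccs_cmd == "FAILURE":
        return ArmState.FAILURE
    if rccs_cmd == "SHUTDOWN":
        return ArmState.SHUTDOWN
    if state is ArmState.STARTUP and startup_done and rccs_cmd == "STARTUP":
        return ArmState.ACTIVE
    return state

if __name__ == "__main__":
    s = ArmState.STARTUP
    s = adm_step(s, startup_done=True, rccs_cmd="STARTUP", emergency=False, hw_fault=False)
    print(s)   # ArmState.ACTIVE
    s = adm_step(s, startup_done=True, rccs_cmd="FAILURE", emergency=False, hw_fault=False)
    print(s)   # ArmState.FAILURE
```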

4. Determination of the Position and Orientation Setpoints

The manipulator control algorithm described in section 3 controls the position and orientation of the ultrasound probe in relation to the arm base coordinate system, which means that all of the variables associated with the control in the Cartesian space are expressed relative to the base coordinate system of the slave manipulator O_so − x_so y_so z_so (see Fig. 3b). The signals coming from the master are expressed in the base coordinate system of the haptic interface O_mo − x_mo y_mo z_mo (see Fig. 3a). Therefore, all variables must be unified to a single coordinate system; in this case the slave coordinate system is chosen. Thus all calculations of the position and orientation setpoints are determined in the slave coordinate system. However, it is not necessary to perform a mathematical transformation of the master base coordinate system to the slave base coordinate system, since the default master X axis is perpendicular to the screen displaying the actual probe location to the doctor, and the X axis of the base slave system is parallel to the camera that generates this very image, so it is possible to assume that these coordinate systems are aligned and the transformation is replaced by the visual feedback. In the implementation of the algorithm determining the position and orientation setpoints, one should take into account a number of conditions related to the architecture and logical sequence of the ReMeDi system events and the requirements of the user (doctor), as described in [25]. Some of these conditions are the initial conditions for determining the position and orientation setpoints, which are determined at the moment of acquisition of the remote control of the manipulator by the doctor, i.e. the moment of pressing the pedal by the doctor (see Fig. 1) – the coupling of the master and slave. The position and orientation setpoints are generally calculated as the sum of the increments of the position and orientation, respectively (generated by the doctor on the haptic interface after the master and slave coupling), and the values of the position and orientation at the time of the coupling of the master and slave. This mechanism will be described in more detail in the subsequent sections.



With the orientation control, in order to ensure the comfort of the doctor (an intuitive orientation change), it is also necessary to take into account the need for proper synchronisation of the coordinate system of the manipulator tip O_s − x_s y_s z_s (see Fig. 3b) and of the coordinate system of the orientation input device O_m − x_m y_m z_m (see Fig. 3a), i.e. aligning the dummy probe and the real probe before pressing the clutch pedal. This situation is shown by the diagram in Fig. 10, where the frames of the Master's probe O_ml and the Slave's probe O_sl are synchronised at the moment of system coupling. We developed three alternatives, presented in section 4.2, which were evaluated by the users during the system's evaluation.

Fig. 9. Arm Decision Maker – Active state


Fig. 10. Diagram showing the arrangement of the dummy and real probes at the moment of coupling and after a translation and rotation of the input device (axis colours: X – red, Y – green and Z – blue)

4.1. The Algorithm for Determining the Position Setpoints

The vector of the position setpoints x_d (see the left side of Fig. 10), required as the reference signal in the algorithm presented in section 3.3, in equation (8), is calculated using the following rule:

x_d = x_s = x_sl + x_m^so − x_ml^so,   (10)


the right side of which is the sum of the manipulator end-effector position x_sl ∈ R^3, recorded at the time of the coupling of the master and slave, and the difference between the haptic interface positions: x_m^so ∈ R^3 – current, and x_ml^so ∈ R^3 – registered at the moment of the coupling, whose values are the values of the signal x*_m

(see Fig. 2 or Fig. 5) and are expressed relative to O_so – the base coordinate system of the manipulator.
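A minimal sketch of eq. (10) is given below; the variable names and the example numbers are illustrative assumptions only.

```python
import numpy as np

def position_setpoint(x_sl, x_m_so, x_ml_so):
    """Eq. (10): slave position recorded at coupling plus the displacement of the
    haptic interface accumulated since the coupling (all in the slave base frame)."""
    return x_sl + (x_m_so - x_ml_so)

if __name__ == "__main__":
    x_sl = np.array([0.60, 0.10, 0.40])      # probe position at the coupling moment [m]
    x_ml_so = np.array([0.00, 0.00, 0.00])   # dummy probe position at the coupling [m]
    x_m_so = np.array([0.02, -0.01, 0.00])   # current dummy probe position [m]
    print(position_setpoint(x_sl, x_m_so, x_ml_so))   # -> [0.62 0.09 0.4]
```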

4.2. The Algorithm for Determining the Orientation Setpoints

The mechanical structure of the orientation input device shown in Fig. 3a is such that the matrix of rotations R_m^mo ∈ R^{3x3} is calculated based on the angle values measured in the consecutive Euler angle system:

R_m^mo = R_{z,φmz} R_{y,φmy} R_{x,φmx} = R_m^so.   (11)


The doctor, manipulating the orientation input device during the teleoperation, changes the orientation of the dummy probe according to the following equation:

R_m^so = ΔR^so R_ml^so,   (12)


where R_m^so, R_ml^so ∈ R^{3x3} are the matrices (11) – current and registered at the time of coupling the master and slave – expressed in the O_so coordinate system, and ΔR^so ∈ R^{3x3} is the change (increment) of the orientation of the dummy probe expressed relative to the base O_so coordinate system. This operation is presented in the left-side diagram in Fig. 10. Rearranging equation (12) we obtain:

ΔR^so = R_m^so (R_ml^so)^{-1}.   (13)

Finally, the rotation matrix representing the orientation setpoint for the real ultrasound probe may be calculated according to the following formula:

R_d = R_s^so = ΔR^so R_sl,   (14)


where R_sl ∈ R^{3x3} is the rotation matrix of the slave manipulator at the moment of pressing the coupling pedal. Depending on the number of axes of the coordinate systems defining the orientation of the dummy and the real ultrasound probe which should be synchronised before the telemanipulation starts, we developed three alternative methods of determining the desired orientation for the manipulator effector, i.e. the real ultrasound probe:

- an incremental method that does not require synchronisation of any axis,





- a direct method that requires synchronisation of all axes,


- a mixed method requiring synchronisation only of the X axis (i.e. requiring a certain setting of the rotation angles of the Y and Z axes of the orientation input device).

Fig. 11. Diagram showing the arrangement of the dummy and real probes at the moment of coupling and after rotation (with additional translation) of the input device by 180 degrees around the X (probe symmetry) axis: a – when no axis is synchronised at the time of system coupling (when synchronisation is not required), b – in the case of a consistent arrangement of the dummy and real probes, c – when only the synchronisation of the X axis is required

In the case of the first proposed method (the incremental one), the desired orientation of the effector of the manipulator is directly calculated from equations (13) and (14). Regardless of how the two probes are aligned at the time of the pedal coupling, the real probe will change its orientation by ΔR^so (increment), relative to its initial orientation. Although, from the point of view of the proper operation of the control system, it does not require any specific setting of both probes relative to each other (synchronisation), in the case where the coupling occurs with both probes heavily misaligned, the pose handling becomes non-intuitive for the doctor. This problem is presented in Fig. 11a, where the probes at the time of coupling are rotated relative to each other by 180 degrees and the rotation of the input device (dummy probe) around its axis of symmetry, i.e. the X axis, causes a rotation of the real probe not around its axis of symmetry but around the perpendicular axis (i.e. the Z axis). To eliminate this, the coupling must be forced after both systems are synchronised in all axes, which is quite a troublesome and, above all, time-consuming operation for the doctor.

To implement the second method (the direct one), in equation (13), instead of the rotation matrix representing the orientation of the input device at the time of coupling, one should use the orientation matrix for which the coordinate system of the input device assumes the initial orientation (i.e., the one for which all rotation angles are zero). Considering the fact that this matrix is the identity matrix, equation (13) is simplified to the form:

ΔR^so = R_m^so.   (15)


A similar procedure should be applied to equation (14), in which, instead of the rotation matrix representing the orientation of the effector of the manipulator at the moment of coupling, one should use the orientation matrix R_s^so(0) ∈ R^{3x3} for which the coordinate system of the ultrasound probe assumes its initial orientation (i.e. when all the angles of the manipulator are zero). Then the desired rotation matrix for the real ultrasound probe is determined as follows:

R_d = R_s^so = R_m^so R_s^so(0).   (16)


In this method, from the point of view of intuitive handling by the doctor, a prior synchronisation would not be required. However, coupling the systems when the coordinate systems are not aligned with each other generates a rapid, often dangerous movement of the real probe. Therefore, for safety reasons, it is required to introduce a requirement of prior synchronisation of all axes (see Fig. 11b).



dummy probe) is realised by the incremental method. To achieve this, first the rotation matrix of the slave manipulator registered at the time of coupling is converted to the Euler angles representation, consistent with the representation of the rotation matrix of the input device expressed by equation (11). Due to the condition of having to synchronise the X-axis of the two probes (i.e. setting the rotation angles of the Y and Z axes of the input device so that φsz = φmz and φsy = φmy), one should additionally compute an equivalent angle of rotation φsxl about the X axis. Finally, the desired rotation matrix for the real ultrasound probe is calculated according to:

Rd = Rso_s = Rz,φsz Ry,φsy Rx,φsx Rso_s(0),   (17)

wherein the angle φsx, by which the coordinate system of the manipulator should be rotated around the X axis, is:

φsx = φsxl + φmxl − φmx,   (18)

where φmx and φmxl are the rotation angles of the input device about the X axis: the current one and the one registered at the time of coupling of the master and slave, respectively. The moment of synchronisation and the change of the orientation in the implemented mixed method is shown by the diagram in Fig. 11c. It is evident there that at any point in time only the X-axes of the two probes are consistent with each other. This synchronisation ensures full intuitiveness of the orientation change performed by the doctor on the real probe, e.g. when it forces a pivoting movement of the probe during the examination. Then the alignment of the other two axes is not important for the doctor.
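The mixed method maps directly onto a few lines of code. The sketch below is a minimal illustration of equations (17) and (18); it assumes a Z-Y-X Euler angle convention for the orientation representation of equation (11), and all function and variable names are hypothetical, not taken from the original implementation.

# Minimal sketch of the mixed orientation method, eqs. (17)-(18).
# Assumes a Z-Y-X Euler convention; names are illustrative only.
import numpy as np
from scipy.spatial.transform import Rotation as R

def mixed_method_orientation(R_m, R_m_coupling, R_s_coupling, R_s0):
    """Desired rotation matrix for the real probe.

    R_m          -- current orientation of the input device (3x3)
    R_m_coupling -- input-device orientation registered at coupling time (3x3)
    R_s_coupling -- manipulator (real probe) orientation at coupling time (3x3)
    R_s0         -- reference orientation Rso_s(0) of the probe (3x3)
    """
    phi_mz, phi_my, phi_mx = R.from_matrix(R_m).as_euler("ZYX")
    _, _, phi_mxl = R.from_matrix(R_m_coupling).as_euler("ZYX")
    _, _, phi_sxl = R.from_matrix(R_s_coupling).as_euler("ZYX")

    phi_sz, phi_sy = phi_mz, phi_my          # Z and Y follow the input device directly
    phi_sx = phi_sxl + phi_mxl - phi_mx      # X handled incrementally, eq. (18)

    # R_d = Rz(phi_sz) Ry(phi_sy) Rx(phi_sx) Rso_s(0), eq. (17)
    return R.from_euler("ZYX", [phi_sz, phi_sy, phi_sx]).as_matrix() @ R_s0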

5. Summary

The system for control of the position and orientation of the manipulator presented in this article has been implemented in a real, working prototype of the ReMeDi system used to perform tele-echography. The entire system was evaluated with the participation of specialists and patient volunteers. The evaluations confirmed the correctness of the system and provided information on, among others, the choice of the solutions discussed in section 4.2, i.e. the methods of determining the desired orientation. Although the incremental method does not require the troublesome synchronisation of the probes in order to work properly, it causes non-intuitive control of the orientation, which becomes worse the more the probes are out of sync. Therefore, the most useful is the mixed method, which requires only the alignment of the axes of symmetry of the two probes. It increases the comfort of the doctor's work and shortens the time of the examination.

AUTHORS

Adam Kurnicki∗ – Automation and Metrology Department, Faculty of Electrical Engineering and Computer Science, Lublin University of Technology, ul. Nadbystrzycka 38a, 20-618 Lublin, Poland, e-mail: a.kurnicki@pollub.pl, www.pollub.pl.

Bartłomiej Stańczyk – ACCREA Engineering, ul. Hiacyntowa 20, 20-143 Lublin, Poland, e-mail: b.stanczyk@accrea.com, www.accrea.com.

∗Corresponding author

ACKNOWLEDGEMENTS

This work was supported by the European Commission FP7 ICT-610902 project ReMeDi (Remote Medical Diagnostician) and by Lubelska Agencja Wspierania Przedsiębiorczości within the "Research and innovation" activity 1.2 RPO WL 2014-2020.

REFERENCES

[1] "AdEchoTech – the manufacturer website". http://www.adechotech.com/products/. Accessed on: 12.08.2019.

[2] "Medirob – the manufacturer website". http://www.medirob.se. Accessed on: 12.08.2019.

[3] “Remote Medical Diagnostician | ReMeDi Project | FP7 | CORDIS | European Commission”. https://cordis.europa.eu/project/ rcn/110646/factsheet/en. Accessed on: 28.08.2019. [4] “VGo – the manufacturer website”. http://www. vgocom.com/. Accessed on: 12.08.2019.

[5] P. Arbeille, J. Ayoub, V. Kieffer, P. Ruiz, B. Combes, A. Coitrieux, P. Herve, S. Garnier, B. Leportz, E. Lefebvre, and F. Perrotin, "Realtime TeleOperated Abdominal and Fetal Echography in 4 Medical Centres, from one Expert Center, using a Robotic Arm & ISDN or Satellite Link". In: 2008 IEEE International Conference on Automation, Quality and Testing, Robotics, vol. 1, 2008, 45–46, 10.1109/AQTR.2008.4588703. [6] K. Arent, M. Cholewiński, Ł. Chojnacki, W. Domski, M. Drwięga, J. Jakubiak, M. Janiak, B. Kreczmer, A. Kurnicki, B. Stańczyk, and D. Szczęśniak-Stańczyk, "Selected Topics in Design and Application of a Robot for Remote Medical Examination with the Use of Ultrasonography and Auscultation from the Perspective of the REMEDI Project", Journal of Automation, Mobile Robotics and Intelligent Systems, vol. 11, no. 2, 2017, 82–94, 10.14313/JAMRIS_2-2017/20.

[7] S. Avgousti, A. S. Panayides, E. G. Christoforou, A. Argyrou, A. Jossif, P. Masouras, C. Novales, and P. Vieyres, “Medical Telerobotics and the Remote Ultrasonography Paradigm Over 4G Wireless Networks”. In: 2018 IEEE 20th International Conference on e-Health Networking, Applications and Services (Healthcom), 2018, 1–6, 10.1109/HealthCom.2018.8531194. [8] C. A. Avizzano, A. Filippeschi, J. M. J. Villegas, and E. Ruffaldi, “An Optimal Geometric Model Articles




for Clavel's Delta Robot". In: 2015 IEEE European Modelling Symposium (EMS), 2015, 232–237, 10.1109/EMS.2015.84.

[9] F. Courreges, P. Vieyres, and R. S. H. Istepanian, “Advances in robotic tele-echography services the OtelO system”. In: The 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 2, 2004, 5371–5374, 10.1109/IEMBS.2004.1404499.

[10] S. S. Gadupudi, S. Nisha, and S. Yarramasu, “Teledentistry: A futuristic realm of dental care”, International Journal of Oral Health Sciences, vol. 7, no. 2, 2017, 63–67, 10.4103/ijohs.ijohs_22_17.

[11] N. Koizumi, S. Warisawa, H. Hashizume, and M. Mitsuishi, “Continuous Path Controller for the Remote Ultrasound Diagnostic System”, IEEE/ASME Transactions on Mechatronics, vol. 13, no. 2, 2008, 206–218, 10.1109/TMECH.2008.918530. [12] N. Koizumi, S. Warisawa, M. Nagoshi, H. Hashizume, and M. Mitsuishi, “Construction Methodology for a Remote Ultrasound Diagnostic System”, IEEE Transactions on Robotics, vol. 25, no. 3, 2009, 522–538, 10.1109/TRO.2009.2019785.

[13] A. Krupa, D. Folio, C. Novales, P. Vieyres, and T. Li, “Robotized Tele-Echography: An Assisting Visibility Tool to Support Expert Diagnostic”, IEEE Systems Journal, vol. 10, no. 3, 2016, 974–983, 10.1109/JSYST.2014.2314773.

[14] A. Kurnicki, M. Cholewiński, B. Stańczyk, and K. Arent, "Implementation and evaluation of a bilateral teleoperation with use of wave variables in the ReMeDi system for remote medical examination". In: 2017 22nd International Conference on Methods and Models in Automation and Robotics (MMAR), 2017, 131–136, 10.1109/MMAR.2017.8046811. [15] A. Kurnicki, B. Stańczyk, and B. Kania, "Manipulator Development for Telediagnostics". In: Proceedings of the International Conference on Mechatronics and Robotics, Structural Analysis (MEROSTA2014), 2014, 214–218. [16] K. Mathiassen, J. E. Fjellin, K. Glette, P. K. Hol, and O. J. Elle, "An Ultrasound Robotic System Using the Commercial Robot UR5", Frontiers in Robotics and AI, vol. 3, 2016, 10.3389/frobt.2016.00001.

[17] R. Monfaredi, E. Wilson, B. Azizi Koutenaei, B. Labrecque, K. Leroy, J. Goldie, E. Louis, D. Swerdlow, and K. Cleary, "Robot-assisted ultrasound imaging: Overview and development of a parallel telerobotic system", Minimally Invasive Therapy & Allied Technologies: MITAT: official journal of the Society for Minimally Invasive Therapy, vol. 24, no. 1, 2015, 54–62, 10.3109/13645706.2014.992908.


[18] A. S. B. Mustafa, T. Ishii, Y. Matsunaga, R. Nakadate, H. Ishii, K. Ogawa, A. Saito, M. Sugawara, K. Niki, and A. Takanishi, "Development of robotic system for autonomous liver screening using


ultrasound scanning device”. In: 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2013, 804–809, 10.1109/ROBIO.2013.6739561.

[19] A. Niewola, L. Podsędkowski, P. Wróblewski, P. Zawiasa, and M. Zawierucha, "Selected aspects of robin heart robot control", Archive of Mechanical Engineering, vol. LX, no. 4, 2013.

[20] C. Passenberg, A. Peer, and M. Buss, “A survey of environment-, operator-, and task-adapted controllers for teleoperation systems”, Mechatronics, vol. 20, no. 7, 2010, 787–801, 10.1016/j.mechatronics.2010.04.005. [21] A. Peer, M. Buss, B. Stanczyk, D. SzczesniakStanczyk, W. Brzozowski, A. Wysokinski, M. Tscheligi, C. A. Avizzano, E. Ruffaldi, L. van Gool, A. Fossati, K. Arent, J. Jakubiak, and M. Janiak, “Towards a remote medical diagnostician for medical examination”. In: NextMed MMVR21, 2014. [22] A. Peretti, F. Amenta, S. K. Tayebati, G. Nittari, and S. S. Mahdi, “Telerehabilitation: Review of the State-of-the-Art and Areas of Application”, JMIR Rehabilitation and Assistive Technologies, vol. 4, no. 2, 2017, 10.2196/rehab.7511.

[23] L. Santos and R. Cortesão, "Admittance control for robotic-assisted tele-echography". In: 2013 16th International Conference on Advanced Robotics (ICAR), 2013, 1–7, 10.1109/ICAR.2013.6766502.

[24] L. Santos and R. Cortesão, "Computed-Torque Control for Robotic-Assisted Tele-Echography Based on Perceived Stiffness Estimation", IEEE Transactions on Automation Science and Engineering, vol. 15, no. 3, 2018, 1337–1354, 10.1109/TASE.2018.2790900. [25] B. Stańczyk, A. Kurnicki, and K. Arent, "Logical architecture of medical telediagnostic robotic system". In: 2016 21st International Conference on Methods and Models in Automation and Robotics (MMAR), 2016, 200–205, 10.1109/MMAR.2016.7575133.

[26] B. Stanczyk, A. Peer, and M. Buss, “Development of a high-performance haptic telemanipulation system with dissimilar kinematics”, Advanced Robotics, vol. 20, no. 11, 2006, 1303– 1320, 10.1163/156855306778792461. [27] G. Stollnberger, C. Moser, C. Zenz, M. Tscheligi, D. Szczesniak-Stanczyk, M. Janowski, W. Brzozowski, and A. Wysokinski, “Capturing expected user experience of robotic systems in the health care sector”. In: Proceedings of the Austrian Robotics Workshop 2014, 2014, 42–46.

[28] Á. Takács, S. Jordán, D. Á. Nagy, J. K. Tar, I. J. Rudas, and T. Haidegger, "Surgical robotics — Born in space". In: 2015 IEEE 10th Jubilee International Symposium on Applied Computational Intelligence and Informatics, 2015, 547–551, 10.1109/SACI.2015.7208264.



[29] J. M. J. Villegas, C. A. Avizzano, E. Ruffaldi, and M. Bergamasco, “A Low Cost Open-Controller for Interactive Robotic System”. In: 2015 IEEE European Modelling Symposium (EMS), 2015, 462– 468, 10.1109/EMS.2015.75.

[30] J. Zhou, X. Ma, Z. Xu, and Z. Qi, “Overview of Medical Robot Technology Development”. In: 2018 37th Chinese Control Conference (CCC), 2018, 5169–5174, 10.23919/ChiCC.2018.8483769.

[31] W. Zhu, S. E. Salcudean, S. Bachmann, and P. Abolmaesumi, “Motion/force/image control of a diagnostic ultrasound robot”. In: Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation, vol. 2, 2000, 1580–1585, 10.1109/ROBOT.2000.844822.




Modified Position-Force Control for a Manipulator Geometrically Constrained by Round Obstacles

Submitted: 1st February 2019; accepted: 15th April 2019

Alicja Mazur, Mirela Kaczmarek, Joanna Ratajczak, Wojciech Domski

DOI: 10.14313/JAMRIS/2-2019/19

Abstract: This paper addresses the problem of position-force control for robot manipulators under geometric endpoint constraints defined as round obstacles. Considerations are based on hybrid position-force control and use a modified Arimoto algorithm with the principle of "orthogonalisation". To achieve so-called joint space orthogonalisation, where motion signals are orthogonal to force vectors and the direction selection is performed in the joint space, a projection matrix is utilised. The convergence of tracking and force errors is proved.

Keywords: manipulator, position-force control, principle of orthogonalisation, hybrid control

1. Introduction


Over the years, the use of robots has been increasing, especially in industry. Many of the simple tasks may need only trajectory control. In such a case the robot end-effector is moved only along a desired time trajectory and position control is sufficient to achieve the goal. However, in order to perform many more complex tasks, such as assembly of parts, manipulation of tools, washing a window and others, the robot end-effector must come into physical interaction with its environment. Successful execution of those tasks requires an approach considering the contact force and different control strategies. Tasks that require force control may vary and include situations where applying a controlled force is needed for a manufacturing process (e.g. deburring and grinding), where pushing an external object using a controlled force is needed, or where one deals with geometric uncertainty (e.g. in assembly) [5]. In the case of tasks assuming the possibility of contact of the manipulator with the environment, three phases can be distinguished [20, 21]: 1) approaching phase – free movement, 2) transition phase – free movement with expected contact, 3) contact phase – exerting a given force. The first phase involves approaching the object and represents motion in a collision-free space. Therefore, positional control algorithms can be used for its implementation. The second phase is the transition from free movement to contact with the environment. This phase may be impacted by the surroundings, therefore the control here is the most complex. Forces that may occur during a collision are usually not

fully known. In the control process, it is necessary to manage position and force simultaneously. If the contact does not occur, the control system tries to reach the desired position or speed. However, in the case of contact, the set trajectory must be modified accordingly. A complication in the control process may be the fact that, as a result of the contact, an unexpected behaviour of the manipulator may occur. In such a case, the manipulator may fall into vibrations or oscillations [21]. The last phase of control requires constant contact with the surroundings. Depending on the task, it may be necessary to exert a force with a certain value with a simultaneous change in the position of the manipulator. Considering the manipulator's interaction with the environment, the ability to model the environment in which the robot operates plays an important role. The type of environment and the degree of its familiarity largely determine the choice of the control algorithm. An approach in which the environment is well known is shown, among others, in [4] and [10], while dealing with limited knowledge about the environment is presented in [6, 7, 11, 17] and [16]. In this paper we assume that the environment is well known. For performing a class of tasks entailing the manipulator's contact with the environment, force and compliance control are fundamental strategies [9, 18, 22]. In this article we will present a basic strategy for dealing with geometric constraints imposed by the robot environment – hybrid position-force control. The idea and scheme of hybrid motion-force control was first proposed by Craig and Raibert [3] and based on decoupling of motion control and force control: motion – in directions without constraints, exerted force – in constrained directions. The problem was the kinematic instability of the proposed hybrid position-force control scheme, which was connected with the position control part. Fisher and Mujtaba [8] showed that kinematic instability is a result of an incomplete and inappropriate formulation of the problem, and proposed a correct formulation of hybrid position-force control. Over the years, many various schemes have been proposed for such control systems. An insightful description of them can be found in [15] and a new review of notable interest is in [14]. The hybrid position-force control scheme is based on the capability to provide both motion and force control which do not interact with each other. It is necessary to select directions where motion can be executed (position control) and other directions where it is possible to exert forces on a contact surface. De-



coupling of motion and force control is fundamental and can be achieved in various ways depending on, e.g., the chosen space. Schemes introduced in [3, 8] are based on a selection matrix used for decoupling between motion and force through an appropriate selection of directions in the Cartesian space. Another approach can be found in [1, 2], where a projection matrix is used to achieve the so-called joint space orthogonalisation. Hence motion signals are orthogonal to force vectors and the selection is performed in the joint space. The results of a comparison between classic hybrid control and the modified Arimoto algorithm for a flat obstacle were presented by the authors before in [12]. In this article we present a modified hybrid position-force control algorithm based on joint space orthogonalisation, to achieve the following goal: position-force control for a manipulator with round obstacles. Considerations involve a mathematical proof of the position and force error convergence. Results of simulation experiments are also presented. In section 2 we present some preliminary information about manipulator dynamics under constraint. Next, in section 3 the control problem statement is given. Section 4 is devoted to the study of joint-space orthogonalisation. Afterwards, in section 5, the control law is stated and proved. Results of simulations are presented in section 6. Section 7 contains a brief summary and conclusions.

2. Robot Dynamics Under Constraint

Suppose that the model of the manipulator is fully known and the manipulator endpoint is in physical contact with a rigid surface described as a holonomic constraint

Φ(p) = 0,   (1)

where p = (x, y, z) denotes the Cartesian (task) coordinates. Then the robot dynamics is described in terms of joint coordinates q = (q1, ..., qn) in the following form

M(q)q̈ + C(q, q̇)q̇ + D(q) = u + Jϕn^T f,   (2)

where the left-hand side of the expression describes the dynamics of the manipulator with the following elements:
- q, q̇, q̈ ∈ Rn – vectors of joint positions, velocities and accelerations,
- M(q) – symmetrical, positive definite inertia matrix,
- C(q, q̇) – matrix of Coriolis and centripetal forces,
- D(q) – vector of gravitational forces.
The right-hand side of (2) includes the control vector u and the expression Jϕn^T f, which represents the contact force exerted at the joints, where Jϕn is the normalized vector Jϕ defined as follows

Jϕn = Jϕ / ∥Jϕ∥

and

Jϕ = (∂Φ/∂p) · (∂p/∂q) = A(p)Ja(q),

where Ja is the Jacobi matrix.

3. Control Problem Statement

As it was mentioned in the introduction, in this paper we address the following control problem: a fixed-base manipulator with fully known dynamics should track a desired trajectory pd(t) on the surface of a round obstacle while simultaneously exerting a desired force fd in the direction orthogonal to the obstacle surface. We will make the following assumptions:
- the obstacle is fully known and completely rigid – given as a holonomic constraint and defined in Cartesian coordinates,
- the manipulator's end-effector is in physical contact with the obstacle,
- the desired trajectory pd(t) is of class C2 and is defined on the surface of the obstacle in Cartesian space.
To realize such a task, it is necessary to define a control law which has a hybrid structure: the directions of position and position-force control are orthogonal.

4. Joint-Space Orthogonalisation

Let us assume that both the desired trajectory qd(t) and force fd(t) are given in joint space. Moreover, assume that the velocity signal q̇(t), position signal q(t) and momentum signal F(t), given as

F(t) = ∫₀ᵗ f(τ) dτ,   (3)

can be measured and used in real time. Then let us introduce the so-called "nominal reference" signal defined by the expression

q̇r = Q(q)(q̇d − Λeq) + βJϕn^T(q)eF,   (4)

where
- eq = q − qd denotes the position error in joint coordinates,
- Λ > 0 and β ≥ 0 are constants,
- eF = F − Fd denotes the momentum signal error and Fd = ∫₀ᵗ fd(τ) dτ,
- Q(q) ∈ Rn×n is the projection matrix defined by

Q(q) = In − Jϕn^T(q) Jϕn(q).   (5)

Matrix Q(q), defined by (5), projects vectors in joint space onto the plane tangent to the surface Φ(p(q)) = 0 at point q. The holonomic constraint Φ(p(q)) = 0 is fulfilled as long as the manipulator endpoint is in physical contact with the surface. It holds that

Jϕn(q)q̇ = 0,   (6)

which also ensures that the following equations are satisfied

Q(q)q̇ = q̇,   Q(q)Jϕn^T(q) = 0.   (7)

The difference between the current velocity q̇ and the nominal reference q̇r is called the sliding variable s

s = q̇ − q̇r.   (8)
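As a rough illustration of how the projection matrix (5), the nominal reference (4) and the sliding variable (8) could be computed, the following numpy sketch is given; Jϕn is passed in as the normalised constraint Jacobian row vector, and all names and gains are illustrative, not part of the original implementation.

# A minimal numpy sketch of eqs. (4), (5) and (8); illustrative names only.
import numpy as np

def nominal_reference(q_dot, q_dot_d, e_q, e_F, J_phi_n, Lam=1.0, beta=0.5):
    J_phi_n = np.asarray(J_phi_n, dtype=float).reshape(1, -1)   # 1 x n row vector
    n = J_phi_n.shape[1]
    Q = np.eye(n) - J_phi_n.T @ J_phi_n                         # projection matrix, eq. (5)
    q_dot_r = Q @ (q_dot_d - Lam * e_q) + beta * e_F * J_phi_n.ravel()  # eq. (4)
    s = q_dot - q_dot_r                                         # sliding variable, eq. (8)
    return Q, q_dot_r, s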


Since the nominal reference trajectory is given by equation (4) and (7) is satisfied, the following decomposition of the sliding variable (8) can be made

s = q̇ − q̇r = Q(q)(ėq + Λeq) − βJϕn^T(q)eF = s0 + s1,   (9)

where ėq = q̇ − q̇d is the velocity error in joint space, s0 = Q(q)(ėq + Λeq), and s1 = −βJϕn^T(q)eF. We conclude from (7) that the signal s0 is orthogonal to the signal s1:

s0^T s1 = −(ėq + Λeq)^T Qϕ β Jϕn^T eF = −β(ėq + Λeq)^T Qϕ Jϕn^T eF = 0,   (10)

since Qϕ Jϕn^T = 0.

Hence, nominal reference signal (4) is also composed of two parts which are orthogonal to each other. According to [1] and [2] this design concept is called ”joint-space orthogonalisation”.

5. Main Result – Control Law

For the manipulator defined by equation (2), a modified version of the Arimoto control algorithm [1] has been chosen as the control law, defined by

u = M(q)q̈r + C(q, q̇)q̇r + D(q) − Kd(ėq + Λeq) − Jϕn^T(q)(fd − γeF),   (11)

where Kd = Kd^T > 0 and γ > 0 are regulation parameters of the controller. The nominal reference signal q̇r is defined as before by (4). Compared to the original control law proposed by Arimoto, the additional term −Kd(ėq + Λeq) was added to improve the algorithm convergence. Furthermore, a different proof of the position and force error convergence is presented.
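A sketch of how the control law (11) could be evaluated is given below; the dynamics terms M, C, D and the normalised constraint Jacobian are assumed to be available from the robot model, and q̈r would in practice be obtained by differentiating (4). The names and the numeric gains are illustrative only.

# A sketch of the control law (11); illustrative names and gains.
import numpy as np

def control_law(M, C, D, J_phi_n, q_ddot_r, q_dot_r, e_q_dot, e_q, e_F, f_d,
                Kd, Lam=1.0, gamma=2.0):
    J_phi_n = np.asarray(J_phi_n, dtype=float).ravel()
    u = (M @ q_ddot_r + C @ q_dot_r + D
         - Kd @ (e_q_dot + Lam * e_q)          # added damping term
         - J_phi_n * (f_d - gamma * e_F))       # force part along the constraint normal
    return u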

5.1. Proof of the Position and Force Errors Convergence

Substituting the control law (11) into the system (2) we obtain the closed-loop system dynamics

M q̈ + C q̇ + D = Jϕn^T f + M q̈r + C q̇r + D − Kd s − Jϕn^T (fd − γeF),

which can be rewritten as

M(q̈ − q̈r) + C(q̇ − q̇r) + Kd s = Jϕn^T (ėF + γeF)   (12)

and next, using the definition of the sliding variable (8), we can present (12) in the following form

M ṡ + C s + Kd s = Jϕn^T (ėF + γeF).   (13)

For the closed-loop system (13) we propose the following Lyapunov-like function

V(s, eF) = (1/2) s^T M(q) s + (1/2) β eF²,

which is non-negative. We compute the time derivative of V along the solutions of the closed-loop system (13)

V̇ = s^T M(q) ṡ + (1/2) s^T Ṁ(q) s + β eF ėF,


which, after substituting M ṡ from (13) and using the skew-symmetry of (Ṁ − 2C), becomes

V̇ = s^T(−C s − Kd s + Jϕn^T (ėF + γeF)) + (1/2) s^T Ṁ(q) s + β eF ėF
  = −s^T Kd s + s^T Jϕn^T (ėF + γeF) + β eF ėF
  = −s^T Kd s + (s0^T + s1^T) Jϕn^T (ėF + γeF) + β eF ėF   (14)
  = −s^T Kd s + s0^T Jϕn^T (ėF + γeF) + s1^T Jϕn^T (ėF + γeF) + β eF ėF
  = −s^T Kd s + (ėq + Λeq)^T Qϕ^T Jϕn^T (ėF + γeF) − β eF Jϕn Jϕn^T (ėF + γeF) + β eF ėF.   (15)

From the conditions (7) it is known that Qϕ = Qϕ^T and Qϕ Jϕn^T = 0. Therefore, the second expression in the derivative (15) is equal to zero. Moreover, since Jϕn Jϕn^T = 1, it follows that

V̇ = −s^T Kd s − β eF (ėF + γeF) + β eF ėF = −s^T Kd s − β γ eF².   (16)

We conclude from the properties of quadratic forms that the following inequality is satisfied:

s^T Kd s ≥ λmin(Kd) s^T s,   λmin(Kd) > 0.   (17)

After multiplying both sides of the inequality (17) by (−1) and substituting into the derivative of the Lyapunov function V (16), we get

V̇ ≤ −λmin(Kd) s^T s − β γ eF²
  = −λmin(Kd)(s0^T + s1^T)(s0 + s1) − β γ eF²
  = −λmin(Kd)(s0^T s0 + s0^T s1 + s1^T s0 + s1^T s1) − β γ eF².   (18)

Since it is shown in (10) that the signals s0 and s1 are orthogonal, we can rewrite equation (18) as follows

V̇ ≤ −λmin(Kd) ∥s0∥² − λmin(Kd) ∥s1∥² − β γ eF² ≤ 0.   (19)

Finally, we get that the stable equilibrium of the system (13) is (s0, s1, eF) = (0, 0, 0), which implies the following property

s0 = ėq + Λeq → 0  ⟹  eq → 0,   (20)

along with the convergence of ėq to zero. This ends the proof.

6. Simulations

The simulations have been done using the MATLAB package and the SIMULINK toolbox. As the object of the simulations we have taken an RTR manipulator, presented in Fig. 1. The links of the RTR manipulator have been modeled as homogenous sticks with lengths equal to l2 = 0.9 m, l3 = 1 m, and masses m2 = 20 kg, m3 = 20 kg. The dynamics of the manipulator is given by equation (2) with elements equal to



6.1. Desired Trajectory

For the manipulator, the desired trajectory has been defined as below

pd(t) = [xd(t), yd(t), zd(t)]^T = [0.4 cos t, 0.4 sin t, (t + 10)/10]^T,   (25)

being a helical line located on the surface of a completely rigid obstacle. The holonomic constraint is therefore defined in the following form

Φ(p) = x² + y² − 0.4² = 0.   (26)

Fig. 1. Scheme of modelled RTR manipulator

- M – symmetric positive definite inertia matrix

M(q) = [ M11  0    0
         0    M22  M23
         0    M23  M33 ],   (21)

where
M11 = (1/3)m2 l2² + m3(l2² + (1/3)l3² cos² q3 + l2 l3 cos q3),
M22 = m2 + m3,
M23 = (1/2)m3 l2 l3 cos q3,
M33 = (1/3)m3 l3²,

- C – matrix of centripetal and Coriolis forces

C(q, q̇) = [ C11  0  C13
            0    0  C23
            C31  0  0  ],   (22)

where
C11 = q̇3(−(1/2)m3 l2 l3 sin q3 − (1/3)m3 l3² sin q3 cos q3),
C13 = −q̇1((1/2)m3 l2 l3 sin q3 + (1/3)m3 l3² sin q3 cos q3),
C23 = −(1/2)q̇3 m3 l2 l3 sin q3,
C31 = q̇1((1/2)m3 l2 l3 sin q3 + (1/3)m3 l3² sin q3 cos q3),

- D – vector of gravity

D(q) = [ 0,  (m2 + m3)g,  (1/2)g m3 l3 cos q3 ]^T.   (23)

The effector's position in Cartesian coordinates is

p = [x, y, z]^T = [ cos q1(l3 cos q3 + l2),  sin q1(l3 cos q3 + l2),  l3 sin q3 + q2 ]^T,

and the Jacobi matrix is defined as follows

J(q) = [ J11  0  J13
         J21  0  J23
         0    1  J33 ],   (24)

where
J11 = −sin q1(l3 cos q3 + l2),
J13 = −cos q1 sin q3 l3,
J21 = cos q1(l3 cos q3 + l2),
J23 = −sin q1 sin q3 l3,
J33 = cos q3 l3.

The desired trajectory (25) has to be located on the obstacle surface, which is given in Cartesian coordinates. For this reason it is necessary to transform it to joint space. Using the Newton algorithm [19], the trajectory can be represented as the solution of the following equation (for the initial condition k(qd(0)) − pd(0) = 0):

q̇d = J⁻¹(qd)[ṗd − γ(k(qd) − pd)],   (27)

where:
- qd, q̇d – desired position and velocity in joint space,
- pd, ṗd – desired position and velocity in Cartesian space,
- γ > 0 – convergence coefficient; assumed γ = 7,
- k(qd) – end-effector position expressed as a function of qd.
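To illustrate the mapping (27) of the Cartesian trajectory to joint space, a numeric sketch is given below. It uses the forward kinematics and the Jacobi matrix (24) of the RTR manipulator and the helical trajectory as reconstructed in (25); the integration scheme, the step size and the initial joint configuration are assumptions of this sketch, not taken from the paper.

# Sketch of the Newton-algorithm trajectory mapping (27); assumptions noted above.
import numpy as np

l2, l3, gamma = 0.9, 1.0, 7.0

def k(q):                                  # end-effector position as a function of q
    q1, q2, q3 = q
    r = l3 * np.cos(q3) + l2
    return np.array([np.cos(q1) * r, np.sin(q1) * r, l3 * np.sin(q3) + q2])

def J(q):                                  # Jacobi matrix, eq. (24)
    q1, q2, q3 = q
    r = l3 * np.cos(q3) + l2
    return np.array([[-np.sin(q1) * r, 0.0, -np.cos(q1) * np.sin(q3) * l3],
                     [ np.cos(q1) * r, 0.0, -np.sin(q1) * np.sin(q3) * l3],
                     [ 0.0,            1.0,  np.cos(q3) * l3]])

def p_d(t):                                # desired helical trajectory, eq. (25)
    return np.array([0.4 * np.cos(t), 0.4 * np.sin(t), (t + 10.0) / 10.0])

def p_d_dot(t):
    return np.array([-0.4 * np.sin(t), 0.4 * np.cos(t), 0.1])

# Euler integration of (27); q_d(0) chosen so that k(q_d(0)) = p_d(0)
dt = 1e-3
q_d = np.array([0.0, 1.0 - l3 * np.sin(2 * np.pi / 3), 2 * np.pi / 3])
for i in range(10000):
    t = i * dt
    q_dot_d = np.linalg.solve(J(q_d), p_d_dot(t) - gamma * (k(q_d) - p_d(t)))
    q_d = q_d + dt * q_dot_d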

6.2. Force Simulation

To control the contact between the manipulator and the surface it is necessary to get information about the contact force. The contact force can be measured (if there is an actual manipulator equipped with appropriate sensors [21]) or calculated based on Lagrange multipliers and the definition of the holonomic constraint [13]. The Lagrange multipliers λ can be calculated by double differentiation, in the time domain, of the holonomic constraint equation (1)

ϕ̈(q) = J̇q̇ + Jq̈ = 0.   (28)

After substituting q̈ from the dynamics equation of the manipulator (2), the equation (28) takes the form

ϕ̈(q) = JM⁻¹J^Tλ + J̇q̇ − JM⁻¹(Cq̇ + D − u) = W(ϕ).

Equation (28) does not guarantee that ϕ(q) = 0 will always be fulfilled. To assure it, even if the manipulator is at some distance from the surface of the holonomic constraint, numerical damping terms [13] are added to the system in the form of the following equation

W(ϕ) = ϕ̈(q) + 2αϕ̇(q) + β²ϕ(q),

where the α and β coefficients should guarantee asymptotic stability. After the above steps, the holonomic Lagrange multipliers fulfil

JM⁻¹J^Tλ = W(ϕ) − J̇q̇ + JM⁻¹(Cq̇ + D − u).
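The final linear equation for the multipliers can be solved directly. The sketch below assumes the stabilised term W(φ) has already been evaluated from the damping equation above and is passed in by the caller; all names are illustrative.

# Sketch of solving J M^-1 J^T lambda = W - J_dot q_dot + J M^-1 (C q_dot + D - u)
import numpy as np

def contact_force(M, C, D, u, J, J_dot, q_dot, W):
    J = np.asarray(J, dtype=float).reshape(1, -1)        # 1 x n constraint Jacobian
    J_dot = np.asarray(J_dot, dtype=float).reshape(1, -1)
    Minv = np.linalg.inv(M)
    A = J @ Minv @ J.T                                   # 1 x 1 for a single constraint
    b = W - J_dot @ q_dot + J @ Minv @ (C @ q_dot + D - u)
    lam = np.linalg.solve(A, np.ravel(b))                # holonomic Lagrange multiplier
    return float(lam[0])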


6.3. Results for the Manipulator on the Constraint

According to the assumption that the holonomic constraint fulfils Φ = 0, the initial conditions were chosen as x(0) = 0.4, y(0) = 0, z(0) = 7, to guarantee that the manipulator end-effector is in contact with the surface. The desired force was chosen as fd = 10 N. In Fig. 2(a) tracking of the desired trajectory in 3D space by the manipulator has been presented. Since the end-effector is supposed to be on the obstacle, the real trajectory coincides with the desired one in the x and y coordinates, whereas the initial condition z(0) was chosen to show that the manipulator's end-effector approaches the desired trajectory. Figures 2(c)–2(e) show the position errors (in Cartesian space) in the time domain, as follows: ex = x − xd, ey = y − yd and ez = z − zd, while Fig. 2(b) shows the force error in the time domain, ef = f − fd.

6.4. Results for the Manipulator off the Constraint

In section 6.3, results for the manipulator constantly on the constraint were presented. However, as mentioned in the introduction, many tasks require three phases of control, and only the last one is exerting a given force (being in contact with an obstacle). For this reason two-stage control is unavoidable, and both position and position-force control are required. The first stage is position control to approach the end-effector to an obstacle. The second, when the manipulator is on the constraint, is position-force control (exerting force on an obstacle). For the first part, position control in the free space, the nominal reference signal was given as

q̇r = q̇d − Λeq,   (29)

and the control law was

u = M(q)q̈r + C(q, q̇)q̇r + D(q) − Kd s,   (30)

where s is the sliding variable defined as s = q̇ − q̇r = ėq + Λeq. When the manipulator was on the constraint (the end-effector was in contact with an obstacle), the control was switched to the form presented in section 6.3. It is important to notice that, in case of no contact with obstacles, in the dynamic model (2) the term Jϕn^T f is equal to 0 (there is no contact force). The manipulator is a dynamic system; therefore rapid switching between algorithms may destabilize the system. Moreover, it is not possible to avoid collision. For this reason, in the simulations of the algorithms the following solutions were utilized (summarised in the sketch after this list):
- position control is used for approaching the obstacle, as long as the distance between the end-effector and the surface is greater than maxdist,


- when the manipulator is close to the constraint surface (the distance between the end-effector and the surface is smaller than maxdist), the control takes the form (11). However, because there is no contact with the obstacle, the force is not included in the manipulator dynamics. Moreover, the desired force is not

constant over time, but depends on the actual value:

fd = max(P · f, 0),   (31)

where P ∈ (0, 1) is a proportionality factor. This solution allows a gradual reduction of the actual force, so that when reaching the obstacle the force value is 0 (the manipulator slows down to avoid a sudden collision with the environment),
- at the moment of contact with the obstacle, the occurring force is taken into account in the dynamics equation of the manipulator, and the desired force is adjusted to the expected constant value fd.
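The switching logic just described can be summarised in a few lines. The sketch below is illustrative only; the distance and contact tests are assumed to be provided elsewhere, while P, maxdist and the target force follow the values quoted in the text.

# Sketch of the two-stage desired-force selection; illustrative only.
def desired_force(distance, f_current, in_contact,
                  max_dist=0.05, P=0.5, f_d_target=10.0):
    if in_contact:
        return f_d_target                 # contact phase: constant desired force
    if distance <= max_dist:
        return max(P * f_current, 0.0)    # close to the surface, eq. (31)
    return 0.0                            # free motion: pure position control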

According to the assumption that the end-effector starts off the constraint, the initial conditions were chosen as x(0) = 0.1, y(0) = 0.1, z(0) = 2. The results of trajectory tracking are presented in Fig. 3(a),(b) (3D and 2D). Figures 3(c)–3(e) present the position tracking errors. The distance maxdist was chosen as 0.05, while the manipulator's effector is considered to be on the constraint if its distance from the constraint is less than or equal to 0.00001. In order to define the desired force, formula (31) is used with the proportionality coefficient P = 0.5. Fig. 4(a) shows the value of the desired force over time, which is 0 when the manipulator is far from the obstacle; then the force changes accordingly (proportionally to the current force value) and finally, when the manipulator is close to the surface, falls to 0. At the moment when the manipulator comes into contact with the obstacle, the value of the desired force is set to the expected value fd = 10. The switching moments of the algorithm are shown in Fig. 4(a) and Fig. 4(c). The first vertical line indicates the time at which the manipulator is close to the obstacle (at a distance less than or equal to maxdist from the obstacle). The second vertical line indicates the moment when the manipulator comes into contact with the obstacle. Then the control algorithm is executed in the same way as for the situation described in section 6.3. This solution was adopted for computational reasons.

7. Conclusion

As mentioned in the introduction, one of the main strategies for position-force control is hybrid control based on decoupling of position and force control. Consequently, the selection of directions of free motion (position control) and other directions, where there is a possibility to exert forces on a contact surface, is required. The choice of the control algorithm is determined by the obstacle shape. In this paper we assume that the manipulator has to track a trajectory given as a helical line located on the surface of a completely rigid round obstacle. For this kind of task it is necessary to choose the directions of control in the joint space. Therefore we applied the principle of orthogonalisation to provide decoupling of control in the joint space. Based on the principle of orthogonalisation presented by Arimoto [1], we proposed a modified position-force control algorithm. Convergence of its tracking and force errors is proved.



Fig. 2. Modified Arimoto algorithm: (a) trajectory tracking, (b) force tracking error ef, (c) position tracking error ex, (d) position tracking error ey, (e) position tracking error ez

Correctness of the algorithm is supported by the results of simulations. Simulation tests require information about the actual forces. Hence, we used the approach proposed by Mills and Goldenberg [13], providing force simulation at the end-effector on the basis of the Lagrange multipliers. The drawback of this method is the underlying assumption of total stiffness of the manipulator and obstacles. The presented position-force control algorithm works correctly and can be used for a geometrically constrained, fully-known manipulator with rigid obstacles during trajectory tracking. It is dedicated to the situation when the manipulator is in constant contact with the obstacle. In other situations it is necessary to switch between position control and the presented algorithm, according to the movement phase. The assumption of full knowledge about the dynamics is not required. For the parametric uncertainty, when a certain number of model parameters is unknown, the presented approach can be easily transformed to the adaptive case by using a classical adaptive algorithm in the position part. However, the assumption of fully-known rigid obstacles is still necessary. Future work focuses on the extension of the algorithm to path following tasks for manipulators and on more detailed considerations about collisions.

AUTHORS

Alicja Mazur∗ – Department of Cybernetics and Robotics, Faculty of Electronics, Wrocław University of Science and Technology, ul. Janiszewskiego 11/17, 50-372 Wrocław, Poland, e-mail: alicja.mazur@pwr.edu.pl, www: weka.pwr.edu.pl/pracownicy/alicja-mazur.

Mirela Kaczmarek – Department of Cybernetics and Robotics, Faculty of Electronics, Wrocław University of Science and Technology, ul. Janiszewskiego 11/17, 50-372 Wrocław, Poland, e-mail: mirela.kaczmarek@pwr.edu.pl.

Joanna Ratajczak – Department of Cybernetics and Robotics, Faculty of Electronics, Wrocław University of Science and Technology, ul. Janiszewskiego 11/17, 50-372 Wrocław, Poland, e-mail: joanna.ratajczak@pwr.edu.pl.

Wojciech Domski – Department of Cybernetics and Robotics, Faculty of Electronics, Wrocław University of Science and Technology, ul. Janiszewskiego 11/17, 50-372 Wrocław, Poland, e-mail: wojciech.domski@pwr.edu.pl.

∗Corresponding author

ACKNOWLEDGEMENTS

This work was supported by the Wrocław University of Science and Technology: Alicja Mazur and Mirela Kaczmarek under the statutory grant 0401/0022/18, Joanna Ratajczak under the statutory grant 0401/0019/18 and Wojciech Domski under the grant 0402/0107/18.

67

65


Journal Journal of of Automation, Automation,Mobile MobileRobotics Roboticsand andIntelligent IntelligentSystems Systems

VOLUME N°22 2019 2019 VOLUME 13,13, N°

Fig. 3. Modified Arimoto algorithm with approaching phase: (a) trajectory tracking, (b) trajectory tracking – projection on the XY plane, (c) position tracking error ex, (d) position tracking error ey, (e) position tracking error ez

Fig. 4. Modified Arimoto algorithm with approaching phase: (a) desired force in the first … of the motion, (b) force tracking error ef, (c) force tracking error ef in the first … of the motion

REFERENCES [1] S. Arimoto, Y. H. Liu, and T. Naniwa, “Modelbased adaptive hybrid control for geometrically constrained robots”. In: [1993] Proceedings IEEE International Conference on Robotics and Automation, 1993, 618–623, 10.1109/ROBOT.1993.292047.

[2] S. Arimoto, Y. H. Liu, and T. Naniwa, "Principle of Orthogonalization for Hybrid Control of Robot Arms", IFAC Proceedings Volumes, vol. 26, no. 2, Part 3, 1993, 335–340, 10.1016/S1474-6670(17)48744-1.


[3] J. J. Craig and M. H. Raibert, "A systematic method of hybrid position/force control of a manipulator". In: COMPSAC 79. Proceedings of the IEEE Computer Society's Third International Computer Software and Applications Conference, 1979, 446–451, 10.1109/CMPSAC.1979.762539.

[4] M. Dapper, R. Maass, and R. Eckmiller, "Neural Velocity Force Control for Industrial Manipulators Contacting Rigid Surfaces". In: L. Niklasson, M. Bodén, and T. Ziemke, eds., Proc. of ICANN 98, 1998, 887–892.

[5] J. De Schutter, H. Bruyninckx, W.-H. Zhu, and M. W. Spong, “Force control: A bird’s eye view”. In: B. Siciliano and K. P. Valavanis, eds., Control Problems in Robotics and Automation, Lecture



Notes in Control and Information Sciences, 1998, 10.1007/BFb0015073.

[6] A. Fanaei and M. Farrokhi, “Robust adaptive neuro-fuzzy controller for hybrid position/force control of robot manipulators in contact with unknown environment”, Journal of Intelligent and Fuzzy Systems, vol. 17, no. 2, 2006, 125–144. [7] F. Ferguene and R. Toumi, “Dynamic External Force Feedback Loop Control of a Robot Manipulator Using a Neural Compensator – Application to the Trajectory Following in an Unknown Environment”, International Journal of Applied Mathematics and Computer Science, vol. 19, no. 1, 2009, 113–126, 10.2478/v10006-009-0011-9.

[8] W. D. Fisher and M. S. Mujtaba, “Hybrid Position/Force Control: A Correct Formulation”, The International Journal of Robotics Research, vol. 11, no. 4, 1992, 299–311, 10.1177/027836499201100403. [9] D. M. Gorinevsky, A. M. Formalsky, and A. Y. Schneider, Force control of robotics systems, CRC Press: Boca Raton, 1997.

[10] F. L. Lewis, D. M. Dawson, and C. T. Abdallah, Control of Robot Manipulators, Prentice Hall: Upper Saddle River, NJ, USA, 1993.


[17] Seul Jung, T. C. Hsia, and R. G. Bonitz, “Force tracking impedance control of robot manipulators under unknown environment”, IEEE Transactions on Control Systems Technology, vol. 12, no. 3, 2004, 474–483, 10.1109/TCST.2004.824320.

[18] B. Siciliano and O. Khatib, eds., Springer Handbook of Robotics, Springer Handbooks, SpringerVerlag: Berlin Heidelberg, 2008.

[19] K. Tchoń, A. Mazur, I. Dulęba, R. Hossa, and R. Muszyński, Manipulatory i roboty mobilne: modele, planowanie ruchu, sterowanie, Akademicka Oficyna Wydawnicza PLJ: Warszawa, 2000, (in Polish). [20] M. Vukobratovic, V. Potkonjak, and V. Matijevic, Dynamics of Robots with Contact Tasks, Intelligent Systems, Control and Automation: Science and Engineering, Springer Netherlands, 2003.

[21] T. Winiarski and C. Zieliński, "Podstawy sterowania siłowego w robotach", Pomiary Automatyka Robotyka, vol. 12, no. 6, 2008, 5–10.

[22] G. Zeng and A. Hemami, “An overview of robot force control”, Robotica, vol. 15, no. 5, 1997, 473– 482, 10.1017/S026357479700057X.

[11] Y. Li and S. S. Ge, "Impedance Learning for Robots Interacting With Unknown Environments", IEEE Transactions on Control Systems Technology, vol. 22, no. 4, 2014, 1422–1432, 10.1109/TCST.2013.2286194. [12] A. Mazur, M. Kaczmarek, J. Ratajczak, and W. Domski, "Porównanie algorytmów sterowania pozycyjno-siłowego dla manipulatora typu RTR". In: K. Tchoń and C. Zieliński, eds., Postępy robotyki, Prace Naukowe – Politechnika Warszawska, Warszawa, 2018, 219–228, (in Polish).

[13] J. K. Mills and A. A. Goldenberg, “Force and position control of manipulators during constrained motion tasks”, IEEE Transactions on Robotics and Automation, vol. 5, no. 1, 1989, 30–46, 10.1109/70.88015.

[14] V. Ortenzi, R. Stolkin, J. Kuo, and M. Mistry, “Hybrid motion/force control: a review”, Advanced Robotics, vol. 31, no. 19-20, 2017, 1102–1113, 10.1080/01691864.2017.1364168. [15] R. Paul, “Problems and research issues associated with the hybrid control of force and displacement”. In: Proceedings of IEEE International Conference on Robotics and Automation, vol. 4, 1987, 1966–1971, 10.1109/ROBOT.1987.1087905.

[16] J. Pliego-Jiménez and M. A. Arteaga-Pérez, "Adaptive position/force control for robot manipulators in contact with a rigid surface with unknown parameters". In: 2015 European Control Conference (ECC), 2015, 3603–3608, 10.1109/ECC.2015.7331090.




Using LabVIEW and ROS for Planning and Coordination of Robot Missions, the Example of ERL Emergency Robots and University Rover Challenge Competitions Submitted: 18th March 2019; accepted: 16th May 2019

Agnieszka Węgierska, Kacper Andrzejczak, Mateusz Kujawiński, Grzegorz Granosik

DOI: 10.14313/JAMRIS/2-2019/20 Abstract: The article presents the main functionalities and principles for operating software for multi-robotic mission coordination developed for the ERL Emergency Robots 2017 competition, as well as its adaptation during the University Rover Challenge. We start with an overview of similar software used in commercial applications or developed by other research groups. Then, our solution is thoroughly described, with its user interface made in LabVIEW and the communication layer based on ROS software. Two robotic competitions proved our software to be useful both for planning and for managing the mission. The system supports the operator in teleoperation and during partial autonomy of the robots. It offers reporting on the robots' positions, Points of Interest (POI) and task statuses. Reports are generated in KML/KMZ formats and allow us to replay and analyze the mission. Keywords: Mission coordination, Planning, Robotic competitions, KML/KMZ, LabVIEW

1. Introduction


During the last few years, interest in mobile robots has been on the rise. Currently, robots are used not only in inspection, military or human response applications but are also becoming more and more available in industrial, agricultural or service applications. Some of the reasons for this are: the decreasing size and cost of sensors, increasing computing power of microcomputers, development of localization algorithms and the growing popularity of neural network algorithms for recognition and reasoning. An important source of robot development and social awareness about them comes from robotic competitions. The participants customize their solutions for simulated scenarios or application cases, usually taken from the real world. Hence, researchers are able to find various solutions applicable to real life situations more easily. What is more, competitions give a lot of freedom to choose the best solutions and finally compare them in front of other teams. These are then oftentimes taken to the next stage of commercialization. In this paper we present a situation in which the participation in two international robotic contests re-

quired the development of specialized software for our group of mobile robots. We show research done during the last two years: starting with a short description of the contests' scenarios, analysis of applications available in the market, motivation to develop our own software, detailed description of its structure and functionalities, followed by conclusions and future plans.

One of the most challenging aspects of robotic competitions is mission planning and supporting the operator in action. Proper task strategy and well planned mission time are important success factors. Software for mission management is able to solve both of these problems. Although it is mainly utilized for multi-robotic missions, it is also useful in single domain tasks. An example of such a competition is ERL Emergency Robots, where competing teams came from different higher education institutions, companies or research centers [6]. The main goal of this competition was to draw attention to mission cooperation problems between robots from different domains – aerial, water and ground – in a rescue mission. The teams' performance was evaluated based on the quality of multi-robot cooperation, creation of 2D or 3D terrain maps, localizing missing workers and gas leakages, and delivering the first aid kit to the missing worker. The competition required land, water, and aerial robots cooperating with each other. Most of the teams came from different countries and were experts in a single domain. This required coordination not only at the robotics level but also at the human communication level. It was a stimulus to develop software for mission coordination which could reduce communication problems, universalize contact, and increase mission efficiency in situations where several different robots or people need to cooperate. What also triggered the research was that each team had to provide logs in KML/KMZ format to show their actions and progress in a clear way.

During the ERL competition, operators of every robot were located in different base stations. They were not allowed to communicate directly with each other. To compensate for this, our coordination software provides the mission manager and operators a clear view of the mission status, task execution progress, time remaining, mission problems, manager decisions, and upcoming tasks.

On the other hand, such software can also be used in single domain competitions to increase efficiency and quality of a given task performed by a single ro-



bot. One example is the University Rover Challenge (URC), during which robots simulate a Mars mission. The teams control the robots in tasks such as manipulation, navigation, ground sample evaluation and autonomous traversing. In most cases, missions consist of multiple smaller tasks, and the detailed information related to each of the four missions is revealed by the judges only several minutes before the mission begins. Sometimes subtasks have to be performed in a specific order or at a determined time. Each subtask can have different scoring. As a result it is extremely hard to memorize and later repeat the whole strategy, so an application for mission planning and coordination could improve the team's performance level. In such an instance, the software is limited to planning tasks and their allocated times in proper order. It allows the operator to follow the mission plan without additional work and, in case of problems, to abort tasks with the lowest chances for success. Finally, the lack of similar applications on the market further confirmed the necessity of its development.
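As a loose illustration of the kind of KML logs mentioned above, a few lines of Python are enough to emit a placemark per robot position or POI. The names, coordinates and file layout below are purely hypothetical and do not reproduce the competition's required schema.

# Illustrative KML report snippet; hypothetical names and coordinates.
def placemark(name, lon, lat, alt=0.0, when="2017-09-15T10:00:00Z"):
    return ("<Placemark>"
            f"<name>{name}</name>"
            f"<TimeStamp><when>{when}</when></TimeStamp>"
            f"<Point><coordinates>{lon},{lat},{alt}</coordinates></Point>"
            "</Placemark>")

kml = ('<?xml version="1.0" encoding="UTF-8"?>'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
       + placemark("UGV-1", 16.9252, 52.4064)
       + placemark("POI: gas leak", 16.9260, 52.4070)
       + "</Document></kml>")

with open("mission_report.kml", "w") as f:
    f.write(kml)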

2. Overview of Multi-Robots Mission Manager Software


Based on extensive research, this paragraph features an overview of existing mission manager applications. Several ongoing projects show the problem is not new. Five of the closest to our application are presented below. They became an inspiration for us and helped us determine the necessary requirements. The design process was supported by the different user interfaces visible in the figures below.

2.1. Project Icarus

The application shown in Fig. 1 was developed under the ICARUS program [7, 8], designated for use in search and rescue missions. The system consists of two levels of control stations. The higher one is responsible for defining global goals, planning and coordinating the mission progress, while the lower provides more details on how to realize the goals. The main features of the application are building, monitoring, updating and breaking the mission, adding or relocating resources to other mission sectors, setting inspection or patrolling zones, setting time and the preferred robot type to achieve appointed goals, and displaying information from robot sensors. The ICARUS application used Open Street Map for adding targets, drawing sectors and pointers, adding photos, or planning the robot's path (waypoints, starting and end point of the path, properties of each point like velocity or reaching accuracy).

Fig. 1. User panel of ICARUS software [8]

2.2. Mobile Planer – Omron

The requirements of the factory environment are different. The first one is the map, which should provide mostly corridors or easily identifiable obstacles, easy to read by the final user. One of the solutions to this problem is provided by MobilePlanner designed by the Omron company [11]. The map consists of a previously scanned 2D map with 10 cm segments which inform about traversability. For segments with a low value the traversal is easy, with a high value it is impossible or very difficult. In addition, users can define special zones on the map (e.g., prohibited zones, low-speed zones or one-way zones) and points of interest (e.g., loading or unloading places, battery charging locations). The graphical user interface with the map is shown in Fig. 2. Moreover, an autonomous and intelligent fleet management system is important for factory automation. The presented solution is able to manage up to 100 robots. In MobilePlanner, each target has a defined value of goal position tolerance. Users can link a task with a specific target or create sophisticated tasks from a list of predefined actions. To deal with the problem of reaching one target by several robots at the same time, the software can put the robots into a queue, reducing the chances of traffic problems. Tasks can be sequential or parallel.

Fig. 2. The user interface of MobilePlanner – Omron [11]

2.3. MIRFleet

The MIRFleet [12] is similar to Omron's solution, built by the MIR company (Mobile Industrial Robots) for centralized control of up to 100 mobile robots. The company's portfolio gives options to choose robots with different payload capacity or hook attachment. The web-based user interface is shown in Fig. 3. It works on any device with a web browser, and in the range of the robot's Wi-Fi. There are two ways to create a floor plan used for robot localization – the robot creates it by itself




or a map is uploaded from a CAD or PNG file. The map restrictions (walls, prohibited zones, preferred drive zones, blocked zones, speed reducing, a maximum number of robots in a zone) are defined by colors. Therefore, the map may be edited in any graphic program. The map updates made by robots are shared with the whole fleet. Access to the application options is defined by user hierarchy. The additional features are alerts by SMS or e-mail. The tasks are created using a standard form including start and end time, mission name, priority (normal, high), type of the robot and additional task description. The task start time is limited to five days in advance. Tasks are divided into started, pending (robot currently not available or task starting at a different time) and finished. This application is only prepared to control the MIR robots and does not give the ability to manage robots of any other type.

Fig. 3. Tracking single robot in MIRFleet [12]

2.4. ArduPilot – MissionPlaner [5]

Non-commercial users are looking for open source solutions. One such solution is ArduPilot, built on open code. It can also be upgraded to support multi-robot and multi-domain capabilities. The user interface shown in Fig. 4 is used for mission planning with a single robot. The software allows to edit the list of target points (add, delete, set tolerance and heading), set a flight plan, save or read KML/KMZ files, and display sensor data (speed, altitude or heading). Additionally, the localization of the robot may be displayed online in Google Maps, Bing or Open Street Map, and offline inside the application from flight logs.

Fig. 4. ArduPilot control panel [5]

2.5. NVL Charter

Not strictly related to robotics is the management system used in logistics. NVL Charter (see Fig. 5) is an application for international transport management enabling tracking of specific car courses (place and date of departure, and place and date of arrival), setting via points or editing orders. The main goal of this software is to suggest the most optimal route – optimizing trip time and cost.

Fig. 5. NVLCharter application window [4]

In robotics the delivery could be treated as reaching the target point by the robot, while truck loading and unloading may be treated as performing the task in a given localization.

2.6. Comparative Analysis

The above overview of recently developed applications shows that most of them are very limited regarding universality and adaptability to different robot types and environments. It shows difficulties with creating a multipurpose user interface and coordinating multi-robot systems, especially in different environmental domains. The above-mentioned applications are strictly dedicated to specific circumstances. We could divide all presented applications into an indoor and an outdoor group. Applications for indoor use are focused on material handling. These applications can be integrated with factory management systems but are very limited in cooperating with different robot types. Different localization systems are used in indoor and outdoor robots. In many factories the workload and complexity of processes require several robots to be used (always with the same kinematic construction). Some proprietary applications (MIR, OMRON) use a centralized fleet management system. On the other side there are applications for outdoor use. These systems are mostly limited to displaying robots' locations, mission status or working zones. The applications are more often focused on mission coordination than planning repetitive tasks in time. The outdoor environment is much more diverse com-



pared to the industrial one. Therefore, applications are prepared to perform a limited number of scenarios. Two applications support multiple and heterogeneous robots. Outdoor applications seem to be developed more recently, hence they are less mature than the indoor ones. This lack of previous interest may have been caused by constraints given by the outdoor environment and lower demand coming from the market. Most of the applications give easy access to all data corresponding to the current task, status and information about possible problems. Most of the systems are designed to work with predefined robots, with no option to expand them. This closed structure was one of the main obstacles in adapting available software to our needs. Some of the applications have automatic report generation, which may be highly important in rescue missions, where time is one of the most important indicators. The closest to our approach, and also at the furthest level of development, was the ICARUS project. The purpose of our project is to have an easily customizable application, able to manage different types of robots, missions, and environments (indoor or outdoor).

3. Robot Cooperation Structures

After assigning a task to a specific robot we expect that the task will be finished with a positive result and in a defined time; otherwise the system should inform the user about the reason for mission failure. Efficiency may be improved by increasing the number of robots performing a specific mission or a single task; as a result we are dealing with robot cooperation. For simple tasks we mostly use robots of a single type (e.g. area monitoring); however, if the task is more complex, using robots of different types can be much more efficient (e.g. search and delivery during rescue missions – an aerial robot searches and ground robots deliver heavy equipment near the victim). We can define four basic system types in relation to the number of cooperating robots [9]:
– singular – do not belong to MRS (multi-robot systems); robots work by themselves and the task does not require cooperation,
– double – the group consists of two robots cooperating with each other; the robots are able to perform simple tasks, e.g. moving objects,
– multi-unit – consisting of a small group of robots,
– swarms – consisting of a large group of robots.
Alternatively, we could define systems in relation to performance possibilities and morphology [9]:
– identical – the same locomotion of all robots inside the group,
– homogeneous – all robots have similar, but not necessarily the same, locomotion; there are small differences in the functionalities of each robot, and the absence of one robot means the task cannot be fully finished,
– heterogeneous – different ways of locomotion.
Another element defining the way of cooperation is communication between robots [9]:


– all robots communicate with each other – decentralized management,
– communication between robots and the control station – centralized management,
– robots communicate with each other and make decisions without the base station – multi-master.
Multi-robot systems may also be divided in terms of the ability to cooperate (see Fig. 6); according to [10], four levels can be recognized: cooperation, knowledge, coordination and management.

Fig. 6. Taxonomy of Multi-Robot Systems [10]

Cooperation is present only when performing a specific task requires at least two robots. Using more than two can, in some cases, be much more efficient, e.g. for terrain mapping. Knowledge about other robots working in cooperation can be divided into two groups – aware and unaware. Robot coordination is related to cooperation: all robots take into consideration the current states of the other team members and make decisions that keep the workflow of the task consistent. In systems based on strict coordination we can define the following organization levels:
– strictly centralized – the system has one leader,
– weakly centralized – the leader function can be taken over by any robot,
– distributed – any robot can make decisions by itself.

4. The Mission Planner App for the ERL Competition

As mentioned, robotic contests serve as motivation to find solutions to real-life problems. The organizers of the ERL Emergency Robots competition were inspired by the Fukushima accident (earthquake and tsunami) and created the following rescue scenario. Due to highly radioactive elements, the rescue team used mobile robots to keep a safe distance. The main problem was the communication between robot operators and the robots themselves, because each control station was located in a different place, which required coordination. The main goal was to search for missing people in the open space at the seaside, inside the damaged building and underwater; this required robots from different domains: UGV (Unmanned Ground Vehicle), UAV (Unmanned Aerial Vehicle) and AUV (Autonomous Underwater Vehicle).


One of the tasks requiring cooperation was the delivery of a first-aid kit by an aerial robot to a ground robot. The UGV was supposed to send information to the UAV when the kit was needed, specifying its location. As time was a crucial factor, the task was not as simple as described, especially since the kit had to be on the ground shortly after the signal. Another task was finding a leaking pipe and closing the correct valve, performed by the UGV and the AUV at the same time, to avoid radioactive contamination. These types of coordination between robots can be connected to appropriate task delegation, including pure reconnaissance. For example, an aerial robot can recognize a damaged pipe outdoors, or it can provide an approximate GPS position of a missing worker (mannequin dummy) when it is out of the drone's range. This information can then be used by other robots in their missions. An important requirement was providing a report in KML/KMZ format within one hour after the mission. More details about this format are provided in chapter 5.3. For the purpose of the ERL Emergency Robots competition our Mission Planner application was created in the LabVIEW environment. It acts as an information flow coordinator between robots, so that each member of the group knows the current action status of all the others. It allows cooperation between robots in joint tasks, such as those mentioned above. The coordinator can also send common messages to other robots for work synchronization. The system architecture is based on a central control system. The Mission Planner is responsible for event logging and is to be located on one of the computers in the base station. The user panel was created in LabVIEW and the exchange of messages is possible through a ROS node. Initially, communication between different masters in the ROS environment was handled by the multimaster_fkie package; unfortunately, its operation turned out to be unstable, so we had to create a LabVIEW procedure acting as a multi-master (coordinator). The software consists of the three modules shown in Fig. 7.

Fig. 7. The Mission Planner app modules


Each module has a separate user interface. The communication module is needed for correct operation. It is mainly responsible for gathering data from robots and stations, including the history of common messages exchanged between ROS masters, and for providing these data to the other Mission Planner modules. The other modules can be started depending on specific needs. The map module provides visualization of the robots' positions on the map, together with the distances between them and the GPS coordinates of waypoints. The Mission coordination module displays the mission progress and common messages from the others, sends the coordinator's commands, and collects navigation and mission data for later conversion to the KML/KMZ format.


4.1. Communication Module

The communication module is the most important part of this software because, in the past, operators of different robots in our team used separate applications to control them. The UAV and AUV operators could communicate with and control their robots through ROS, but for the UGV the activities were split between LabVIEW and, partially, ROS. This mixed system for the UGV results from the gradual migration of the Raptors Rover software from LabVIEW to ROS; currently, LabVIEW provides the more reliable solution. Because of our problems with the multi-master configuration and the unstable connection between masters, we decided to create a module in LabVIEW which is responsible for exchanging information between robots and stations. For this reason the communication module has to work with different environments and meet the following requirements:
– provide the location of the robots (GPS position and orientation),
– provide information about exchanged messages,
– send messages to other robots and stations.
In our solution, communication between the robots and the managing application uses the ROS system. Information exchanged directly between LabVIEW and ROS is obtained through the TCP/IP protocol and the WebSocket technology. A separate ROS master runs on each robot. The diagram in Fig. 8 shows the flow of the received data, which are used in the coordination and location modules.
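The paper does not list the client code used on the LabVIEW side; purely as an illustration of the rosbridge/WebSocket pattern described above, the following Python sketch uses the roslibpy client (an assumed stand-in for the actual LabVIEW procedure; host, port and topic name are hypothetical) to subscribe to a robot's GPS fix over a WebSocket connection to a rosbridge server:

import time
import roslibpy

# Connect to a rosbridge server (host is a hypothetical example;
# rosbridge listens on port 9090 by default).
client = roslibpy.Ros(host='192.168.1.10', port=9090)
client.run()

# '/uav1/fix' is a hypothetical topic name for the aerial robot's GPS receiver.
gps_topic = roslibpy.Topic(client, '/uav1/fix', 'sensor_msgs/NavSatFix')

def on_fix(message):
    # rosbridge delivers ROS messages as plain dictionaries
    print('UAV fix: lat %.6f, lon %.6f' % (message['latitude'], message['longitude']))

gps_topic.subscribe(on_fix)

try:
    while client.is_connected:
        time.sleep(1.0)
except KeyboardInterrupt:
    gps_topic.unsubscribe()
    client.terminate()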

Fig. 8. Information exchange between stations and robots

Robots provide the following information to the appropriate stations: GPS position and orientation, the stream from the vision system, common messages when a task is done in autonomous mode (e.g., the drone reached a waypoint autonomously), etc. Each station is responsible for controlling its robot and communicating with the Mission Planner. The stations deliver the GPS position in the standard NavSatFix message from the sensor_msgs package. The land robot sends the Odometry message from the nav_msgs package, which is used to calculate the current orientation, but the flying robot uses its own message structure:

float64 heading

It was not possible to collect the GPS position of the underwater robot during its operation; therefore, it only sends the position of the mannequin found underwater after the robot surfaces. The Mission Planner receives a common message from stations and robots, and forwards it to the others.
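On the ROS side of a ground-robot station, the position and orientation described above could be gathered with standard subscribers. The sketch below is a minimal illustration, not the authors' implementation; the node and topic names are hypothetical, and the yaw is recovered from the Odometry quaternion with the usual conversion formula:

import math
import rospy
from sensor_msgs.msg import NavSatFix
from nav_msgs.msg import Odometry

def on_fix(msg):
    # Latitude/longitude in degrees, as defined by sensor_msgs/NavSatFix
    rospy.loginfo('UGV fix: %.6f, %.6f', msg.latitude, msg.longitude)

def on_odom(msg):
    # Heading (yaw) extracted from the orientation quaternion in nav_msgs/Odometry
    q = msg.pose.pose.orientation
    yaw = math.atan2(2.0 * (q.w * q.z + q.x * q.y),
                     1.0 - 2.0 * (q.y * q.y + q.z * q.z))
    rospy.loginfo('UGV heading: %.1f deg', math.degrees(yaw))

if __name__ == '__main__':
    rospy.init_node('ugv_station_listener')           # hypothetical node name
    rospy.Subscriber('/ugv/fix', NavSatFix, on_fix)    # hypothetical topic names
    rospy.Subscriber('/ugv/odom', Odometry, on_odom)
    rospy.spin()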



For the purpose of information exchange, a custom message structure has been created:

string sender_name
string receiver_name
float64 latitude
float64 longitude
int32 valve_number
string status

The message contains information about the sender and recipient of the message, the coordinates of the sending robot at the time of sending, the number of the damaged valve and a status. The status field is used to send information about the activity currently performed by the robot, e.g. finding a leaking pipe, starting to deliver the first-aid kit, or waiting for its delivery. Messages can be sent directly from robots to the Mission Planner during autonomous task execution by the appropriate robot controller. Nevertheless, since the robots mainly worked in manual mode, they could send specific messages (e.g., about current activities) which were generated by the robot's controller (as a suggestion for the operator) and confirmed by a human. The coordinator can send a common message to the others to provide information and realize cooperation between robots.
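As an illustration of how such a common message could be published from a robot controller, the sketch below assumes the definition above has been built into a hypothetical ROS package named raptors_msgs as CommonMessage.msg; the field values and topic name are examples only, not taken from the original software:

import rospy
from raptors_msgs.msg import CommonMessage  # hypothetical package holding the definition above

if __name__ == '__main__':
    rospy.init_node('ugv_common_msg_sender')  # hypothetical node name
    pub = rospy.Publisher('/common_message', CommonMessage, queue_size=10)
    rospy.sleep(1.0)  # give the connection time to establish

    msg = CommonMessage()
    msg.sender_name = 'UGV-1'
    msg.receiver_name = 'UAV-1'
    msg.latitude = 33.923909         # position of the UGV when sending
    msg.longitude = 118.175659
    msg.valve_number = 0             # no valve reported in this message
    msg.status = 'waiting for first-aid kit delivery'
    pub.publish(msg)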

4.2. Mission Coordination Module

The module is responsible for managing the course of the mission. It provides information about the mission in the form of a static list of tasks to do. This plan is prepared in advance, which means that the operator can easily follow the mission's orders and the recorded data can be related to time, but the tasks cannot be altered during the mission. Next, in a separate application, the gathered data are converted to KML/KMZ files, because logs in this format are required. The architecture of the module is shown in Fig. 9. At the beginning, the module imports and stores a list of all the tasks and OPIs (Objects/Points of Interest) to be found. Data about unfinished tasks and the most recent reports are stored separately whenever the status is changed.

Fig. 9. Mission coordination module architecture


The robot can be in one of the following states for each task: setup, ready, start, done, canceled. The first three statuses require explanation. Setup means that the robot is preparing to start a task, for instance when its configuration is being changed. Ready means that the robot is prepared to start its mission. The robot usually stays in this state only for a short time; however, it can take much longer, e.g. when the UGV needs the first-aid kit and is waiting for its delivery by the UAV. This was one of the tasks that required coordination of the cooperation. We assumed that when the UGV finishes all outdoor tasks, it sends a common message to the UAV with information about its GPS position and a request for the first-aid kit to be delivered. At this moment, the status of the UGV is setup, because the next task cannot be started without the kit. The UAV receives the message, finishes its current task and comes back to the station for the kit. Its status is now setup, because the drone is preparing for the delivery task; when the kit is mounted, the status changes to ready. The start status occurs when the drone begins the flight to deliver the kit to the UGV. When this task is finished, the UGV takes the first-aid kit and changes its own status from ready to start, and the search for the victim begins. The task is done when the kit is delivered to the victim. A minimal sketch of this status flow is given after the list of interface elements below.

In general, the Mission Planner can be located anywhere, assuming an Ethernet (cable or WiFi) connection with the robot controllers. In the presented situation it was used in the UGV control station and was directly connected with its controller. Both use LabVIEW as the software environment, so the position of the UGV is privileged (the UGV can send common messages to the UAV and AUV, while the communication module provides messages from the other robots). Nevertheless, the universal structure allows the system to be extended with other robot controllers. The Mission coordination module is responsible for collecting data from each common message, task status changes, as well as each robot's GPS position and found OPIs. The human coordinator analyzes the common messages from the robots and manually reports the progress of the mission. When a task requires searching for damage or victims, the coordinator reports twice: once to update the mission progress and once to confirm the found OPI. The user interface used by the coordinator is shown in Fig. 10.
– 'Subtask to do' (a) – displays a list of all tasks to be performed along with their current status,
– 'All subtask Status' (b) – displays the status of all tasks,
– 'Subtask registered event' (c) – displays information about registered events: UTC time, operating mode, robot name and coordinates, content, status and additional information about the task,
– 'OPIs found' (d) – a table with a list of found items, containing the time of finding the item, the name of the robot, necessary information about the item and the name of the file with its picture,


Fig. 10. Mission coordination User Panel

– 'Task Reporting' (e) – a form for reporting completed tasks, with a selection list containing the names of unfinished tasks,
– 'Sending and displaying a common message' (f) – received common messages sent by robots are displayed in a table; the operator analyzes them and reports (manually) in (e). The form allows the sender and recipient of the message to be selected. This is necessary because it is not known in which base station the management software is located, but the default is UGV.
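As referenced above, the first-aid-kit handover can be summarized as a simple status flow. The following sketch is only a paraphrase of the states described in this section (setup, ready, start, done, canceled) and of the handover narrative; it is not the authors' code, and the event names are invented for illustration:

# States used by the Mission coordination module (from the text above)
STATES = ('setup', 'ready', 'start', 'done', 'canceled')

# Paraphrase of the first-aid-kit handover: (robot, event) -> new status of that robot.
TRANSITIONS = {
    ('UGV', 'outdoor tasks finished, kit requested'): 'setup',
    ('UAV', 'kit request received, returning for kit'): 'setup',
    ('UAV', 'kit mounted'):                            'ready',
    ('UAV', 'delivery flight started'):                'start',
    ('UGV', 'kit received'):                           'start',
    ('UGV', 'kit delivered to victim'):                'done',
}

def apply_event(robot, event, current_status):
    """Return the new status after an event, or keep the current one."""
    new_status = TRANSITIONS.get((robot, event), current_status)
    assert new_status in STATES
    return new_status

# Example: the UAV after mounting the kit
print(apply_event('UAV', 'kit mounted', 'setup'))   # -> 'ready'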

4.3. Map Module

The idea of creating an offline map module for the robot’s location is related to another competition, the University Rover Challenge, during which there is no access to the Internet. For the purposes of the ERL competition, this application has been expanded to include the display of the positions of several robots and the distance between the two selected machines. The architecture of the Map module to present the robot’s location is shown in Fig. 11.

Fig. 11. Map module architecture


The robot data storage contains the current GPS position and orientation of each robot, which are required to display the appropriate position on the map. The map is loaded into memory as a .jpg file, with a scaling function that allows the user to set two markers and enter the distance between them. For correct functioning of the application, the uploaded map should be created in the UTM (Universal Transverse Mercator) projection and contained within one zone. Furthermore, before running the software, the user is responsible for its appropriate configuration: indicating the paths to the map and logs, setting the source of data for the UGV (e.g. odometry and orientation, or GPS), the reference GPS coordinate point, and the Earth projection parameters if different from the default, which is WGS84. Subsequently, the first action taken after starting the application should be scaling the map. For an unscaled map, the operator indicates any two points on the map and enters their GPS coordinates; based on the computed distance, the length of one pixel is determined. The user must also enter the coordinates of the reference point and indicate its location on the map. The location of the robot on the map is determined by the distance in meters between the reference point and the robot's position, which is then scaled and converted to pixels. The accuracy of displaying items on the map depends primarily on the accuracy of the map's scale and the correct placement of the reference point. The next important feature is the ability to enter GPS coordinates in the following formats: DMS (degrees, minutes, seconds), DM (degrees, decimal minutes) and D (decimal degrees). It is very useful because during the ERL competition the GPS coordinates were given in the DMS format, but during URC it was DM. Additionally, points can be added by pointing on the map, loading a file with coordinates, or giving the distance of the point from the robot in meters in the east and north directions. Points can be deleted from the list. The list of coordinates is responsible for displaying the robot's distance from the entered points.
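The position-to-pixel mapping described above can be illustrated with a short sketch. It is only an approximation of the approach described in the text (a local flat-Earth conversion from the reference point, then scaling by a metres-per-pixel factor); the constants and function names are not from the original software:

import math

EARTH_RADIUS = 6378137.0  # WGS84 equatorial radius in metres

def dms_to_decimal(degrees, minutes, seconds):
    """Convert DMS (degrees, minutes, seconds) to decimal degrees."""
    sign = -1.0 if degrees < 0 else 1.0
    return sign * (abs(degrees) + minutes / 60.0 + seconds / 3600.0)

def gps_to_pixels(lat, lon, ref_lat, ref_lon, metres_per_pixel):
    """Approximate offset (in pixels) of (lat, lon) from the reference point.

    Uses a local flat-Earth approximation, which is adequate for
    competition-scale maps (hundreds of metres).
    """
    north_m = math.radians(lat - ref_lat) * EARTH_RADIUS
    east_m = math.radians(lon - ref_lon) * EARTH_RADIUS * math.cos(math.radians(ref_lat))
    # Image y axis grows downwards, hence the minus sign for the north offset.
    return east_m / metres_per_pixel, -north_m / metres_per_pixel

# Example: a robot roughly 10 m east and 20 m north of the reference point on a 0.5 m/px map
print(gps_to_pixels(52.00018, 19.00015, 52.0, 19.0, 0.5))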



Fig. 12. User Panel of the Map module

The appearance of the user interface is presented in Fig. 12. On the left side of the panel there are configuration options; the upper table shows the distance between the currently tracked robot and each GPS point. The panel displays basic information about the position and orientation of the tracked robot and its distance to another chosen one. This panel is used most frequently during the mission. On the map all robots are visible, but only the tracked one is highlighted.

5. Application for Coordinating a Single Robot at the URC Competition

The first edition of the URC competition was launched in 2006 for teams of international students. Each team designed and built its own version of a Mars rover, which competed in four missions. In three of them, the operator was responsible for memorizing a lot of tasks during the mission. One of them was Extreme Retrieval and Delivery, during which the robot had to deliver items between locations indicated by GPS coordinates. Points for a task could be scored separately for picking up or dropping an item, successful delivery, and finding indicated elements. The tasks may seem very simple for humans, but for robots they were quite difficult (assuming teleoperation and delays); for example, picking up an item may take a few minutes. The other mission was called Science and was divided into two parts. In the first one the robot explored the terrain, gathered and tested samples, and looked for rocks indicated by the judges. The application for the URC competition is an adaptation of the same software, although it is limited to mission planning, management and single-robot tracking. The application could be used for any robot

sending GPS coordinates and heading information via ROS. The part of the application responsible for the mission plan can work offline, without a physical connection to the robot. In this version the user interface was simplified and some new functionalities required at URC were added:
– displaying information about delays in the mission plan,
– displaying the remaining mission time,
– the possibility to change the task order during the mission,
– the ability to read and save the mission plan to/from a file,
– the ability to edit the plan during the mission,
– KML/KMZ report generation based on recorded data.
The mission report can be presented based on data from the autonomy tests and the Science Task.

5.1. Software Architecture

The application is still based on the LabVIEW environment connected with ROS. Communication takes place in the same way as at the ERL competition. The user interface and architecture were simplified and some external modules were integrated. In this version we can distinguish four main parts: the communication, map, mission management and report generation modules. The software architecture is event-based and is shown in Fig. 13. Firstly, communication with the robot is required to provide all the data needed for proper operation of the software. GPS coordinates and robot orientation are received as standard ROS messages (NavSatFix from the sensor_msgs package and Odometry from nav_msgs). The map module has the same functions as the module described in chapter 4.3; it was extended only by the possibility to import the map configuration from a file.


The mission management was changed significantly and limited to one robot, and the report module is completely new. More information about these two modules is provided in the next chapters. The startup configuration was simplified: the scale and position on the map, the OPIs and the mission configuration are imported from files. This is important wherever the robot operator has very limited time for system setup; such a situation appears, for example, during competitions, when the operator has just several minutes to set up the system. The possibility of importing a text file with a predefined mission plan also reduces the number of mistakes.


5.2. Mission Management Module

The main requirement for this module is to help the operator follow the mission plan. It provides information about the mission progress and the delays that occur. Each task can be edited or shifted during the mission depending on the current situation or possibilities. All actions taken are registered in order to prepare a report. Collecting data is crucial for further analysis and for creating the final report. Each log entry contains the UTC (Coordinated Universal Time) time, the GPS position and the heading of the robot during the action. Moreover, the logs connected with the actions taken provide information about the task: description, status, type of robot work, user comments, priority, and photo title when required. Without the software, users had to save each photo separately. The stored data provides information about the mission's start time, the configuration of each task and the status of the robot. Firstly, the configuration of the mission is required. The mission configuration window is shown in Fig. 15. This configuration panel allows the user to import or export mission settings, which is very useful because planning can then start much earlier. The plan is limited to a sequence of specific tasks, their required deadlines, and a global task that should be done during the whole mission (e.g. searching for some objects). To create a new task, the operator fills out a form consisting of: the task description and duration, the type of job (autonomous, semi-autonomous, manual), the priority (7 levels), the attribution to one of the categories created by users, and information whether taking a photo is required. A minimal data-structure sketch of such a task record is given below.
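As an illustration only (not the authors' LabVIEW data structures), a task record with the fields listed above could be modelled as follows; the field and class names are hypothetical:

from dataclasses import dataclass
from enum import Enum

class JobType(Enum):
    AUTONOMOUS = 'autonomous'
    SEMI_AUTONOMOUS = 'semi-autonomous'
    MANUAL = 'manual'

@dataclass
class MissionTask:
    description: str
    duration_min: float             # planned duration, later used for delay calculation
    job_type: JobType
    priority: int                   # 1..7, as in the configuration form
    category: str                   # user-defined category
    photo_required: bool = False
    status: str = 'ready to start'  # ready to start / start / in progress / done / canceled

# Example task from a hypothetical URC Science mission plan
task = MissionTask('gathering sample', 15.0, JobType.MANUAL, 3, 'science', photo_required=True)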

Fig. 13. Software architecture

In the user interface some significant changes have been made compared to the ERL software: all configuration windows were merged together, the map was placed on the left side of the window, and a clock counting the remaining mission time, together with a mission delay indicator, was added above the map, as presented in Fig. 14.

Fig. 14. Application front-end


Fig. 15. Configuration list

The mission configuration is displayed above the form in a table which contains all collected information about the created tasks and the last reported status. Another benefit is the possibility to change the order of tasks or to delete them. Changing the configuration of each task is possible through the same form used to add it. Tasks in the table can be filtered by selecting: task status, type of job, priority, required OPI, or user category. Moreover, reporting the completion of a task should be as simple as possible, because the operator has many things to control. Our software limited this



operation to selecting the tasks from a list and entering additional information only if it’s required. The reporting form is shown in Fig. 16.

Fig. 16. Form for reporting task completion

Choosing a specific task is followed by filling out a form with the type of job and the next predicted status based on current information. In the application we distinguish the following states: ready to start, start, in progress, done and canceled. The user is able to change the robot's mode or status if required. For tasks requiring a photo, an additional textbox appears, asking the user to fill in its title or to generate a default name. Mostly, the operator only selects the task and confirms it. In addition, the task lists are connected through the filters, which means that only tasks visible in the table can be chosen. The last 50 task reports are visible in one of the page tabs, in a table with the same headers as the configuration table; reports are sorted from the most recent to the oldest. Finally, the module compares the current time with the time when the last report of each task occurred, in order to evaluate whether the plan is being realized. The application highlights the delays corresponding to the task start time and end time. This feature is shown in Fig. 17.


5.3. Report Generating Module

The report generating module is responsible for preparing a report of the mission course which should be readable for the end user. It involves several pieces of information to display: the GPS locations of the POIs and the robot, photos of interesting items in chosen places, the actions taken during the mission at specific moments, and their status. In this chapter we show the most important feature of this module, which is the possibility to display the whole mission as an animation providing clear information about the actions taken in time. In order to provide a suitable report, the module converts the logged mission data to the KML (Keyhole Markup Language) or KMZ (a KML file compressed into a ZIP archive) format. The logged mission data is saved as a .txt file which contains information about the UTC time of the events, the GPS position and orientation of the robot, the description, status, type of job and additional information of each task, and the file names of photos of POIs or searched items. The KML format allows detailed geographical data to be displayed in an international Open Geospatial Consortium (OGC) standard. The details of creating the report can be found in [13]. The KML format can be used with Google Earth and World Wind [14]. The generated report consists of one KMZ file and three KML files. The KMZ file is required because it contains the necessary photos, while a KML file provides the GPS position and the time when each photo was taken. The KML files are responsible for displaying: the waypoints only, the robot GPS navigation data with time, and the changes of task status in time during the mission. In the beginning, we describe the two KML files (waypoints, navigation data) which provide the result presented in Fig. 18.

Fig. 18. Mission report with target points and robot path

Fig. 17. Table with planned and real start times

Delay calculations are skipped for the global task, which can be completed at any moment of the mission. For the other tasks, their deadlines are the basis for this calculation: the expected start time of a specific task is the sum of the deadlines of all of the preceding ones. During the mission the order of the tasks can be changed; in such a situation the software automatically recalculates the mission timing. A delay with respect to the planned start time is counted when a specific task has not been switched to the "in progress" or "done" status before the assumed time. The duration of the task is counted from the "in progress" status to the "done" status.
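The expected-start-time rule described above can be expressed in a few lines. The sketch below is a paraphrase of that rule, not the authors' implementation; task deadlines are taken here as planned durations in minutes:

from datetime import datetime, timedelta

def expected_start_times(mission_start, planned_durations_min):
    """Expected start of task i = mission start + sum of durations of all preceding tasks."""
    starts = []
    t = mission_start
    for d in planned_durations_min:
        starts.append(t)
        t = t + timedelta(minutes=d)
    return starts

def start_delay(expected_start, now, status):
    """A start delay is counted only while the task is neither 'in progress' nor 'done'."""
    if status in ('in progress', 'done') or now <= expected_start:
        return timedelta(0)
    return now - expected_start

# Example: three tasks planned for 10, 20 and 15 minutes
mission_start = datetime(2018, 5, 31, 17, 0, 0)
starts = expected_start_times(mission_start, [10, 20, 15])
print(start_delay(starts[1], datetime(2018, 5, 31, 17, 15, 0), 'ready to start'))  # 5 min delay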

For displaying the red markers on the map, the waypoint KML file uses the following template script (target points):

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://earth.google.com/kml/2.2">
<Document>
<Folder>
<name>Waypoints</name>
<Style id="WP">
<IconStyle><color>ff0000ff</color></IconStyle>
</Style>
<Folder>
<name>UAV-1</name>
<Placemark>
<name>1</name>


<styleUrl>#WP</styleUrl>
<Point>
<coordinates>118.175659,33.923909,0</coordinates>
</Point>
</Placemark>
</Folder>
</Folder>
</Document>
</kml>

In order to display more waypoints, the script should be extended only by adding further blocks of the form:

<Placemark>
<name>1</name>
<styleUrl>#WP</styleUrl>
<Point>
<coordinates>118.175659,33.923909,0</coordinates>
</Point>
</Placemark>

The script for displaying the robot's path is almost the same. One significant difference is in <Placemark>, which does not have a name but contains a timestamp and heading. The <Placemark> structure is as follows:

<Placemark>
<description>Heading: 0.00</description>
<styleUrl>#V1</styleUrl>
<TimeStamp>
<when>2018-05-31T17:45:02Z</when>
</TimeStamp>
<heading>0.00</heading>
<Point>
<coordinates>118.175659,33.923909,0</coordinates>
</Point>
</Placemark>

Displaying a green marker is realized by defining the style id in <Folder>:

<Style id="V1">
<IconStyle><color>9900FF04</color></IconStyle>
<LabelStyle><color>FF00FF04</color></LabelStyle>
</Style>
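Generating such placemarks from logged data is essentially string templating. The sketch below only illustrates how the waypoint file above could be produced programmatically; it is not the authors' LabVIEW report generator, and the function names are invented:

KML_HEADER = ('<?xml version="1.0" encoding="UTF-8"?>\n'
              '<kml xmlns="http://earth.google.com/kml/2.2">\n'
              '<Document><Folder><name>Waypoints</name>\n'
              '<Style id="WP"><IconStyle><color>ff0000ff</color></IconStyle></Style>\n'
              '<Folder><name>UAV-1</name>\n')
KML_FOOTER = '</Folder></Folder></Document>\n</kml>\n'

def waypoint_placemark(name, lat, lon):
    # KML coordinates are written as longitude,latitude,altitude
    return ('<Placemark><name>%s</name><styleUrl>#WP</styleUrl>'
            '<Point><coordinates>%.6f,%.6f,0</coordinates></Point></Placemark>\n'
            % (name, lon, lat))

def write_waypoint_kml(path, waypoints):
    """waypoints: iterable of (name, latitude, longitude) tuples."""
    with open(path, 'w') as f:
        f.write(KML_HEADER)
        for name, lat, lon in waypoints:
            f.write(waypoint_placemark(name, lat, lon))
        f.write(KML_FOOTER)

# Example with the coordinates used in the listing above
write_waypoint_kml('waypoints.kml', [('1', 33.923909, 118.175659)])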


The next KML file is responsible for displaying the mission progress and the final status. The file contains, for each reported change of a task's status, a proper <Placemark> which includes information about its name, the task description, a timestamp, and the position where the report occurred. These points are visible in Fig. 19; e.g. the name of point "4s" means that task no. 4 was started in that place. <Placemark>s are grouped by task id in separate folders and their structure is as follows:

<Placemark>
<name>1d</name>
<description>
Mode:manual
Task: gathering sample
</description>
<TimeStamp>


<when>2018-05-31T17:50:49Z</when>
</TimeStamp>
<Point>
<coordinates>110.79298,38.3983,0</coordinates>
</Point>
</Placemark>

One advantage of this file is that it provides a table (Fig. 19) which sums up all the tasks and displays the last status of each task after the mission. The table is located in the description field of a <Placemark> without a timestamp, which is placed at the start position of the robot. The description also provides a legend explaining each letter used in the point names.

Fig. 19. Task status in the "Science Task" mission

For this feature we have to add the following script:

<table border="1">
<tr style="background-color: rgb(204, 255, 102);">
<td>Id task</td>
<td>mode</td>
<td>Last status</td>
<td>Task</td>
<td>Add info</td>
</tr>
<tr>
<td>1</td>
<td>manual</td>
<td style="background-color: red;">failure</td>
<td>gathering sample</td>
<td></td>
</tr>
<tr>
<td>2</td>
<td>manual</td>
<td style="background-color: lime;">done</td>
<td>pile of rock to find</td>
<td></td>
</tr>
<!-- and the following rows -->
</table>

The last report file is a KMZ file and its aim is to create a proper <Placemark> with photos. On the map small pictures are visible, but after opening them



the user can see a bigger photo with a description. The final result is shown in Fig. 20.

Fig. 20. Photo visualization in Google Earth

To generate a KMZ file with photos, the photos have to be placed in the "zd" directory, in the same directory tree as the KML file. The structure of the file is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://earth.google.com/kml/2.2">
<Document>
<Folder>
<name>Object recognition Information</name>
<Folder>
<name>UGV-1</name>
<Placemark>
<description>
<![CDATA[<img style="max-width:500px;" src="zd/pile.jpg">]]><br>pile of rock</br>
</description>
<TimeStamp>
<when>2018-05-31T17:47:59Z</when>
</TimeStamp>
<Point>
<coordinates>110.792930,38.398331,0</coordinates>
</Point>
<Style>
<Icon><href>zd/pile.jpg</href></Icon>
</Style>
</Placemark>
</Folder>
</Folder>
</Document>
</kml>
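Packaging the KML document together with the photos from the zd directory into a KMZ archive amounts to creating a ZIP file. The sketch below only illustrates this step; the file names are examples and this is not the authors' code:

import zipfile

def write_kmz(kmz_path, kml_text, photo_paths):
    """Create a KMZ archive: one KML document plus the referenced photos.

    By convention the main KML inside a KMZ is named doc.kml, and the photos
    are stored under the same relative paths used in the <href>/<img> tags
    (here the 'zd/' directory).
    """
    with zipfile.ZipFile(kmz_path, 'w', zipfile.ZIP_DEFLATED) as kmz:
        kmz.writestr('doc.kml', kml_text)
        for photo in photo_paths:
            kmz.write(photo, photo)   # keep the relative path, e.g. 'zd/pile.jpg'

# Example: kml_text would hold the listing above
# write_kmz('report.kmz', kml_text, ['zd/pile.jpg'])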

6. Conclusions

This paper presents the Mission Planner – our application to support a mission coordinator, i.e. a person preparing the structure of a robotic mission, keeping an eye on mission progress, time and objectives, and finally preparing the report. Our approach is based on LabVIEW technology, connected via rosbridge with the ROS system. This choice was strongly based on the expertise we had gained with LabVIEW when controlling our first mobile robot – the Raptors Rover. Moreover, this graphical environment is very helpful for simultaneously building the core application and the user interface, which was very important to quickly prepare a useful and supportive


environment. The Mission Planner is composed of several modules and was used for planning and management tasks in a multi-domain robotic mission. The application has graphical user interfaces to display the mission's progress and broadcast its status to all connected robots and control stations. Commands are sent via text messages; therefore, robots in a team do not need to use LabVIEW. The application was tested at two robotic competitions, ERL Emergency and URC. Large differences between the competitions forced us to customize the software, but with LabVIEW this again turned out to be quite fast; good documentation and a large number of toolboxes were beneficial. The application created for ERL allowed three different robots to be coordinated, working in different environments and controlled by various software. At that time we managed to quickly create a system architecture template and carry out successful tests. We used a simple solution of sending predefined text messages to inform other operators about the mission's progress or problems. Using a list of such messages further supported the mission controller in reporting the status of the running task. Therefore, in stressful situations the operator is able to report faster on a task and the risk of mistakes is decreased. Tasks are usually very complex (most teams are not able to finish all of them), which means that reporting should be simplified as much as possible, and autonomous tasks should be recorded automatically by the robot. The latter functionality is very limited in our system and it is still the operator's responsibility to confirm the robot's and the task's status. Also, the introduction of new robots is too impractical at the moment, because it requires changes in the program code and adding new lines of communication to the other ROS masters. While for several robots this is not a problem, for a larger number of them this procedure would be tedious and take too long. In the second version (for URC) we successfully implemented some improvements which eliminated or reduced the disadvantages mentioned before. The software was limited to the management and control of a mission with one robot, according to the contest rules. Simplifying the user interface reduced the time required to complete a single report to only a few seconds. A report of the whole mission without images can be generated within a few minutes after its completion. A report with photos takes longer to generate, because the filenames are verified against the names in the reports. This problem could be resolved by adding a camera view to the software, so that when a task report is created, the selected camera view is automatically saved as an image. The timer and the delay indicator are important elements, showing the remaining time to complete the mission and failures to comply with the previously agreed plan. This supported the operator in controlling the plan and re-organizing the task order. Additionally, having the list of tasks to do during a complicated mission, the operator will not forget about important tasks.


7. Future work


The experience gained during the competitions, and the verification of our application, led us to further development of a distributed mission planning system. Here are the most important requirements we have identified and will use in the next version of the application. The current approach forces the operator to have an additional computer (Windows OS) for our application, which increases the cost and the amount of equipment operated by the user. This is an important disadvantage, especially when robots have to be controlled by only one person. Therefore, the next app will be based on web technologies and ROS. Running a local server will allow us to use the application from any device equipped with a browser, without the need to install additional software on the client's side. ROS would be responsible for receiving and delivering the necessary data to robots. For further research and software development, we will continue with the simulation platforms Gazebo or V-REP [4] and TurtleBot robots. The application should work stably after losing connection with any device (station or robot). Time synchronization will be introduced, which will allow the mission plan to be updated for all robots and stations without any time delays or shifts. To exchange information between devices located close to each other, it is worth considering alternative ways of communication, which were described in [14]. We are considering solutions with a separate planning layer: a higher level will be responsible for the implementation of the assumed goals, while a separate lower level will plan the trajectory of robot motion [2, 3]. Moreover, robots connected to the system would provide information about their equipment and the modes of implementation of particular tasks. The system configuration should be minimized; therefore, based on the robot equipment and the configuration of the mission, the system could assign robots automatically to individual tasks so that the plan can be optimally implemented. Each task should have a specific cost of execution depending on the selected robot. Due to the dynamic changes in cost, it may be a good idea to use the LRT Switchback algorithm [2]. Reporting a malfunction or loss of communication should result in a dynamic reconfiguration of the plan. In some situations it turns out that binary logic is not enough and it is necessary to specify additional states such as contradiction or unknown; to this end, it is worth using 4QL [1]. Another important possibility is displaying the robot's location on the terrain map in real time, together with Lidar (or other sensor) readings – such information can be helpful when entering or exiting a building or travelling in narrow spaces. As important as planning and coordination is reporting the mission results – especially in search and rescue or inspection missions. Therefore we want to provide the ability to generate a report not only in KML/KMZ (requiring the GPS position), but also in other user-friendly formats, especially for indoor


missions. The above-mentioned ideas of using simulators and sensor readings (in order to localize on an indoor map) will be of crucial importance here.

Acknowledgements

This work was partially supported by the Ministry of Science and Higher Education under grant No. MNiSW/2017/78/DIR/NN2.

AUTHORS

Grzegorz Granosik* – Lodz University of Technology, Institute of Automatic Control, Stefanowskiego 18/22, Lodz, 90-924, e-mail: granosik@p.lodz.pl, www: www.robotyka.p.lodz.pl.

Agnieszka Węgierska – Lodz University of Technology, Institute of Automatic Control, Stefanowskiego 18/22, Lodz, 90-924, e-mail: agnieszka.wegierska@p.lodz.pl, www: www.robotyka.p.lodz.pl.

Kacper Andrzejczak – Lodz University of Technology, Institute of Automatic Control, Stefanowskiego 18/22, Lodz, 90-924, e-mail: kacper.andrzejczak@p.lodz.pl, www: www.robotyka.p.lodz.pl.

Mateusz Kujawiński – Lodz University of Technology, Institute of Automatic Control, Stefanowskiego 18/22, Lodz, 90-924, e-mail: mateusz.kujawinski@p.lodz.pl, www: www.robotyka.p.lodz.pl.

*Corresponding author

REFERENCES

[1] Ł. Białek, A. Szałas, A. Borkowski, M. Gnatowski, M. M. Borkowska, B. Dunin-Kęplicz, and J. Szklarski, "Coordinating multiple rescue robots", Prace Naukowe Politechniki Warszawskiej. Elektronika, vol. 194, 2014, 185–194.

[2] M. Przybylski, "Hierarchiczne planowanie akcji robota usługowego w środowisku dynamicznym", Prace Naukowe Politechniki Warszawskiej. Elektronika, vol. 194, 2014, 471–480.

[3] K. M. Mówiński and E. Roszkowska, "Sterowanie hybrydowe ruchem robotów mobilnych w systemach wielorobotycznych", Prace Naukowe Politechniki Warszawskiej. Elektronika, vol. 195, 2016, 255–264.

[4] K. Dorer, "Applications of multi-agent systems in logistics", gki.informatik.unifreiburg.de/teaching/ws0809/map/mas_lect10_dorer.pdf. Accessed on: 12.08.2019.

[5] "ArduPilot, Mission Planner Home", ardupilot.org/planner/index.html. Accessed on: 12.08.2019.


Journal of Automation, Mobile Robotics and Intelligent Systems

VOLUME 13,

N° 2

2019

[6] "European robotics league", eu-robotics.net/robotics_league/erlemergency/about/index.html. Accessed on: 12.08.2019.

[7] “ICARUS research project”, www.fp7-icarus.eu/. Accessed on: 12.08.2019.

[8] S. Govindaraj, P. Letier, K. Chintamani, J. Gancet, M. N. Jimenez, M. Á. Esbrí, P. Musialik, J. Bedkowski, I. Badiola, R. Gonçalves, A. Coelho, D. Serrano, M. Tosa, T. Pfister, and J. M. Sanchez, "Command and Control Systems for Search and Rescue Robots", Search and Rescue Robotics – From Theory to Practice, 2017, 10.5772/intechopen.69495.

[9] M. Garzón, J. Valente, J. J. Roldán, D. Garzón-Ramos, J. de León, A. Barrientos, and J. del Cerro, "Using ROS in Multi-robot Systems: Experiences and Lessons Learned from Real-World Field Tests". In: A. Koubaa, ed., Robot Operating System (ROS). Studies in Computational Intelligence, Springer, Cham, 2017, 10.1007/978-3-319-54927-9_14.

[10] L. Iocchi, D. Nardi, and M. Salerno, "Reactivity and Deliberation: A Survey on Multi-Robot Systems". In: M. Hannebauer, J. Wendler, and E. Pagello, eds., Balancing Reactivity and Social Deliberation in Multi-Agent Systems, BRSDMAS 2000. Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, 2001, 9–32, 10.1007/3-540-44568-4_2.

[11] "OMRON, LD Series – Self navigating Autonomous Intelligent Vehicle", automation.omron.com/en/us/products/family/LD. Accessed on: 12.08.2019.

[12] "MiR, Mobile Industrial Robots", www.mobile-industrial-robots.com/en. Accessed on: 12.08.2019.

[13] "Google Developers, Keyhole Markup Language", developers.google.com/kml. Accessed on: 12.08.2019.

[14] "NASA, WorldWind", worldwind.arc.nasa.gov. Accessed on: 12.08.2019.



