Journal of Automation, Mobile Robotics and Intelligent Systems
WWW.JAMRIS.ORG • pISSN 1897-8649 (PRINT) / eISSN 2080-2145 (ONLINE) • VOLUME 13, N° 4, 2019 • Indexed in SCOPUS
Publisher: ŁUKASIEWICZ Research Network – Industrial Research Institute for Automation and Measurements PIAP

A peer-reviewed quarterly focusing on new achievements in the following fields: • Fundamentals of automation and robotics • Applied automatics • Mobile robots control • Distributed systems • Navigation • Mechatronic systems in robotics • Sensors and actuators • Data transmission • Biomechatronics • Mobile computing

Editor-in-Chief: Janusz Kacprzyk (Polish Academy of Sciences, PIAP, Poland)

Advisory Board:
Dimitar Filev (Research & Advanced Engineering, Ford Motor Company, USA)
Kaoru Hirota (Japan Society for the Promotion of Science, Beijing Office)
Witold Pedrycz (ECERF, University of Alberta, Canada)

Co-Editors:
Roman Szewczyk (PIAP, Warsaw University of Technology)
Oscar Castillo (Tijuana Institute of Technology, Mexico)
Marek Zaremba (University of Quebec, Canada)

Executive Editor: Katarzyna Rzeplińska-Rykała (PIAP, Poland), e-mail: office@jamris.org
Associate Editor: Piotr Skrzypczyński (Poznań University of Technology, Poland)
Statistical Editor: Małgorzata Kaliczyńska (PIAP, Poland)
Typesetting: PanDawer, www.pandawer.pl
Webmaster: Piotr Ryszawa (PIAP, Poland)

Editorial Office: ŁUKASIEWICZ Research Network – Industrial Research Institute for Automation and Measurements PIAP, Al. Jerozolimskie 202, 02-486 Warsaw, Poland, tel. +48-22-8740109, e-mail: office@jamris.org

The reference version of the journal is the e-version. Printed in 100 copies. Articles are reviewed, excluding advertisements and descriptions of products. If in doubt about the proper edition of contributions, for copyright and reprint permissions please contact the Executive Editor.

Publishing of the "Journal of Automation, Mobile Robotics and Intelligent Systems" is a task financed under contract 907/P-DUN/2019 from the funds of the Ministry of Science and Higher Education of the Republic of Poland allocated to science dissemination activities.
Editorial Board:
Chairman – Janusz Kacprzyk (Polish Academy of Sciences, PIAP, Poland)
Plamen Angelov (Lancaster University, UK), Adam Borkowski (Polish Academy of Sciences, Poland), Wolfgang Borutzky (Fachhochschule Bonn-Rhein-Sieg, Germany), Bice Cavallo (University of Naples Federico II, Napoli, Italy), Chin Chen Chang (Feng Chia University, Taiwan), Jorge Manuel Miranda Dias (University of Coimbra, Portugal), Andries Engelbrecht (University of Pretoria, Republic of South Africa), Pablo Estévez (University of Chile), Bogdan Gabrys (Bournemouth University, UK), Fernando Gomide (University of Campinas, São Paulo, Brazil), Aboul Ella Hassanien (Cairo University, Egypt), Joachim Hertzberg (Osnabrück University, Germany), Evangelos V. Hristoforou (National Technical University of Athens, Greece), Ryszard Jachowicz (Warsaw University of Technology, Poland), Tadeusz Kaczorek (Bialystok University of Technology, Poland), Nikola Kasabov (Auckland University of Technology, New Zealand), Marian P. Kaźmierkowski (Warsaw University of Technology, Poland), Laszlo T. Kóczy (Szechenyi Istvan University, Gyor and Budapest University of Technology and Economics, Hungary), Józef Korbicz (University of Zielona Góra, Poland), Krzysztof Kozłowski (Poznan University of Technology, Poland), Eckart Kramer (Fachhochschule Eberswalde, Germany), Rudolf Kruse (Otto-von-Guericke-Universität, Magdeburg, Germany), Ching-Teng Lin (National Chiao-Tung University, Taiwan), Piotr Kulczycki (AGH University of Science and Technology, Cracow, Poland), Andrew Kusiak (University of Iowa, USA), Mark Last (Ben-Gurion University, Israel), Anthony Maciejewski (Colorado State University, USA), Krzysztof Malinowski (Warsaw University of Technology, Poland), Andrzej Masłowski (Warsaw University of Technology, Poland), Patricia Melin (Tijuana Institute of Technology, Mexico), Fazel Naghdy (University of Wollongong, Australia), Zbigniew Nahorski (Polish Academy of Sciences, Poland), Nadia Nedjah (State University of Rio de Janeiro, Brazil), Dmitry A. Novikov (Institute of Control Sciences, Russian Academy of Sciences, Moscow, Russia), Duc Truong Pham (Birmingham University, UK), Lech Polkowski (University of Warmia and Mazury, Olsztyn, Poland), Alain Pruski (University of Metz, France), Rita Ribeiro (UNINOVA, Instituto de Desenvolvimento de Novas Tecnologias, Caparica, Portugal), Imre Rudas (Óbuda University, Hungary), Leszek Rutkowski (Czestochowa University of Technology, Poland), Alessandro Saffiotti (Örebro University, Sweden), Klaus Schilling (Julius-Maximilians-University Wuerzburg, Germany), Vassil Sgurev (Bulgarian Academy of Sciences, Department of Intelligent Systems, Bulgaria), Helena Szczerbicka (Leibniz Universität, Hannover, Germany), Ryszard Tadeusiewicz (AGH University of Science and Technology in Cracow, Poland), Stanisław Tarasiewicz (University of Laval, Canada), Piotr Tatjewski (Warsaw University of Technology, Poland), Rene Wamkeue (University of Quebec, Canada), Janusz Zalewski (Florida Gulf Coast University, USA), Teresa Zielińska (Warsaw University of Technology, Poland)
Publisher: ŁUKASIEWICZ Research Network – Industrial Research Institute for Automation and Measurements PIAP. All rights reserved ©
Journal of Automation, Mobile Robotics and Intelligent Systems Volume 13, N° 4, 2019 DOI: 10.14313/JAMRIS/4-2019
Contents

3 — Method of Automatic Parking of a Car – Theoretical Considerations and Simulation Studies
Maciej Trojnacki
DOI: 10.14313/JAMRIS/4-2019/32

11 — Modeling Cognitive Functionalities of Prosthetic Arm using Conceptual Spaces
M.S. Ishwarya, Ch. Aswani Kumar
DOI: 10.14313/JAMRIS/4-2019/33

22 — Design of Control Algorithms for Mobile Robots in an Environment with Static and Dynamic Obstacles
Robert Piotrowski, Bartosz Maciąg, Wojciech Makohoń, Krzysztof Milewski
DOI: 10.14313/JAMRIS/4-2019/34

31 — Automated Tracking and Real Time Following of Moving Person for Robotics Applications
Jignesh J. Patoliya, Hiren K. Mewada
DOI: 10.14313/JAMRIS/4-2019/35

38 — Advanced Structural Fibre Material for Single Link Robotic Manipulator Simulation Analysis with Flexibility
S. Ramalingam, S. Rasool Mohideen
DOI: 10.14313/JAMRIS/4-2019/36

47 — Economical Photovoltaic Tree with Improved Angle of Movement Based Sun Tracking System
Bishal Karmakar, Rammohan Mallipeddi, Md. Nasid Kamal Protiq
DOI: 10.14313/JAMRIS/4-2019/37

52 — Development of Anthropomorphic Dual Arm Robot with Distinct Degrees of Freedom for Coordinated Operations
Abhaykrishna K M, Nidhi Homey Parayil, Ibin Menon Reghunandan, Sudheer A P
DOI: 10.14313/JAMRIS/4-2019/38

65 — RT-LAB Platform for Real-Time Implementation of Luenberger Observer Based Speed Sensorless Control of Induction Motor
Mansour Bechar, Abdeldjebar Hazzab, Mohamed Habbab, Pierre Sicard, Mohammed Slimi
DOI: 10.14313/JAMRIS/4-2019/39

73 — Developing a Multiplatform Control Environment
Dariusz Rzońca, Jan Sadolewski, Andrzej Stec, Zbigniew Świder, Bartosz Trybus, Leszek Trybus
DOI: 10.14313/JAMRIS/4-2019/40

85 — Application of Uninorms to Aggregate Uncertainty from Many Classifiers
Paweł Drygaś, Jan G. Bazan, Piotr Pusz, Maksymilian Knap
DOI: 10.14313/JAMRIS/4-2019/41

91 — Some New Robotization Problems Related to the Introduction of Collaborative Robots into Industrial Practice
Zbigniew Pilat, Wojciech Klimasara, Marek Pachuta, Marcin Słowikowski
DOI: 10.14313/JAMRIS/4-2019/42
Method of Automatic Parking of a Car – Theoretical Considerations and Simulation Studies
Submitted: 9th October 2019; accepted: 7th January 2020
Maciej Trojnacki
DOI: 10.14313/JAMRIS/4-2019/32

Abstract: The paper is concerned with the problem of automatic parking of a car. The current state of the art in self-driving cars and methods of automatic parking is discussed. For the simulation studies, a kinematic bicycle model of the vehicle is assumed. A method of solving the problem of automatic parking is proposed. The idea of this method consists in planning characteristic poses of the vehicle and realizing the motion between these poses using a pose controller. Results of the simulation research are presented and discussed; they confirm the correctness of the approach.

Keywords: self-driving car, automatic parking assist system, kinematic modeling, simulation studies
1. Introduction

The beginning of the 21st century is a period of intensified research in the field of self-driving cars (also known as autonomous, driverless or robotic cars). An important achievement in this domain was the Stanley vehicle, designed by the team of Prof. S. Thrun, which won the DARPA Grand Challenge in 2005 [8].

Generally speaking, two different approaches to the development of self-driving cars are used. The first, promoted by most automotive companies, assumes gradually reaching higher and higher levels of driving automation, starting from single assistance systems and ending with a fully automatic car. An example of a semi-automatic electric car is the Tesla Model S P85D [15]. The second approach, promoted by Google, aims directly at the development and commercialization of a fully automatic car, which does not even include a steering wheel that would allow a human to drive. Within the Google self-driving car project, a fully automatic car was designed, based on detailed maps and Street View technology [13].

The increasing use of steer-by-wire and throttle-by-wire technologies in cars facilitates the development of self-driving cars. An example of this is the Open Source Car Control (OSCC) project [14].

The automatic parking assist system (APAS) is a kind of advanced driver assistance system (ADAS) supporting the driving of a car, which enables performing
parallel, perpendicular (forward and reverse) or angle parking. Such a system assists the human driver in parking maneuvers, which can be troublesome and stressful. The current state of the art in APAS is described in [7, 9, 11]. Generally speaking, APAS can be divided into two categories: fully-APAS and semi-APAS, in which the velocity of the car is controlled by the human. Based on the complexity of the maneuver, one distinguishes one-trial parking and back-and-forth, several-trial parking maneuvers. With respect to the velocity of the vehicle, there is variable-speed and constant-speed parking.

According to [7], the typical structure of an APAS consists of the following modules: environment recognition, path planning and path tracking control. However, if the time of motion is taken into account, then instead of path planning and path tracking control one should speak of motion planning and trajectory tracking control. The environment recognition module can contain ultrasonic, radar and laser sensors, cameras, GPS receivers and an INS. This module involves both localization of the vehicle and environment mapping; therefore, Simultaneous Localization and Mapping (SLAM) algorithms can be used. In turn, at the stage of motion realization by the path tracking control module, localization of the vehicle is also an important problem, and in some cases so is the detection of obstacles in the vehicle's surroundings.

One example of work devoted to the problems of planning and control of an automatic parking maneuver is [2]. The authors present a solution for segmental path planning with back-and-forth shuttling maneuvers in a narrow parking lot. They discuss a flatness-based steering control algorithm and a longitudinal motion control strategy. The proposed method was verified by hardware-in-the-loop test results with a driving simulator. The system also considers the case when no parking place is found, and involves web-based driver-to-vehicle interfaces. Another example is [4], in which the authors propose an idea of a general autonomous parking system. The system detects that the driver gets out of the car, searches for an empty parking space as quickly and safely as possible, autonomously drives the car towards the parking lot and later, on demand, returns to the human driver.
In some solutions in the field of automatic parking, artificial intelligence methods are used. An example of such work is [1], in which a fuzzy PD+I controller was applied to parallel parking. In [12], a reinforcement learning-based end-to-end parking algorithm was proposed to achieve automatic parking. In turn, in [10], a method using TensorFlow to train a Recurrent Neural Network (RNN) to predict parking trajectory curves and determine the optimal parking trajectory was used.

It should be noted that for the successful realization of a parking maneuver it is not sufficient to track only a selected point of the vehicle. It is also necessary to guarantee an appropriate course angle of the vehicle. This angle can be determined on the basis of the velocity vector of the selected point, which should be tangential to the motion path of that point. In the general case, however, the position of a selected point can be reached with different values of the course angle of the vehicle. Therefore, it may be more appropriate to use the pose of the vehicle, that is, the position of the selected point together with the course angle. The problem of parking a car can then be reduced to the problem of planning and realizing motion to successive desired poses. These poses can include characteristic locations of the vehicle resulting from the motion planning process. The number of these poses may be arbitrarily high, but it should be chosen so as to enable collision-free movement of the vehicle in a known environment. The described approach is the subject of this work, and it will be demonstrated on the example of automatic parallel parking of a car.
2. Kinematic Modeling of a Car
For simulation research purposes, the bicycle model of a car is adopted [3, 5]. The use of such a model is justified for motion of a car at low velocity. In this case, one can assume no slip of the wheels and neglect the dynamics of the vehicle as well as the properties of the drive units for driving and steering the wheels.

The kinematic structure of the car is illustrated in Fig. 1. The geometric center of the front equivalent wheel is designated as F, and that of the rear one as R. It is assumed that vehicle motion is realized in the $O_x O_y$ plane of the fixed coordinate system {O}. The moving coordinate system, rigidly connected to the center of the rear wheel, is denoted by {R}. The actual position and course of the vehicle, that is its pose, can be described by the vector of generalized coordinates:

${}^O\mathbf{q} = [{}^Ox_R,\ {}^Oy_R,\ {}^O\varphi_z]^T \qquad (1)$

where ${}^Ox_R$, ${}^Oy_R$ are the coordinates of the position vector ${}^O\mathbf{r}_R$ of the characteristic point R of the vehicle, and ${}^O\varphi_z$ denotes the angle of rotation of the vehicle about the z axis with respect to the fixed coordinate system {O}, also called the course angle.
Fig. 1. Kinematic structure of a car, desired and actual poses as well as pose errors

In turn, the vectors of generalized velocities in the {O} and {R} coordinate systems, respectively, can be written as:

${}^O\dot{\mathbf{q}} = [{}^O\dot{x}_R,\ {}^O\dot{y}_R,\ {}^O\dot{\varphi}_z]^T, \qquad {}^R\dot{\mathbf{q}} = [{}^Rv_{Rx},\ {}^Rv_{Ry},\ {}^R\omega_z]^T \qquad (2)$

where ${}^Ov_{Rx} = {}^O\dot{x}_R$, ${}^Ov_{Ry} = {}^O\dot{y}_R$, ${}^O\omega_z = {}^O\dot{\varphi}_z$.
It is assumed that ${}^Rv_{Ry} = 0$, which means the vehicle does not move in the lateral direction. In the case of steady turning of the vehicle, the following dependencies hold between the velocity of point R, the angular velocity of the vehicle body ${}^O\omega_z$ and the turning radius $R_z$:

${}^Rv_R = {}^Rv_{Rx} = {}^R\omega_z R_z \;\Rightarrow\; {}^R\omega_z = {}^Rv_{Rx}/R_z. \qquad (3)$

From these relationships it follows that vehicle rotation is impossible for ${}^Rv_{Rx} = 0$. Moreover, for $\delta = \pi/2$ longitudinal motion of the vehicle is impossible, i.e. ${}^Rv_{Rx} = 0$. As is known, in the case of a classical car the turning radius $R_z$ depends on the turning angle of the steerable wheel $\delta$ and the wheelbase $L$:

$R_z = L/\tan(\delta) \quad\text{and}\quad R_{z\min} = L/\tan(\delta_{\max}). \qquad (4)$

One can notice that the wheelbase $L$ and the maximum absolute value of the turning angle of the steerable wheel $\delta_{\max}$ determine the geometry of possible maneuvers during parking of a car, for example the minimum turning radius $R_{z\min}$. According to [7], one can define the curvature of the path $\kappa$ and its derivative $\sigma$ using the relationship:

$\kappa = 1/R_z = \tan(\delta)/L, \qquad (5)$

which should satisfy the limitations:

$|\kappa| \le \kappa_{\max} \quad\text{and}\quad |\sigma| \le \sigma_{\max}. \qquad (6)$

The vector of generalized velocities ${}^O\dot{\mathbf{q}}$ can be determined on the basis of the kinematic equations of motion in the form:
${}^O\dot{\mathbf{q}} = \begin{bmatrix} {}^O\dot{x}_R \\ {}^O\dot{y}_R \\ {}^O\dot{\varphi}_z \end{bmatrix} = \begin{bmatrix} \cos({}^O\varphi_z) & 0 \\ \sin({}^O\varphi_z) & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} {}^Rv_{Rx} \\ {}^R\omega_z \end{bmatrix} = \begin{bmatrix} \cos({}^O\varphi_z)\,{}^Rv_{Rx} \\ \sin({}^O\varphi_z)\,{}^Rv_{Rx} \\ \tan(\delta)\,{}^Rv_{Rx}/L \end{bmatrix} \qquad (7)$

where the vector ${}^R\mathbf{v} = [{}^Rv_{Rx},\ {}^R\omega_z]^T$ contains, respectively, the longitudinal velocity of point R of the vehicle and the yaw rate of the vehicle body, both in the {R} coordinate system. Moreover, in this case the following equation of non-holonomic (non-integrable) constraints can be written:

${}^Ov_{Ry}\cos({}^O\varphi_z) - {}^Ov_{Rx}\sin({}^O\varphi_z) = 0. \qquad (8)$
Between the velocities of points F and R of the vehicle, that is, the geometric centers of the wheels, the following dependencies hold:

${}^Rv_{Fx} = {}^Rv_{Rx}, \qquad {}^Rv_{Fy} = {}^Rv_{Rx}\tan(\delta), \qquad {}^Rv_{Rx} = {}^Rv_F\cos(\delta). \qquad (9)$

Assuming that both wheels of the vehicle roll without slip and that their radius equals $r$, one can write the relationships:

${}^Rv_F = \omega_f r, \qquad {}^Rv_R = \omega_b r, \qquad \omega_b = \omega_f\cos(\delta), \qquad (10)$

where $\omega_f$, $\omega_b$ are the angular velocities of spin of the front and rear wheels, respectively.

One can also notice that a particular pose of the vehicle can be achieved for various values of the turning angle of the steerable wheel $\delta$. Therefore, the state of the vehicle can be described by the state vector:

$\mathbf{x} = [{}^Ox_R,\ {}^Oy_R,\ {}^O\varphi_z,\ \delta]^T. \qquad (11)$
For the analyzed vehicle, assuming as previously no wheel slip, both the forward and inverse kinematics problems can be solved. The forward kinematics problem consists in determining the velocities of the vehicle ${}^R\mathbf{v} = [{}^Rv_{Rx},\ {}^R\omega_z]^T$ on the basis of the known turning angle of the front steerable wheel $\delta$ and the angular velocity of spin of the front or rear wheel. It can be solved from the equations:

${}^Rv_{Rx} = \omega_b r, \qquad {}^R\omega_z = \omega_b r\tan(\delta)/L. \qquad (12)$
The inverse kinematics problem consists in determining the turning angle of the front steerable wheel $\delta$ and the angular velocity of spin of the front or rear wheel on the basis of the known velocities of the vehicle ${}^R\mathbf{v} = [{}^Rv_{Rx},\ {}^R\omega_z]^T$. This task can be solved from the relationships:

$\omega_b = {}^Rv_{Rx}/r, \qquad \delta = \operatorname{atan}({}^R\omega_z L/{}^Rv_{Rx}) = \operatorname{atan}(L/R_z). \qquad (13)$

These dependencies will be used at the stage of pose control of the vehicle.
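To make the kinematic relations above concrete, the following is a minimal Python sketch of equations (7), (12) and (13). The paper itself works in Matlab/Simulink; this translation, the Euler integration step and all function names are ours, added only for illustration.

```python
import math

L = 2.9   # wheelbase (m), value adopted later in the simulation section
r = 0.3   # wheel radius (m)

def forward_kinematics(omega_b, delta):
    """Eq. (12): rear-wheel spin rate and steering angle -> body velocities."""
    v_Rx = omega_b * r
    omega_z = omega_b * r * math.tan(delta) / L
    return v_Rx, omega_z

def inverse_kinematics(v_Rx, omega_z):
    """Eq. (13): body velocities -> rear-wheel spin rate and steering angle."""
    omega_b = v_Rx / r
    delta = math.atan2(omega_z * L, v_Rx)
    return omega_b, delta

def integrate_pose(q, v_Rx, delta, dt):
    """Eq. (7) integrated with a simple Euler step;
    q = [x_R, y_R, phi_z] is the pose in the fixed frame {O}."""
    x, y, phi = q
    return [x + math.cos(phi) * v_Rx * dt,
            y + math.sin(phi) * v_Rx * dt,
            phi + math.tan(delta) * v_Rx / L * dt]
```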
3. Automatic Parking of a Car
In this work, the parallel parking maneuver is analyzed. It is assumed that the surroundings of the car are known and that a suitable parking place has been determined. Moreover, it is assumed that the vehicle's surroundings are static, that is, they do not change during the parking maneuver; therefore, the use of surroundings sensors is not necessary. The problem of automatic parking of a car is decomposed into the two following subproblems:
– maneuver planning, that is, planning of characteristic poses of the vehicle,
– realization of motion between successive poses using a pose controller.
3.1. Maneuver Planning
As mentioned before, maneuver planning consists in planning characteristic poses of the vehicle. These poses should be selected in such places as to provide collision-free motion between them, assuming sufficient accuracy of motion realization by the pose controller. The detailed planning process of the poses of the vehicle is not a subject of this work; therefore, only general assumptions in this respect are provided.

According to Fig. 2, it is assumed that the parking place is behind the car. The absolute coordinate system {O} is associated with this parking place. On the basis of the known geometrical parameters of the vehicle (first of all its length $L_R$, width $W_R$ and wheelbase $L$), the dimensions of the parking place ($L_S$ and $W_S$) and the initial pose D0, it is possible to determine the following successive desired poses of the vehicle ($p = 1, \ldots, n$) during the parallel parking maneuver:
1. pose D1 in the area of corner C1 of the parking place (the vehicle is situated in such a way as to assure that the distance from its right side to corner C1 is at least $W_{r\min}$),
2. pose D2, in which the car achieves the desired course angle for the subsequent longitudinal motion to pose D3,
3. pose D3, in which the longitudinal motion of the car is finished,
4. pose D4, in which the car reaches the extreme acceptable rear position (at a minimum distance $L_{b\min}$ from the rear part of the parking place and at least $W_{r\min}$ from its right side),
5. final pose Dn.

Fig. 2. Characteristic poses of a car determined as a result of motion planning process

If the parking place is not sufficiently large, it may be necessary to perform additional movements in order to park the car deeply enough. In such a case, additional poses should be planned between poses D5 and Dn. In order to strictly determine the particular poses of the car, it is necessary to introduce an adequate number of geometrical limitations.

An advantage of the proposed approach is that, at the stage of motion planning, it is not required to accurately determine the desired motion path of a selected point of the vehicle, taking into account the suitable curvature of that path and its derivative. This problem is solved automatically by the pose controller, which takes into consideration the limitations of the car.
3.2. Pose Control
Considerations concerning the pose controller are based on Fig. 1. An additional coordinate system {D} is associated with the desired pose of the vehicle. The desired pose of the vehicle in the absolute coordinate system {O} can be described by the following vector of generalized coordinates:

${}^O\mathbf{q}_d = [{}^Ox_{Rd},\ {}^Oy_{Rd},\ {}^O\varphi_{zd}]^T = [{}^Ox_D,\ {}^Oy_D,\ {}^O\varphi_D]^T, \qquad (14)$

where ${}^Ox_{Rd}$, ${}^Oy_{Rd}$ are the coordinates of the desired position vector ${}^O\mathbf{r}_{Rd}$ of the characteristic point R of the vehicle, and ${}^O\varphi_{zd}$ is the desired course angle of the vehicle. In turn, the pose errors in the coordinate system associated with the vehicle {R} and in the stationary system {O} can be determined from the relationships:

${}^R\mathbf{q}_e = \begin{bmatrix} {}^Re_x \\ {}^Re_y \\ {}^Re_\psi \end{bmatrix} = \begin{bmatrix} \cos({}^O\varphi_z) & \sin({}^O\varphi_z) & 0 \\ -\sin({}^O\varphi_z) & \cos({}^O\varphi_z) & 0 \\ 0 & 0 & 1 \end{bmatrix} {}^O\mathbf{q}_e, \qquad {}^O\mathbf{q}_e = \begin{bmatrix} {}^Oe_{Rx} \\ {}^Oe_{Ry} \\ e_\psi \end{bmatrix} = \begin{bmatrix} {}^Ox_{Rd} - {}^Ox_R \\ {}^Oy_{Rd} - {}^Oy_R \\ {}^O\varphi_{zd} - {}^O\varphi_z \end{bmatrix}. \qquad (15)$
The pose controller is intended to ensure the movement of the vehicle between the successive planned poses. It is based on the solution described in [3, 6]; in the present article, a form of the controller that enables changing the direction of motion of the vehicle is used. The pose controller is based on a polar coordinate system; therefore, an error vector is introduced:

$\mathbf{q}_p = [e_\rho,\ e_\alpha,\ e_\beta]^T, \qquad (16)$

which includes the following errors:

$e_\rho = \sqrt{({}^Oe_{Rx})^2 + ({}^Oe_{Ry})^2}, \qquad e_\beta = e_\psi - e_\alpha, \qquad (17)$

where:

$e_\alpha = \begin{cases} \alpha & \text{for } d = 1, \\ \alpha - \pi\,\operatorname{sgn}(\alpha) & \text{for } d = -1, \end{cases} \qquad (18)$

$\alpha = \operatorname{atan2}({}^Oe_{Ry},\ {}^Oe_{Rx}) - {}^O\varphi_z, \qquad (19)$

$d = \begin{cases} 1 & \text{for } |\alpha| \le \pi/2, \\ -1 & \text{for } |\alpha| > \pi/2. \end{cases} \qquad (20)$
The error $e_\rho$ is associated with the distance between the actual and desired position of the vehicle. The error $e_\alpha$ determines the direction of motion to the desired position. Finally, the error $e_\beta$ decides how the course angle of the car should change in order to achieve the desired orientation. The variable $d$ denotes the direction of motion, which is positive when the desired pose is in front of the car and negative when it is behind it. When the desired pose is behind the car, the value of the $e_\alpha$ error should be corrected to satisfy the condition $|e_\alpha| \le \pi/2$. The kinematic equations of motion, described using the above errors in the polar coordinate system, have the following form:

$\dot{\mathbf{q}}_p = \begin{bmatrix} \dot{e}_\rho \\ \dot{e}_\alpha \\ \dot{e}_\beta \end{bmatrix} = \begin{bmatrix} -\cos(e_\alpha) & 0 \\ d\sin(e_\alpha)/e_\rho & -1 \\ -\sin(e_\alpha)/e_\rho & 0 \end{bmatrix} \begin{bmatrix} {}^Rv_{Rx} \\ {}^R\omega_z \end{bmatrix}. \qquad (21)$
Since

${}^R\omega_z = {}^Rv_{Rx}\tan(\delta)/L, \qquad (22)$

the problem of parking a car can eventually be reduced to the control of the longitudinal motion of the vehicle with velocity ${}^Rv_{Rx}$ and of the turning angle of the front steerable wheel $\delta$. The pose controller is described by the following linear control law driving the vehicle to the desired pose (to a unique equilibrium at $(e_\rho, e_\alpha, e_\beta) = (0, 0, 0)$):

$\mathbf{u} = \begin{bmatrix} u_v \\ u_\delta \end{bmatrix} = \begin{bmatrix} d\,k_\rho e_\rho \\ k_\alpha e_\alpha + k_\beta e_\beta \end{bmatrix}. \qquad (23)$

This pose controller is stable only if $k_\rho > 0$, $k_\beta < 0$, $k_\alpha - k_\rho > 0$. It is assumed that the vehicle reaches the desired pose if the following conditions are satisfied:

$|e_\rho| \le e_{\rho\max} \quad\text{and}\quad |\dot{e}_\rho| \le v_{\rho\max}. \qquad (24)$

If these conditions are satisfied, the number of the desired pose $p$ is increased by one, until the car reaches the desired final pose. Moreover, the following limitations are introduced on the control signals:
$|\mathbf{u}| \le \mathbf{u}_{\max} = [v_{\max},\ \delta_{\max}]^T \quad\text{and}\quad |\dot{\mathbf{u}}| \le \dot{\mathbf{u}}_{\max} = [a_{\max},\ \varpi_{\max}]^T. \qquad (25)$

This means that during the parking maneuver the linear velocity and acceleration of the vehicle, as well as the turning angle of the front steerable wheel and the angular velocity of turning of that wheel, have limited values. The above limitations impose the following limits on the maximum angular velocity and angular acceleration of the vehicle:

${}^R\omega_{z\max} = v_{\max}\tan(\delta_{\max})/L,$
${}^R\varepsilon_{z\max} = a_{\max}\tan(\delta_{\max})/L + v_{\max}(1 + \tan^2(\delta_{\max}))\,\varpi_{\max}/L. \qquad (26)$

In this work, a kinematic model of the car is considered, that is, the dynamics of the vehicle as well as the properties of the drive units driving and steering the wheels are neglected. Therefore, in the simulation, the velocity of the characteristic point R of the vehicle and the turning angle of the front steerable wheel are equal to the corresponding control signals of the pose controller, that is, ${}^Rv_{Rx} = u_v$ and $\delta = u_\delta$.
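A compact sketch of one evaluation of this controller, equations (16)–(23) with the saturations of (25), is given below. This is our illustrative Python reading of the text (following the statement above, $u_\delta$ is applied directly as the steering angle), with default gains and limits taken from the values quoted in Section 4; the rate limits $a_{\max}$, $\varpi_{\max}$ are omitted for brevity.

```python
import math

def wrap(a):
    """Wrap an angle to the interval (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def pose_controller(q, qd, k_rho=3.0, k_alpha=8.0, k_beta=-3.0,
                    v_max=1.2, delta_max=math.pi / 4):
    """One step of the pose controller; q and qd are [x_R, y_R, phi_z] in {O}."""
    e_x, e_y = qd[0] - q[0], qd[1] - q[1]
    e_psi = wrap(qd[2] - q[2])

    e_rho = math.hypot(e_x, e_y)                   # eq. (17)
    alpha = wrap(math.atan2(e_y, e_x) - q[2])      # eq. (19)
    d = 1 if abs(alpha) <= math.pi / 2 else -1     # eq. (20)
    e_alpha = alpha if d == 1 else alpha - math.pi * math.copysign(1.0, alpha)  # eq. (18)
    e_beta = e_psi - e_alpha                       # eq. (17)

    u_v = d * k_rho * e_rho                        # eq. (23), velocity channel
    u_delta = k_alpha * e_alpha + k_beta * e_beta  # eq. (23), steering channel

    # eq. (25): saturate both commands
    u_v = max(-v_max, min(v_max, u_v))
    u_delta = max(-delta_max, min(delta_max, u_delta))
    return u_v, u_delta
```

Note that the default gains satisfy the stated stability conditions: $k_\rho = 3 > 0$, $k_\beta = -3 < 0$ and $k_\alpha - k_\rho = 5 > 0$.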
4. Simulation Research

The subject of this paper is the automatic parallel parking maneuver of a car. For this purpose, simulation investigations in the Matlab/Simulink environment were performed. The scheme of the developed simulation model is shown in Fig. 3. Based on the desired poses ${}^O\mathbf{q}_{1d}, \ldots, {}^O\mathbf{q}_{nd}$, which are the result of the maneuver planning process, the current desired pose ${}^O\mathbf{q}_d = {}^O\mathbf{q}_{pd}$ is selected, where $p$ is the number of the selected desired pose ($p = 1, \ldots, n$). The next desired pose is chosen after the current one has been achieved with the defined accuracy, that is, taking into account the parameters $e_{\rho\max}$ and $v_{\rho\max}$. For the current desired and real poses of the vehicle, the errors ${}^O\mathbf{q}_e$ and $\mathbf{q}_p$ as well as the direction of motion $d$ are determined. Based on the error vector $\mathbf{q}_p$ and the direction of motion $d$, the pose controller calculates the control vector $\mathbf{u}$, which takes into consideration the limitations $\mathbf{u}_{\max}$ and $\dot{\mathbf{u}}_{\max}$. Finally, using the kinematic model of the car, the actual pose ${}^O\mathbf{q}$ and the turning angle of the front wheel $\delta$ (that is, the state vector $\mathbf{x}$ of the vehicle) as well as other motion parameters such as ${}^R\omega_z$ and ${}^Ra_{Rx}$ are determined.

For the simulation research, the following parameters were adopted:
– dimensions of the car: L = 2.9 m, LR = 4.7 m, WR = 2 m, lf = 0.9 m, lb = 0.9 m, r = 0.3 m;
– limitations of the car: δmax = π/4 rad, ϖmax = π/4 rad/s, vmax = 1.2 m/s, amax = g/6, where g = 9.81 m/s² is the gravity acceleration;
– initial state of the car: x0 = [OxR, OyR, Oφz, δ]T = [12, 3.9, 0, 0]T;
– gains of the pose controller: kρ = 3 s⁻¹, kα = 8, kβ = –3;
– dimensions of the parking spot: LS = 7 m, WS = 2.2 m;
– accuracy parameters of the parking maneuver: eρmax = 0.08 m, vρmax = 0.05 m/s.

As a result of the planning process of the parallel parking maneuver, the desired poses of the vehicle presented in Tab. 1 were determined.

Tab. 1. Desired poses for parallel parking maneuver

Pose p      | q1d  | q2d  | q3d  | q4d  | q5d
OxRd (m)    | 6.90 | 6.45 | 3.10 | 1.10 | 2.15
OyRd (m)    | 3.42 | 3.25 | 1.50 | 1.15 | 1.15
Oφzd (rad)  | 0.36 | 0.46 | 0.46 | 0.00 | 0.00

Fig. 4 illustrates the subsequent phases of the parking maneuver obtained as a result of the simulation, that is, from t = 0 s until the end of the movement. The figure shows the initial pose, the subsequent desired and actual poses during the motion of the vehicle, and the final pose.
Fig. 3. Scheme of the simulation model developed in the Matlab/Simulink environment (maneuver planner → pose selector with desired accuracy eρmax, vρmax → pose errors → pose controller → kinematic model of the car)
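Putting the planner output (Tab. 1) and the controller together, the pose-selection logic of the simulation model can be sketched as a simple loop. This is an illustrative Python stand-in for the Simulink model of Fig. 3, reusing the `pose_controller` and `integrate_pose` sketches given earlier; the integration time step is our assumption.

```python
import math

# Desired poses from Tab. 1: [x_Rd (m), y_Rd (m), phi_zd (rad)]
DESIRED_POSES = [[6.90, 3.42, 0.36],
                 [6.45, 3.25, 0.46],
                 [3.10, 1.50, 0.46],
                 [1.10, 1.15, 0.00],
                 [2.15, 1.15, 0.00]]

def simulate(q0=(12.0, 3.9, 0.0), dt=0.01,
             e_rho_max=0.08, v_rho_max=0.05, t_end=20.0):
    """Drive through the desired poses, advancing p when eq. (24) holds."""
    q, p, t = list(q0), 0, 0.0
    e_prev = math.hypot(DESIRED_POSES[0][0] - q[0], DESIRED_POSES[0][1] - q[1])
    while p < len(DESIRED_POSES) and t < t_end:
        qd = DESIRED_POSES[p]
        u_v, u_delta = pose_controller(q, qd)
        q = integrate_pose(q, u_v, u_delta, dt)
        e_rho = math.hypot(qd[0] - q[0], qd[1] - q[1])
        # eq. (24): pose reached if both the error and its rate are small
        if e_rho <= e_rho_max and abs(e_rho - e_prev) / dt <= v_rho_max:
            p += 1  # select the next desired pose
        e_prev, t = e_rho, t + dt
    return q, t
```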
Fig. 4. Illustration of the successive phases of the parking maneuver (snapshots at t = 0, 4, 5, 6, 8, 10 and 12 s, showing the initial pose, desired poses No. 1–5 and the final pose)

In turn, Fig. 5 presents the detailed results of the simulation research, covering:
– the desired positions of the characteristic point R of the car and the actual motion path of that point (Fig. 5a),
– the desired and actual generalized coordinates describing the poses of the vehicle (Fig. 5b–d),
– the pose errors in the absolute coordinate system {O} (Fig. 5e),
– the control signals for the longitudinal velocity of the vehicle and the turning angle of the front steerable wheel (Fig. 5f).

Fig. 5. Results of the simulation research of the automatic parallel parking maneuver
5. Conclusion and Future Works
In the present work, a method of automatic parking of a car was described. The method was used for planning and performing a parallel parking maneuver. The characteristic poses of the car for that maneuver were determined within the planning process. A kinematic bicycle model was used as the model of the car. To control the car, a pose controller in a form that enables changing the direction of motion of the vehicle was adopted. The simulation research of the parallel parking maneuver was performed in the Matlab/Simulink environment. The most important conclusions of the work are summarized below.
– At the stage of motion planning, it is not required to accurately determine the desired motion path of the selected point of the vehicle, taking into account the suitable curvature of that path and its derivative.
– The proposed method, despite its simplicity and the small number of required desired poses, enables accurate realization of the planned parking maneuver.
– It allows the realization of even complex parking maneuvers, that is, ones involving several back-and-forth trials.
– The applied pose controller generates a variable longitudinal velocity of the vehicle: the closer the car is to the desired position, the lower its velocity.
– The pose controller takes into consideration the limitations of the car during the parking maneuver, that is, the linear velocity and acceleration of the vehicle as well as the turning angle of the front steerable wheel and the angular velocity of turning of that wheel.
– The parallel parking maneuver is carried out efficiently and takes slightly longer than 13 s.
– It should be noted, however, that the key factor for the successful completion of the parking maneuver is in this case the accuracy of achieving the desired poses of the vehicle; the duration of the maneuver is less important.
Directions of future works may cover, among others:
– the inclusion of information from surroundings sensors and a modification of the structure of the pose controller in order to allow avoiding obstacles or emergency stopping of the vehicle,
– studies of other parking maneuvers, which would require planning the desired poses of the car for the successive phases of the maneuver.
ACKNOWLEDGEMENTS
The work was carried out as part of the FB2517 project, funded by the Own Research Fund of the Industrial Research Institute for Automation and Measurements PIAP.
AUTHOR
Maciej Trojnacki – ŁUKASIEWICZ Research Network – Industrial Research Institute for Automation and Measurements PIAP, Al. Jerozolimskie 202, 02-486 Warsaw, Poland, e-mail: mtrojnacki@piap.pl.
References
[1] E. Ballinas, O. Montiel, O. Castillo, Y. Rubio, and L. T. Aguilar, "Automatic Parallel Parking Algorithm for a Car-like Robot using Fuzzy PD+I Control", Engineering Letters, vol. 26, no. 4, 2018, 447–454.
[2] K. Cheng, Y. Zhang, and H. Chen, "Planning and control for a fully-automatic parallel parking assist system in narrow parking spaces". In: 2013 IEEE Intelligent Vehicles Symposium (IV), Gold Coast City, Australia, 2013, 1440–1445, DOI: 10.1109/IVS.2013.6629669.
[3] P. Corke, Robotics, Vision and Control: Fundamental Algorithms in MATLAB, Second, Completely Revised, Extended and Updated Edition, Springer International Publishing, 2017, DOI: 10.1007/978-3-319-54413-7.
[4] S.-W. Kim, W. Liu, and K. A. Marczuk, "Autonomous parking from a random drop point". In: 2014 IEEE Intelligent Vehicles Symposium Proceedings, 2014, 498–503, ISSN: 1931-0587, DOI: 10.1109/IVS.2014.6856475.
[5] R. Rajamani, Vehicle Dynamics and Control, Mechanical Engineering Series, Springer US, 2012, DOI: 10.1007/978-1-4614-1433-9.
[6] R. Siegwart, I. R. Nourbakhsh, and D. Scaramuzza, Introduction to Autonomous Mobile Robots, Intelligent Robotics and Autonomous Agents, MIT Press: Cambridge, Mass., 2011.
[7] Y. Song and C. Liao, "Analysis and review of state-of-the-art automatic parking assist system". In: 2016 IEEE International Conference on Vehicular Electronics and Safety (ICVES), 2016, 1–6, DOI: 10.1109/ICVES.2016.7548171.
[8] S. Thrun et al., "Stanley: The robot that won the DARPA Grand Challenge", Journal of Field Robotics, vol. 23, no. 9, 2006, 661–692, DOI: 10.1002/rob.20147.
[9] W. Wang, Y. Song, J. Zhang, and H. Deng, "Automatic parking of vehicles: A review of literatures", International Journal of Automotive Technology, vol. 15, no. 6, 2014, 967–978, DOI: 10.1007/s12239-014-0102-y.
[10] Z. Wang, Q. Shao, C. Wang, and Q. Zhang, "Automatic Parking Trajectory Planning Based on Recurrent Neural Network". In: 2018 IEEE 9th International Conference on Software Engineering and Service Science (ICSESS), 2018, 1–4, DOI: 10.1109/ICSESS.2018.8663845.
[11] Faheem, S. A. Mahmud, G. M. Khan, M. Rahman, and H. Zafar, "A Survey of Intelligent Car Parking System", Journal of Applied Research and Technology, vol. 11, no. 5, 2013, 714–726.
[12] P. Zhang, L. Xiong, Z. Yu, P. Fang, S. Yan, J. Yao, and Y. Zhou, "Reinforcement Learning-Based End-to-End Parking for Automatic Parking System", Sensors, vol. 19, no. 18, 2019, DOI: 10.3390/s19183996.
[13] "Waymo" (formerly Google self-driving car project), waymo.com. Accessed on: 23.01.2020.
[14] "Open Source Car Control", github.com/PolySync/OSCC/wiki. Accessed on: 23.01.2020.
[15] "Electric Cars, Solar Panels & Clean Energy Storage | Tesla", www.tesla.com. Accessed on: 23.01.2020.
Modeling Cognitive Functionalities of Prosthetic Arm using Conceptual Spaces
Submitted: 18th March 2019; accepted: 20th December 2019
M.S. Ishwarya, Ch. Aswani Kumar
DOI: 10.14313/JAMRIS/4-2019/33

Abstract: The conceptual space framework is used for representing knowledge in cognitive systems. In this paper, we adapt the conceptual space framework to a prosthetic arm, considering its cognitive abilities such as receiving signals, recognizing and decoding the signals, and responding with the corresponding action, in order to develop a conceptual space of the prosthetic arm. Cognitive functionalities such as learning, memorizing and distinguishing configurations of the prosthetic arm are achieved via its conceptual space. To our knowledge, this work is the first attempt to adapt conceptual spaces to model the cognitive functionalities of a prosthetic arm. In addition, we make use of a different notion of concept that reflects the topological structure in concepts. To model the actions of the prosthetic arm, we use force patterns to represent actions. Similarly, to model the distinguishing ability, we make use of the relationship between the attributes conveyed by the adapted notion of concept.

Keywords: Cognition, Conceptual spaces, Concepts, Information granules, Prosthetic arm
1. Introduction

Prosthetics are artificial systems or devices that replace lost or malfunctioning human body parts [1]. They include a wide range of devices such as pacemakers, dental implants, limbs, ear implants, etc. [2], [3]. Among these, our particular interest is in upper prosthetic limbs (prosthetic arms), since upper limbs are crucial parts of the human body for multiple tasks. Losing an upper limb can significantly decrease an individual's function in daily life. A prosthetic arm can recover certain crucial tasks of an individual in a way comparable to the human arm. In general, prosthetics can be classified into three categories: non-functional prosthetics (for aesthetic purposes), mechanically controlled prosthetics and body-controlled prosthetics [2]. Among these types, body-controlled prosthetics are complex to handle, as they involve many interrelated systems: the biological system (the human body), the mechanical system (the prosthetic arm) and the computerized system (the application software that controls the arm based on the signal from the human body).

Body-controlled prosthetics receive electromyography (EMG) signals from electrodes as input. From these raw EMG signals, information is extracted and processed further to simulate the corresponding action [4]. According to [5], these pre-processed inputs and corresponding actions can be regarded as cognitive states, and a state-space model can be used for modelling the cognitive processes over these states of the prosthetic arm. On the other hand, the mammalian immune system is often described as a cognitive system, and the conceptual space framework can be used to demonstrate the cognitive functionalities of an immune system [6]. Motivated by the aforementioned literature, we aim to model the cognitive functionalities of the prosthetic arm using the conceptual space framework. For this purpose, we perform analogical reasoning about the processes in the prosthetic arm using the notions of the conceptual space framework. Analogical reasoning involves adapting relational information that already exists in one domain to another application domain [7]. To our knowledge, very little literature is available discussing the analysis of the prosthetic arm from the perspective of cognition.

As mentioned earlier, we model cognitive functionalities such as exploring similarities and differences using the geometrical framework of conceptual spaces for the following reasons:
I. To demonstrate the applicability and utility of Gärdenfors' geometric framework of conceptual spaces on the prosthetic arm.
II. To achieve the cognitive functionalities of the conceptual space of the prosthetic arm system.

The novelty of the proposed work is the application of the conceptual space framework to a prosthetic arm. In this work, we also adapt a different notion of concept. Generally, a concept is a pair consisting of a real-world entity and its description. However, the literature suggests that the conventional notion of concept does not reflect the hierarchical structure in the human brain [8]. On the other hand, Gärdenfors suggests that the notion of concepts as 'regions' in the conceptual space framework lacks precision without a topological structure [9]. We address this issue by adapting three types of attribute granules, namely focal, general and essential attributes. We model the distinguishing ability of the model using these three attributes, since the relationship between the attributes best conveys the relationship between the concepts [8]. It should also be noted that the concepts of cognitive frameworks model real-world instances based on their static properties (attributes); here, we make use of force patterns to model the functions and actions of the prosthetic arm [10].

The organization of the paper is as follows. The next section presents a brief literature review on the cognitive abilities of conceptual spaces. Section 3 discusses the fundamentals of Gärdenfors' geometric representation of conceptual spaces. In Section 4, we elaborate on the proposed work, while the successive section discusses the experimental analysis. Section 6 presents a discussion of our insights with regard to the proposed work, followed by conclusions.
2. Literature Review
This section presents a review of the literature concerning potential applications of conceptual spaces and granular computing approaches from the perspective of cognition.

Information in a cognitive system is represented in the form of concepts. Two prominent approaches for information representation in a cognitive system are symbolic representation and associationism [11]–[14]. Gärdenfors suggests a new approach, geometric structures, which is neither symbolic nor associationist, for describing conceptual spaces. This theory of conceptual spaces is a framework for knowledge representation. Studies have reported the application of conceptual space theory to RCC8 networks to understand the betweenness of the constraints [17]. Similarly, the conceptual space can be applied to the immune system in order to obtain an intelligent design of immunization [6]. In addition, conceptual spaces can be applied to the evolution of theories such as classical mechanics, quantum mechanics and special relativity. Conceptual spaces provide a knowledge level called a lingua franca, which facilitates the processes of generalization and specialization in symbolic, sub-symbolic and diagrammatic approaches [18]. Augello et al. proposed an algebra to manipulate the internal conceptual representation of artificial agents developed using conceptual spaces [19]. In language games, mental concept and lexicon formation can be made to co-evolve using Gärdenfors' framework of conceptual spaces together with Barsalou's mental simulation and ESOM neural networks [20]. Another interesting linguistic application of conceptual spaces is the formation of a link between the idea of the speaker, the meaning of words and the usage of words [21]. Conceptual spaces have also been used by researchers to define psychiatric concepts together with ontologies [22]. In the aforementioned literature, the concepts in the target domain are static. However, concepts can also be used to represent actions and their functional properties [10]. In order to add the dynamic properties of an action, force attributes are essential in concept representation. Actions contain sufficient information to perceive the underlying force patterns [23], [24].

The basic principle of human cognition is to recognize and distinguish objects based on their universal as well as special properties. In order to analyse the universal and special properties, arbitrary information is transformed into information granules [25]. One straightforward approach is to classify the arbitrary information into information granules such as necessary, sufficient, and necessary-and-sufficient granules [26]. Similarly, concepts can be learned using information granules [8]. Alternatively, concepts can be learned from incomplete information via granular computing techniques [27]. A concept represents a relation between a set of objects and its descriptions, and this relation is bidirectional [28]. Studies report that three-way concepts can also adapt granular computing techniques to achieve cognition [29]. It is essential for cognitive systems to establish a cognitive relation on the information learned for intelligent behaviour of the system [30], [31]. Further, a quantum-theory-based conceptual space has been proposed for the application of elderly assistance [32]. In order to distinguish a real-world object, it is essential to model the similarity relation by understanding the similarities and the differences [33], [34]. The literature reveals a number of similarity metrics proposed using object classes, attribute classes, and both object and attribute classes [35]–[37]. A detailed explanation of the fundamentals, theories and applications of Gärdenfors' geometrical framework of conceptual spaces can be found in [11]. Nevertheless, we present brief fundamentals of the geometrical framework of conceptual spaces in the next section.
3. Fundamentals of the Geometrical Framework of Conceptual Spaces

According to Gärdenfors, human cognition can be well modelled using geometric structures, which are neither symbolic representations nor associationism [15]. Further, concepts are closely tied to the notion of similarity, and the most natural approach to model similarity is through geometric structures. In the following, we present the properties and notions of the geometrical framework of conceptual spaces.
3.1. Properties of Conceptual Spaces
According to the geometrical framework, conceptual spaces are spatial structures with the following properties:

I. Criterion P: A "natural concept" is a convex region of a conceptual space. Criterion P says that if an object O located between a pair of points v1 and v2 possesses some attributes of concept C, then all objects located between the points v1 and v2 also possess the attributes possessed by the object O.
II. Criterion C: A concept is represented as a set of convex regions in a number of domains, together with information about how the regions in different domains are correlated. Criterion C says that an object O can be described with attributes from more than one category. This gives rise to the theory of prototypes: certain objects are judged to be more representative of an attribute category than others, and the most representative member of a category is called the prototypical member of that category.

3.2. Notions of Conceptual Spaces

The geometrical framework of conceptual spaces, built on geometric structures with the aforementioned properties, is based on the following notions:
I. Quality dimensions: Quality dimensions comprise the properties of real-world objects. For example, temperature, brightness, colour, weight, etc., can be categorized as quality dimensions of an object.
II. Domain: A domain is a set of integral and non-separable properties of an object from different quality dimensions.
III. Prototype: Among a group of objects, certain objects are more representative of the category than others. The most representative object of a category is called a prototype.
IV. Concept: A region in a conceptual space.
V. Salience: Salience is the weight of an attribute under a quality dimension.
VI. Similarity: The distance between objects provides the similarity between them; similar objects are placed nearer to each other in the geometrical framework of conceptual spaces.
VII. Conceptual space: A conceptual space is the collection of concepts that interrelates different quality dimensions.

Adapting the geometrical framework of conceptual spaces implies adapting the properties and notions of the framework to the target domain (the prosthetic arm) [6]. In this research, we perform analogical reasoning by adapting terminologies such as objects, attributes, quality dimensions and domains of the geometrical framework of conceptual spaces to the prosthetic arm in order to achieve cognitive functionality. In the next section, we present the details of using the conceptual space framework for modelling the prosthetic arm functionalities; first, a toy illustration of the notions above is sketched below.
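As a deliberately simplified, non-authoritative illustration of notions I–VII (our own construction, not code from the paper), a point in a conceptual space can be modelled as a vector over quality dimensions, with a salience-weighted distance playing the role of the similarity measure:

```python
from dataclasses import dataclass, field

@dataclass
class ConceptualSpace:
    """A toy conceptual space: quality dimensions with salience weights."""
    dimensions: list
    salience: dict = field(default_factory=dict)

    def distance(self, a, b):
        """Salience-weighted Euclidean distance between two objects;
        similarity decreases as this distance grows (notion VI)."""
        return sum(self.salience.get(dim, 1.0) * (a[dim] - b[dim]) ** 2
                   for dim in self.dimensions) ** 0.5

# A 'colour' domain (notion II) with two quality dimensions (notion I):
space = ConceptualSpace(["hue", "brightness"], {"hue": 2.0, "brightness": 1.0})
prototype = {"hue": 0.10, "brightness": 0.80}   # most representative member (notion III)
candidate = {"hue": 0.15, "brightness": 0.70}
print(space.distance(prototype, candidate))      # small distance -> similar
```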
4. Proposed Work

The proposed work models the cognitive functionalities of the prosthetic arm by adapting the geometric framework. A cognitive system is usually exemplified by cognitive functions such as receiving cues from the environment, responding to the cues, and the capability to learn, memorize and distinguish [5]. As mentioned in the previous sections, the prosthetic arm can receive an sEMG signal, understand the information in the signal, and respond with the corresponding action for the given signal. We aim to achieve the other functionalities, such as learning, memorizing and distinguishing cues from the environment, using the geometric framework of conceptual spaces, as shown in Fig. 1.
Fig. 1. Modelling cognitive functionalities of prosthetic arm using conceptual space framework (assess the abilities of the prosthetic arm; if it can be treated as a cognitive system, perform analogical reasoning by adapting geometrical conceptual spaces to it)

To adapt the geometric framework of conceptual spaces to the domain of the prosthetic arm, we perform analogical reasoning [7]. Here, the source domain is the geometric framework of conceptual spaces and the target domain is the prosthetic arm. The analogy between the notions of the source and target domains is carried out as follows. We regard the configurations corresponding to sEMG signals as objects, the components and actions of the prosthetic arm as attributes, the different categories of attributes as quality dimensions, and the prosthetic arm itself as the domain. The actions required to accomplish the configurations are represented in terms of their corresponding force patterns [10].

Once the objects and attributes of the conceptual space of the prosthetic arm are identified, we proceed to model the learning and memorizing functionalities. We model the learning process through the concept generation process. A concept in the conceptual space is a pair, namely a configuration (object) and a description of the configuration (attributes). In this work, we use three granules of attribute description for each configuration, namely the focal attribute, the general attribute and the essential attribute [8]. This proposal regards an attribute as a focal attribute if it is possessed by all the configurations (the cardinality of the focal attribute set is greater than 2). Similarly, an attribute is regarded as an essential attribute if it is possessed by only one configuration (the cardinality of the essential attribute set is 1). An attribute is regarded as a general attribute if it is possessed by two or more configurations but not by all of them (a cardinality greater than 2 but less than m). For example, let us consider a set of three birds, namely a crow, a parrot and a penguin. In this example, the focal attribute would be 'bird', since it is common to all the instances; one of the essential attributes would be 'green', since it is possessed only by the parrot but not by the crow and the penguin; and one of the general attributes would be 'can fly', since only the crow and the parrot can fly while the penguin is flightless. Consequently, each concept in the conceptual space is of the form C := (X, {FA, GA, EA}), where X, FA, GA and EA are the configuration, the focal attribute, the general attribute and the essential attribute, respectively. All the configurations of the prosthetic arm are learned in the form of these concepts. The list of all the concepts learned during the concept generation process is memorized and stored in the conceptual space itself.

Another important aspect of this proposal is that it models the distinguishing ability of the conceptual space using information granules. The distinguishing ability indicates the ability to understand the similarity and difference between given configurations. The measure of similarity is inversely related to the difference between the configurations, i.e., the higher the differences between the configurations, the less similar they are. The proposed work models the distinguishing ability using a similarity explorer, as shown in Fig. 2. The similarity explorer receives two configurations as cues and analyses the similarity between them.
Fig. 2. Similarity Explorer
The similarity explorer of the proposed model classifies the given concepts as similar concepts if they differ by an essential attribute. Similarly, if the given concepts differ by a focal attribute or by a general attribute, the similarity explorer classifies them as different concepts or related concepts, respectively. It should be noted that a general attribute is possessed by two or more configurations (but not by all of them), as mentioned earlier. By this definition, a general attribute can be present in a minimum of 3 objects (say, GA1) and in a maximum of m−1 objects (say, GA2). In such a case, concepts differing by GA1 are more similar than concepts differing by GA2. To account for this, we make use of the similarity metric shown in equation (1), as suggested in [37]. Let C1 := (A1, B1) and C2 := (A2, B2) be the two concepts given as cues to the similarity explorer of the proposed model:

$S((A_1, B_1), (A_2, B_2)) = \dfrac{|B_1 \cap B_2|}{|B_1 \cap B_2| + \omega|B_1 - B_2| + (1 - \omega)|B_2 - B_1|}. \qquad (1)$

In equation (1), the term $|B_1 \cap B_2|$ represents the attributes common to both concepts, while the terms $|B_1 - B_2|$ and $|B_2 - B_1|$ represent the elements present in the attribute set B1 but not in B2, and vice versa; $\omega$ is a weight. We set the weight $\omega = 1/2$, since the assessment of similarity is symmetrical [36]. Using equation (1), the similarity explorer calculates the similarity measure between the given concepts. As a result, a numerical value directly proportional to the measure of similarity between the related concepts is obtained from the similarity explorer.
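The attribute granules, the decision logic of Fig. 2 and the metric of equation (1) fit in a few lines. The sketch below is our illustrative Python rendering; the function names and the toy data are not from the paper.

```python
def attribute_granules(concepts):
    """Split attributes into focal / general / essential granules.
    `concepts` maps a configuration id to its attribute set."""
    all_attrs = set().union(*concepts.values())
    count = {a: sum(a in attrs for attrs in concepts.values()) for a in all_attrs}
    focal = {a for a in all_attrs if count[a] == len(concepts)}  # in all concepts
    essential = {a for a in all_attrs if count[a] == 1}          # in exactly one
    general = all_attrs - focal - essential                      # in between
    return focal, general, essential

def similarity(b1, b2, omega=0.5):
    """Eq. (1) with the symmetric weight omega = 1/2."""
    common = len(b1 & b2)
    return common / (common + omega * len(b1 - b2) + (1 - omega) * len(b2 - b1))

def explore(b1, b2, focal, essential):
    """Decision logic of the similarity explorer (Fig. 2)."""
    diff = b1 ^ b2                      # attributes not shared by the two concepts
    if diff and diff <= essential:
        return "similar concepts"
    if diff & focal:
        return "different concepts"
    return "related concepts, S = %.2f" % similarity(b1, b2)

# The bird example from Section 4:
concepts = {"crow": {"bird", "can fly"},
            "parrot": {"bird", "can fly", "green"},
            "penguin": {"bird"}}
FA, GA, EA = attribute_granules(concepts)   # {'bird'}, {'can fly'}, {'green'}
print(explore(concepts["crow"], concepts["parrot"], FA, EA))   # similar concepts
print(explore(concepts["crow"], concepts["penguin"], FA, EA))  # related, S = 0.67
```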
5. Experimental Analysis

Experiments are conducted to test the cognitive abilities of the proposed model. The Ninapro dataset is chosen for the experimental analysis as a sample of prosthetic arm functionalities [38], [39]. This dataset contains EMG signals corresponding to 52 configurations performed by 67 intact subjects and 11 amputated subjects, as shown in Fig. 3. The actions of the Ninapro dataset are classified into four main classes, namely finger movements, hand postures, wrist movements, and grasping and functional movements. The proposed model adapts the geometric framework of conceptual spaces to the prosthetic arm by adapting the notions of conceptual space as shown in Tab. 1.

Tab. 1. Adapting the notions of Gärdenfors' framework of conceptual space to the prosthetic arm

S. No | Notion of conceptual space | Ninapro dataset
1     | Cue (Object)               | 52 configurations of the Ninapro dataset
2     | Description (Attribute)    | Components of the prosthetic arm and force patterns of the configurations
3     | Quality dimensions         | Four types of configuration
4     | Domain                     | Prosthetic arm
Here, the 52 configurations of the prosthetic arm are the objects, while the components and actions required to perform a configuration are the attributes of the cue. The four major classes of the configurations are the quality dimensions, while the prosthetic arm itself is the domain. For convenience, we denote each configuration by a combination of a class id and a configuration id. The class id is a letter (A – finger movements, B – hand postures, C – wrist movements and D – grasping and functional movements), while the configuration id is a number in ascending order. In total, there are 12 configurations under class A, 8 configurations under class B, 9 configurations under class C and 23 configurations under class D. For example, the first configuration of the finger movement class is denoted by A1, while the last configuration under the class of grasping and functional movements is denoted by D23.

Fig. 3. Configurations in Ninapro dataset [39]

Learning and memorizing configurations. Prior to learning, the dataset is prepared and pre-processed. We make use of the Cmap tool (https://cmap.ihmc.us/) to create and learn the list of concepts. In order to proceed with learning, we decode the descriptions of each Ninapro dataset configuration into the form of propositions. Fig. 5 shows the list of the configurations learned and stored in the form of a concept map by the Cmap tool. We regard this concept map as the conceptual space of the Ninapro dataset. In Fig. 5, configurations are represented as circles, while the focal, general and essential attributes are represented as yellow, green and grey rounded rectangles respectively.

Similarity exploration. Once the concepts are learned and stored in the conceptual space, the distinguishing ability of the proposed work is tested using the similarity explorer. In Table 2, we present a case for each type of similarity relation between two configurations.
Fig. 4. Decoding configurations into propositions based on their attributes
Tab. 2. Case study on distinguishing configurations of the Ninapro dataset

Case | Configuration A | Configuration B | Similarity type | Reason | Similarity measure
Case 1 | A1: Index finger flexion | A3: Middle finger flexion | Similar concepts | Differed by essential attributes ‘index finger’ and ‘middle finger’ | 0.25
Case 2 | A1: Index finger flexion | C7: Pointing index finger | Related concepts | Differed by general attributes ‘movement’ and ‘hand postures’ as well as by essential attributes ‘flexion’ and ‘point’ | 0.17
Case 3 | A1: Index finger flexion | G1: Walking | Different concepts | Differed by focal, general and essential attributes | 0
Case 1 – Similar concepts: Let the two configurations for which we wish to explore similarity be ‘index finger flexion’ (A1) and ‘middle finger flexion’ (A3), as shown in Table 2. The similarity explorer of the proposed model classifies A1 and A3 as similar concepts: they share the focal and general attributes but differ by the essential attributes ‘index finger’ and ‘middle finger’.

Case 2 – Related concepts: Let the two configurations be ‘index finger flexion’ (A1) and ‘pointing index finger’ (C7), as shown in Table 2. The similarity explorer classifies these configurations as related concepts since they have the same focal attribute but different general and essential attributes: ‘flexion’ belongs to ‘movements’ while ‘point’ belongs to ‘hand postures’, as shown in Fig. 5. The numerical measure of similarity between these two configurations is presented later in this section.

Case 3 – Different concepts: Let the two configurations be ‘index finger flexion’ (A1) and ‘walking’ (an imaginary configuration under a different conceptual space, say ‘prosthetic leg’). The similarity explorer classifies these two concepts as different concepts. This is because the focal attribute is the prosthetic arm, and it is the root or base of the conceptual space, as shown in Fig. 5. If there existed another focal attribute, it would lead to the formation of a different conceptual space.
Fig. 5. Conceptual space of the Ninapro dataset
In the following, we evaluate the distinguishing ability of the model for each aforementioned case using the similarity metric shown in equation (1). We set ω = 1/2 since the assessment of similarity is symmetrical [36].

Case 1 – Similar concepts: In order to calculate the similarity for this case, let concept A := (X1, Y1) = ({A1}, {(prosthetic arm), (finger, movements), (index finger, flexion)}) and concept B := (X2, Y2) = ({A3}, {(middle finger, flexion)}). Considering A and B, we obtain |Y1 ∩ Y2| = 1, |Y1 − Y2| = 1 and |Y2 − Y1| = 1, so that

S((X1, Y1), (X2, Y2)) = (1/2)·1 / (1 + (1/2)·1 + (1/2)·1) = 0.25

Case 2 – Related concepts: In order to calculate the similarity for this case, let concept A := (X1, Y1) = ({A1}, {(prosthetic arm), (finger, movements), (index finger, flexion)}) and concept B := (X2, Y2) = ({C7}, {(prosthetic arm), (finger, hand posture), (index finger, point)}). Considering A and B, we obtain |Y1 ∩ Y2| = 1, |Y1 − Y2| = 2 and |Y2 − Y1| = 2, so that

S((X1, Y1), (X2, Y2)) = (1/2)·1 / (1 + (1/2)·2 + (1/2)·2) = 0.17
Case 3 – Different concepts: For different concepts, the similarity measure will be close to zero. Consider the numerator of the metric, which is the intersection of the attribute sets of the two concepts of interest. Since the focal attributes are different, the concepts do not possess common attributes, making the numerator of the metric 0 and hence the similarity value 0. It can be inferred from the evaluation that concepts with high similarity yield higher numerical values in the similarity analysis, as shown in the last column of Table 2.

According to Gärdenfors, the notion of concept is imprecise in the geometrical framework of conceptual spaces without a topological structure [38]. In his theory, a concept is a region formed by quality dimensions, where each quality dimension represents a feature of the objects. However, human cognition shows a high correlation to both featural and structural information [36]. In this paper, the structural information of a concept is represented using information granules such as focal, general and essential attributes in addition to the actual attribute (feature) information, as shown in Fig. 5. The introduction of attribute classes preserves the property of the conceptual space that similar concepts are held nearer than different concepts. With this improved notion of concepts, we have modelled the cognitive abilities of the prosthetic arm.
6. Conclusion

The main objective of the proposed work is to model the cognitive functionalities of a prosthetic arm using the conceptual space framework. For this purpose, we have regarded the prosthetic arm as a cognitive system, considering its abilities of receiving, recognizing and responding to cues from the environment. The notions of the conceptual space framework are adapted to the prosthetic arm domain. We have also adopted a different notion of concept for handling dynamic actions and preserving the relationship between the attributes during concept learning. As a result, cognitive functionalities of the prosthetic arm such as learning, memorizing and distinguishing between the configurations are modelled. In this paper, we have used the set of configurations and their descriptions for our illustrative experiments. However, conducting real-time experiments based on the actual sEMG signals corresponding to the Ninapro dataset exercises would be one of the potential future works of this research.
ACKNOWLEDGEMENTS
This research is funded by the Department of Science and Technology, Government of India under the Grant SR/CSRI/118/2014.
Compliance with Ethical Standards
This article does not contain any studies with human participants or animals performed by any of the authors.
Conflict of Interest
The authors declare that they have no conflict of interest.
AUTHORS
M.S. Ishwarya – School of Information Technology & Engineering, Vellore Institute of Technology, Vellore.
Ch. Aswani Kumar* – School of Information Technology & Engineering, Vellore Institute of Technology, Vellore, e-mail: cherukuri@acm.org. *Corresponding author
References [1] M. C. Amoretti, M. Frixione, A. Lieto, and G. Adamo. “Ontologies, Mental Disorders and Prototypes”. In: D. Berkich and M. V. d’Alfonso, eds., On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence: Themes from IACAP 2016, Philosophical Studies Series, 189–204. Springer International Publishing, Cham, 2019 DOI: 10.1007/978-3-030-01800-9_10. [2] R. A. Andersen, B. Pesaran, P. Mitra, D. Meeker, K. V. Shenoy, S. Cao, and J. W. Burdick, “Cognitive state machine for prosthetic systems”, patent no. 6952687, 2005.
[3] M. Asano, I. Basieva, A. Khrennikov, M. Ohya, Y. Tanaka, and I. Yamato, “Quantum Information Biology: From Information Interpretation of Quantum Mechanics to Applications in Molecular Biology and Cognitive Psychology”, Foundations of Physics, vol. 45, no. 10, 2015, 1362–1378 DOI: 10.1007/s10701‑015‑9929‑y. [4] M. Atzori, A. Gijsberts, S. Heynen, A.‑G. M. Hager, O. Deriaz, P. van der Smagt, C. Castellini, B. Caputo, and H. Müller, “Building the Ninapro database: A resource for the biorobotics community”. In: 2012 4th IEEE RAS EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), 2012, 1258–1265 DOI: 10.1109/BioRob.2012.6290287. [5] M. Atzori and H. Müller, “The Ninapro database: A resource for sEMG naturally controlled robotic hand prosthetics”. In: 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2015, 7151–7154 DOI: 10.1109/EMBC.2015.7320041. [6] A. Augello, S. Gaglio, G. Oliveri, and G. Pilato, “An algebra for the manipulation of conceptual spaces in cognitive agents”, Biologically Inspired Cognitive Architectures, vol. 6, 2013, 23–29 DOI: 10.1016/j.bica.2013.07.004. [7] R. Belohlavek, “Introduction to Formal Concept Analysis”, 2008. [8] T. R. Besold and K.‑U. Kühnberger, “Towards integrated neural-symbolic systems for humanlevel AI: Two research programs helping to bridge the gaps”, Biologically Inspired Cognitive Architectures, vol. 14, 2015, 97–110 DOI: 10.1016/j.bica.2015.09.003. [9] H. Burnett and O. Bonami, “A Conceptual Spaces Model of Socially Motivated Language Change”, Proceedings of the Society for Computation in Linguistics, vol. 2, no. 1, 2019, 135–145 DOI: 10.7275/5vmp‑cs05. [10] S. M. de Paula and R. R. Gudwin, “Evolving conceptual spaces for symbol grounding in language games”, Biologically Inspired Cognitive Architectures, vol. 14, 2015, 73–85 DOI: 10.1016/j.bica.2015.09.006. [11] B. Fan, E. C. C. Tsang, W. Xu, D. Chen, and W. Li, “Attribute‑oriented cognitive concept learning strategy: a multi‑level method”, International Journal of Machine Learning and Cybernetics, vol. 10, no. 9, 2019, 2421–2437 DOI: 10.1007/s13042‑018‑0879‑5. [12] W. Fierz, “Conceptual Spaces of the Immune System”, Frontiers in Immunology, vol. 7, 2016 DOI: 10.3389/fimmu.2016.00551. [13] P. Gärdenfors, “Reasoning in conceptual spaces”, Reasoning: Studies of Human Inference and Its Foundations, 2008, 302–320. [14] P. Gärdenfors, Conceptual spaces: the geometry of thought, MIT Press: Cambridge, Mass, 2000.
[15] P. Gärdenfors. “Representing Actions and functional properties in conceptual spaces”. In: Body,language and mind, vol. 1, 167–196, 2007. [16] S. Schockaert and S. Li, “Reasoning about Betweenness and RCC8 Constraints in Qualitative Conceptual Spaces”, 2018, 1963–1969. [17] E. E. Hayslett‑Ubell, “Upper limb prosthetics”, Wheaton Journal of Neuroscience Senior Seminar Research, vol. 1, 2016. [18] H. Hoffmann, J. Choe, and C. M. Thibeault. “Device and method to automatically tune the nerve stimulation pattern of a sensory‑feedback capable prosthesis”, patent no. 10166394, 2019. [19] C. Huang, J. Li, C. Mei, and W.‑Z. Wu, “Three‑way concept learning based on cognitive operators: An information fusion viewpoint”, International Journal of Approximate Reasoning, vol. 83, 2017, 218–242 DOI: 10.1016/j.ijar.2017.01.009. [20] C. Li, J. Li, and M. He, “Concept lattice compression in incomplete contexts based on K‑medoids clustering”, International Journal of Machine Learning and Cybernetics, vol. 7, no. 4, 2016, 539–552 DOI: 10.1007/s13042‑014‑0288‑3. [21] J. Li, C. Huang, J. Qi, Y. Qian, and W. Liu, “Three‑ -way cognitive concept learning via multi‑granularity”, Information Sciences, vol. 378, 2017, 244–263 DOI: 10.1016/j.ins.2016.04.051. [22] A. Lieto, A. Chella, and M. Frixione, “Conceptual Spaces for Cognitive Architectures: A lingua franca for different levels of representation”, Biologically Inspired Cognitive Architectures, vol. 19, 2017, 1–9 DOI: 10.1016/j.bica.2016.10.005. [23] B. Markman and D. Gentner, “Commonalities and differences in similarity comparisons”, Memory & Cognition, vol. 24, no. 2, 1996, 235–249 DOI: 10.3758/BF03200884. [24] G. Masterton, F. Zenker, and P. Gärdenfors, “Using conceptual spaces to exhibit conceptual continuity through scientific theory change”, European Journal for Philosophy of Science, vol. 7, no. 1, 2017, 127–150, 10.1007/s13194‑016‑0149‑x. [25] N. Masuyama, C. K. Loo, and N. Kubota, “Quantum‑Inspired Bidirectional Associative Memory for Human–Robot Communication”, International Journal of Humanoid Robotics, vol. 11, no. 02, 2014, 1450006 DOI: 10.1142/S0219843614500066. [26] J. Niu, C. Huang, J. Li, and M. Fan, “Parallel computing techniques for concept‑cognitive learning based on granular computing”, International Journal of Machine Learning and Cybernetics, vol. 9, no. 11, 2018, 1785–1805 DOI: 10.1007/s13042‑018‑0783‑z. [27] Norton, K., “A brief History of Prosthetics”, InMotion, vol. 17, no. 7, 2007, 1–3 DOI: 10.1161/01.CIR.0000035036.22612.2B.
[28] Ch. Aswani Kumar, M. S. Ishwarya, and C. K. Loo, “Formal concept analysis approach to cognitive functionalities of bidirectional associative memory”, Biologically Inspired Cognitive Architectures, vol. 12, 2015, 20–33 DOI: 10.1016/j.bica.2015.04.003. [29] R. Shivhare, A. K. Cherukuri, and J. Li, “Establishment of Cognitive Relations Based on Cognitive Informatics”, Cognitive Computation, vol. 9, no. 5, 2017, 721–729 DOI: 10.1007/s12559‑017‑9498‑9. [30] A. J. Thurston, “Paré and Prosthetics: The Early History of Artificial Limbs”, ANZ Journal of Surgery, vol. 77, no. 12, 2007, 1114–1119 DOI: 10.1111/j.1445‑2197.2007.04330.x. [31] R. Shivhare and A. K. Cherukuri, “Three‑way conceptual approach for cognitive memory functionalities”, International Journal of Machine Learning and Cybernetics, vol. 8, no. 1, 2017, 21–34 DOI: 10.1007/s13042‑016‑0593‑0. [32] M. S. Ishwarya and C. Aswani Kumar, “Quantum Inspired High Dimensional Conceptual Space as KID Model for Elderly Assistance”. In: A. Abraham, A. K. Cherukuri, P. Melin, and N. Gandhi, eds., Intelligent Systems Design and Applications, 2020, 98–107 DOI: 10.1007/978‑3‑030‑16660‑1_10. [33] A. Tversky, “Features of similarity”, Psychological Review, vol. 84, no. 4, 1977, 327–352 DOI: 10.1037/0033‑295X.84.4.327. [34] S. Vosniadou and A. Ortony, eds., Similarity and Analogical Reasoning, Cambridge University Press: Cambridge, 1989. [35] H. Wang and C. Schmid, “Action Recognition with Improved Trajectories”. In: 2013 IEEE International Conference on Computer Vision, 2013, 3551–3558 DOI: 10.1109/ICCV.2013.441. [36] L. Wang and X. Liu, “A new model of evaluating concept similarity”, Knowledge‑Based Systems, vol. 21, no. 8, 2008, 842–846 DOI: 10.1016/j.knosys.2008.03.042. [37] K. X. S. de Souza and J. Davis, “Aligning Ontologies and Evaluating Concept Similarities”. In: R. Meersman and Z. Tari, eds., On the Move to Meaningful Internet Systems 2004: CoopIS, DOA, and ODBASE, Berlin, Heidelberg, 2004, 1012– 1029 DOI: 10.1007/978‑3‑540‑30469‑2_12. [38] W. Xu, J. Pang, and S. Luo, “A novel cognitive system model and approach to transformation of information granules”, International Journal of Approximate Reasoning, vol. 55, no. 3, 2014, 853–866 DOI: 10.1016/j.ijar.2013.10.002. [39] F. Zenker and P. Gärdenfors, eds., Applications of Conceptual Spaces: The Case for Geometric Knowledge Representation, Synthese Library, Springer International Publishing, 2015 DOI: 10.1007/978‑3‑319‑15021‑5.
[40] Y. Zhao, J. Li, W. Liu, and W. Xu, “Cognitive concept learning from incomplete information”, International Journal of Machine Learning and Cybernetics, vol. 8, no. 1, 2017, 159–170 DOI: 10.1007/s13042‑016‑0553‑8.
Design of Control Algorithms for Mobile Robots in an Environment with Static and Dynamic Obstacles Submitted: 23rd April 2019; accepted: 20th December 2019
Robert Piotrowski, Bartosz Maciąg, Wojciech Makohoń, Krzysztof Milewski
DOI: 10.14313/JAMRIS/4-2019/34 Abstract: This article presents the construction of autonomous mobile robots and the design of obstacle avoidance algorithms for them. Nowadays, mobile robots are gaining more and more popularity on the consumer as well as the industrial market, for example as automatic vacuum cleaners or lawnmowers. Obstacle avoidance algorithms play an important role in the performance of these types of robots. The proposed algorithms were designed for builds with rather inexpensive electronic components, especially sensors with limited precision and dynamics. The project began with the selection of the needed parts and building materials as well as the design of the PCB and the assembly of the whole construction. The project also included designing and developing the software responsible for, among others, the implementation of the obstacle avoidance algorithms. After the project's completion, a series of tests in a closed environment was conducted to verify the quality of the vehicles' performance. The results of the tests were positive. Keywords: obstacle avoiding, control system, software development, mobile robot, mechatronics
1. Introduction
Autonomous vehicles find frequent use in various fields of human life as an aid for or a substitute of a laborer in dangerous, hard or wearisome work conditions. Besides that, they are also used on the industrial, military or consumer markets, e.g. for package delivery, surveillance by unmanned aerial vehicles (UAVs) or toys. The primary features of autonomous vehicles are independence in decision-making, collecting and processing data about the surrounding environment, and influence on that environment [1]. The necessity to work in unknown or dynamically changing environments, as well as the aim of eliminating human participation in work, have made it necessary to develop effective obstacle avoidance algorithms [2]. Robots equipped with these systems can easily pass by obstacles, both static and dynamic, situated on their path.

The robotic vacuums popular nowadays are an example of this kind of vehicle [3]. Due to their small dimensions they are able to reach areas that are difficult to access. A variety of models is available, starting with simple ones offering only the vacuum cleaner function, up to multifunctional cleaning robots. Many of them use advanced systems such as creating a virtual map of the room, optimizing the energy drain needed to cover the work area, as well as communication systems allowing supervisory control of the vehicle by a mobile application. Obviously, it is necessary for those robots to be able to avoid obstacles so as not to damage household appliances, furniture and their own construction.

Another example of the use of autonomous vehicles is terrain exploration. The Curiosity rover made by NASA has been sent to Mars to survey its surface, to analyze soil and rock composition and to search for water and minerals [4]. It is also a scientific research station measuring and processing collected data. Avoidance of obstacles and moving in a hard, uneven and rocky environment is a key aspect of its proper functioning.

Autonomous vehicles also find use in industry. For example, Amazon warehouses are using mobile robots to aid workers [5]. The machines independently search for specific packages and then carry them to the demanded place using special lifts. They are able to transport weights up to 340 kg and significantly accelerate the work, freeing laborers of the need to search for and carry heavy packages. With this in mind, it is essential for the robots to avoid obstacles and other machines so that they can work effectively.

The paper is organized as follows. In the second section, the realization of the mobile robots and the board is described. The design of the control algorithms is presented in the third section. The fourth section describes the software development. Verification test results are given in Section 5. The last section presents the conclusions.
2. Realization of Mobile Robots and the Board

To begin with, it was necessary to set goals for the construction of the mobile robots. The requirements for them were as follows: small dimensions, agility, low power consumption and the ability to gather data from the environment. Moreover, the goal was to design effective control methods and obstacle avoidance algorithms
which would be implemented in digital hardware at a small cost. All electric schematics were made using Computer Aided Design (CAD) software – Eagle. The design of the machine was divided into a few parts containing elements responsible for various features. The central processing unit (CPU) is an 8-bit microcontroller from the ATmega series. The main part consists of the previously mentioned CPU as well as all necessary components such as the quartz resonator, passive voltage filters and the reset pull-up resistor. The memory capacity of the CPU is 32 kilobytes, which is enough to sustain the software features. It allows processing of the gathered data and realization of the control algorithms. The part of the electric scheme presenting the CPU and all its connections is shown in Figure 1. Connections labeled as IA1, IA2, IB1 and IB2 are Pulse-Width Modulation (PWM) channels controlling the motors' speeds and direction.
Fig. 1. Part of the scheme with CPU

The second part of the project scheme is the power supply. It contains a voltage regulator, a Li-Pol (lithium-polymer) battery pack and a LED (Light Emitting Diode) indicating that the power supply is on. The used accumulator provides high current efficiency and a long lifespan. This part of the scheme is presented in Figure 2.

Fig. 2. Part of the scheme with voltage regulation

The next part comprises the motors, their controller and the encoders. Two DC motors placed at the back of the construction are controlled with a dedicated module. The encoders count the revolutions of the motor shafts and convert them into electric impulses, which are then sent to the microprocessor. Due to the CPU's limitations, only information about the number of rotations is gathered, while data about its direction is overlooked; the direction is calculated programmatically. The resolution of the encoder outputs is also limited to 6 impulses per rotation. The connections of the motor controller are presented in Figure 3. Pins labeled as M1+, M1-, M2+ and M2- are directly connected to the motors.

Fig. 3. Part of the scheme with motors' controller

The following part is the measuring equipment. Three optical sensors were chosen. They work in the range of 4–30 centimeters. One of them is placed in the middle of the front edge of the PCB and two on both sides of it, rotated by 15 degrees. Each of the sensors is connected in the manner shown in Figure 4.

Fig. 4. Part of the scheme with connections of optical sensor

The last part is communication. The connection between the robots and other devices is provided by a UART module or a Bluetooth module. Also, LED diodes were mounted, which can be programmed to inform the user about the current machine state. Due to a mismatch in voltage standards (3.3 V for the Bluetooth module and 5 V for the ATmega microcontroller), the signal coming from the CPU has to be passed through a voltage divider to lower the voltage to roughly 3.3 V. There is no need to boost the voltage of a signal coming from the module to the CPU, since 3.3 V is already interpreted as a high state in 5 V standard logic. The connections of the Bluetooth module are shown in Figure 5.

The next stage was the PCB design project. The board is the chassis of the robot; all electrical and mechanical parts are mounted on it. The dimensions are 90.15 mm by 88.57 mm (length, width). The designs of the upper and bottom layers were prepared using the EAGLE software (see Figures 6-7).
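As an illustration of the encoder handling described above, the following is a minimal sketch of counting encoder impulses on an interrupt pin; the pin number and variable names are assumptions, and inferring the direction from the sign of the last PWM command is only one plausible reading of "calculated programmatically".

#include <Arduino.h>

const byte LEFT_ENCODER_PIN = 2;   // assumed interrupt-capable pin
volatile long leftTicks = 0;       // 6 impulses per shaft rotation
int lastLeftDuty = 0;              // sign of the last PWM command

void onLeftEncoderPulse() {
    // The encoder gives no direction information, so the direction
    // is inferred from the last commanded duty cycle.
    leftTicks += (lastLeftDuty >= 0) ? 1 : -1;
}

void setup() {
    pinMode(LEFT_ENCODER_PIN, INPUT);
    attachInterrupt(digitalPinToInterrupt(LEFT_ENCODER_PIN),
                    onLeftEncoderPulse, RISING);
}

void loop() {
    // leftTicks can be sampled periodically to estimate wheel speed.
}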
Fig. 5. Part of the scheme with connections of the Bluetooth module
After that part of the design was completed, a simplified 3D model of the robot was made using the Autodesk Inventor environment. The imported PCB files were the base, supplemented by other elements such as the motors, wheels and optical sensors. Figure 8 shows a render of the robot model. After the design stage was completed, the robots were constructed. The final effect of the assembled robot is shown in Figures 9-10.
Fig. 9. Front view of the mobile robot
Fig. 6. Upper layer of the board
Fig. 10. Top view of the mobile robot
Fig. 7. Bottom layer of the board
Fig. 8. Render of the robot model
The working area is limited to provide easier control of the robots' work and to eliminate some external factors, such as the smoothness and texture of the surface, which may affect performance. The working area also had to be big enough to freely test the behavior of the robots controlled by the implemented avoidance algorithms. The static obstacles were spatial elements of different shapes and sizes, placed on the surface of the robots' working area. The design of the board was also prepared with CAD software – Autodesk Inventor. It is a square of 1.25 m side length, which gives 1.5625 m² of working area. The height of the boarding walls is 15 cm. After designing the board, a 3D model was prepared (see Figure 11). The final construction was made with laminated chipboard, which is a universal, cheap material that is easy to process. The base is separated into two parts. The walls are made of single parts. Everything is assembled with screws. Figure 12 presents the assembled board.
Fig. 11. Render of the board
Fig. 12. Assembled board
3. Design of Control Algorithms

The purpose of each obstacle avoidance algorithm is to analyze the data coming from the sensors and, based on that, adjust the direction of the vehicle's movement so that the robot remains on a safe, non-colliding path. Changing the vehicle's movement is achieved by changing the speeds of both motors. Each motor is controlled by a PWM signal; changing its speed is obtained by changing the duty cycle (the higher the duty cycle, the higher the speed). Although the motors that were used are exactly the same model, produced by the same company, they reach different angular speeds with the same control signal applied. This issue causes the vehicle to move on a curve instead of a straight line. Hence, creating a speed controller that would help reach and maintain a desired speed becomes essential. A classical PI controller has been arbitrarily chosen for this purpose. It allows for a short settling time and eliminates the steady-state error [6]. The derivative term has not been used in order to avoid disturbance amplification, which would have a significant effect on control quality, especially considering the low resolution of the encoders. The coefficients of the controller have been chosen experimentally; a minimal sketch of such a controller is given below. The structure of controlling the vehicles has been split into two modules (see Figure 13). A high-level module is the currently active obstacle avoidance algorithm, which is, among others, setting the speeds for both motors. A low-level module is the PI speed controller, which is responsible for reaching and maintaining a certain speed set by the other module. In Figure 13, Vset refers to the velocity set point, V is the current vehicle velocity and V* is the velocity measured by the equipment. Signals e, u and ω refer to the error, the control signal and the motor's angular speed respectively. Z is the disturbance signal.
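The sketch below illustrates such a PI speed controller; the gains, the saturation limits and all names are illustrative assumptions, since the paper reports only that the coefficients were tuned experimentally.

// Minimal PI speed controller sketch (Arduino-compatible C++).
struct PIController {
    float kp;            // proportional gain, tuned experimentally
    float ki;            // integral gain, tuned experimentally
    float integral = 0;  // accumulated error

    // vSet and vMeasured in encoder ticks per sampling period;
    // returns a signed PWM duty for the motor driver.
    int update(float vSet, float vMeasured) {
        float e = vSet - vMeasured;        // control error
        integral += e;                     // integral term removes offset
        float u = kp * e + ki * integral;  // no derivative term (noise)
        if (u > 255) u = 255;              // saturate to 8-bit PWM range
        if (u < -255) u = -255;
        return (int)u;
    }
};

Leaving out the derivative term, as described above, prevents the low-resolution encoder noise from being amplified into the control signal; integrator anti-windup is omitted from the sketch for brevity.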
Fig. 13. Structure of vehicles' control

Two obstacle avoidance algorithms were implemented and tested. The first one is based on a simple set of rules. The vehicle can be in one of three available states – "Move forward", "Stop" or "Turn right". Transitions between these states are determined by the current sensor readings, i.e. whether or not an obstacle has been detected within the sensors' range. This algorithm has been called the Rule Set [7]; a sketch of its state transitions is given below, and an illustration depicting its functioning is presented in Figure 14.
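The Rule Set can be expressed as a small state machine. The exact transition conditions below are a plausible reading of the description and Figure 14, not a verbatim reproduction of the implementation.

enum class RuleSetState { MoveForward, Stop, TurnRight };

// One iteration of the Rule Set: given the current state and whether an
// obstacle is visible within the field of view range, return the next state.
RuleSetState step(RuleSetState state, bool obstacleDetected) {
    switch (state) {
        case RuleSetState::MoveForward:
            return obstacleDetected ? RuleSetState::Stop
                                    : RuleSetState::MoveForward;
        case RuleSetState::Stop:
            // After stopping, turn right until the path is clear (assumed).
            return RuleSetState::TurnRight;
        case RuleSetState::TurnRight:
            return obstacleDetected ? RuleSetState::TurnRight
                                    : RuleSetState::MoveForward;
    }
    return state;
}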
Fig. 14. Rule Set principle

The second algorithm is based on searching for a new path by calculating the derivative of the arithmetic mean of the measured distances. When an obstacle is detected within the specified range, the vehicle starts turning around, scanning the environment and calculating the difference between the current and previous readings' mean. The scanning is finished when the derivative is small enough, which means that the longest path without any obstacles has been found.
In an ideal situation the longest path would be found if the derivative was equal to 0. However, in practical terms this is impossible due to the noise coming from the sensors and floating-point precision. It is also worth mentioning that, from the mathematical perspective, the path that has been found is a local maximum but not necessarily a global one. This approach may not find the longest possible path, but it works significantly faster and adds smoothness to the vehicle's movement. The discussed algorithm has been called the Derivative Search; a sketch of its scanning step is given below, and an illustration depicting its functioning is presented in Figure 15.
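A minimal sketch of the scanning step is shown below; the threshold value and names are assumptions, and the real implementation additionally commands the turning motion while scanning.

#include <math.h>

float previousMean = 0.0f;  // mean of the previous sensor readings

// Returns true when the scan can stop: the change of the mean distance
// between two consecutive readings (the discrete derivative) has become
// small enough, indicating a local maximum of free space.
bool scanStep(float left, float middle, float right, float epsilon) {
    float mean = (left + middle + right) / 3.0f;  // mean distance [cm]
    float derivative = mean - previousMean;       // change per scan step
    previousMean = mean;
    return fabsf(derivative) < epsilon;           // e.g. epsilon = 0.5 cm
}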
Fig. 15. Derivative Search principle

4. Software Development

The software itself is a key part of the presented work. It includes many aspects, such as choosing the right platform and the Software Development Kit (SDK) related to it, developing a unified Application Programming Interface (API) which makes teamwork much more comfortable, implementing the obstacle avoidance algorithms and implementing communication via Bluetooth. Arduino has been chosen as the platform [8]. It is a widely used, well documented platform, supporting many different microcontrollers, and it has a large base of libraries simplifying the usage of various peripherals. The Arduino platform allows writing code in both the C and C++ languages, which means it supports Object Oriented Programming [9]. When it comes to the IDE, one could potentially use the Arduino IDE, but despite its advantages and simplicity of use it also has its flaws, making code management harder, especially in more advanced projects. That is why it was decided to use Atom with the PlatformIO extension [10]. It is an open-source project, developed by the community, which supports microcontrollers based on the Arduino platform. Contrary to the Arduino IDE, it allows creating a complex project tree, provides code hints and autocompletion, and supports Git source control [11].

Concerning the implementation of the obstacle avoidance algorithms, the two most important peripherals are the motors and the sensors. An API has been created for both of these. Each peripheral has its own dedicated class, located in a separate header file. A class exposes a public interface containing only a few methods with self-explanatory names, therefore introducing a new level of abstraction. All low-level logic is encapsulated and contains actions such as the configuration of the proper physical ports and receiving or transmitting electrical signals on the corresponding ports. The public interfaces of these classes are presented in Listing 1.

Listing 1. Public interfaces responsible for sensors and motors

// sensors
void Begin(byte pin);
int GetVoltage();
float GetDistance();

// motors
void Begin(byte colour, byte side);
void SetPWM(int duty);
long GetEncoderTicks();
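A hypothetical use of the two interfaces from Listing 1 could look as follows; the class names Sensor and Motor, the pin and the colour/side codes are assumptions, since the listing shows only the method signatures.

Sensor frontSensor;
Motor leftMotor;

void setup() {
    frontSensor.Begin(A0);   // analog pin of the middle sensor (assumed)
    leftMotor.Begin(0, 0);   // colour/side codes are board-specific
}

void loop() {
    if (frontSensor.GetDistance() < 10.0f) {  // obstacle closer than 10 cm
        leftMotor.SetPWM(0);                  // stop the wheel
    }
}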
The sensors used have nonlinear voltage–distance characteristics. In order to obtain the measured distance in SI units, an approximation of the characteristics has been made with the following model [12]:

L = a·exp(b·u[k]) + c·exp(d·u[k])    (1)

where L is the distance given in centimeters, u[k] is the output signal from the A/D converter and a, b, c, d are coefficients. The coefficients of this model were calculated using MATLAB's Optimization Toolbox and are equal to: a = 71.83; b = -0.016; c = 16.99; d = -0.00283. In order to compare the averaged collected data with the model, the standard deviation of the error has been calculated. Its value was equal to 0.0136, which has been considered satisfying. A visual comparison between the mathematical model and the averaged data collected from the sensors is presented in Figure 16.
Fig. 16. The curve fitted to measurement points After finishing the fundamental elements of a software, the next step was to define the program’s core structure. The main goal was to create software architecture that would allow to introduce separation
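A possible implementation of the distance conversion based on model (1) and the reported coefficients is sketched below; taking the raw A/D value as a parameter and the function signature are assumptions.

#include <math.h>

// Convert a raw A/D converter reading into a distance in centimeters
// using the double-exponential model (1) fitted in MATLAB.
float convertToDistance(int adcValue) {
    const float a = 71.83f, b = -0.016f, c = 16.99f, d = -0.00283f;
    float u = (float)adcValue;                    // raw ADC output u[k]
    return a * expf(b * u) + c * expf(d * u);     // distance L [cm]
}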
Journal of Automation, Mobile Robotics and Intelligent Systems
of concerns as well as to help write clean, readable and maintainable code and make it more extensible. In particular, implementing a new obstacle avoidance algorithm should have no impact on existing and already tested code. In order to separate concerns and meet the Single Responsibility Principle, a structure called the Result has been created [13]. This structure is created and returned by current algorithm in its every iteration and contains information about desired motor speeds and Light Emitting Diodes (LED) states. Furthermore, to meet the extensibility requirement, an abstract class, called ObstacleAvoidanceAlgorithm has been created. This is a base class for all other classes that implement the algorithms. It contains two abstract methods (Run and SetLEDs) that define the functionality of a specific algorithm as well as implementation of all common features for algorithms. In addition, each algorithm has its field of view range and hysteresis defined in order to reduce the impact of sensors’ noise. Their values can be changed via methods that are meant to be used by supervisory control. The ObstacleAvoidanceAlgorithm‘s code can be seen in Listing 2.
VOLUME 13,
N° 4
2019
Two algorithms described in the previous section, that is Rule Set and Derivative Search have been implemented in respect to that manner. Their flowcharts are presented in Figures 17-18.
Listing 2. A header file containing the definition of the ObstacleAvoidanceAlgorithm class

class ObstacleAvoidanceAlgorithm {
protected:
    bool obstacleDetected;
    float hysteresis;
    float boundary;
    float sensorReadings[3];
    bool checkForObstacles();
    virtual void setLEDs(Result &r) = 0;
public:
    ObstacleAvoidanceAlgorithm();
    void GetDistances(float left, float middle, float right);
    void SetBoundary(float newBoundary);
    void SetHysteresis(float newHysteresis);
    float GetBoundary();
    float GetHysteresis();
    virtual Result Run() = 0;
};
With that being done, a clean architecture is established. Implementing a new algorithm has absolutely no impact on the existing code. Moreover, it opens up an opportunity to switch between various algorithms without shutting down or even stopping the vehicle – a sort of "hot swapping". The procedure of adding a completely new obstacle avoidance algorithm looks as follows (an example is sketched below):
1. Create a new class and put it in a separate file,
2. Make this class inherit from ObstacleAvoidanceAlgorithm,
3. Override the two abstract methods from the base class,
4. Create any number of fields and helper methods needed.
Points 3 and 4 can be reordered or performed in parallel, depending on the programmer's approach.
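As an example, a hypothetical new algorithm following these four steps could be declared as below; the class name and its behavior are invented for illustration only.

class WallFollower : public ObstacleAvoidanceAlgorithm {
protected:
    void setLEDs(Result &r) override {
        // e.g. light a diode while an obstacle is being followed
    }
public:
    Result Run() override {
        Result r = {};
        // fields and helper methods specific to wall following (step 4)
        // would fill in the desired motor speeds here
        return r;
    }
};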
Fig. 17. Flowchart of the Rule Set algorithm

Apart from that, the structure of the main program's loop has been established. It contains two parts – the first one is responsible for executing the algorithm, setting the speeds of the motors using the PI controller and flashing the diodes. The second one takes care of communication via Bluetooth, which in turn is discussed further in this section. First of all, to make communication via Bluetooth possible, an application that would allow the user to interact with the robots and the implemented program was needed. The decision was made to create an application for mobile devices with Android as their operating system. The MIT App Inventor has been chosen as the environment because of its accessibility and simplicity [14]. It was created by employees and students of MIT in order to make it possible for everybody to easily create applications for Android devices. It allows for programming an application and designing its graphical interface by simply dragging certain parts of the interface or programming functions from a
specially provided toolbox. The final look of the application is presented in Figure 19.
Fig. 18. Flowchart of the Derivative Search algorithm
Fig. 19. Graphical interface of the mobile application

The application allows for connecting to one robot at a time by tapping the CONNECT button. It also shows information about the connection status in the top right corner. After establishing a connection, the application sends a certain set of messages that allows it to download data from the connected robot about its currently running algorithm and its parameters, as well as the state the robot is in. Two buttons are located below, which allow changing the running algorithm and inform the user about the one currently being used by highlighting its background green. Beneath, two textboxes are located that allow the user to input the field of view range and hysteresis values. The input is taken from a numerical keyboard to avoid errors with the data type provided. Furthermore, in case of providing a field of view range larger than 30 cm or below 4 cm, which are the limits dictated by the sensors used, the application automatically sets this value to the maximum or minimum value respectively. Moreover, it is worth mentioning that sending parameter values to the robot is only possible when any of the provided values have changed from the ones already in the robot's program. At the bottom of the screen, buttons responsible for turning the robot on and off are placed.

Moreover, further enhancements to process messages coming via Bluetooth were added to the created application. However, in consideration of eventual future development of the project, it was made in such a manner that other ways of communication would work without any changes in the software, despite Bluetooth being the only one possible for the time being. In order to achieve this, a new class named Communication was created. It consists of a set of generic methods which allow for two-way communication patterns, essential for supervisory control. This class internally uses a Stream pointer, which supports any stream-based media, e.g. Wi-Fi, Ethernet, USB, as well as many others. This issue is resolved by Dependency Injection. Due to the specific structure of programs based on the Arduino platform, a setter injection has been used. The Communication class provides the Begin method, which injects the dependency, that is, a concrete implementation of a certain communication object. The class' header file is presented in Listing 3.

Listing 3. Header file containing the definition of the Communication class

class Communication {
private:
    Stream *stream;
public:
    String ReadMessage();
    bool Available();
    void SendMessage(String msg);
    void Begin(Stream *str);
};
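Setter injection with this class might be used as follows; binding to the hardware Serial port wired to the Bluetooth module is an assumption made for illustration.

Communication comm;

void setup() {
    Serial.begin(9600);   // UART wired to the Bluetooth module (assumed)
    comm.Begin(&Serial);  // inject the concrete Stream dependency
}

void loop() {
    if (comm.Available()) {
        String msg = comm.ReadMessage();
        // parse supervisory commands, e.g. algorithm or parameter changes
    }
}

Because HardwareSerial derives from Stream, the same Communication object could later be bound to any other Stream-based medium without changing the rest of the software.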
5. Verification Tests A series of verification tests was performed for each of the algorithms. Tests were carried out in an environment with static, dynamic or both types of obstacles in order to measure the quality of implemented algorithms. In an environment with only static obstacles three boxes were used, two cuboid ones as well as one in
the shape of a cylinder. For both algorithms the results of this test were positive. In the case of the RuleSet algorithm, collisions with obstacles were sporadic, occurring while turning, or were caused by the wheels protruding out of the general shape of the robot. Changing the values of the field of view range and its hysteresis barely affected the performance of the mobile vehicles for this algorithm; lowering the values resulted in slightly fewer collisions. On the other hand, for the DerivativeSearch algorithm, changing the values of the parameters made a significant difference. In narrow spaces the robots used to zigzag a lot, while lowering the field of view range resulted in smoother movement. In addition, with high values set, collisions happened to occur more often than with lower values.

In the environment with dynamic obstacles, the mobile robots themselves were obstacles for each other. In this case, the results of the conducted tests were also positive. The functioning of both algorithms was satisfying. Changing the values of the parameters impacted only the DerivativeSearch algorithm, where again, lowering the range of the field of view influenced the smoothness of movement of the vehicles.

The last type of environment was nearly identical to the static one, with the only difference being two robots running at the same time instead of one. The results of the tests for both algorithms were positive. In the case of the RuleSet algorithm, collisions occurred only with static obstacles, for the very same reasons as described in the previous paragraphs; no collision occurred between the two robots. However, in the case of the DerivativeSearch algorithm, collisions happened with static as well as dynamic obstacles. Same as before, lowering the range of the field of view resulted in better performance.

It is worth mentioning that each environment was limited by the board's walls to ensure testing in the same conditions. The arrangement of obstacles was identical throughout all series of tests that were performed. The tests were 10 minutes long each and consisted of gathering data about the performance of the robots after changing the currently running algorithm as well as the parameters, which are the field of view range and hysteresis.

Tab. 1. A comparison between the two algorithms

Algorithm | Type of obstacles | Voltage drop on a battery over 10 minutes of work [V] | Number of collisions occurred during tests [-]
Rule Set | Static | 0.11 | 5
Rule Set | Dynamic | 0.11 | 0
Rule Set | Mixed | 0.10 | 11
Derivative Search | Static | 0.11 | 5
Derivative Search | Dynamic | 0.10 | 6
Derivative Search | Mixed | 0.11 | 11
To compare both algorithms, two criteria were taken into consideration: the number of collisions that occurred during the tests and the energy consumption over the time of the test. All of these results are presented in Table 1.
6. Conclusions

The implemented obstacle avoidance algorithms perform satisfactorily. However, limitations resulting from the hardware used and the precision of the sensors do not allow for flawless results. The RuleSet algorithm performed well overall, despite its simplicity. Continuity of movement is not preserved due to the way the algorithm handles the avoidance of obstacles. Furthermore, changing the values of the field of view range and its hysteresis affected the performance of the robot. Moreover, it is worth mentioning that the vehicle with the RuleSet algorithm implemented did not use a lot of energy. The best environment for this algorithm, in terms of the number of collisions occurring during the tests, was the one with dynamic obstacles.

The second algorithm, DerivativeSearch, is more complex in the way it works. Due to this fact, changing the range of the field of view and its hysteresis affected its performance. The only problems for robots running this algorithm occurred in narrow spaces, when obstacles were closer than the provided value of the field of view range and its hysteresis, which resulted in zigzagging and stopping. However, changing the values of these parameters resulted in smoother movement under these circumstances. The energy drain of this algorithm was the same as when running the RuleSet algorithm. Furthermore, once again the environment where this algorithm performed best was the one with dynamic obstacles.

A lot of different factors affect the performance of obstacle avoidance algorithms, one of them being the precision of the sensors. Both algorithms performed worse in the environment with static and dynamic obstacles at the same time, mostly due to the high density of obstacles on the board. The main reasons for collisions occurring during the tests were the wheels protruding out of the general shape of the robot as well as the narrow field of view. Moreover, when the vehicle was turning and an obstacle appeared in its field of view closer than 4 cm, it was not detected due to the precision of the sensors.

The loosely-coupled, clean and maintainable code architecture makes further extensions fairly easy and does not impact the existing code. Thanks to that, future works may include implementing additional obstacle avoidance algorithms that are more sophisticated and require more computational power, such as the Curvature-Velocity Method (CVM) and the Vector Field Histogram (VFH) described in [15] and [16]. Further enhancements may also include hardware changes, mainly adding more sensors in order to increase the view angle, which will improve control
quality. They may also include a whole redesign of the board in order to shift the center of mass closer to the motors’ axis. This would result in better stability of the vehicle, allowing it to take sharper turns at higher speeds. Hardware improvements may also include adding peripherals to support various communication media e.g. USB or Wi-Fi. Thanks to the unified software interface that has been developed, any of these peripherals can be used and swapped at any time with all software remaining intact.
AUTHORS
Robert Piotrowski – Gdańsk University of Technology, Faculty of Electrical and Control Engineering, email: robert.piotrowski@pg.edu.pl.
Bartosz Maciąg – Gdańsk University of Technology, Faculty of Electrical and Control Engineering, email: maciagbartosz96@gmail.com. Wojciech Makohoń – Gdańsk University of Technology, Faculty of Electrical and Control Engineering, email: wojtek.makohon@gmail.com. Krzysztof Milewski* – Gdańsk University of Technology, Faculty of Electrical and Control Engineering, email: milewskik51@gmail.com. *Corresponding author
References [1] O. Khatib, “Real-time obstacle avoidance for manipulators and mobile robots”. In: 1985 IEEE International Conference on Robotics and Automation Proceedings, vol. 2, 1985, 500–505 DOI: 10.1109/ROBOT.1985.1087247. [2] P. Szulczyński, D. Pazderski, K. Kozłowski, “Real‑time obstacle avoidance using harmonic potential functions”, Journal of Automation, Mobile Robotics and Intelligent Systems, vol. 5, no. 3, 2011, 59–66. [3] I. Ulrich, F. Mondada, J. Nicoud, “Autonomous vacuum cleaner”, Robotics and Autonomous Systems, vol. 19, no. 3, 1997, 233–245 DOI: 10.1016/S0921-8890(96)00053-X. [4] J. P. Grotzinger, “Analysis of Surface Materials by the Curiosity Mars Rover”, Science, vol. 341, no. 6153, 2013, 1475–1475 DOI: 10.1126/science.1244258. [5] J. Li, H. Liu, “Design Optimization of Amazon Robotics”, Automation, Control and Intelligent Systems, vol. 4, no. 2, 2016 DOI: 10.11648/j.acis.20160402.17. [6] U. Tochukwu, “Effects of PID Controller on a Closed Loop Feedback System”. In: Proceedings of Conference: Ternopil National Technical University, Ukraine, 2013 DOI: 10.13140/2.1.2650.0167.
[7] G. Narvydas, R. Simutis, V. Raudonis, “Autonomous Mobile Robot Control Using IF-THEN Rules and Genetic Algorithm”, Information Technology And Control, vol. 37, no. 3, 2008. [8] L. Louis, “Working Principle of Arduino and Using It as a Tool for Study and Research”, International Journal of Control, Automation and Systems, vol. 1, no. 2, 2016, 21–29 DOI: 10.5121/ijcacs.2016.1203. [9] A. Urdhwareshe, “Object-Oriented Programming and its Concepts”, International Journal of Innovation and Scientific Research, vol. 26, no. 1, 2016, 1–6. [10] “PlatformIO Documentation, Release 4.2.1”. PlatformIO, https://buildmedia.readthedocs.org/ media/pdf/platformio/stable/platformio.pdf. Accessed on: 2020-01-20. [11] N. N. Zolkifli, A. Ngah, A. Deraman, “Version Control System: A Review”, Procedia Computer Science, vol. 135, 2018, 408–415 DOI: 10.1016/j.procs.2018.08.191. [12] H. J. Motulsky, L. A. Ransnas, “Fitting curves to data using nonlinear regression: a practical and nonmathematical review”, The FASEB Journal, vol. 1, no. 5, 1987, 365–374 DOI: 10.1096/fasebj.1.5.3315805. [13] D. Riehle, H. Züllighoven, “Understanding and using patterns in software development”, Theory and Practice of Object Systems, vol. 2, no. 1, 1996, 3–13 DOI: 10.1002/(SICI)1096-9942(1996)2:1<3::AIDTAPO1>3.0.CO;2-#. [14] K. Prutz, H. Abelson, “Expanding Device Functionality for the MIT App Inventor IoT Embedded Companion”, Term paper, Massachusetts Institute of Technology, May 17, 2018. [15] R. Simmons, “The Curvature-Velocity Method for Local Obstacle Avoidance”. In: Proc. of the IEEE International Conference on Robotics and Automation, 1996, 3375–3382. [16] J. Borenstein, Y. Koren, “The vector field histogram-fast obstacle avoidance for mobile robots”, IEEE Transactions on Robotics and Automation, vol. 7, no. 3, 1991, 278–288 DOI: 10.1109/70.88137.
Automated Tracking and Real Time Following of Moving Person for Robotics Applications Submitted: 2nd April 2018; accepted: 20th December 2019
Jignesh J. Patoliya, Hiren K. Mewada
DOI: 10.14313/JAMRIS/4-2019/35 Abstract: Presently, the interaction of robots with humans plays an important role in various social applications. Reliable tracking is an important aspect for social robots, where a robot needs to follow a moving person. This paper proposes the implementation of an automated tracking and real-time following algorithm for robotic automation. Occlusion and identity retention are the major challenges in the tracking process. Hence, a feature-set-based identity retention algorithm is used and integrated with the robot operating system. The tracking algorithm is implemented using the robot operating system in Linux and using OpenCV. The tracking algorithm achieved 85% accuracy and 72.30% precision. Further analysis of the tracking algorithm with respect to the integration of ROS and OpenCV is presented. The analysis of the tracking algorithm concludes that the ROS linking required 0.64% more time in comparison with the plain OpenCV-based tracking algorithm. Keywords: visual tracking, robot operating system, identity retention, Gaussian Mixture Model
1. Introduction

Robots have become an integral part of human life, and cooperation between humans and robots seems to be a promising way to increase productivity and flexibility. New open-source software such as the robot operating system (ROS) has made the use of robotics conveniently accessible in both research and consumer applications. Object tracking is one of the important steps for applications requiring recognition, monitoring and communication with humans, and the challenging task is to recognize and track a human within a group of people. The motion of the human and occlusion are common causes of false detection and lost tracking. Secondly, for humans and robots to share the physical environment, there is a need to avoid collisions with robots [1]. If the movement of a human is detected by the robot, this information can be used by the robot to prevent a collision. For robotics applications, various sensors have been used to detect humans, such as laser sensors to detect the movement of legs, RGB cameras, RGB-D sensors (visible band images with depth information), ultrasonic sensors, proximity sensors, etc.

The proposed application needs identification of a desired person from a group as well as tracking of that person. Therefore, considering the complexity and quantitative parameters such as accuracy and range of detection, camera-based tracking is the most suitable in comparison with the other available sensors. This paper proposes the implementation of a human tracking algorithm linked with ROS for robotic applications in the visible band. Vast research has been carried out to date on tracking humans in computer vision applications. The major constraints involved in the implementation of tracking algorithms for robotics applications are the performance of the sensors, illumination variation, shadow, occlusion and the computational complexity of performing in real time. However, little consideration has been given to resolving the occlusion problem in a real-time scenario for ROS applications. This paper proposes real-time tracking along with crucial occlusion handling for robots, such that the robot does not miss the actual target to be followed among a group of people. With this objective, Section II discusses previous work on tracking targeted specifically at robotic applications. Section III presents the constraints involved in ROS-based tracking and is partitioned into two parts: the first part discusses the adopted tracking algorithm for real-time video applications and the second part discusses the integration of this tracking algorithm with the ROS environment. In the later sections, the evaluation of the proposed tracking algorithm and the conclusion are presented.
2. Literature Survey on ROS and Tracking ROS provides huge support for computer vision applications, allowing integration of OpenCV libraries and tools. The availability of high-end cameras suited to ROS makes it easy to characterize and detect humans. Generally, five approaches have been proposed in the literature: detection of humans from depth images (i.e., using the Microsoft Kinect sensor), tracking in visible-band (RGB) images, face-detection-based tracking, tracking based on head position and torso estimation, and leg-detection-based tracking (i.e., a laser range
detection model). In addition, tracking applications are classified as either indoor tracking or outdoor tracking (i.e., surveillance applications). Blob detection using GMM to detect the number of objects in the frame has been discussed in many works [2, 3], and segmentation of objects together with their shape has also been presented by Hiren et al. [4] using a Zynq processor. MATLAB is one of the powerful tools for computer vision applications. In [5], various human detection and tracking algorithms were developed and implemented using ROS and MATLAB 16b. A decision-tree-based machine learning technique was implemented to predict human activity. Classification of human activity was performed using machine learning algorithms like k-nearest neighbours, multi-class SVM and decision trees, which gave a prediction accuracy of 96.02% on the test set. In [6], the authors discussed the issues in fully automated outdoor surveillance systems. In a realistic scenario, object tracking in a single camera is performed using background subtraction followed by region correspondence. They proposed a recurrent motion image method to carry out object detection, solving the problem of shadows and handling spurious objects in the classification process. This system was tested on real-time video at approximately 15 Hz for colour images of 320x240 resolution and proved to be computationally efficient. A project to measure the orientation of an object and to track its position in 3-dimensional space was carried out by J. Diaz et al. [7]. In this work, the orientation of a human hand is measured by tracking it at short range. A Leap Motion sensor was used to track the human hand. They presented two approaches: first, the use of feature-based computer vision algorithms, and second, the use of a contour-based particle filter to track and determine the orientation. They obtained promising results in hand tracking and gesture recognition, which can be used in human-interaction applications. To track articulated objects, a visual feedback technique using a machine learning algorithm was proposed in [8]. The authors developed a model to control the robot motion using the obtained estimate as a feedback signal, and conducted experiments using the HRP-4 humanoid robot. A ROS-based robot was developed to track a trajectory in [9]. A 2D map of the environment was required for localization of the robot. The author proposed tracking of two different trajectories using a robot that follows nonholonomic constraints. In [10], vision-based tracking of a moving target is proposed. Two quad-rotors are used, one as a target and another as a tracker. The tracker is equipped with various sensors such as a camera and LIDAR to track the target. The camera feed is converted into frames and processed using an image processing algorithm based on target detection. The trajectory of the moving target is then estimated using a coordinate transformation algorithm. Based on the estimated trajectory, the tracker moves itself in order to capture new frames of the target. The complete model is
developed in ROS and is evaluated on the basis of real-world frames. A framework for a human-following robot was proposed by Priyandoko et al. [11]. They used RGB-depth images and a laser sensor in the implementation, and developed four tracking models using the face, leg, colour and blob for the indoor environment. The implementation is constrained to a single human only. They concluded that face detection needs a frontal face for identification, which is not possible in a human-following robot application, whereas leg detection is unreliable. Colour detection is efficient as it needs little processing; however, the approach is not applicable in an outdoor environment. Hence, this paper proposes an algorithm based on blob detection, which is computationally efficient and fast.
3. Proposed Model 3.1. Constraints for Real Time Tracking Since the goal of the paper is to make the robot smart enough to track a person in real time, only tracking methods offering low inference time are considered. Algorithms which use appearance features (i.e., the face or histogram-of-gradient features) and mean-shift algorithms are relatively slow in computation and hence cannot be used for real-time tracking [12]. Instead, statistical, geometrical and positional features offering fast computation can be used. Lastly, this paper uses and combines ideas from existing methods and attempts to implement them in an efficient way. Real-time object tracking needs automated detection of the object of interest from the captured camera video. The objects can be persons, animals, vehicles, etc. Inanimate objects like trees, also referred to as scene structure [13], may create confusion in the identification of the desired objects. Though background subtraction has a few limitations (it fails to segment objects under colour characteristic variation, and it fails to discriminate between a shadow and the desired object), due to its fast processing it has been used to detect the moving objects in the proposed algorithm. The robot detects multiple moving people in a group. However, it has to follow one specific person from this group; hence, it has to track the person it has been targeted at. This makes the task very challenging, as the robot has to discriminate the desired person from the group, and if any occlusion occurs, it has to maintain the identity of the real person to follow. Here, occlusion refers to the overlapping of two objects, so that an object disappears in the frame as it is blocked by the other object. Various occlusion types were discussed in [14]: partial occlusion, where not all key features of the object to be tracked are available; full occlusion, where the object to be tracked is completely behind another object; and long-term occlusion. All these occlusions restrict the accurate position measurement of the desired person, and the robot fails to recognize the correct person among the group of people. This paper proposes the
implementation of a ROS-integrated tracking application where feature descriptors are used to discriminate the real person from the group of people and to maintain their identity under occlusion.
3.2. Implementation of Person Tracking and Occlusion Handling Using Feature Sets
Person Detection: The very first step in tracking is foreground object detection. The standard Gaussian mixture model (GMM) based foreground detection method is used. In GMM, the probability of each pixel is calculated using equation (1) as follows [2]:

P(X_t) = \sum_{i=1}^{K} W_{i,t} \, \eta(X_t, \mu_{i,t}, \Sigma_{i,t})    (1)

where K represents the number of distributions, W_{i,t} is the weight associated with the i-th Gaussian at time t with mean \mu_{i,t} and covariance \Sigma_{i,t}, and \eta is the probability density function, given as

\eta(X_t, \mu, \Sigma) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\!\left(-\frac{1}{2}(X_t - \mu)^T \Sigma^{-1} (X_t - \mu)\right)    (2)

The GMM provides a number of blobs corresponding to the number of people detected in the given frame after subtracting the background. Each blob is represented by a rectangular window covering the detected area of the person. Morphological operations are performed to separate the actual blobs from the noise blobs. The goal of the tracking is to assign an identity to each detected person and to track them. To assign the identity (ID), the centroid of each detected blob is calculated and a corresponding ID is assigned to each blob.

This paper follows the tracking algorithm expressed by Israni and Mewada [3]. The Hungarian cost function is calculated to track the person. It offers the advantage of finding an optimal assignment from the given sets in polynomial time; thus, it is one of the most suitable methods for real-time object tracking. The major constraint involved in the Hungarian cost matrix calculation is true blob identification. Therefore, to avoid false identification of the person from the set of blobs, the blob area is calculated and compared with a predefined threshold. Only if the calculated area exceeds this threshold is the blob counted as a person blob and tracked in the neighbouring frame. Due to variation in the video format (i.e., the size of the frames contained in the video), the threshold also varies with the type of dataset. Using the trial and error method, it has been found that the threshold is related to the frame size and can be calculated as

\tau_t = \frac{R \times C}{100}    (3)

where \tau_t represents the threshold, and R and C are the number of rows and columns of the video frame. Figure 1 [3] shows an example of true blob detection for a video dataset with a 576 x 768 frame size. To identify a true blob (i.e., the presence of a human in the captured frame), the blob detected by the GMM is observed over a number of frames. As shown in Figure 1, one of the frames contains a blob with an area of 663 pixels. If the same blob is observed over the consecutive frames, it is found that the area increases and forms a shape identical to a human pose. Thus, using equation (3), as it exceeds the threshold value of 4423, it can be counted as a true blob. The same procedure is repeated for all blobs obtained from the GMM in each frame.
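To make the blob-detection step concrete, the following is a minimal sketch, assuming OpenCV 4 in Python and a hypothetical video file name; it pairs a stock GMM background subtractor (MOG2) with the area threshold of equation (3), and is an illustration rather than the authors' exact code.

import cv2

cap = cv2.VideoCapture('pets2009_s2l1.avi')       # hypothetical file name
subtractor = cv2.createBackgroundSubtractorMOG2()
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rows, cols = frame.shape[:2]
    tau = rows * cols / 100.0                     # threshold of equation (3)
    mask = subtractor.apply(frame)                # GMM foreground mask
    # Morphological opening separates the actual blobs from the noise blobs
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > tau:              # keep true (person-sized) blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('blobs', frame)
    if cv2.waitKey(1) == 27:                      # Esc quits
        break
cap.release()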
Fig. 1. True blob detection from the frame

Occlusion detection: The major challenge in the tracking process is occlusion, which causes invisibility of the desired person to be tracked. There are two causes for the invisibility of the desired person. Firstly, there is no movement of the person to be tracked. This happens because GMM is used in the tracking method, and GMM has the limitation that it can detect only moving persons. Using the bounding boxes over the detected blobs, the Kuhn–Munkres algorithm has been used to match blobs between two consecutive frames. This algorithm calculates the matching score of the bounding boxes within two consecutive frames, as expressed in Figure 2.
Fig. 2. Occlusion of two objects: matching score of bounding boxes in two frames (top), normal tracking without occlusion (bottom)
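As an illustration of the Kuhn–Munkres matching step, the sketch below assigns the blobs of one frame to those of the next using centroid distance as the cost; it assumes SciPy is available, and the function name is hypothetical.

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_blobs(prev_centroids, curr_centroids):
    # Cost matrix: Euclidean distance from each previous blob to each current blob
    prev = np.asarray(prev_centroids, dtype=float)
    curr = np.asarray(curr_centroids, dtype=float)
    cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
    # Optimal assignment in polynomial time, as noted for the Hungarian method
    row_idx, col_idx = linear_sum_assignment(cost)
    return list(zip(row_idx, col_idx))

# Two blobs moving slightly between frames keep their pairing
print(match_blobs([(10, 10), (50, 60)], [(12, 11), (48, 63)]))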
Figure 2 represents two cases. In the first case, there is no occlusion in the next frame for the two objects. The objects move with linear displacement in consecutive frames, and the displacement of each object is independent of the other objects and of the camera motion. The present state of each object can be modeled as

X_i = [Cx_i, Cy_i, A_i, s_i, \Delta Cx_i, \Delta Cy_i, \Delta A_i, \Delta s_i]

where Cx_i and Cy_i represent the pixel location of the blob's center, A_i and s_i give the area and span of the blob window, and the remaining parameters are the corresponding velocity
displacements. Without occlusion, each state parameter can be predicted without any correction; this estimation model of the Kalman filter can easily track objects in the video. In the second case, the first object is occluded by the second object, and hence the GMM will detect a single object having a comparatively large blob area (i.e., the shaded region). The major constraint in tracking is occlusion. The condition that best identifies occlusion (i.e., the occluder and the occluded person) is that the number of detected blobs in the current frame is less than the number of blobs detected in the previous frame, and that the area of the blob containing the occlusion (i.e., the area of its bounding box) is always larger than the blob of a single person. Further, occlusion can be classified as self-occlusion, where part of a person overlaps itself, and inter-object occlusion, due to the overlapping of two persons. Self-occlusion can be detected easily by counting the number of bounding boxes, which must be equal for two consecutive frames. A mismatch in the number of bounding boxes between the frames indicates inter-object occlusion. This can be detected by comparing the aspect ratios of two bounding boxes. Let AR_i = s_i^2 / A_i and AR_{i+1} = s_{i+1}^2 / A_{i+1} be the aspect ratios of the bounding boxes in two consecutive frames. If the aspect ratio is greater than 0.5, then it indicates inter-object occlusion, i.e.,
Occl = \begin{cases} 1 & \text{if } AR_i > 0.5 \text{ or } AR_{i+1} > 0.5 \\ 0 & \text{otherwise} \end{cases}    (4)

This process is repeated for all detected blobs (i.e., persons) in the current frame.
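The test of equation (4) translates directly into code; in this sketch the argument names (span s and area A of a bounding box in two consecutive frames) are illustrative.

def is_inter_object_occlusion(s_cur, a_cur, s_next, a_next):
    ar_cur = s_cur ** 2 / a_cur          # AR_i = s_i^2 / A_i
    ar_next = s_next ** 2 / a_next       # AR_{i+1} = s_{i+1}^2 / A_{i+1}
    # Equation (4): occlusion is flagged when either aspect ratio exceeds 0.5
    return ar_cur > 0.5 or ar_next > 0.5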
Identity retention process: An important aspect of ROS-based tracking algorithms is not only to detect the occlusion, but also to maintain the identity of every detected person, such that the robot can track the desired person within the group of people. Thus, during the demerging process, the preservation of identity is a must. To preserve the identity assigned to each person, a feature-based matching algorithm is used. The aim is that the robot should detect and track the person in real time. Hence, an unsupervised model with mutual com-
Fig. 3. Person identification and tracking algorithm
parison of the statistical features of the detected persons is used in the algorithm. The set of features, including central moments, derivative of Gaussian, steerable-filter-based features, gray-level local variance, energy of the gradient, discrete cosine transform features and Laplacian-filter-based edge features of the person under occlusion, is used as presented in [3]. Let Fcp1 represent the feature set of the desired person having identity P1, and Fcp2 the feature set of the person who occluded P1, with assigned identity P2. Similarly, Frp1 and Frp2 are the feature sets of persons P1 and P2 in the reference frame where both are separate and identifiable. The identity assignments during the demerging process are calculated as follows:
S_1 = \sum_i \left| F_{cp1}(i) - F_{rp1}(i) \right| + \sum_i \left| F_{cp2}(i) - F_{rp2}(i) \right|    (5)

S_2 = \sum_i \left| F_{cp1}(i) - F_{rp2}(i) \right| + \sum_i \left| F_{cp2}(i) - F_{rp1}(i) \right|    (6)
Using this comparison, if S1 < S2 then P1 will be assigned to the blob having feature set Fcp1 and P2 to the blob having feature set Fcp2; otherwise, the assigned identities are swapped accordingly. A similar process is extended to detect multiple objects and to maintain the identities of multiple occluded objects. For the present robotic application, the robot needs to track the desired person only, so the algorithm keeps the identity of the desired person and discards the other detected identities. The identity of the tracked person, along with its position (i.e., the (x, y) position and the width and height of the blob), is published to the ROS node. The algorithm was tested on various videos, including the PETS2009-S2L1 [15] and Town Centre [16] datasets. The overall flowchart of the proposed model is presented in Figure 3. The tracking results for a few frames where occlusion occurs and the algorithm handles identity retention are presented in Figure 4. They show that the person with assigned identity 20 is occluded by another person having identity 2, and the algorithm maintains the identities of both persons during their demerging.
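The demerging comparison of equations (5) and (6) can be sketched as below, assuming each feature set is a fixed-length NumPy vector; the function name is illustrative.

import numpy as np

def reassign_ids(fcp1, fcp2, frp1, frp2):
    # S1: each current blob compared against its own reference features, eq. (5)
    s1 = np.abs(fcp1 - frp1).sum() + np.abs(fcp2 - frp2).sum()
    # S2: the cross comparison, eq. (6)
    s2 = np.abs(fcp1 - frp2).sum() + np.abs(fcp2 - frp1).sum()
    # The smaller total feature distance decides which blob keeps which identity
    return ('P1', 'P2') if s1 < s2 else ('P2', 'P1')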
3.3. Linking of Tracking Algorithm with ROS To fulfil the needs of emerging robotics applications, the community has developed a common framework entitled the Robot Operating System (ROS). It supports multiple programming languages. The proposed model is implemented using OpenCV, and the entire implementation is carried out using ROS Kinetic with Ubuntu 16. In ROS, all software modules are represented as nodes. These nodes communicate using an environment provided by a process called the ROS core. A variety of communication mechanisms are available in ROS; the one used in the proposed model is the publish-subscribe mechanism. It is an asynchronous mechanism in which two nodes communicate using a pre-defined topic.
One of the nodes has to adopt the role of publisher, which sends the message, and another node has to adopt the role of subscriber to receive the message. The subscriber node subscribes to a topic and then waits for a message to arrive. Whenever the publisher node sends a message on the predefined topic, the subscriber node gets a callback with the received message. In the proposed implementation, the ROS package of GMM [17] is used. It finds the optimum number of Gaussian components using a Bayesian approach. A morphological operation is then used to process the output obtained from this GMM; this removes the noise and yields the true blobs, as explained in Section 3.2. A publisher node is created to connect with the camera and fetch frames from it. These frames are processed to create blob areas as explained earlier. The blob area, along with its width and height, is converted to a custom ROS message using the cvBridge library. The tracking algorithm also generates the identities of all detected people; these identities are appended to the previously generated ROS message. Finally, this ROS message is passed on to the second node using the publish-subscribe mechanism. Custom messages are generated using the ROS message utility. The subscriber node accepts the data sent by the publisher node and generates the tracked output. The subscriber node has two functions: one to instruct the actuator to follow the person based on the received message, and a second to display the image of the tracked person. For the second function, the subscriber node converts the ROS message back into a vector which is compatible with OpenCV, and the image is then displayed. The ROS message formation and communication are presented in Figure 5, where the region inside the dotted rectangle represents the ROS environment and the outer region represents the interfacing of the peripheral components.

Fig. 4. Occlusion handling and identity retention process in frames 141-146 of the PETS2009-S2L1 video

Fig. 5. Conceptual diagram of ROS message formation and communication
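A minimal rospy sketch of the publisher side described above is given below; the topic names, and the use of a standard Image message plus a polygon for the blob position, are assumptions for illustration, since the paper's custom message layout is not reproduced here.

import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from geometry_msgs.msg import PolygonStamped, Point32

def publisher():
    rospy.init_node('tracker_publisher')
    img_pub = rospy.Publisher('/tracker/person_image', Image, queue_size=10)
    box_pub = rospy.Publisher('/tracker/person_box', PolygonStamped, queue_size=10)
    bridge = CvBridge()
    cap = cv2.VideoCapture(0)            # camera feed
    rate = rospy.Rate(25)                # the datasets used here run at 25 fps
    while not rospy.is_shutdown():
        ok, frame = cap.read()
        if not ok:
            continue
        # ... GMM + blob tracking would yield (x, y, w, h) of the target here ...
        x, y, w, h = 0, 0, 0, 0          # placeholder values
        img_pub.publish(bridge.cv2_to_imgmsg(frame, encoding='bgr8'))
        box = PolygonStamped()
        box.header.stamp = rospy.Time.now()
        box.polygon.points = [Point32(x, y, 0), Point32(x + w, y + h, 0)]
        box_pub.publish(box)
        rate.sleep()

if __name__ == '__main__':
    publisher()

A matching subscriber node would subscribe to the same topics, convert the Image message back with cvBridge, and drive the actuator from the received box.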
4. Evaluation and Discussion of the Tracking Algorithm Initially, the algorithm is evaluated on the two datasets PETS2009-S2L1 and Town Centre. The datasets contain videos with brightness adaptation and occlusion. The frame rate of the videos is 25 frames per second. The direction of a person changes abruptly and randomly; this is one of the challenging scenarios, and it fits robotic applications. The algorithm's detection is evaluated by comparing the number of persons detected by the algorithm with the actual number of persons to be detected (i.e., the ground truth). This can be calculated using the intersection over union (IoU), which measures the overlap of two bounding boxes. Figure 6 represents the accuracy calculation, where true positive (TP) represents the actual overlap region of the detected and ground truth objects, false positive (FP) represents the detected region which is not part of the ground truth object, and false negative (FN) gives the region which is not detected by the algorithm.
Fig. 6. Object detection accuracy measurement using the Dice similarity coefficient (overlap of the segmented region and the ground truth, showing TP, FP and FN)

Using these parameters, accuracy is defined as

Acc(\%) = \frac{2\,TP}{(TP + FP) + (TP + FN)} \times 100    (7)
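For a single detected box and its ground-truth box, equation (7) follows directly from the box geometry; in this sketch both boxes are given as (x, y, w, h) tuples, and the function name is illustrative.

def detection_accuracy(det, gt):
    # Overlap rectangle of the two boxes
    ix1 = max(det[0], gt[0])
    iy1 = max(det[1], gt[1])
    ix2 = min(det[0] + det[2], gt[0] + gt[2])
    iy2 = min(det[1] + det[3], gt[1] + gt[3])
    tp = max(0, ix2 - ix1) * max(0, iy2 - iy1)       # TP: overlap area
    fp = det[2] * det[3] - tp                        # FP: detected outside ground truth
    fn = gt[2] * gt[3] - tp                          # FN: ground truth missed
    return 2 * tp * 100.0 / ((tp + fp) + (tp + fn))  # equation (7)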
The object detection accuracy graph for each frame in the video is presented in figure 7.
Fig. 7. Accuracy of object detection in the video

The evaluation of the tracking algorithm includes the calculation of multiple object tracking accuracy (MOTA) and precision (MOTP) [3], where MOTA measures the algorithm's accuracy in detecting and tracking the persons in the video, and MOTP measures the positional accuracy of the tracked person. They can be calculated using equations (8) and (9):

MOTA = 1 - \frac{\sum_t (m_t + FP_t + mme_t)}{\sum_t g_t}    (8)

MOTP = \frac{\sum_{i,t} d_{i,t}}{\sum_t c_t}    (9)
where m_t is the number of misses, mme_t is the number of mismatches, d_{i,t} is the distance between object i and its corresponding object in the ground truth set, and c_t gives the number of matches found at time t. The algorithm achieved MOTA of 85% and 70%, and MOTP of 70.70% and 72.30%, on the two datasets. Further analysis was carried out to examine the conversion time and speed of OpenCV alone and of its linkage with ROS. Figure 8 presents the execution time comparison, for six videos, between the program compiled using OpenCV only and the program linked with ROS. The average execution time to process an entire video and provide the output message of the identity's location is 61.95 s. This shows that linking with ROS adds only approximately 0.64% to the execution time.
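Given per-frame counts, equations (8) and (9) reduce to the short sketch below; the argument names are illustrative.

def mota(misses, false_pos, mismatches, gt_counts):
    # Equation (8): one entry per frame in each list
    return 1.0 - (sum(misses) + sum(false_pos) + sum(mismatches)) / sum(gt_counts)

def motp(distances, match_counts):
    # Equation (9): distances is the flat list of d_{i,t} over all found matches
    return sum(distances) / sum(match_counts)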
Fig. 8. Video execution time comparisons for OpenCV and inclusion of ROS with OpenCV
5. Conclusion Social robots are taking on the role of interacting with humans for various social service requirements in places like hospitals, airports and museums, where a social robot helps a human by following him. To fulfil this requirement, reliable tracking is needed. This paper attempts to develop reliable tracking by conquering the challenges of occlusion and of identifying the person to be followed within a group of people. For real-time application, fast detection and identification of the desired person is required; hence, GMM-based person tracking is used in the proposed model. For fast and accurate identification of the desired person, a set of features which are rotation and scale invariant is used for efficient detection and identity retention. The algorithm was found to be 85% accurate in detecting the number of persons, with a 72.30% precision rate. The entire implementation was done using ROS and OpenCV. An analysis of linking OpenCV with ROS is also presented in the study, and it was observed that ROS adds 0.64% time to the OpenCV code. In the future, a prototype model will be developed. As part of software testing, the architecture can be further revised to allow parallel processing of the tracking and identity retention processes.
AUTHORS
Jignesh J Patoliya* – Electronics and Communication Engineering Department, Chandubhai S Patel Institute of Technology, Charotar University of Science and Technology, Post Changa, Gujarat, India, email: jigneshpatoliya@charusat.ac.in.
Hiren K Mewada – Assistant Research Professor, Electrical Engineering Department, Prince Mohammad Bin Fahd University, Kingdom of Saudi Arabia, email: mewadahiren@gmail.com.
*Corresponding author
References
[1] M. Munaro, C. Lewis, D. Chambers, P. Hvass, E. Menegatti, "RGB-D Human Detection and Tracking for Industrial Environments". In: Intelligent Autonomous Systems 13, 2016, 1655–1668. DOI: 10.1007/978-3-319-08338-4_119.
[2] D. Reynolds, "Gaussian Mixture Models". In: Encyclopedia of Biometrics, 2009, 659–663.
[3] D. Israni, H. Mewada, "Identity Retention of Multiple Objects under Extreme Occlusion Scenarios using Feature Descriptors", Journal of Communications Software and Systems, vol. 14, no. 4, 2018, 290–301. DOI: 10.24138/jcomss.v14i4.541.
[4] H. K. Mewada, A. V. Patel, K. K. Mahant, "Concurrent design of active contour for image segmentation using Zynq ZC702", Computers & Electrical Engineering, vol. 72, 2018, 631–643. DOI: 10.1016/j.compeleceng.2018.01.024.
[5] W. Niu, J. Long, D. Han, Y.-F. Wang, "Human activity detection and recognition for video surveillance". In: 2004 IEEE International Conference on Multimedia and Expo (ICME), vol. 1, 2004, 719–722. DOI: 10.1109/ICME.2004.1394293.
[6] O. Javed, M. Shah, "Tracking and Object Classification for Automated Surveillance". In: Computer Vision — ECCV 2002, 2002, 343–357. DOI: 10.1007/3-540-47979-1_23.
[7] R. Diaz, J. Manuel, Model-based object tracking with an infrared stereo camera, Master Thesis, Örebro University, School of Science and Technology, 2015.
[8] A. Bolotnikova, Articulated Object Tracking from Visual Sensory Data for Robotic Manipulation, Master Thesis, University of Tartu, Faculty of Science and Technology, Institute of Computer Science, 2017.
[9] K. L. Besseghieur, R. Trębiński, W. Kaczmarek, J. Panasiuk, "Trajectory tracking control for a nonholonomic mobile robot under ROS", Journal of Physics: Conference Series, vol. 1016, 2018. DOI: 10.1088/1742-6596/1016/1/012008.
[10] Y. Wei, Z. Lin, "Vision-based Tracking by a Quadrotor on ROS", IFAC-PapersOnLine, vol. 50, no. 1, 2017, 11447–11452. DOI: 10.1016/j.ifacol.2017.08.1814.
[11] G. Priyandoko, C. K. Wei, M. S. H. Achmad, "Human Following on ROS Framework a Mobile Robot", SINERGI, vol. 22, no. 2, 2018, 77–82. DOI: 10.22441/sinergi.2018.2.002.
[12] R. Collins, Y. Liu, M. Leordeanu, "Online selection of discriminative tracking features", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, 2005, 1631–1643. DOI: 10.1109/TPAMI.2005.205.
[13] O. Javed, M. Shah, "Tracking and Object Classification for Automated Surveillance". In: European Conference on Computer Vision — ECCV 2002, 2002, 343–357.
DOI: 10.1007/3-540-47979-1_23.
[14] B. Y. Lee, L. H. Liew, W. S. Cheah, Y. C. Wang, "Occlusion Handling in Videos Object Tracking: A Survey", IOP Conference Series: Earth and Environmental Science, vol. 18, 2014. DOI: 10.1088/1755-1315/18/1/012097.
[15] "PETS 2009 Benchmark Data". Binghamton University – State University of New York, http://cs.binghamton.edu/~mrldata/pets2009.html. Accessed on: 2020-02-20.
[16] B. Benfold, I. Reid, "Stable multi-target tracking in real-time surveillance video". In: Computer Vision and Pattern Recognition (CVPR 2011), 2011, 3457–3464. DOI: 10.1109/CVPR.2011.5995667.
[17] "Expectation Maximization algorithm to obtain Gaussian mixture models for ROS". Robotics with ROS, https://ros-developer.com/2017/11/16/expectation-maximization-algorithm-obtain-gaussian-mixture-models-ros. Accessed on: 2020-02-20.
Advanced Structural Fibre Material for Single Link Robotic Manipulator Simulation Analysis with Flexibility

Submitted: 7th March 2019; accepted: 20th December 2019
S. Ramalingam, S. Rasool Mohideen
DOI: 10.14313/JAMRIS/4-2019/36 Abstract: The aim of this article is to investigate the characteristics of composite fibre advanced materials used as a robotic link manipulator, as a replacement for a rigid one. The composite material is a combination of two or more fibres processed and bonded with epoxy, resulting in a hybrid material component with the required properties, which are to be analyzed for suitability with respect to its function, reliability, durability, safety and cost-effectiveness. The composites generally combine high strength and high stiffness (graphite, Kevlar, etc.) with low density (epoxy, polyvinyl), yielding strong, stiff and lightweight materials. In this investigation, five different composite structural fibres are taken as a flexible link with joint flexibility for a case study analysis. The rotating structural fibre link is loaded and tested with different joint stiffness coefficients (kc). Numerical evaluations are conducted for structural fibre materials replacing the rigid manipulator. The modeling of a structural fibre single flexible link on the basis of Euler-Bernoulli beam theory and Lagrange's equations of motion is studied, and accurate modes of the system are obtained. Keywords: structural fibre, E-Glass, S-Glass, Kevlar-29, carbon fibres, dynamic model, simulation, vibration
1. Introduction
Flexible link materials in a robotic system are capable of changing their structural configuration while being able to maneuver through highly cluttered and constrained environments. Mechanical flexibility can be added to both the links and the joints of the manipulator. Link flexibility reduces the weight of a link that is designed to operate at high speed with low inertia [1]. Soft-link manipulators have certain merits, such as low cost and an easy industrial fabrication process. A flexible link has a lower stiffness than metal; a soft-link manipulator can therefore reduce the damage from impact loads when the robot is used to handle material from one place to another, in material storage, material handling control and the protection of materials [2]. Conventional robotic link systems are generally made of rigid materials, such as steel and
aluminium, and these dominate many successful applications [3]. A robotic link that fulfils requirements like the work assignment, safety, reliability, durability and cost-effectiveness is considered in manipulator design. Composite material is used in other applications like land transport systems, marine vessels for inland waterways, and ocean structures such as boats, ships, pontoons, harbour structures, buoys, underwater structures and submarines; flexible material is also used in areas such as aerospace, defence, security systems and energy appliances (thermal, hydroelectric, wind, solar, wave energy, etc.). Robotic manipulators are also used in manufacturing industrial automation, power plant stations, materials fabrication, space station maintenance and surgery in the clinical field [4]. However, studies of flexible robots, which generally make use of an elastic and flexible type of material, are now drawing more attention in research [5]. Flexible soft robots have additional properties relative to conventional rigid robot manipulators [6-8]. Hard, rigid robots usually perform limited actions with high precision, with the help of sensors and a feedback control system, in well-defined environments, whereas flexible robots have distributed deformation with an infinite number of degrees of freedom and are also capable of entering an obstructed space. This leads to the idea of introducing a new type of material for the robotic manipulator. Due to technology advances, ranging from new materials and processing techniques to new design and control tools, it is now becoming possible to create systems whose structure is composed almost entirely of flexible structural materials. These systems are composites of flexible materials that together give rise to entirely new modes of function and behavior, in many ways [9]. Lightweight manipulators have plenty of advantages, such as reduced inertia and a minimized gravity effect on the link and drives, as discussed in this article. Joint flexibility increases the flexibility of the whole structure, which helps to derive an accurate dynamic model for control purposes and to perform a critical task carried by the robot's end link past an obstacle point to the exact location. The next generation of lightweight robots, like humanoid robots, soft robots, micro-surgery devices and composite artificial links, will also introduce flexible joints. A flexible joint has an elastic property and serves as an energy storage device; it
helps the system in terms of energy consumption for the same motion of the link. Future generations of lightweight robots and humanoid robots with variable joint stiffness implementing flexible joints are an active research topic. Hence, lightweight manipulators with flexible joints in robotic systems are an issue of prime importance in dynamic modeling and simulation analysis. The importance of mechanical flexibility, and how it is critical in the initial stage of the design and fabrication of a flexible robot, is outlined. For this purpose, composite fibre materials are taken as the link material for the simulation and numerical study. The modes of vibration are the characteristic patterns assumed by the system, drawn as the shape of the maximum displacement curve due to vibration. The mode shapes are plotted and analyzed for control system purposes.
2. Material and Methods 2.1. Composite Fibre Materials Hybrid composites are a kind of fibre-reinforced material, usually resin-based, in which two different fibres are integrated into a single matrix [10]. The simple theory of composites is that of combining two or more materials to obtain the required material properties; in fact, any combination of dissimilar materials can be thought of as a hybrid. An example of such a structural material is rigid plastic foam bonded with thin skins of some high-performance FRPs: the skin carries the high surface tensile and compressive loads, while the core provides light weight and structural stability. The combination of sheets of aluminium alloy with laminates of fibre-reinforced resin, as in the commercial product ARALL (Aramid-Reinforced Aluminium, Davis, 1985), is a related variety of layered hybrid. Mixing fibrous and particulate fillers in a single resin or metal matrix produces another kind of hybrid composite. In high-technology fields, the question of cost may be insignificant by comparison with the advantages of optimizing properties. In aerospace applications, the purpose of using hybrids is to utilize the natural toughness of GRP or of Kevlar-fibre-reinforced plastics (KFRP) to offset the brittleness of typical CFRP. An important aspect of using hybrids is to provide adequate stiffening, strengthening and toughening of the composite to suit specific requirements, beyond what single-fibre composites provide. The advanced structural fibres E-Glass, S-Glass, Kevlar and carbon fibres are taken for the simulation study, and the amplitude of vibration of the single link robotic manipulator is estimated.
2.2. Methodology
The terminology and model arrangement of the flexible manipulator working system, shown in Figure 1 and Figure 2, aim at formulating an equation of motion for the numerical investigation. The flexible link is connected to the hub; the hub is attached to the actuator shaft, and a gear is provided for joint flexibility in the experimental model. The entire arrangement is enclosed by a casing. This arrangement is called a hermetically sealed joint, which is treated as a flexible joint. Flexibility in the joint acts as an energy-storing device, which helps the system achieve low energy consumption. The length of the link is l in meters, and ρ is the uniform mass density per unit length in kg/m. The joint and link are attached at the rotor shaft of the motor. E and I represent the Young's modulus in N/m² and the area moment of inertia in m⁴ of the link, respectively. The angular position is θ(t), and u(x, t) is the transverse component of the flexible displacement of the flexible link. M_p is the mass of the payload, with inertia I_p, at the end point of the link; these are the physical parameters of the system. The motor clamped to the joint is stationary. I_h is the inertia of the hub and K_t the stiffness of the spring. The input torque τ(t) in Nm is applied to the motor. The single link rotates in a horizontal plane. The coordinate frames X₀-O-Y₀ and X₁-O-Y₁ are attached to the rigid body frame. For a small angular rotation θ(t) of the pinned-free robotic manipulator, the flexible deflection u(x, t) and the total displacement y(x, t) of the link are related by

y(x, t) = u(x, t) + x \theta(t)    (1)

For the equation of motion, the energy associated with the system is calculated. The kinetic (KE) and potential (PE) energies of the system, including the end payload, enter the equation, and the non-conservative work for a given input torque to the system is calculated. Using Hamilton's principle and the Lagrangian technique, ignoring the rotary inertia and shear deformation effects, and using the first two modes for modeling, after some mathematical manipulation the link equation becomes a partial differential equation of fourth order with four boundary conditions:

EI \frac{\partial^4 y(x,t)}{\partial x^4} + \rho \frac{\partial^2 y(x,t)}{\partial t^2} = 0    (2)

The flexible deflection u(x, t) is determined and substituted into the equation, which yields the equation of motion in terms of the total deflection y(x, t).

Fig. 1. Block diagram of the amplitude measurement setup of the single link manipulator
Fig. 2. Schematic diagram of the structural fibre single link manipulator

3. Dynamic Equation
Using the boundary conditions, the obtained fourth-order partial differential equation is solved. This equation describes the motion of the flexible link manipulator. The solution is obtained by using the assumed mode method and the Lagrangian approach. The flexible deflection is the product of a spatial function and a time function. The flexible deflection u(x, t) is approximated by

u(x,t) = \sum_{i=1}^{n} \varphi_i(x)\, q_i(t), \quad i = 1, 2, 3, \ldots, n    (3)

The input torque (T)_{in} is set to zero. Using the orthogonality relationship of the second derivatives, which holds for a system with only structural flexibility, the total flexible deflection from the initial coordinate is determined by

y(x,t) = \sum_{i=1}^{n} \varphi_i(x)\, q_i(t), \quad i = 1, 2, 3, \ldots, n    (4)

The spatial function \varphi_i(x) is the mode shape, which is purely a function of displacement, and q_i(t) is the time function. Substituting the value of y(x, t) into the equation yields an ordinary differential equation of fourth order and a second-order equation. The solution of the fourth-order equation of motion [11] is

\varphi_i(x) = A_i \sin \beta_i x + B_i \sinh \beta_i x + C_i \cos \beta_i x + D_i \cosh \beta_i x    (5)

where i = 1, 2, 3, ..., n and A_i, B_i, C_i and D_i are the mode shape constants. To simplify the computation, the following parameters are used:

\beta^4 = \frac{\rho}{EI}\,\omega^2, \quad \lambda = \beta L, \quad I_b = \frac{\rho L^3}{3}, \quad \eta = \frac{M_p L^2}{3 I_b}, \quad \varepsilon = \frac{I_h}{3 I_b}, \quad K_c = \frac{K_t}{EI/L}    (6)

From the above, the characteristic equation of the system is derived, and the equation is solved for its eigenvalues:

D(\lambda) = (-\varepsilon\lambda^4 + K_c)\,[1 + \cos\lambda \cosh\lambda + \eta\lambda(\cos\lambda \sinh\lambda - \sin\lambda \cosh\lambda)] - \lambda(\sin\lambda \cosh\lambda - \cos\lambda \sinh\lambda + 2\eta\lambda \sin\lambda \sinh\lambda) = 0    (7)

To find the frequencies and mode shapes of the system, the values of λ_i are obtained from the characteristic equation (7), and the corresponding values of the constants A_i, B_i, C_i and D_i are calculated, with D_i assumed to be 1 according to the assumed mode method. Using the normalization relations, the expressions for the kinetic and potential energy are determined.

Tab. 1. System mode parameters λ_i (i = 0, 1, 2, 3, 4) for rigid and flexible joint with load

3.1. System State Space Model
The equations of motion obtained using the assumed mode method and Hamilton's principle [12] with the Lagrangian technique are put into the form of a state space model, which is utilized for the simulation analysis. The equations of the state space form [13], [14] for two modes are

\dot{X} = AX + Bu    (8)

Y = CX + Du    (9)
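The roots λ_i of the characteristic equation (7) can be found numerically by bracketing sign changes and refining each with a root finder; the sketch below assumes SciPy and uses illustrative values of ε, η and K_c (the actual values follow from the parameters in Tables 1-7).

import numpy as np
from scipy.optimize import brentq

eps, eta, kc = 0.01, 0.1, 5.0   # hypothetical hub-inertia, payload and stiffness ratios

def D(lam):
    c, s = np.cos(lam), np.sin(lam)
    ch, sh = np.cosh(lam), np.sinh(lam)
    # Characteristic function of equation (7)
    return ((-eps * lam**4 + kc) * (1 + c * ch + eta * lam * (c * sh - s * ch))
            - lam * (s * ch - c * sh + 2 * eta * lam * s * sh))

# Bracket sign changes on a grid, then refine each root with Brent's method
grid = np.linspace(0.1, 12.0, 2400)
roots = [brentq(D, a, b) for a, b in zip(grid[:-1], grid[1:]) if D(a) * D(b) < 0]
print(roots)   # first few λ_i; by equation (6), ω_i = λ_i² √(EI/ρ) / L²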
4. Simulation Analysis Vibration mode analysis is significant for an accurate model of the flexible system [15]. For this system, the development of the single link model and the mode analysis are carried out. The mathematical equations were derived based on the Lagrangian and assumed mode method (AMM) with the orthogonality condition [16], and finally transformed into the state space model of equations (8)-(9), which is used for the simulation; the modes were drawn with their corresponding frequencies.
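A sketch of the step response of the two-mode state-space model of equations (8)-(9) is shown below; the A, B, C and D matrices here are illustrative placeholders, not the identified values of the paper.

import numpy as np
from scipy.signal import StateSpace, step

# Hypothetical 4-state (two-mode), single-input, single-output model
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [-47.0, -0.5, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, -1870.0, -1.2]])
B = np.array([[0.0], [1.0], [0.0], [0.8]])
C = np.array([[1.0, 0.0, 1.0, 0.0]])   # tip amplitude as the sum of the two modes
D = np.array([[0.0]])

sys = StateSpace(A, B, C, D)
t, y = step(sys, T=np.linspace(0.0, 0.3, 3000))  # 0.3 s as in the first-mode plots
print('peak amplitude:', float(np.max(np.abs(y))))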
4.1. Types of Structural Fibre
Tab. 2. Typical properties of structural fibre
Five different varieties of structural fibre manipulator have been taken to examine the mode amplitude of vibration in terms of end-point displacement and end-point residual. The flexible composite structural fibre manipulators are made of combinations of the materials E-Glass, S-Glass epoxy, Kevlar epoxy and carbon epoxy. The parameters are listed in Table 1 for use in the simulation.

Fig. 3. First mode, E-Glass, for three types of joint flexibility (Case 1: kc = 0, λ0 = 0; Case 2: kc = 1, λ1 = 0.8657; Case 3: kc = 5, λ1 = 1.1009)
4.2. E-Glass Structural Fibre
Tab. 3. E-Glass Structural fibre parameter
The E-Glass structural material for the flexible single link manipulator is used to examine the dynamic performance. The amplitude mode of vibration is plotted; the size of the link and its parameters are given in Table 2. The geometrical size of the link is constant for all five cases.

Fig. 4. Second mode, E-Glass, for three types of joint flexibility (Case 1: kc = 0, λ1 = 2.630312; Case 2: kc = 1, λ2 = 2.7899; Case 3: kc = 5, λ2 = 3.2175)

4.2.1. E-Glass Mode Plots
The composite structural fibre flexible link robotic manipulator is made of E-Glass with epoxy. Using these data, the response of the link is simulated with a step input torque under the underdamped condition. The simulation is run for 0.3 seconds for the first mode and 0.1 seconds for the other modes. The output responses of the advanced composite fibre manipulator link are observed. Mode 1 and mode 2 for the three types of joint flexibility are shown in Figures 3 and 4. The amplitude of vibration is reduced when the joint flexibility is high (kc = 5). The steady state is reached at 0.04 seconds in the second mode.
4.3. S-Glass Structural Fibre The E-Glass properties were modified, resulting in S-Glass fibre with improved properties; its composition is SiO2 65%, Al2O3 25% and MgO 10%. It has plenty of merits, such as a higher production rate, improved mechanical properties, high strength, high stiffness, relatively low density, non-flammability, resistance to heat, good chemical resistance, relative insensitivity to moisture and the ability to maintain strength over a wide range of conditions. Its application areas are military missiles, aircraft structures, laminate structures, storage tanks, composites and surf boards (Table 3).
Tab. 4. S-Glass Structural fibre parameter
4.3.1. S-Glass Mode Plots
The S-Glass fibre simulations for the same size of link are carried out. The output responses are noted for mode 1 and mode 2 for the three types of joint flexibility in the link system (Figures 5 and 6).

Fig. 5. First mode, S-Glass, for three types of joint flexibility (Case 1: kc = 0, λ0 = 0; Case 2: kc = 1, λ1 = 0.8657; Case 3: kc = 5, λ1 = 1.1009)

Fig. 6. Second mode, S-Glass, for three types of joint flexibility (Case 1: kc = 0, λ1 = 2.630312; Case 2: kc = 1, λ2 = 2.7899; Case 3: kc = 5, λ2 = 3.2175)

4.4. Kevlar-29 Structural Fibre
Compared to steel, this type of material is twenty times stronger. The material types are (1) Kevlar-29 and (2) Kevlar-49. These materials have a wide range of applications, such as waterproof walking boots, car and motorbike tyres, armour panels for lightweight vehicles, fire-resistant clothing, cable making, asbestos replacement, brake linings and vehicle bodies. The data for Kevlar-29 are given in Table 4.

Tab. 5. Kevlar-29 Structural fibre parameter

4.4.1. Kevlar-29 Mode Plots
Using MATLAB coding, the output response is computed for 0.3 seconds for the first mode and 0.1 seconds for the other modes. The output response of the advanced composite fibre Kevlar-29 manipulator link is observed, and it is noted that the steady-state time is reduced. Mode 1 and mode 2 for the three types of joint flexibility are shown in Figures 7 and 8. The steady state is reached at 0.15 sec for the first mode when kc = 5.

Fig. 7. First mode, Kevlar-29, for three types of joint flexibility (Case 1: kc = 0, λ0 = 0; Case 2: kc = 1, λ1 = 0.8657; Case 3: kc = 5, λ1 = 1.1009)

Fig. 8. Second mode, Kevlar-29, for three types of joint flexibility (Case 1: kc = 0, λ1 = 2.630312; Case 2: kc = 1, λ2 = 2.7899; Case 3: kc = 5, λ2 = 3.2175)

4.5. Kevlar-49 Structural Fibre
The physical properties of Kevlar-49 are given in Table 5. Kevlar-49 is applied in areas like boat hulls, the aerospace industry and military planes; it offers good resistance when hit by small arms, and its high-modulus grades are used in cable and rope products.

Tab. 6. Kevlar-49 Structural fibre parameter

4.5.1. Kevlar-49 Mode Plots
The output response of the link using Kevlar-49 is simulated for 0.3 seconds for the first mode and 0.1 seconds for the remaining modes. From the output responses it is noted that the steady-state time is reduced, but the oscillations are increased. Mode 1 and mode 2 for the three types of joint flexibility are given in Figures 9 and 10.

Fig. 9. First mode, Kevlar-49, for three types of joint flexibility (Case 1: kc = 0, λ0 = 0; Case 2: kc = 1, λ1 = 0.8657; Case 3: kc = 5, λ1 = 1.1009)

Fig. 10. Second mode, Kevlar-49, for three types of joint flexibility (Case 1: kc = 0, λ1 = 2.630312; Case 2: kc = 1, λ2 = 2.7899; Case 3: kc = 5, λ2 = 3.2175)

4.6. Carbon-HS Structural Fibre
The carbon fibres are produced using various raw materials, provided their chemical compound has a high carbon content. In general, polyacrylonitrile (PAN), pitch or rayon/viscose (e.g. biocomposites) is used. PAN is a manufactured product with well-defined properties; pitch is a natural product. Threads drawn from PAN or pitch pass through three stages: oxidation at approximately 200°C, carbonization at 800-1600°C and ultimately graphitization. The fibres are stretched during this process, and an anisotropic fibre is formed. Carbon fibres are often transversally isotropic and have a much higher stiffness in the transverse direction [17]. The mechanical properties are improved in an outstanding way by adding carbon fibres [18].

Tab. 7. Carbon-HS Structural fibre parameter
Fig. 11. First mode, Carbon HS, for three types of joint flexibility (Case 1: kc = 0, λ0 = 0; Case 2: kc = 1, λ1 = 0.8657; Case 3: kc = 5, λ1 = 1.1009)
4.6.1. Carbon-HS Mode Plots The Carbon-HS mode response plots for the single flexible link manipulator are simulated, and the amplitude modes of vibration for the different stiffness coefficients are given in Figures 11 and 12.
Fig. 12. Second mode, Carbon HS, for three types of joint flexibility (Case 1: kc = 0, λ1 = 2.630312; Case 2: kc = 1, λ2 = 2.7899; Case 3: kc = 5, λ2 = 3.2175)

5. Results and Discussion
The system mode parameters λ_i, based on the characteristic equation of the flexible system, were determined, and it was noticed that the stiffness coefficient for the flexible system is not equal to zero; that is, the flexible link system has no zero mode (λ_i ≠ 0). The response of the system to a step input torque was calculated, and the simulation results are shown in Figures 3-12. The link was made of composite structural fibre materials; E-Glass fibre, S-Glass fibre, Kevlar-29, Kevlar-49 and Carbon HS were taken as five different cases for the simulation study. The natural frequencies and modes were calculated using the eigenvalue approach. The frequency response function (FRF) of the system was obtained with MATLAB. The link has a uniform rectangular cross section of 25 mm x 2 mm and a length of 250 mm. The geometric parameters were constant for all five types; the other parameters were found and are listed in Tables 1-7. The problem was analyzed making use of the solution obtained by the method described, and the natural frequencies were calculated. By introducing the viscous damping coefficient for each mode, the damped natural frequencies were calculated. The modes at the corresponding natural frequencies are plotted as shown in Figures 3-12.
5.1. E-Glass
A comparative study with different joint flexibilities was made in order to exhibit the importance of both joint and link flexibility in the model: case 1, joint stiffness kc = 0; case 2, kc = 1; case 3, kc = 5. Figures 3 and 4 give the first and second modes of the E-Glass material flexible link with varying joint stiffness coefficient. The maximum vibration amplitudes for the first mode for the three types of joint flexibility (case 1, kc = 0; case 2, kc = 1; case 3, kc = 5) are 0, 0.151 m and 0.122 m. The maximum vibration amplitude for the second mode is 0.213 m for case 1, the rigid joint (kc = 0), 0.212 m for case 2 (kc = 1) and 0.200 m for case 3 (kc = 5). For the third mode, the maximum vibration amplitudes for cases 1, 2 and 3 are 0.00012 m, 0.000074 m and 0.000024 m respectively. The fourth-mode amplitudes of vibration for case 1 (kc = 0), case 2 (kc = 1) and case 3 (kc = 5) are 0.0000256 m, 0.0000254 m and 0.0000024 m respectively. The values of the third and fourth modes are very low compared to the first and second modes. This shows that the amplitudes of the higher modes are very small, and it was noticed that a reduced model which considers the first two modes could be used for controller design.
5.2. S-Glass
Figures 5 and 6 give the first and second modes of the S-Glass material flexible link with varying joint stiffness coefficient. The maximum vibration amplitudes for the first mode for the three types of joint flexibility (case 1, kc = 0; case 2, kc = 1; case 3, kc = 5) are 0, 0.189 m and 0.185 m. The maximum vibration amplitude for the second mode is 0.185 m for case 1, the rigid joint (kc = 0), 0.183 m for case 2 (kc = 1) and 0.180 m for case 3 (kc = 5).
5.3. Kevlar-29
Figures 7 and 8 give the first and second modes of the Kevlar-29 material flexible link with varying joint stiffness coefficient. The maximum vibration amplitudes for the first mode for the three types of joint flexibility (case 1, kc = 0; case 2, kc = 1; case 3, kc = 5) are 0, 0.188 m and 0.187 m. The maximum vibration amplitude for the second mode is 0.200 m for case 1, the rigid joint (kc = 0), 0.189 m for case 2 (kc = 1) and 0.188 m for case 3 (kc = 5).
5.4. Kevlar-49
Figures 9 and 10 give the first and second modes of the Kevlar-49 material flexible link with varying joint stiffness coefficient. The maximum vibration amplitudes for the first mode for the three types of joint flexibility (case 1, kc = 0; case 2, kc = 1; case 3, kc = 5) are 0, 0.189 m and 0.188 m. The maximum vibration amplitude for the second mode is 0.200 m for case 1, the rigid joint (kc = 0), 0.199 m for case 2 (kc = 1) and 0.191 m for case 3 (kc = 5).
5.5. Carbon-HS Figures 11 and 12 give the first and second modes of the Carbon-HS material flexible link with varying joint stiffness coefficient. The maximum vibration amplitudes for the first mode for the three types of joint flexibility (case 1, kc = 0; case 2, kc = 1; case 3, kc = 5) are 0, 0.200 m and 0.200 m. The maximum vibration amplitude for the second mode is 0.200 m for case 1, the rigid joint (kc = 0), 0.199 m for case 2 (kc = 1) and 0.191 m for case 3 (kc = 5). The inclusion of joint flexibility cuts down the amplitudes and the total deflection, but increases the oscillatory behavior of the flexible link in the initial stage itself. If the joint flexibilities were excluded, a large error would result. Consequently, a controller design based on a rigid joint would probably be ineffective, whereas one based on the flexible joint is feasible.
6. Conclusion
For the structural fibre flexible link manipulator system, the eigenvalues (λi) were found from the characteristic equation, and the corresponding natural frequencies, damped natural frequencies and mode shapes were evaluated. The values of the third and fourth modes, compared with the simulation plots, are very low; however, the composite fibre link exhibits increased oscillatory motion from the initial stage itself. Since the amplitudes of the higher modes are numerically small, the first two modes are used for controller design. The settling times for the first and second modes are 0.3 s and 0.03 s for E-Glass; 0.25 s and 0.025 s for S-Glass; 0.25 s and 0.02 s for KEVLAR-29; 0.158 s and 0.019 s for KEVLAR-49; and 0.1 s and 0.001 s for Carbon-HS. Settling is fastest for carbon fibre, but with increased oscillatory behaviour. The inclusion of joint flexibility reduced the total deflection for all materials, but increased the initial oscillatory behaviour. If the joint flexibilities are excluded, a large error results. Consequently, a controller design based on the flexible joint would be more successful than one based on a rigid manipulator. The controller design for the above cases is in progress.
AUTHORS S Ramalingam* – Department of Mechanical Engineering, B S Abdur Rahman Crescent Institute of Science and Technology, Chennai, India, email: sengalaniramalingam@gmail.com, suram_ar@rediffmail.com. S Rasool Mohideen – School of Mechanical Science, B S Abdur Rahman Crescent Institute of Science and Technology, Chennai, India, email: dean.sms@crescent.education. *Corresponding author
Economical Photovoltaic Tree with Improved Angle of Movement Based Sun Tracking System
Submitted: 8th January 2019; accepted: 30th January 2020
Bishal Karmakar, Rammohan Mallipeddi, Md. Nasid Kamal Protiq
DOI: 10.14313/JAMRIS/4-2019/37
Abstract: In this paper we propose a renewable energy storage system designed to harvest optimal solar radiation. Inspired by the photosynthesis process of a tree, the photovoltaic cells of this artificial tree are arranged in a Fibonacci pattern in place of leaves. This design increases the efficiency of energy collection compared to a flat structure. To address the low efficiency of a fixed solar system, the photovoltaic cells must be oriented to follow the varying direction of the periodic sun irradiation. Accordingly, a tracking system based on a one-degree angle of movement, maintaining a small angle of tolerance, is implemented with enough movement flexibility to follow any fluctuation of sunlight. A simple technique and a low-cost design make it economical and easy to implement.
Keywords: renewable energy, low cost photovoltaic tree, sun tracking system
1. Introduction
A solar tree is a structure integrating solar energy technology on a single column, like a tree trunk. Nowadays, with the growth of the world's population and its energy demand, renewable sources that cause no contamination must take over, and solar energy is among the best alternatives. A conventional solar panel or photovoltaic installation consumes considerable space, while Bangladesh is a densely populated country that needs much power; energy technologies that require very little space to generate power efficiently should therefore be exploited. The solar tree is one such solution: it does not need much space but produces sufficient energy. The acronym "TREE" stands for: T = Tree generating, R = Renewable, E = Energy, E = Electricity.
The purpose of solar tracking is to determine the exact position of the sun. This allows PV plates equipped with a tracking system to collect maximum energy from the sun at all times. Solar panels are most efficient when they directly face the sun: the nature of PV cells is to deliver the highest power when the angle between the panel normal and the sun is nearly zero. Studies show that a fixed panel absorbs less energy than one that can move along with the direction of sunlight [1]. For efficient solar energy collection, a solar panel has to be within 20 degrees of the normal (perpendicular) to the sun [2]. To know the precise location of the sun, a tracking system based on suitable mathematical relations is needed. It has been calculated that a solar system using a tracking unit can produce 13%-20% (single axis) and around 30% (dual axis) more power than conventional ones [3]. Jay Taneja et al. introduced a process for building micro-solar power subsystems for wireless sensor network nodes, and evaluated the deployment by analyzing the effects of the range of solar profiles experienced across the network [4, 9]. A. Kavaz et al. describe a complete system with the necessary calculations, technical specifications, basic characteristics, working principle and a 3D rendering of the chosen design in [5]. Suwarna Torgal proposed mounting silicon-crystalline photovoltaic (SPV) modules on tall poles to provide electrical energy directly from solar energy [6]. C. Bhuvaneswari proposed a design for a solar tree using nanowire solar cells [7]. In [8], S. N. Maity also introduced an SPV system that occupies very little space.
2. Methodology
The process starts from the photo sensors, which act as input devices for the microcontroller. Based on four LDRs (light dependent resistors), the microcontroller receives data from the sensors and then provides two signals to the servo motors for the specified movement. There are two sets of sensors, one for each tracking mechanism. By comparing the two LDRs of a set, the microcontroller processes the input data and generates signals for the two servo motors. Each motor drives a shaft that acts as a torque transmitter, rotating the solar panels to change their positions. The movement of a motor halts at its current position at the very moment its sensors receive the same
amount of sunlight at each sensor. In addition, when the water sensor detects a few drops of rain, it generates a signal to the microcontroller, which makes the canopy system cover the top of the tree in order to protect the solar panels. The system calculates voltage differences using the voltage divider rule to determine the light intensity at each LDR. With this approach, a set of LDR values can be obtained from the voltage difference across the LDR in both light and shade. These data confirm the general characteristics of the photo sensor: the higher the voltage, the more sensitive the sensor.
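As an illustration of this voltage divider reading, the following minimal Python sketch converts a raw ADC reading from an LDR divider into a voltage and an approximate LDR resistance. The supply voltage, ADC resolution and fixed resistor value are assumptions for the example, not values taken from the paper.

```python
# Minimal sketch of the LDR voltage-divider calculation described above.
# Assumed circuit: Vcc -- R_fixed -- [ADC node] -- LDR -- GND.
VCC = 5.0           # supply voltage (assumed)
ADC_MAX = 1023      # 10-bit ADC (assumed)
R_FIXED = 10_000.0  # fixed divider resistor in ohms (assumed)

def adc_to_voltage(adc_reading: int) -> float:
    """Convert a raw ADC count into the voltage across the LDR."""
    return VCC * adc_reading / ADC_MAX

def ldr_resistance(adc_reading: int) -> float:
    """Estimate the LDR resistance from the divider voltage.
    Brighter light -> lower resistance -> lower voltage across the LDR."""
    v = adc_to_voltage(adc_reading)
    return R_FIXED * v / (VCC - v)

# Example: compare a 'light' and a 'shade' reading.
for label, raw in [("light", 180), ("shade", 760)]:
    print(label, round(adc_to_voltage(raw), 2), "V,",
          round(ldr_resistance(raw)), "ohm")
```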
The panels change their position from angular value α to −α, as shown in Fig. 2. Two scenarios are therefore obtained: in the first, the sensor oscillates between either side of the sun's position; in the second, the sun virtually slides westward, so that the panel and the LDR are at an angle θ. A specific torque calculation is thus required for precise movement.
3. Proposed Prototype
3.1. System Construction
To simplify the design, calculation and maintenance, a two-axis symmetric design has been constructed. The main body is a simple 4-foot aluminium square tube attached to a square aluminium base, which carries the upper panels and servos. One of the three panels is placed high above the other panels at a 90-degree vertical angle; this angle presents a constant area to the sunlight over the sun's trajectory. The other two panels are attached to two separately controlled servos. Two 1-foot limbs are connected to the main body at a height of 2.7 feet to hold the servos. If needed, additional limbs can be added to increase the total number of tracking panels, and hence the power delivered to the consumer end. However, these limbs should not be too long, because a long limb is subject to a high bending moment and can vibrate during the tracking process. Additionally, a display and a battery level indicator are added in order to observe the voltage and current drawn from the panels and the charging level of the battery used to store the energy. Moreover, a voltage booster module was needed to meet the voltage rating required to charge the battery.

Fig. 1. 3D model
Fig. 2. Panel orientation according to the position of the sun

Calculation of the torque required for the rotation: length of the solar panel a = 12.5 cm, width of the solar panel b = 9.8 cm, mass of the solar panel m = 50 g, length of the shaft L = 13 cm. The moment of inertia of a rectangular body is

I = m(a² + b²)/12   (1)

which gives I_rectangular = 1.05 × 10⁻⁴ kg·m². The angular acceleration α follows from the change in angular speed ω:

α = Δω/t   (2)

where Δω = ω₁ − ω₀, with ω₀ the initial and ω₁ the final angular velocity. For a design speed of 500 rpm (52.35 rad/s) and a time t = 10 s, set from a 10 kΩ potentiometer value (i.e. 1000 corresponds to 1 s), the angular acceleration is α = 5.235 rad/s². Hence for the panel

T = I × α = 5.497 × 10⁻⁴ Nm

For the shaft, I = mL²/12, where L is the length of the shaft and m its mass, giving I_shaft = 1.4 × 10⁻⁵ kg·m² and

T = I × α = 7.37 × 10⁻⁵ Nm

Hence the total torque required is 6.23 × 10⁻⁴ Nm.
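The arithmetic above can be checked with a short script; this is a direct transcription of equations (1) and (2). The shaft mass is not stated in the text, so the value below is an assumption chosen to reproduce the quoted I_shaft.

```python
import math

# Panel and shaft data from the text.
a, b = 0.125, 0.098   # panel length and width in metres
m_panel = 0.050       # panel mass in kg
L = 0.13              # shaft length in m
m_shaft = 0.010       # shaft mass in kg (assumed; ~10 g reproduces I_shaft = 1.4e-5)

# Equation (1): moment of inertia of a rectangular body.
I_panel = m_panel * (a**2 + b**2) / 12

# Equation (2): angular acceleration for 500 rpm reached in 10 s.
omega = 500 * 2 * math.pi / 60  # 500 rpm in rad/s (~52.36)
alpha = omega / 10.0            # rad/s^2 (~5.235)

I_shaft = m_shaft * L**2 / 12
T_total = (I_panel + I_shaft) * alpha
print(f"I_panel = {I_panel:.3e} kg.m^2, I_shaft = {I_shaft:.3e} kg.m^2")
print(f"T_total = {T_total:.3e} N.m")  # approx. 6.2e-4 N.m, as in the text
```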
3.2. Operation Strategy
Once the system is set to its initial position, it tends to seek an optimal position, sending signals to the operating motors to correct the orientation and keep tracking the sun.

3.3. System Operation
Two sets of photo resistors are used, each consisting of two LDRs; the light dependent resistors work as input devices for the Arduino. In the prototype, two of the solar plates have the tracking mechanism, which allows them to follow sunlight by measuring the light intensity. The basic behaviour of a photo sensor is to decrease its resistance as the light intensity increases. Following this law, the microprocessor compares the data from both LDRs and signals the respective servo to move, as shown in Fig. 3 and Fig. 4. The servo then moves, maintaining a 1° angle of tolerance, towards the LDR receiving the greater light intensity. While the photo sensors perform the tracking, the water sensor detects rain drops and sends a signal to the microcontroller; the microcontroller then commands the servo motor installed on the top of the tree, which rotates anticlockwise with the canopy to protect the plates from getting wet. The motherboard controlling the whole system is powered from the same battery that is charged by the solar panels.

Fig. 3. Block diagram for the control system
3.4. Flow Chart

Fig. 4. Software flow diagram

3.5. Sensor Calculation
To improve the sensitivity of the system, the LDR values must be compared against a small initial tolerance value, so a small initial tolerance is assigned. Each tracking set consists of two LDRs:
Variable 1 = 1st LDR value
Variable 2 = 2nd LDR value
In the first case, when the sun is at almost zero deviation (θ) between α and −α (Fig. 2), the difference between the two LDR values should be within the tolerance:
Variable 1 − Variable 2 <= tolerance
Variable 2 − Variable 1 <= tolerance
When the sun deviates from 0 degrees towards α, there is a difference between the sensor values:
Variable 1 < Variable 2
When the scenario is the opposite, i.e. the sun moves from 0 degrees backwards towards −α:
Variable 1 > Variable 2
It is important to consider the maximum angle of tolerance at which the sun moves away from the sensors: the smaller the angle, the higher the sensitivity. Following this observation, a 1-degree movement concept is used for the trackers; this small angle of deviation makes the system more sensitive.
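The comparison logic above can be summarised in a few lines of Python. This is an illustrative sketch of the decision rule, not the authors' firmware, and the tolerance value is an assumption.

```python
def tracking_step(ldr_east: int, ldr_west: int, tolerance: int = 5) -> int:
    """Decide the 1-degree servo correction from a pair of LDR readings.

    Returns +1 to step towards the brighter 'east' sensor, -1 towards the
    'west' sensor, or 0 to hold position when the readings are balanced.
    A higher reading is assumed to mean more light on that sensor.
    """
    if abs(ldr_east - ldr_west) <= tolerance:
        return 0  # sun centred within tolerance: hold position
    return 1 if ldr_east > ldr_west else -1

# Example: servo angle updated in 1-degree steps over three samples.
angle = 0
for east, west in [(510, 500), (560, 500), (480, 530)]:
    angle += tracking_step(east, west)
print(angle)  # net correction after the three samples
```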
3.6. Observed Voltage
The experiment conducted on the solar panels used in the solar tree showed various voltages in different weather conditions, before adding the voltage booster. The data are as follows.
Tab. 1. Observed voltages on a normal day
No of sample | Time | Voltage [V]
1 | 5am–7am | approx. 2.1
2 | 7am–9am | approx. 3.5
3 | 9am–11am | approx. 3.7
4 | 11am–1pm | approx. 4.5
5 | 1pm–3pm | approx. 4.3
6 | 3pm–5pm | approx. 3.6

Fig. 5. Voltage response on a normal day

Tab. 2. Observed voltages on a cloudy day
No of sample | Time | Voltage [V]
1 | 5am–7am | approx. 1.3
2 | 7am–9am | approx. 2.5
3 | 9am–11am | approx. 3.2
4 | 11am–1pm | approx. 3.47
5 | 1pm–3pm | approx. 3.1
6 | 3pm–5pm | approx. 2.2

Fig. 6. Voltage response on a cloudy day

3.7. Result and Analysis
Sunlight consists of two components: the direct beam, which carries about 90% of the solar energy, and the diffuse beam, which carries the rest. The diffuse part appears as the blue sky on a clear day and increases proportionally on cloudy days. To collect the maximum amount of energy, the panel must therefore face the sun directly for as much of the time as possible; otherwise the cosine of the angle between the direct beam and the panel normal reduces the efficiency. For the experiment, three solar plates with a maximum 21-watt rating have been used. The results of the configuration with two single-axis rotating panels and one fixed panel were compared with the configuration in which all three panels are fixed. The measured power values show that the rotating panels provide more energy than the fixed mount: efficiency is increased by around 15% with the one-degree-tolerance movable panels compared to the fixed ones.

Tab. 3. Analyzed data
Time | Power obtained when all panels are fixed (W) | Power obtained when panels have single axis movement (W)
8 am | 3 | 3.45
9 am | 4.5 | 4.8
10 am | 5.7 | 8.3
11 am | 6.88 | 8.1
12 pm | 10 | 12.0
13 pm | 15.55 | 18.9
14 pm | 8.7 | 10.3
15 pm | 9.1 | 10.4
16 pm | 5.0 | 7.33

Fig. 7. Power response for fixed panels versus the panels having the tracking system
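The daily gain over the tabulated window can be estimated by integrating the two power columns of Tab. 3 (as reconstructed above) with the trapezoidal rule; the following sketch is an illustrative calculation on the tabulated samples only.

```python
# Trapezoidal integration of the Tab. 3 power curves (8 am to 4 pm, hourly).
hours = list(range(8, 17))
fixed    = [3, 4.5, 5.7, 6.88, 10, 15.55, 8.7, 9.1, 5.0]
tracking = [3.45, 4.8, 8.3, 8.1, 12.0, 18.9, 10.3, 10.4, 7.33]

def energy_wh(power):
    """Trapezoidal rule with 1-hour spacing -> watt-hours."""
    return sum((p0 + p1) / 2 for p0, p1 in zip(power, power[1:]))

e_fixed, e_track = energy_wh(fixed), energy_wh(tracking)
gain = 100 * (e_track - e_fixed) / e_fixed
print(f"fixed: {e_fixed:.1f} Wh, tracking: {e_track:.1f} Wh, gain: {gain:.0f}%")
```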
The fluctuation between the fixed and movable solar panels has been investigated on the MATLAB platform. Figure 10 shows the variation of generated power in watts at various times. For the fixed panels, the lowest power is 3 W, which gradually increases up to 15.55 W and then decreases continuously as the sunlight intensity falls. For the tracking system, the curve is more favourable: its minimum power is 3.45 W, rising to 18.9 W and then falling with the light intensity.
4. Conclusion
We have proposed a more efficient, eco-powered solar tree with a sun tracking system based on a very small angle of tolerance, in order to store more energy. It can be stated that the rotating solar plates absorbed more power than the fixed panels. In addition to the efficiency analysis of the solar system, a calculation of the maximum torque needed for any such tracking system has also been presented.
AUTHORS
Bishal Karmakar* – Department of Electrical and Electronic Engineering, Varendra University, Bangladesh, e-mail: bishal@vu.edu.bd, bishal.karmakar95@gmail.com. Rammohan Mallipeddi – School of Electronics Engineering, Kyungpook National University, South Korea. Md. Nasid Kamal Protiq – Department of Electrical and Electronic Engineering, Eastern University, Bangladesh.
*Corresponding author

References
[1] H. Bukhamsin, A. Edge, R. Guiel, D. Verne, "Solar Tracking System. Project Background and Needs". College of the Environment, Forestry, and Natural Sciences, Northern Arizona University, www.cefns.nau.edu/capstone/projects/ME/2014/SolarTracking-B/Downloads/Team_18%20Project%20Formulation%20Report.pdf. Accessed on: 2020-03-03.
[2] N. Ijumba, C. Wekesah, "Application potential of solar and mini-hydro energy sources in rural electrification". In: Proceedings of IEEE AFRICON '96, vol. 2, 1996, 720–723 DOI: 10.1109/AFRCON.1996.562978.
[3] C. J. Nwanyanwu, M. O. Dioha, O. S. Sholanke, "Design, Construction and Test of a Solar Tracking System Using Photo Sensor", International Journal of Engineering Research and Technology, vol. 6, no. 03, 2017 DOI: 10.17577/IJERTV6IS030212.
[4] J. Taneja, J. Jeong, D. Culler, "Design, Modeling, and Capacity Planning for Micro-solar Power Sensor Networks". In: 2008 International Conference on Information Processing in Sensor Networks (IPSN 2008), 2008, 407–418 DOI: 10.1109/IPSN.2008.67.
[5] A. Kavaz et al., "Solar Tree Project". www.researchgate.net/publication/322628682_Solar_Tree_Project. Accessed on: 2020-03-03.
[6] D. S. Torgal, "Concept of Solar Power Tree", International Advanced Research Journal in Science, Engineering and Technology, vol. 3, no. 4, 2016 DOI: 10.17148/iarjset.2016.3440.
[7] C. Bhuvaneswari, R. Rajeswari, C. Kalaiarasan, K. M. S. Muthukumararajaguru, "Idea to Design a Solar Tree Using Nanowire Solar Cells", International Journal of Scientific and Research Publications, vol. 3, no. 12, 2013.
[8] S. N. Maity, "Development of Solar Power Tree – An Innovation that uses up Very Less Land and Yet Generates Much More Energy from the Sun Rays by SPV Method", Journal of Environmental Nanotechnology, vol. 2, 2013, 59–69 DOI: 10.13074/jent.2013.02.nciset311.
[9] J. Taneja, J. Jeong, D. Culler, "Design, Modeling, and Capacity Planning for Micro-solar Power Sensor Networks". In: 2008 International Conference on Information Processing in Sensor Networks (IPSN 2008), 2008, 407–418 DOI: 10.1109/IPSN.2008.67.
[10] P. Sharma, N. Malhotra, "Solar tracking system using microcontroller". In: 2014 1st International Conference on Non Conventional Energy (ICONCE 2014), 2014, 77–79 DOI: 10.1109/ICONCE.2014.6808687.
Development of Anthropomorphic Dual Arm Robot with Distinct Degrees of Freedom for Coordinated Operations
Submitted: 2nd July 2019; accepted: 20th December 2019
Abhaykrishna K M, Nidhi Homey Parayil, Ibin Menon Reghunandan, Sudheer A P
DOI: 10.14313/JAMRIS/4-2019/38
Abstract: The development of assistive robots for helping the disabled is a field of research that has gathered attention recently. According to surveys, about one billion people in the world population have some kind of disability. Dual arm robots are a suitable solution for helping people with mobility impairments. Current development in the field of dual-arm robots is focused mainly on industry, carrying out cyclic tasks such as pick and place, assembling parts and other industrial operations. Unlike human arms, these dual arm robots lack the versatility to do a wide variety of tasks with adequate coordination between the arms. Due to these constraints, industrial dual arm robots cannot be directly employed for assisting the disabled. This paper focuses on designing a compact dual arm robot which closely mimics human arms to do coordinated tasks with fewer Degrees of Freedom (DoF). The developed robot therefore extends its capabilities from industrial applications to daily life activities. Closed loop control is used in actuating the proposed 9 DoF dual arm robot with distinct DoF. The target position is acquired using image processing. Hand-to-hand coordination in various operations such as pick and place, transferring objects and serving food has been successfully demonstrated.
Keywords: Dual arm robot, Anthropomorphic ratio, Distinct Degrees of Freedom, Coordination, Object detection
1. Introduction
In recent years, there has been growing interest in the field of robot based coordination. It has huge potential applicability in fields like industry, medical services and rehabilitation. As humans have a complex brain structure with a well-developed neural network architecture, their ability to control and coordinate arm motions is highly accurate and advanced. Dual robotic arms can be used to imitate human arm motions for such coordinated tasks in various manufacturing and service applications. Dual arm robots are basically a combination of two robotic arms attached to the same base that can replicate coordinated tasks.
Dual arm robots can achieve a wide variety of tasks similar to human hands. They can do multiple tasks and consume less space compared to two independent manipulators [1]. Dual arm robots can approach a target from various angles and require less cycle time compared to single arm robots. Current practice is to increase the Degrees of Freedom (DoF) of the manipulator robot to achieve complex tasks, but as the number of DoF increases, robot control becomes difficult due to redundancy; the energy requirement and cost also increase, as the number of actuators is high. Tarek et al. [1] proposed the design and control of a 7 DoF dual arm robot and discussed the importance of the arms for the stability of the robot. The design comprised a rotating torso and two 3 DoF arms, aiming to achieve the best performance in terms of dynamic balance, but the feasibility of coordinated tasks was not taken into account, and coordination between the two arms was not achieved. Banerji et al. [2] proposed a systematic approach to judge the feasibility of wrist joint ranges and to find reasonable target trajectories or orientations. Though cooperative tasks were achieved, the project used two separate individual manipulators, which require larger space compared to a dual arm robot. In the study by Tsarouchi et al. [3], the benefits of using a dual arm robot in an assembly line were analysed, concluding that the workstation setup increased the workspace and simplified the programming; most of the dual arm robots analysed are bulky and designed for specific industrial applications. Huang et al. [4] developed a collision model of a dual arm robot used for rehabilitation of hemiplegic patients; safety was guaranteed by including passive compliance, but performance was compromised. An innovative design of an 8 DoF dual arm robot with a tilted shoulder is presented by Lee et al. [5]; though the proposed design increased the manipulability of the robot, redundancy issues prevail. The dual arm robot developed by Park et al. [6] is designed for assembly operations in industries; the robot is capable of multiple coordinated tasks in industrial applications, but as the design is bulky its application to the household is limited. YuMi, developed by ABB, was the world's first truly collaborative dual-arm robot. It is equipped with dual arms, flexible hands, a universal parts feeding system, camera-based part location, lead-through programming and precise motion control [7].
The majority of the models proposed in the literature have redundancies due to their increased degrees of freedom. Moreover, the cost of the robots used and of those fabricated for studies is considerably high. Most dual-arm robots are task specific, designed for industrial applications, and their size is considerable. Their design does not mimic the human arm, so the extent of their application to household activities is limited. Furthermore, coordination has not been given enough importance in the case of industrial dual-arm robots. A dual arm robot with a human-like, anthropomorphic design provides more flexibility to mimic household tasks such as feeding, opening a bottle and picking objects, which in turn can be used to replace caretakers in helping the disabled. This paper aims at designing and developing an anthropomorphic dual-arm robot that is compact, low cost and capable of multitasking with minimum degrees of freedom. Furthermore, the implementation of coordination between the arms is tested for applicability in assisting disabled people.
2. Modelling of Dual Arm Robot
This section deals with the conceptual design of the dual arm robot, followed by kinematic modelling using anthropomorphic link dimensions, static structural analysis, dynamic modelling and trajectory planning. The conceptual design of the dual arm robot is shown in Fig. 1.

Fig. 1. Conceptual design of dual arm robot

2.1. Kinematic Modelling
In order to obtain human-like characteristic motion, all DoF are chosen to be revolute. The robot developed has four DoF on the left arm and five DoF on the right arm. This configuration with fewer DoF leads to lower energy consumption, easier control and low cost. Joint DoF are assigned as shown in Table 1. The dual arm robot is designed as two independent serial link manipulators fixed to a T-shaped base. Novel changes at the shoulder and elbow joints are implemented to make the robot sufficiently dextrous in the common workspace to orient both arms for coordinated tasks. In order to increase the workspace in the upper area of the transverse plane, the shoulder support is tilted 45 degrees upwards about the coronal plane, as shown in Fig. 2.

Tab. 1. Joint DoF of dual arm robot
Joint | Right hand: No of DoF | Right hand: Type of motion | Left hand: No of DoF | Left hand: Type of motion
Shoulder | 2 | Pitch & Yaw | 2 | Pitch & Yaw
Elbow | 1 | Pitch | 1 | Pitch
Wrist | 2 | Roll & Pitch | 1 | Roll

Fig. 2. Joint tilts at shoulder

Tab. 2. Dimensions of dual arm robot
Robot links | Length in mm
Shoulder | 314.6
Upper arm | 226.8
Forearm | 178
Wrist | 131.7

The elbow joint axis is inclined 45 degrees inwards with respect to the shoulder yaw axis, as shown in Fig. 3, so as to maximize the common workspace between the arms.

Fig. 3. Joint tilts at the elbow

The dimensions of the links of the dual arm robot are calculated using anthropomorphic ratios of the human body [8] for a reference human height of 1220 mm and are shown in Table 2. The Denavit-Hartenberg (D-H) convention is used for kinematic modelling [9]. Coordinate frames are assigned to each joint as shown in Fig. 4.
Fig. 4. Frame assignment of joints in the dual arm robot

D-H parameters for the right and left arms are formulated and shown in Table 3 and Table 4 respectively. Forward kinematics is used to find the position and orientation of the end-effector with respect to a reference frame when the joint variables are known. The pose of the ith frame in terms of D-H parameters is obtained from the standard homogeneous transformation matrix, and the final transformation matrix of each arm is found by multiplying the individual transformation matrices of the joints of that arm.

Tab. 3. DH parameters of the right arm
Link | θ (deg) | D (m) | a (m) | α (deg)
1 (Base) | 0 | −0.1573 | 0 | 0
2 | θ1 | 0 | 0 | −90
3 | θ2 | 0.2268 | 0 | 90
4 | θ3 + 90 | 0 | 0.178 | 45
5 | θ4 | 0 | 0 | 90
6 | θ5 − 90 | 0.1317 | 0 | 0

Tab. 4. DH parameters of the left arm
Link | θ (deg) | D (m) | a (m) | α (deg)
1 (Base) | 0 | 0.1573 | 0 | 0
2 | θ1 | 0 | 0 | 90
3 | θ2 | 0.2268 | 0 | −90
4 | θ3 − 90 | 0 | 0.3097 | −45
5 | θ4 | 0 | 0 | 0

The joint variables are obtained from the given pose of the end-effector using inverse kinematics: the final transformation matrices are equated to the desired end-effector pose. The resulting non-linear equations are solved iteratively in MATLAB using lsqnonlin, a nonlinear least-squares solver. lsqnonlin uses the iterative trust-region reflective algorithm [10] for optimizing a function f(x) within user-defined upper and lower bounds on x. In the trust-region method, the function f is approximated by a simpler function q that imitates the behaviour of f in a neighbourhood N around x, called the trust region. The trust-region subproblem is to compute a trial step s by minimizing q over the trust region N. Once the trial step s is computed, if f(x + s) is less than f(x), then x is updated to x + s and the steps are repeated; otherwise x remains unchanged and the size of the trust region is reduced. The flow chart of this algorithm is shown in Fig. 5.

Fig. 5. Trust region method
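For readers who want to reproduce the inverse kinematics step outside MATLAB, the sketch below mirrors the described lsqnonlin workflow with SciPy's least_squares, whose method='trf' is the same trust-region reflective algorithm. The two-link planar chain used as the forward kinematics is a placeholder standing in for the full D-H chain of Tables 3 and 4.

```python
import numpy as np
from scipy.optimize import least_squares

# Placeholder planar 2-link forward kinematics (illustrative, not the
# robot's actual D-H chain); link lengths borrowed from Tab. 2 / Tab. 4.
L1, L2 = 0.2268, 0.3097

def fk(q):
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def ik(target, q0=(0.1, 0.1)):
    """Solve fk(q) = target in least-squares sense, as lsqnonlin does."""
    res = least_squares(lambda q: fk(q) - target, q0,
                        bounds=([-np.pi, -np.pi], [np.pi, np.pi]),
                        method='trf')  # trust-region reflective
    return res.x

q = ik(np.array([0.35, 0.25]))
print(np.round(q, 3), np.round(fk(q), 3))
```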
2.1.1. Novel Gripper Design
A gripper inspired by a windshield wiper is designed, capable of holding objects of various shapes. The gripping surface is made wider to increase the area of contact, and rubber padding on the gripping links ensures better grasping. The CAD model of the gripper is shown in Fig. 6.
Fig. 6. CAD Model of Gripper
A CAD model is made from the anthropomorphic dimensions and other design constraints using SolidWorks. CAD model for the dual arm robot is shown in Fig. 7.
Fig. 7. CAD Model of dual arm robot

The base is made up of rigid square bars. A skeleton-like structure is designed for the upper arm. Bearing housings are specifically designed for the shoulders to accommodate multiple DoF, and the upper arms are tapered to make them more human-like.

2.2. Static Structural Analysis
Static analysis is used to optimize the design by determining the stresses and deformations developed in the robot links due to static forces. In this section, static analysis is carried out in ANSYS R18.2 to find the Von Mises stresses [11] for a given payload at the end effector. A payload of 0.5 kg is applied at the gripper of the robot arm and the weights of the motors are applied at the respective joints during the analysis. After assigning the material properties of aluminium and mild steel to the CAD model, the part dimensions are optimised. The yield strength of aluminium is 276 MPa and that of mild steel is 250 MPa. The static structural analysis shows that the induced stresses in all parts of the robot are much lower than the yield stresses of the corresponding materials: the maximum stress among the aluminium parts is 19 MPa and among the mild steel parts 10.8 MPa. This ensures a factor of safety above 10, with a maximum deflection of 0.18 mm at the forearm, which is within the acceptable range. Hence all subsystems are safe in design. Fig. 8 shows the static analysis results for each part.

2.3. Dynamic Modelling
In dynamic analysis, the joint torque equations are formulated in terms of joint angles, joint velocities and accelerations. In this section, the joint torque at the shoulder is calculated for the horizontal arm configuration, where the required torque is maximum. The Lagrangian-Euler (L-E) method is used to find the joint torque equations; the Lagrangian is defined as the difference of the total kinetic and potential energy of the robot system. The generalised torque τi at joint i is calculated from equation (1), with the parameters obtained for each link from equations (2)-(7) [9]. The terms qi, q̇i, q̈i are the displacement, velocity and acceleration of joint i respectively, Mij represents the inertia, hijk the velocity-induced reaction torque at joint i, and Gi the gravity loading force at joint i.

τi = Σ(j=1..n) Mij(q)·q̈j + Σ(j=1..n) Σ(k=1..n) hijk·q̇j·q̇k + Gi,  for i = 1, 2, ..., n   (1)
For a revolute joint j, the constant matrix Qj used in the derivatives below is

Qj = [ 0 −1 0 0
       1  0 0 0
       0  0 0 0
       0  0 0 0 ]   (2)

dij = ∂(0Ti)/∂qj = { 0T(j−1)·Qj·(j−1)Ti   for j ≤ i
                   { 0                    for j > i   (3)

Mij = Σ(p = max(i,j)..n) Tr[ dpj·Ip·dpi^T ]   (4)

hijk = Σ(p = max(i,j,k)..n) Tr[ (∂dpk/∂qj)·Ip·dpi^T ]   (5)

Gi = −Σ(p = i..n) mp·g·dpi·(p)r̄p   (6)

where Ip is the inertia matrix of link p and (p)r̄p the position of the centre of mass of link p in its own frame, and

∂dij/∂qk = { 0T(j−1)·Qj·(j−1)T(k−1)·Qk·(k−1)Ti   for i ≥ k ≥ j
           { 0T(k−1)·Qk·(k−1)T(j−1)·Qj·(j−1)Ti   for i ≥ j ≥ k
           { 0                                   for i < j or i < k   (7)
2.4. Trajectory Planning
Trajectory planning describes the motion of the robot in the 3-dimensional workspace in terms of position, velocity and acceleration [12]. In order to obtain a smooth motion, a triangular velocity profile is used, which gives a parabolic blend curve for the angle variation with respect to time. The desired variation of joint angle, velocity and acceleration is shown in Fig. 9.
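A triangular velocity profile of this kind is easy to generate. The sketch below produces the joint angle, velocity and acceleration of a move from 0 to a target angle in a given time, accelerating for the first half and decelerating for the second; the values are illustrative.

```python
def triangular_profile(theta_f, T, n=9):
    """Triangular velocity profile: constant +a then -a, zero end velocity.

    Peak velocity 2*theta_f/T occurs at t = T/2; the angle follows two
    parabolic blends, as described in the text.
    """
    a = 4.0 * theta_f / T**2  # acceleration magnitude
    samples = []
    for i in range(n + 1):
        t = T * i / n
        if t <= T / 2:
            th, v, acc = 0.5 * a * t**2, a * t, a
        else:
            td = T - t
            th, v, acc = theta_f - 0.5 * a * td**2, a * td, -a
        samples.append((round(t, 2), round(th, 3), round(v, 3), acc))
    return samples

for row in triangular_profile(theta_f=1.57, T=2.0):  # 90 deg in 2 s
    print(row)
```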
Fig. 8. Static analysis results in ANSYS: (a) deformation of roll joint coupling; (b) equivalent stress of roll joint coupling; (c) deformation of roll joint housing; (d) equivalent stress of roll joint housing; (e) deformation of elbow plate; (f) equivalent stress of elbow plate; (g) deformation of elbow housing; (h) equivalent stress of elbow housing; (i) deformation of forearm; (j) equivalent stress of forearm; (k) deformation of shoulder clamp; (l) equivalent stress of shoulder clamp
Fig. 9. Desired trajectory: (a) joint angle variation with time; (b) velocity variation with time; (c) acceleration with respect to time

3. Control Strategy of Robot
The dual arm robot is controlled from a computer, and serial communication is implemented on the actuators, which are connected in a daisy chain setup. The position of the target is obtained with the help of image processing. Inverse kinematics is used to obtain the joint angles for the end effector to reach the target pose. The control architecture is shown in Fig. 10.

Fig. 10. Control architecture of dual arm robot

3.1. Target Acquisition
The target object is placed on a work plane of fixed height having a reference point of known pose. The relative position of the target object with respect to the reference point is obtained using image processing.

Fig. 11. Target position determination

In the work plane depicted in Fig. 11, sticker A is pasted at the reference point of a known pose in the robot frame. Sticker B is pasted at a known distance from sticker A, and sticker C is pasted on the target object. With the help of the sticker colour, the coordinates of the stickers in pixel dimensions are obtained using image processing. The scale factor f of the work plane is obtained from equation (8), where LR is the known real-world distance between sticker A and sticker B:

f = LR / √((xa − xb)² + (ya − yb)²)   (8)

The relative position of the target object in the x and y directions with respect to the reference point, in real-world dimensions, is obtained from equations (9) and (10):

tx = f × (xc − xa)   (9)
ty = f × (yc − ya)   (10)

The procedure for the detection of the stickers is explained in section 5.4.
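Equations (8)-(10) translate directly into code. The sketch below computes the scale factor and the target offsets from the three sticker centroids; the pixel coordinates are made-up example values.

```python
import math

def scale_factor(a, b, L_R):
    """Equation (8): metres per pixel from two stickers L_R metres apart."""
    return L_R / math.hypot(a[0] - b[0], a[1] - b[1])

def target_offset(a, c, f):
    """Equations (9)-(10): real-world offset of target sticker C from
    reference sticker A."""
    return f * (c[0] - a[0]), f * (c[1] - a[1])

# Example pixel centroids for stickers A, B, C (illustrative values).
A, B, C = (120, 340), (420, 338), (305, 210)
f = scale_factor(A, B, L_R=0.30)  # A and B assumed 30 cm apart
tx, ty = target_offset(A, C, f)
print(f"f = {f:.5f} m/px, target at ({tx:.3f}, {ty:.3f}) m from A")
```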
3.2. Actuator Control
Closed loop feedback control is used for controlling the servo motors of the dual arm robot joints (Fig. 12). An inbuilt PID controller is used for each joint actuator.

Fig. 12. Closed loop control

The hardware setup for actuator control is shown in Fig. 13. A computer is used as the main controller. Commands are communicated to the actuators using serial communication in a Python environment.
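The serial command path can be sketched with the Dynamixel SDK's Python bindings. This is an illustrative fragment, assuming Protocol 2.0 firmware and the MX-64(2.0) register map (Goal Position at address 116, Present Position at 132); both the addresses and the port settings are assumptions that should be checked against the actual servo configuration.

```python
from dynamixel_sdk import PortHandler, PacketHandler

ADDR_GOAL_POSITION = 116     # MX-64 with Protocol 2.0 firmware (assumed)
ADDR_PRESENT_POSITION = 132  # (assumed)

port = PortHandler('/dev/ttyUSB0')  # daisy-chained bus on one serial port
packet = PacketHandler(2.0)
port.openPort()
port.setBaudRate(57600)

def move_joint(dxl_id: int, position: int) -> int:
    """Send a goal position and read back the encoder value."""
    packet.write4ByteTxRx(port, dxl_id, ADDR_GOAL_POSITION, position)
    present, _, _ = packet.read4ByteTxRx(port, dxl_id, ADDR_PRESENT_POSITION)
    return present

# Example: command joint 1 to mid-range and print its encoder reading.
print(move_joint(1, 2048))
port.closePort()
```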
Fig. 13. Hardware setup for Motor Control
4. Coordination of Dual Arm Robot
The dual arm robot has more dexterity and flexibility than a single manipulator arm for coordinated tasks [13]. Coordination is achieved by determining the relative pose between the two end effectors. Motion planning and control for dual-arm coordination are the major hurdles at this stage. The task of transferring objects from one arm to the other is selected for demonstrating coordination between the two arms. The position of the target object on the work plane is obtained by image processing. The first arm approaches the object using inverse kinematics, with the pose of the target object, represented by the transformation matrix 0Tt, equated to the desired end effector pose 0T11 as shown in equation (11):

0T11 = 0Tt   (11)

The configuration during the approach is shown in Fig. 14.

Fig. 14. Right arm of the dual arm robot approaching the target

4.1. Object Detection
Once the first arm grasps the object, it is oriented in front of the robot camera to detect the object and obtain its dimensional data for carrying out the coordinated tasks. Object detection is done using 2D image processing, as it requires less computational power and is less expensive. Deep-learning based object detection is used with a trained database from which the object is identified and a bounding box is created around it. The You Only Look Once (YOLO) approach is used for detecting the object [14], wherein different images are trained using bounding boxes and the detection is optimized and stored in a database. Once the object is detected and its dimensions are obtained in pixels from the bounding boxes, these dimensions are converted to real-world coordinate values using the scale factor from target acquisition.

4.2. Task Implementation
To find the target pose for the second arm, a task dependent transformation TR needs to be applied to the pose of the first arm end effector. TR is a function of the size of the bounding box (h) created from image detection, and the target pose for the second arm 0T5 is obtained using equation (12). The second arm then approaches the first arm to reach the target for manipulating the object.

0T5 = 0T11·TR   (12)

5. Results and Discussion
5.1. Workspace Plot
The workspace of the dual arm robot is plotted in MATLAB using the kinematic model developed. The Monte Carlo method is used to plot the workspace by sampling the angle ranges of the individual joints and plotting the corresponding end effector positions [15]. The plotted workspace of the dual arm robot is shown in Fig. 15.

Fig. 15. Workspace Plot in MATLAB
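The Monte Carlo workspace plot is straightforward to reproduce: sample each joint uniformly within its range and scatter the forward-kinematics results. The sketch below does this for the placeholder planar chain used earlier; the joint limits are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
N = 20_000
# Assumed joint ranges in radians for the illustrative 2-link chain.
q1 = rng.uniform(-np.pi / 2, np.pi / 2, N)
q2 = rng.uniform(0.0, 2.4, N)

L1, L2 = 0.2268, 0.3097
x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)

plt.scatter(x, y, s=0.2)                 # reachable end-effector points
plt.axis('equal')
plt.xlabel('x [m]'); plt.ylabel('y [m]')
plt.title('Monte Carlo workspace (illustrative)')
plt.show()
```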
5.2. Joint Torque Plots
Joint torques are determined using the L-E method as shown in Fig. 16. The maximum torque obtained at the shoulder joint is 4 Nm and at the elbow joint 1 Nm. The CAD model is imported and simulated in the MSC ADAMS multibody dynamics software to validate the results obtained from the L-E method. The results from the software simulation are shown in Fig. 17. The maximum torque obtained at the shoulder joint is 4.5 Nm and at the elbow joint 1.5 Nm. The peak torque values from the mathematical model agree with the simulation results, and the trends of variation of the elbow and shoulder joint torques are almost the same. Actuator selection is carried out by applying a factor of safety of 1.4 to the peak torque values. ROBOTIS Dynamixel MX-64T and MX-28T servo motors are selected for joint actuation by verifying the datasheets.

Fig. 16. MATLAB plot for joint torques by the L-E method: (a) torque at shoulder joint; (b) torque at elbow joint

Fig. 17. ADAMS simulation results: (a) torque at shoulder joint; (b) torque at elbow joint

5.3. Trajectory Generation
The proposed trajectory mentioned in section 2.4 is verified by plotting the velocity and position obtained from the encoder values of the motors for the arm roll motion. The plots are shown in Fig. 18.

Fig. 18. Obtained joint trajectory: (a) angle variation with respect to time; (b) velocity profile with respect to time; (c) acceleration profile

5.4. Target Detection
Target acquisition is tested using a nut as the target object placed on the work plane, using OpenCV [16,17]. A colour sticker is pasted on the nut and the position of the centre of the sticker is detected. The steps in image processing are shown in Fig. 19.
Fig. 19. Image processing algorithm: (a) original image acquired; (b) image converted to HSV scale and masked; (c) threshold image using Gaussian filter; (d) image after noise removal using median blur; (e) contour detected and the centre of the sticker obtained
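The five steps of Fig. 19 map onto a few OpenCV calls. The sketch below follows the same pipeline (HSV mask, Gaussian smoothing and threshold, median blur, contour centroid); the HSV bounds are placeholders for the sticker colour, not the values used by the authors.

```python
import cv2
import numpy as np

def sticker_centre(image_path: str):
    img = cv2.imread(image_path)                # (a) original image
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  # (b) HSV conversion
    lower, upper = (40, 60, 60), (80, 255, 255) # green-ish sticker (assumed)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    mask = cv2.GaussianBlur(mask, (5, 5), 0)    # (c) smooth, then threshold
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
    mask = cv2.medianBlur(mask, 5)              # (d) remove speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))  # (e) centroid
    return int(m['m10'] / m['m00']), int(m['m01'] / m['m00'])

print(sticker_centre('work_plane.jpg'))
```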
5.5. Joint Position Error
A trajectory for both arms to reach a point in the common workspace is selected for calculating joint position errors. Deviation from the desired joint angles is analysed using the encoder values obtained from the Dynamixel motors. The desired and actual trajectories of both end effectors are plotted using MATLAB as shown in Fig. 20. The maximum error in joint angles for the given trajectory is 0.0276 radians, recorded at the right elbow joint. The error is due to dimensional variations in the fabricated parts and the limited accuracy of the controller.

Fig. 20. Desired and actual trajectories of end effectors of dual arm robot

5.6. Experimentation
Experimentation is carried out on the fabricated system shown in Fig. 21. The designed gripper is 3D printed as shown in Fig. 22. The task tested is to detect the position of, and pick, the object placed on the table using one arm, identify the object as a bottle and obtain its dimensions (length and breadth of the bounding box). Then, using equations (11) and (12), the other arm approaches the first to transfer the object. A Force Sensing Resistor (FSR) is attached to the gripping surface to limit the contact force.

Fig. 21. Fabricated model of dual arm robot
First, the position of the target object is obtained in pixel dimensions by image processing using OpenCV, as explained in section 5.4. Two reference points at a known distance are placed on the work plane to obtain the scale factor. The pose of one reference point on the work table is found using lead-through programming. The position of the target object with respect to the base frame is then calculated using the scale factor of the work plane.

Fig. 22. 3D printed gripper

Fig. 23. Steps involved in the coordinated task: (a) detecting first reference point; (b) detecting second reference point; (c) detecting target object position; (d) obtaining the pose of first reference point by lead-through method; (e) grasping the target object; (f) detecting the object; (g) transferring the object to the other hand
Using inverse kinematics, the end effector is moved towards the target object for manipulation. The robot camera images for the steps involved in the coordinated task are shown in Fig. 23. The object is detected and the bounding box is obtained. The length and breadth dimensions thus obtained are used to achieve the coordinated task as mentioned in section 4.2. Other tasks tested on the dual arm robot include pouring liquid from a bottle to a cup and serving, as shown in Fig. 24. Fig. 25 shows the robot opening the cap of a bottle for a person using the roll joint of the right arm. Thus the robot can be programmed to do tasks that help disabled people in their daily activities.

Fig. 24. Configuration for pouring water from bottle to cup

Fig. 25. Configuration for opening the cap of a bottle using roll motion

6. Conclusion and Future Scope
An anthropomorphic dual-arm robot capable of performing coordinated tasks with minimum degrees of freedom has been developed. Closed loop control with PID is used for joint actuation. Target acquisition is carried out with the help of image processing. A gripper is designed for holding bodies of various shapes, and sensors are integrated with it to limit the force required to grasp an object. The basic coordinated task of transferring an object from one arm to another has been demonstrated with the help of object detection. The compact anthropomorphic design and reduced cost make the proposed dual arm robot suitable for aiding a disabled person in household activities. The dual arm robot can be attached to a mobile base to further extend its household application. Further improvement of coordination between the arms could be achieved in the developed model by integrating neural networks and machine learning in performing inverse kinematics. Electroencephalogram (EEG), speech recognition and eye movement detection can be incorporated to capture brain signals from humans to decide the required tasks. Target acquisition can be improved by extending the 2D work plane to 3D space. The dual arm robot can also be combined with Autonomous Underwater Vehicles (AUV), drones, space robots, etc. to operate in environments where direct human intervention is difficult.

Acknowledgements
This research work was supported financially by Research and Consultancy of National Institute of Technology Calicut, India (NITC/DEAN(R&C)/UGRESEARCH-STRENGTHENING/2018). This support is gratefully acknowledged.

AUTHORS
Abhaykrishna K M – Department of Mechanical Engineering, National Institute of Technology Calicut, India, e-mail: abhaykrishnakm@gmail.com. Nidhi Homey Parayil – Department of Mechanical Engineering, National Institute of Technology Calicut, India, e-mail: nidhihomey@gmail.com.
Ibin Menon Reghunandan – Department of Mechanical Engineering, National Institute of Technology Calicut, India, e-mail: ibinmenon57@gmail.com.
Sudheer A P* – Department of Mechanical Engineering, National Institute of Technology Calicut, India, e-mail: apsudheer@nitc.ac.in. * Corresponding author
References
[1] J. Tarek, Z. Chiheb, M. Aref, "Modeling and Simulation of a Dual Arm Robot During the Compensation of an External Force", Arabian Journal for Science and Engineering, vol. 43, no. 9, 2018, 4713–4725 DOI: 10.1007/s13369-018-3089-2. [2] A. Banerji, R. Banavar, D. Venkatesh, "A Non-Dexterous Dual Arm Robot's Feasible Orientations Along Desired Trajectories: Analysis & Synthesis". In: Proceedings of the 44th IEEE Conference on Decision and Control, 2005, 4391–4396 DOI: 10.1109/CDC.2005.1582853. [3] P. Tsarouchi, S. Makris, G. Michalos, M. Stefos, K. Fourtakas, K. Kaltsoukalas, D. Kontrovrakis, G. Chryssolouris, "Robotized Assembly Process Using Dual Arm Robot", Procedia CIRP, vol. 23, 2014, 47–52 DOI: 10.1016/j.procir.2014.10.078.
[4] Y. Huang, S. Li, S. Li, Y. Ke, “Design and Implementation of Dual-Arm Robot with Homogeneous Compliant Joint”. In: 2017 9th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), vol. 2, 2017, 265–270 DOI: 10.1109/IHMSC.2017.172. [5] D. Lee, H. Park, J. Park, M. Baeg, J. Bae, “Design of an anthropomorphic dual-arm robot with biologically inspired 8-DOF arms”, Intelligent Service Robotics, vol. 10, no. 2, 2017, 137–148 DOI: 10.1007/s11370-017-0215-z. [6] C. Park, K. Park, “Design and kinematics analysis of dual arm robot manipulator for precision assembly”. In: 2008 6th IEEE International Conference on Industrial Informatics, 2008, 430–435 DOI: 10.1109/INDIN.2008.4618138. [7] “ABB’s Collaborative Robot – YuMi – Industrial Robots From ABB Robotics”. https://new.abb. com/products/robotics/industrial-robots/irb14000-yumi. Accessed on: 2020-03-10. [8] D. A. Winter, Biomechanics and Motor Control of Human Movement, John Wiley & Sons, Inc., 2009. [9] R. K. Mittal, I. J. Nagrath, Robotics and Control, Tata Mc Graw-Hill, 2003. [10] T. F. Coleman, Y. Li, “On the convergence of interior-reflective Newton methods for nonlinear minimization subject to bounds”, Mathematical Programming, vol. 67, no. 1, 1994, 189–224 DOI: 10.1007/BF01582221. [11] R. Budynas, K. Nisbett, Shigley’s Mechanical Engineering Design, McGraw-Hill Education, 2014. [12] J. J. Craig, Introduction to Robotics: Mechanics and Control, Pearson, 2004. [13] T. Liu, Y. Lei, L. Han, W. Xu, H. Zou, “Coordinated Resolved Motion Control of Dual-Arm Manipulators with Closed Chain:”, International Journal of Advanced Robotic Systems, vol. 13, no. 3, 2016 DOI: 10.5772/63430. [14] J. Redmon, S. Divvala, R. Girshick, A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection”. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, 779–788 DOI: 10.1109/CVPR.2016.91. [15] K. Shengzheng, H. Wu, L. Yao, D. Li, “Coordinated workspace analysis and trajectory planning of redundant dual-arm robot”. In: 2016 13th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), 2016, 178–183 DOI: 10.1109/URAI.2016.7625731. [16] G. Bradski, A. Kaehler, Learning OpenCV. Computer Vision with the OpenCV Library, O’Reilly Media, Inc., 2008. [17] A. Sanin, C. Sanderson, B. C. Lovell, “Shadow detection: A survey and comparative evaluation of recent methods”, Pattern Recognition, vol. 45, no. 4, 2012, 1684–1695 DOI: 10.1016/j.patcog.2011.10.001.
RT-LAB Platform for Real-Time Implementation of Luenberger Observer Based Speed Sensorless Control of Induction Motor
Submitted: 14th December 2018; accepted: 20th December 2019
Mansour Bechar, Abdeldjebar Hazzab, Mohamed Habbab, Pierre Sicard, Mohammed Slimi
DOI: 10.14313/JAMRIS/4-2019/39
Abstract: This paper proposes the RT-LAB platform for real-time implementation of a Luenberger observer based speed sensorless scalar control of an induction motor. The observed shaft speed is derived from Lyapunov's theory. An extensive study shows that this Luenberger observer with a PI anti-windup speed controller is completely satisfactory at nominal, variable and reverse speed references, and is also robust to load torque disturbance. The sensorless control algorithm along with the proposed Luenberger observer is modeled, built on the host PC and successfully implemented in real time using the OP5600 digital simulator. The experimental results observed on the GW-Instek digital oscilloscope's screen validate the effectiveness of the proposed Luenberger observer for the speed sensorless scheme.
Keywords: Digital simulator (OP5600), induction motor, Luenberger observer, RT-LAB platform, speed sensorless control, PI controller, PI anti-windup controller
1. Introduction
The three-phase induction motor is one of the most widely used machines in the world due to its high reliability, relatively low cost and modest maintenance requirements [1]. The sensorless control of induction machines in variable speed operation is more complicated than that of DC machines: the main reasons are their more complex dynamics and the more complicated calculations they require. High performance sensorless control of AC machine drives requires speed information. Generally, mechanical sensors such as shaft encoders or resolvers provide this information [2]. In aggressive environments, the sensor might be the weakest part of the system and thus degrades the system's reliability. This has led to a great many speed sensorless methods, and many sensorless control strategies for induction motors have been proposed in the last few years for rotor speed observation. In general, these observers are divided into several main schemes: adaptive observers [3], sliding mode techniques [4], extended Kalman filters [5] and MRAS observers [6]. Fig. 1 shows a chart of the different speed sensorless estimation strategies.

Fig. 1. Speed sensorless estimation strategies

This paper implements a Luenberger observer for rotor speed estimation using the RT-LAB platform. The observer and scalar control models are first built offline using Matlab/Simulink blocksets and then implemented in a real-time environment using the RT-LAB software and an OP5600 digital simulator. The observer is designed to simultaneously estimate the rotor speed, the rotor fluxes and the stator currents, and the stability of the speed observer is ensured through Lyapunov theory. Experimental validation of the concept is also provided in this paper to demonstrate the effectiveness of the proposed sensorless control algorithm with a PI anti-windup speed controller at nominal, variable and reverse speed references with load disturbance application.
2. Induction Motor Model
The dynamic model of the induction motor in the two-phase stationary frame, taking the stator currents and the rotor fluxes as state variables, is described as follows [7-8]:

di_sα/dt = −(1/(σ·Ls))·(Rs + (Lm²/Lr²)·Rr)·i_sα + (Lm·Rr/(σ·Ls·Lr²))·φ_rα + (Lm/(σ·Ls·Lr))·wr·φ_rβ + (1/(σ·Ls))·V_sα
di_sβ/dt = −(1/(σ·Ls))·(Rs + (Lm²/Lr²)·Rr)·i_sβ − (Lm/(σ·Ls·Lr))·wr·φ_rα + (Lm·Rr/(σ·Ls·Lr²))·φ_rβ + (1/(σ·Ls))·V_sβ   (1)
dφ_rα/dt = (Lm·Rr/Lr)·i_sα − (Rr/Lr)·φ_rα − wr·φ_rβ
dφ_rβ/dt = (Lm·Rr/Lr)·i_sβ + wr·φ_rα − (Rr/Lr)·φ_rβ

where i_sα, i_sβ are the stator current components in the [α, β] stator coordinate system, V_sα, V_sβ are the stator voltage components, φ_rα, φ_rβ are the rotor magnetic flux components, Lm is the magnetizing inductance, Lr the rotor inductance, Ls the stator inductance, Rr the rotor resistance, Rs the stator resistance, wr the rotor angular speed, σ = 1 − Lm²/(Lr·Ls) the leakage coefficient, P the number of poles and Tr = Lr/Rr the rotor time constant.

The electromagnetic torque can be expressed as:

Te = (3·P·Lm/(2·Lr))·(φ_rα·i_sβ − φ_rβ·i_sα)   (2)
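As a quick illustration of the model above, the following Python sketch integrates equations (1) with a forward-Euler step and evaluates the torque of equation (2). The motor parameters are placeholders chosen only to make the example run, not the parameters of the machine used in the paper.

```python
import math

# Placeholder induction-motor parameters (illustrative only).
Rs, Rr = 10.0, 6.0             # stator / rotor resistance [ohm]
Ls, Lr, Lm = 0.45, 0.45, 0.40  # inductances [H]
P = 4                          # P as it appears in eq. (2) (placeholder)
sigma = 1 - Lm**2 / (Lr * Ls)

def f(x, vs_a, vs_b, wr):
    """Right-hand side of equations (1); x = [isa, isb, fra, frb]."""
    isa, isb, fra, frb = x
    k1 = 1 / (sigma * Ls)
    a = k1 * (Rs + (Lm**2 / Lr**2) * Rr)
    b = Lm * Rr / (sigma * Ls * Lr**2)
    c = Lm / (sigma * Ls * Lr)
    return [
        -a * isa + b * fra + c * wr * frb + k1 * vs_a,
        -a * isb - c * wr * fra + b * frb + k1 * vs_b,
        (Lm * Rr / Lr) * isa - (Rr / Lr) * fra - wr * frb,
        (Lm * Rr / Lr) * isb + wr * fra - (Rr / Lr) * frb,
    ]

def torque(x):
    """Electromagnetic torque, equation (2)."""
    isa, isb, fra, frb = x
    return 3 * P * Lm / (2 * Lr) * (fra * isb - frb * isa)

# Forward-Euler integration of a locked-rotor start at a 50 Hz supply.
x, dt, wr = [0.0, 0.0, 0.0, 0.0], 1e-4, 0.0
for n in range(5000):
    t = n * dt
    vs_a = 30 * math.cos(100 * math.pi * t)
    vs_b = 30 * math.sin(100 * math.pi * t)
    x = [xi + dt * dxi for xi, dxi in zip(x, f(x, vs_a, vs_b, wr))]
print("Te =", round(torque(x), 3), "N.m")
```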
3. Design of Luenberger Observer The Luenberger observer, which estimates states of the system (stator current and rotor flux components), is described by the following equations:
66
$$\dot{\hat{X}} = A\hat{X} + BU + L\,(Y - C\hat{X}), \qquad \hat{Y} = C\hat{X} \qquad (3)$$

where X̂ = [îsα îsβ φ̂rα φ̂rβ]ᵀ, U = [Vsα Vsβ]ᵀ, Y = [isα isβ]ᵀ and Ŷ = [îsα îsβ]ᵀ. The symbol ^ denotes the estimated values: X̂(t) is the vector of the states to be estimated and Ŷ is the vector of the estimated outputs. L is the observer gain matrix, which is selected so that the system will be stable. The estimation error of the system states (stator current and rotor flux components) may be obtained by:

$$\dot{e} = (A - LC)\,e + (\Delta A)\,\hat{X} \qquad (4)$$

in which ΔA can be simply expressed as:

$$\Delta A = A(w_r) - A(\hat{w}_r) = \begin{bmatrix} 0 & 0 & 0 & \frac{L_m}{\sigma L_s L_r}\,\Delta w_r \\ 0 & 0 & -\frac{L_m}{\sigma L_s L_r}\,\Delta w_r & 0 \\ 0 & 0 & 0 & -\Delta w_r \\ 0 & 0 & \Delta w_r & 0 \end{bmatrix} \qquad (5)$$

where Δwr = wr − ŵr is the rotor speed error and

$$e = X - \hat{X} = \begin{bmatrix} e_{i_{s\alpha}} & e_{i_{s\beta}} & e_{\varphi_{r\alpha}} & e_{\varphi_{r\beta}} \end{bmatrix}^T \qquad (6)$$

is the state estimation error. To obtain the adaptive speed estimation scheme [9], we consider the following Lyapunov function:

$$V = e^T e + \frac{(\Delta w_r)^2}{\lambda} \qquad (7)$$

where λ is a positive constant. Differentiating (7) with respect to time gives:

$$\frac{dV}{dt} = \frac{d(e^T)}{dt}\, e + e^T \frac{de}{dt} + \frac{1}{\lambda}\,\frac{d(\Delta w_r)^2}{dt} \qquad (8)$$

$$\frac{dV}{dt} = e^T \left\{ (A - LC)^T + (A - LC) \right\} e \;-\; 2\,\frac{L_m}{\sigma L_s L_r}\,\Delta w_r \left( e_{i_{s\alpha}} \hat{\varphi}_{r\beta} - e_{i_{s\beta}} \hat{\varphi}_{r\alpha} \right) \;+\; \frac{2}{\lambda}\,\Delta w_r\, \frac{d\hat{w}_r}{dt} \qquad (9)$$

The adaptation law for the estimation of the rotor speed can be deduced by equating the last two terms of (9):

$$\hat{w}_r = \lambda\, \frac{L_m}{\sigma L_s L_r} \int_0^t \left( e_{i_{s\alpha}} \hat{\varphi}_{r\beta} - e_{i_{s\beta}} \hat{\varphi}_{r\alpha} \right) dt \qquad (10)$$

In practice, the speed is estimated by a PI controller described as:

$$\hat{w}_r = K_P \left( e_{i_{s\alpha}} \hat{\varphi}_{r\beta} - e_{i_{s\beta}} \hat{\varphi}_{r\alpha} \right) + K_i \int \left( e_{i_{s\alpha}} \hat{\varphi}_{r\beta} - e_{i_{s\beta}} \hat{\varphi}_{r\alpha} \right) dt \qquad (11)$$

where KP and Ki are the proportional and integral terms of the adaptive scheme. The feedback gain matrix L is chosen to ensure fast dynamic performance of the closed-loop observer [10-11]:

$$L = \begin{bmatrix} L_1 & -L_2 \\ L_2 & L_1 \\ L_3 & -L_4 \\ L_4 & L_3 \end{bmatrix} \qquad (12)$$

with L1, L2, L3 and L4 given by:

$$\begin{aligned}
L_1 &= (k-1)\left(\frac{1}{\sigma T_s} + \frac{1}{\sigma T_r}\right) \\
L_2 &= -(k-1)\,\hat{w}_r \\
L_3 &= (k^2-1)\left[\frac{\sigma L_s L_m}{L_r}\left(\frac{1}{\sigma T_s} + \frac{1}{\sigma T_r}\right) - \frac{L_m}{T_r}\right] - (k-1)\,\frac{\sigma L_s L_m}{L_r}\left(\frac{1}{\sigma T_s} + \frac{1}{\sigma T_r}\right) \\
L_4 &= -(k-1)\,\frac{\sigma L_s L_m}{L_r}\,\hat{w}_r
\end{aligned} \qquad (13)$$

where k is a positive coefficient obtained by the pole placement approach [12] and Ts = Ls/Rs is the stator time constant. A block diagram illustrating the behavior of the observer is shown in Fig. 2.

Fig. 2. Block diagram of the proposed Luenberger observer
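A discrete-time flavour of the observer (3) with the PI adaptation law (11) is sketched below in C. It is a minimal illustration, not the authors' RT-LAB implementation: the correction term uses only the gain structure of (12), the integration is forward Euler, the motor parameter macros (RS, RR, LS, LR, LM) are reused from the simulation sketch in Section 2, and all other names (obs_state_t, obs_step, kp_w, ki_w) are ours.

/* Minimal sketch of the Luenberger observer (3) with speed adaptation (11).
   Assumes the motor parameter macros RS, RR, LS, LR, LM defined earlier. */
typedef struct {
    double isa, isb, fra, frb;   /* estimated states           */
    double wr_hat;               /* estimated rotor speed      */
    double integ;                /* integral term of (11)      */
} obs_state_t;

static void obs_step(obs_state_t *o,
                     double vsa, double vsb,       /* measured voltages */
                     double isa_m, double isb_m,   /* measured currents */
                     double L1, double L2, double L3, double L4,
                     double kp_w, double ki_w, double h)
{
    const double sigma = 1.0 - LM * LM / (LR * LS);
    const double a = (RS + LM * LM * RR / (LR * LR)) / (sigma * LS);
    const double b = LM * RR / (sigma * LS * LR * LR);
    const double c = LM / (sigma * LS * LR);
    const double d = 1.0 / (sigma * LS);

    double ea = isa_m - o->isa;      /* output errors Y - C*Xhat */
    double eb = isb_m - o->isb;

    /* Model (1) evaluated at the estimates, using wr_hat. */
    double disa = -a*o->isa + b*o->fra + c*o->wr_hat*o->frb + d*vsa;
    double disb = -a*o->isb - c*o->wr_hat*o->fra + b*o->frb + d*vsb;
    double dfra = (LM*RR/LR)*o->isa - (RR/LR)*o->fra - o->wr_hat*o->frb;
    double dfrb = (LM*RR/LR)*o->isb + o->wr_hat*o->fra - (RR/LR)*o->frb;

    /* Correction L(Y - C*Xhat) with the gain structure of (12). */
    o->isa += h * (disa + L1*ea - L2*eb);
    o->isb += h * (disb + L2*ea + L1*eb);
    o->fra += h * (dfra + L3*ea - L4*eb);
    o->frb += h * (dfrb + L4*ea + L3*eb);

    /* PI adaptation law (11): e_w = e_isa*phi_rb_hat - e_isb*phi_ra_hat. */
    double ew = ea * o->frb - eb * o->fra;
    o->integ += ki_w * ew * h;
    o->wr_hat = kp_w * ew + o->integ;
}

In a real implementation the gains L1..L4 would be recomputed from (13) whenever ŵr changes; they are passed as arguments here only to keep the sketch short.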
4. Sensorless Scalar Control Scheme
The scalar control (Volt/Hertz) is a simple technique used to control induction motor speed in both open and closed loop. However, its dynamic performance is limited, even in closed loop, particularly when operating in the low speed region [13]. The main concept is to keep the ratio between voltage and frequency constant so as to maintain the available torque constant. If the voltage does not have a proper relationship with the frequency, the machine can operate in the saturation or field weakening region [14]. From the relationship between the electromagnetic flux and the stator voltage, the stator voltage reference is deduced, expressed as:
$$V_s = \omega_s\,\varphi_s\,\sqrt{\frac{\left(\dfrac{R_s}{\omega_s L_s} - \omega_r T_r \sigma\right)^2 + \left(1 + \omega_r T_r \dfrac{R_s}{\omega_s L_s}\right)^2}{1 + (\omega_r T_r \sigma)^2}} \qquad (14)$$

Indeed, in practice we are usually satisfied with a simplified control law, corresponding to neglecting the ohmic drop (Rs = 0) in (14), which gives:

$$V_s = \omega_s\,\varphi_s \qquad (15)$$

The block diagram of the sensorless scalar control of the induction motor is shown in Fig. 3; the real-time implementations of the Luenberger observer and control algorithm are evaluated on the RT-LAB platform.

Fig. 3. Block diagram of sensorless scalar control of induction motor
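The control laws (14) and (15) map a flux and frequency reference to a stator voltage amplitude; note that (14) reduces exactly to (15) when Rs = 0. A small C helper illustrating both forms is sketched below, again reusing the motor parameter macros from the earlier sketch; the function names and the choice of arguments are our own and only serve the illustration.

#include <math.h>

/* Voltage reference from the full law (14); flux_s in Wb, ws, wr in rad/s.
   Assumes the macros RS, RR, LS, LR, LM from the simulation sketch. */
static double vs_ref_full(double ws, double wr, double flux_s)
{
    const double sigma = 1.0 - LM * LM / (LR * LS);
    const double tr = LR / RR;              /* rotor time constant    */
    double r = RS / (ws * LS);              /* normalized stator drop */
    double num = pow(r - wr * tr * sigma, 2.0)
               + pow(1.0 + wr * tr * r, 2.0);
    double den = 1.0 + pow(wr * tr * sigma, 2.0);
    return ws * flux_s * sqrt(num / den);
}

/* Simplified law (15), valid when the ohmic drop is neglected (Rs = 0). */
static double vs_ref_simple(double ws, double flux_s)
{
    return ws * flux_s;
}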
5. Real-Time Platform Using RT-LAB
The RT-LAB platform for validation of the proposed controller and observer is shown in Fig. 4. This platform was built using RT-LAB as the real-time software and contains the OP5600 OPAL-RT real-time simulator used as the core of the hardware prototype system. Currently, the CAOSEE Laboratory at Bechar University is equipped with one OP5600 OPAL-RT simulator and one Drivelab OPAL-RT board [15]. The scalar control algorithm and Luenberger observer are created in Matlab/Simulink software on the RT-LAB host computer with two subsystems, SM and SC; the SM subsystem is converted into 'C' source code using the Mathworks code generator Real-Time Workshop (RTW) and compiled using RT-LAB. This code is then uploaded into the OP5600 target via a TCP/IP network connection. The test bench shown in Fig. 4 is composed of: (1) a squirrel cage induction motor with the following characteristics: D-connected, four poles, 125 W, 4000 rpm, 30 V, 133 Hz, with a 1024-point integrated incremental encoder, (2) a DC generator, (3) a resistive load, (4) a Drivelab inverter board, (5) Hall-type current sensors and voltage sensors, (6) an autotransformer (0-450 V), (7) the OP5600 real-time digital simulator with a host computer equipped with RT-LAB software, (8) a digital oscilloscope.
Fig. 4. A hardware prototype system based on the RT-LAB platform

The OP5600 real-time digital simulator has two main sections: the upper section contains analog and digital I/O signal modules, while the bottom section contains one CPU with four cores (Intel Xeon QuadCore, 2.40 GHz) able to send and receive various digital and analog signals via a Xilinx Spartan-3 FPGA I/O board. The Simulink model is then executed via the RT-LAB software and made compatible for real-time control. The RT-LAB model contains two subsystems, as depicted in Fig. 5. The console subsystem (SC) contains user interface blocks such as scopes and controls; it allows the user to monitor the system completely while the simulation is running in real-time. On the other hand, all the computational elements are included in the master subsystem (SM); when executing the model in real-time, the subsystem that contains the calculation part does not require any change because it is running on the simulator.
Fig. 5. RT-LAB model for real-time control

The analog-in block is used to transfer external sensed voltage and current signals (Ia, Ib, Ic and Va, Vb, Vc) from the current and voltage sensors into the real-time control environment. Each analog-in and digital-in block can be used to access eight analog or digital signals. The sensed analog signals are applied to the analog-in card (slot 1, module B, subsection 1) as shown in Fig. 6. The digital signal sensed from the encoder is applied to the digital-in card (slot 2, module A, subsection 1). The digital-out block is required to transfer real-time simulation signals from the OP5600 simulator to the outside world. The signals exchanged between the OP5600 simulator and the Drivelab board are in the range of ±5 to ±20 V, which can be handled by the A/D and D/A converters with acceptable accuracy [16]. The digital-out card provides digital output signals with specific voltage conditioning. The galvanic isolation of the digital-out card outputs makes it ideal for environments where voltage isolation is required. A push-pull transistor isolated output can sink up to 50 mA continuous, with up to +30 V applied at pin 18 of the DB37 connectors; this DC voltage is defined according to user requirements.

Fig. 6. OP5600 rear view

Finally, the scalar control algorithm and Luenberger observer are executed on the OP5600 real-time digital simulator using one core of its quad-core CPU. The gating pulses available at the digital-out card (slot 2, module B, subsection 2) are applied to the inverter switches (MOSFETs) through a gate driver circuit.

6. Experimental Results
The performance of the current estimator and the closed-loop speed estimator was tested on our experimental setup; the waveforms presented below were recorded with a four-channel digital oscilloscope. The sampling time for the acquisitions and for the calculation has been set at 400 μs.

6.1. PI Controller
In this case the proportional-integral (PI) controller is used for speed regulation. Fig. 7 shows the PI diagram.
Fig. 7. Structure of PI controller adopted in speed loop control
First the motor is started under no load with the nominal speed of 350 rad/sec; after 2.5 sec a rated load is applied, and at 11 sec the motor is unloaded again. Fig. 8 shows the behavior of the induction motor at nominal speed with load disturbance application. The reference speed is set to 350 rad/sec. It is clear that the PI controller exhibits good performance at steady state, but in the dynamic regime, when the motor starts up, the PI controller exceeds the saturation value and generates inadequate performance, namely a large overshoot and a long settling time, as shown in Fig. 8.
Fig. 8. The reference, real and estimated speed, with estimation error
Fig. 9. Stator currents: ia, ib, ic [A]

It can also be seen that the observed speed tracks the real speed even when the load is applied; the estimation error between the two speeds is practically negligible in steady state, but increases at starting time. Fig. 11 shows the measured and observed currents: the observed currents îsα, îsβ track the measured currents isα, isβ. It can be seen that the current waveforms are practically sinusoidal; the total harmonic distortion (THD = 53.20%) is shown in Fig. 12. Fig. 13 shows the estimated rotor flux components. Globally, it can be seen from these results that the performance of the proposed Luenberger observer with the PI speed controller is satisfactory.
Fig. 10. Real and estimated current components isα, isβ, îsα, îsβ

Fig. 11. Zoom of real and estimated current components isα, isβ, îsα, îsβ

Fig. 12. FFT analysis and spectrum of THD for stator phase current isα

Fig. 13. Zoom of estimated rotor flux components φ̂rα, φ̂rβ
6.2. Anti-windup PI Controller
When a large step change in the speed reference occurs, the PI controller usually disregards the physical limitations of the system. This phenomenon, known as integrator windup, generates inadequate performance, as shown in Fig. 8: a large overshoot and a long settling time of about 2 sec. Thus an anti-windup technique is used in this situation, cancelling the windup phenomenon caused by the saturation of the pure integrator [17]. Fig. 14 shows the block diagram of the anti-windup PI speed controller, where Ti is the tracking time constant gain.
Fig. 14. Structure of anti-windup PI controller adopted in speed loop control
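The back-calculation structure of Fig. 14 can be written in a few lines of C. The sketch below is a generic tracking anti-windup PI using the paper's Ti tracking gain; Kp, Ki, the output limits and all identifiers are placeholders of our own, and nothing here is taken from the authors' actual code.

typedef struct {
    double kp, ki, ti;    /* proportional, integral, tracking gains */
    double umin, umax;    /* actuator (torque/current) limits       */
    double integ;         /* integrator state                       */
} pi_aw_t;

/* One update of the anti-windup PI: back-calculation with gain 1/Ti. */
static double pi_aw_step(pi_aw_t *c, double ref, double meas, double h)
{
    double e = ref - meas;
    double u = c->kp * e + c->integ;          /* unsaturated output  */
    double usat = u;
    if (usat > c->umax) usat = c->umax;       /* actuator saturation */
    if (usat < c->umin) usat = c->umin;
    /* Integrate the error plus the tracking term that drains the
       integrator whenever the output saturates (usat - u is nonzero
       only during saturation). */
    c->integ += h * (c->ki * e + (usat - u) / c->ti);
    return usat;
}

With Ti set very large this degenerates to the plain PI of Section 6.1, which is one way to reproduce the comparison in Tab. 1.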
In the speed variation test (3343, 2388, 3822, 3343 rpm) shown in Fig. 15, the anti-windup PI controller rejects the load disturbance very rapidly compared with the PI controller. In the rotor speed reversal test (3343, –3343, 3343 rpm) shown in Fig. 16, the anti-windup PI provides perfect superposition and better tracking than the PI, reducing the overshoot and the settling time. Fig. 17 shows the speed estimation error, which always converges to zero, indicating the effectiveness of the proposed Luenberger speed observer. Generally, the performance of the proposed observer has been verified experimentally by a comparative study between the classical PI controller and the anti-windup PI. Table 1 summarizes the comparative analysis of the speed controllers.

Fig. 15. Rotor speed estimation under variable speed reference with estimation error

Fig. 16. Speed sense reversing: estimated and real speeds

Fig. 17. Speed estimation error

Tab. 1. Comparative analysis between PI and anti-windup PI controller

                                              PI     anti-windup PI
Rise time [sec]                               0.53   0.53
Maximum overshoot [%]                         7.14   5.36
Settling time [sec]                           2      0.8
Speed drop due to load application [%]        2.28   2.5
7. Conclusion
In this paper, we have validated the Luenberger observer based speed sensorless control of an induction motor using the RT-LAB platform. The overall controllers and speed observer were executed in real-time with a time-step of 4 μs on one core of a four-core CPU, including digital and analog I/O access time. The experimental results obtained with the PI anti-windup controller show good performance in both dynamic and steady state, with faster load disturbance rejection compared with the results obtained with the PI controller. The efficiency of the designed observer allows us to conclude that the speed sensor can be eliminated and replaced by the Luenberger observer to reduce the cost of the control system. The proposed technique was implemented successfully in real-time under the assumption that the induction motor parameter values are constant (Tab. 2). In fact, during very low speed operation the induction motor temperature rise and the generated losses (stator copper losses and rotor losses) affect the induction motor parameters and degrade the speed observer performance, so further research is open to test the observer robustness against parameter variations in real-time.

Tab. 2. Induction motor parameters

Pn [W]       120       wn [min⁻¹]   4000
Rs [Ω]       1.05      Rr [Ω]       1.705
Ls [H]       0.02939   Lr [H]       0.02939
Lm [H]       0.02526   P            2
J [kg·m²]    0.00037   fc [SI]      0.00006
Acknowledgements
The authors thank Tahri Mohammed University and the CAOSEE laboratory for all their help and support.
AUTHORS
Mansour Bechar* – Laboratory of Research Control Analysis and Optimization of Electro-Energetic Systems, Tahri Mohammed University, Bechar, Algeria, E-mail: mansourbechar@gmail.com. Abdeldjebar Hazzab – Laboratory of Research Control Analysis and Optimization of Electro-Energetic Systems, Tahri Mohammed University, Bechar, Algeria.
Mohamed Habbab – Laboratory of Research Control Analysis and Optimization of Electro-Energetic Systems, Tahri Mohammed University, Bechar, Algeria.
Mohammed Slimi – Laboratory of Research Control Analysis and Optimization of Electro-Energetic Systems, Tahri Mohammed University, Bechar, Algeria.
Pierre Sicard – Groupe de Recherche en Électronique Industrielle (GREI), Université du Québec à Trois-Rivières, C.P. 500, Trois-Rivières (Québec), Canada.
* Corresponding author
References
[1] A. Abbou, T. Nasser, H. Mahmoudi, M. Akherraz, A. Essadki, "Induction Motor Controls and Implementation using dSPACE", WSEAS Transactions on Systems and Control, vol. 7, no. 1, 2012, 26–35.
[2] J. Holtz, "Sensorless control of induction motor drives", Proceedings of the IEEE, vol. 90, no. 8, 2002, 1359–1394, DOI: 10.1109/JPROC.2002.800726.
[3] H. Kubota, K. Matsuse, "Speed sensorless field-oriented control of induction motor with rotor resistance adaptation", IEEE Transactions on Industry Applications, vol. 30, no. 5, 1994, 1219–1224, DOI: 10.1109/28.315232.
[4] T. Chern, J. Chang, K. Tsai, "Integral-variable-structure-control-based adaptive speed estimator and resistance identifier for an induction motor", International Journal of Control, vol. 69, no. 1, 1998, 31–48, DOI: 10.1080/002071798222910.
[5] Y. Kim, S. Sul, M. Park, "Speed sensorless vector control of induction motor using extended Kalman filter", IEEE Transactions on Industry Applications, vol. 30, no. 5, 1994, 1225–1233, DOI: 10.1109/28.315233.
[6] C. Schauder, "Adaptive speed identification for vector control of induction motors without rotational transducers", IEEE Transactions on Industry Applications, vol. 28, no. 5, 1992, 1054–1061, DOI: 10.1109/28.158829.
[7] G. Tarchała, T. Orłowska-Kowalska, "Sliding Mode Speed Observer for the Induction Motor Drive with Different Sign Function Approximation Forms and Gain Adaptation", Przegląd Elektrotechniczny, vol. 89, no. 1a, 2013.
[8] I. Bendaas, F. Naceri, "A new method to minimize the chattering phenomenon in sliding mode control based on intelligent control for induction motor drives", Serbian Journal of Electrical Engineering, vol. 10, no. 2, 2013, 231–246, DOI: 10.2298/SJEE130108001B.
[9] J. Maes, J. Melkebeek, "Speed-sensorless direct torque control of induction motors using an adaptive flux observer", IEEE Transactions on Industry Applications, vol. 36, no. 3, 2000, 778–785, DOI: 10.1109/28.845053.
[10] D. Casadei, G. Serra, A. Tani, L. Zarri, F. Profumo, "Performance analysis of a speed-sensorless induction motor drive based on a constant-switching-frequency DTC scheme", IEEE Transactions on Industry Applications, vol. 39, no. 2, 2003, 476–484, DOI: 10.1109/TIA.2003.808937.
[11] B. Akin, "State estimation techniques for speed sensorless field oriented control of induction motors", M.Sc. Thesis, Middle East Technical University METU, 2003.
[12] S. Ao, L. Gelman, Advances in Electrical Engineering and Computational Science, Springer, 2009, DOI: 10.1007/978-90-481-2311-7.
[13] B. K. Bose, Modern Power Electronics and AC Drives, Prentice Hall, 2001.
[14] M. Suetake, I. N. da Silva, A. Goedtel, "Embedded DSP-Based Compact Fuzzy System and Its Application for Induction-Motor V/f Speed Control", IEEE Transactions on Industrial Electronics, vol. 58, no. 3, 2011, 750–760, DOI: 10.1109/TIE.2010.2047822.
[15] M. Bechar, A. Hazzab, M. Habbab, "Real-Time scalar control of induction motor using RT-Lab software". In: 2017 5th International Conference on Electrical Engineering – Boumerdes (ICEE-B), 2017, DOI: 10.1109/ICEE-B.2017.8192002.
[16] S. T. Cha, Q. Wu, A. H. Nielsen, J. Østergaard, I. K. Park, "Real-Time Hardware-In-The-Loop
(HIL) Testing for Power Electronics Controllers". In: 2012 Asia-Pacific Power and Energy Engineering Conference, 2012, 1–6, DOI: 10.1109/APPEEC.2012.6307219.
[17] D. Zhang, H. Li, E. Collins, "Digital Anti-Windup PI Controllers for Variable-Speed Motor Drives Using FPGA and Stochastic Theory", IEEE Transactions on Power Electronics, vol. 21, no. 5, 2006, 1496–1501, DOI: 10.1109/TPEL.2006.882342.
DEVELOPING A MULTIPLATFORM CONTROL ENVIRONMENT Submitted: 15th July 2019; accepted: 20th December 2019
Dariusz Rzońca, Jan Sadolewski, Andrzej Stec, Zbigniew Świder, Bartosz Trybus, Leszek Trybus
DOI: 10.14313/JAMRIS/4-2019/40
Abstract: An IEC 61131-3 control environment is called multiplatform if source programs can be executed by various processors, from 8-bit microcontrollers up to efficient 32/64-bit CPUs. This implies that a virtual machine (VM), i.e. a software implemented processor, is used as runtime by the host CPU. The VM executes certain intermediate code into which IEC 61131-3 programs are compiled. An environment of this type, called CPDev, has been gradually developed by the authors over the last decade, beginning with the initial report in this journal in 2008 [47]. However, technical implementations of its functionalities have not been described so far. This involves such matters as the intermediate language, parametrization of the compiler and VM, the multiproject runtime, translators of graphical languages, device-independent HMI, and target platform and communication interfacing, which are presented in compact form in this paper. Some characteristic industrial implementations are indicated.
Keywords: IEC 61131-3 environment, virtual machine, intermediate language, parametrized compiler, interfacing, HMI, industrial controllers
1. Introduction
There is a common opinion among practising engineers and university staff that runtime engineering environments based on the IEC 61131-3 standard [22] will remain the state of industrial practice long into the next decade (e.g. [52], [56]). The standard defines five programming languages, i.e. textual IL, ST, graphic LD, FBD and mixed SFC, with time-triggered scan cycle execution. IL, LD and SFC are preferred in manufacturing based on PLCs, whereas general automation favours ST, FBD and also SFC. Edition 3.0 (2013) of the standard has introduced object-oriented programming by extending the original function block concept.
An open architecture for distributed control and automation focused on interoperability of devices configured by multiple tools, with function blocks as reusable software components, is defined in another control standard, IEC 61499 [23]. Its runtime software is event-driven. Code measures have shown [17] that IEC 61499 is more appropriate for sequential controls, but IEC 61131-3 suits general applications better. However, due to insufficient support by commonly used tools and runtimes, IEC 61499 has not been generally accepted by industry so far [52], although some successful applications, particularly in the energy sector, have been reported (e.g. [10]).
Compilers of control languages are basic components of IEC 61131-3 tools. A compiler translates the source program into machine code executed by the runtime software of the controller processor. Three approaches to compilation are encountered in practice. In the first one, most common, IEC 61131-3 programs are directly translated into machine code. Execution is fast, so the approach is applied by leading manufacturers of control equipment, such as Siemens, ABB, Schneider, Rockwell, Omron and others. Some tools from independent software providers, namely CODESYS [1] (market leader) and LogicLab [2], also directly translate source programs into machine code. It is however a single platform solution, since a change of CPU requires a new compiler.
Contrary to the direct translation, the second approach involves two steps. At first IEC 61131-3 programs are translated to C/C++ and then another tool generates the machine code. Academic open-source Beremiz [53] and independent GEB [16] apply this approach, which due to common C/C++ tools suits multiple platforms, not just one as before. The approach is particularly suitable for education; however, the necessity of another C/C++ translation somewhat limits commercial applications.
In the third approach, source programs are compiled into binary code of a dedicated intermediate language executed by a specially designed virtual machine (VM), i.e. a software processor. The VM is in fact the runtime program of the target controller. The approach does not need translation to C/C++ and may be applied to multiple platforms. It was originally introduced in the ISaGRAF environment [21] with intermediate Target Independent Code. STRATON [9] (independent) also applies this approach; however, details remain confidential. VMs are typical for academic solutions reviewed in the next section. Nevertheless, time efficiency of the approach is lower due to overhead imposed by the VM.
The other standard, IEC 61499, is supported by several engineering tools, of which nxtControl [38], used at academic institutions, may be an example. ISaGRAF now also provides an IEC 61499 programming option.
The authors began developing an IEC 61131-3 environment over a decade ago, encouraged by an equipment manufacturer and because of teaching needs. It was actually a next step from earlier works on multifunction controllers (e.g. [7]). Application of the first version of the environment, called Control Program Developer (CPDev), was reported in this journal [47] (also shown at the Automation 2008 Fair). The first
version compiled ST programs into the intermediate Virtual Machine Assembler (VMASM) executed by a runtime VM on AVR and MCS51 8-bit microcontrollers. It already involved certain provisions to facilitate porting to different hardware and interfacing communications.
Over the following years CPDev has been extended to 32/64-bit processors, augmented with translators of graphical languages, an HMI design tool and a multi-project runtime (initially applied in FPGA [18]). PhD research has been supported by Model Driven Development and unit-testing extensions [29], [27]. So far several small and medium scale (SME) manufacturers use the environment.
The early paper [47] and the others have focused on CPDev functionalities rather than on explaining how they have been created. Therefore the purpose of this paper is to present technical aspects of:
‑ portability of the compiler and VM to different CPUs
‑ implementation of interfaces for various target platforms
‑ translators of graphical languages
‑ structure of device‑independent HMI tool
‑ data for binding variables to communication interfaces.
This paper is thus organized as follows. The next section reviews related work concerning IEC 61131-3 environments. Section 3 summarizes basic features of CPDev. Section 4 overviews the VMASM language and the parametrized compiler, and presents the example running throughout the paper. Architecture of the VM, implementation of target platform interfaces and the multi-project runtime are described in Section 5. Section 6 presents translators of graphical languages and the structure of the HMI tool. The list of data required to bind variables to communication interfaces and some characteristic industrial implementations are shown in Section 7.
2. Related Work
An interest in virtual machines, i.e. abstract processors for specific languages implemented by software on particular hardware platforms, has been increasing since the Java Virtual Machine (JVM) appeared over 20 years ago [55], followed later by the Common Language Runtime (CLR) for the .NET framework [46]. In particular, most ARM-based mobile phones implement JVM processors in Jazelle technology.
VM-based control solutions with the standard IL as the intermediate language appeared in the literature at a similar time as the early paper [47] (besides commercial ISaGRAF). The VM of [58] was written in C and applied in an 8-bit C8051 CPU. The Whimori CDU [48] educational tool translated LD into IL. UniSim [11], although restricted to a few data types, provided graphical programming. IL was also used in [57] for a VM implementation of an ARM-based softcontroller. More recently, IEC 61131-3 applications have been executed by CLR, i.e. a .NET VM [5]. Development of various educational tools continues. For example, WEB PLC Simulator [41] executes control algorithm
and plant model written in ST. HT-PLC [26] runs LD programs on Arduino with Ethernet. Among relevant solutions in other areas one may mention a security-oriented IoT environment [32] involving the SIL intermediate language, a C/C++/Java-to-SIL compiler and a VM.
The scan cycle, important particularly in manufacturing, can be decreased by direct execution at a dedicated processor, as shown in [39] for a Toshiba CPU. Nowadays hardware processors are designed in FPGA technology, as demonstrated by an early double processor for a PLC [6] and in [42], [35] for current multi-core solutions. Comparison of execution times for a few PLCs and FPGA has been presented in [18]. A number of FPGA designs can be found in the proceedings of PDES conferences (Programmable Devices and Embedded Systems).
Details of IEC 61131-3 compilers are typically not disclosed. As an exception, the early paper [31] describes the structure of the compiler, code optimization and support of logic operations by a programmable device. Nevertheless, one may generally expect that most of the compilers apply Static Single Assignment (SSA) syntax and corresponding translation described for instance in [8]. Occasionally semantic-driven translation can also be encountered [13]. SSA syntax has been recently applied for inference of types in dynamic languages [45], such as Python. Parametrization of the GCC compiler due to limited resources has been presented in [44].
Activity on Vertex (AOV) graph transformation algorithms are typically used to translate LD diagrams into IL or ST [12], [20]. Direct translation of AOV into ST is applied in CPDev. More advanced Enhanced Data Flow Graphs from [36] can implement IL, LD and SFC programs.
Parametrized formal semantics for execution of SFC diagrams has been proposed in [4] to avoid ambiguities of the IEC 61131-3 standard. Unsafe components of SFCs created using CODESYS semantics (also applied in CPDev) can be detected by static analysis [49].
Principles of HMI design for control applications have been described in [14]. The German standard VDI/VDO 3699 [51] also covers this area. Future advanced solutions may involve 3D graphics and virtual reality [50]. Some features of industrial operator panels and in-vehicle infotainment systems [34] are clearly the same.
Implementation guidelines and technical details concerning industrial communications can be found for instance in [59]. Comparison of industrial and public networks taking into account control, standardization and dependability is discussed in [15].
3. CPDev Overview
The CPDev tool, initially providing just ST programming, supports now all IEC 61131-3 languages, HMI design, a documentation generator and research-oriented extensions. A multi-project runtime for complex software is also available. Steps of control program processing are shown in Fig. 1. ST and IL programs are compiled into mnemonics of the intermediate VMASM. LD, FBD and SFC diagrams are translated to ST and then compiled. The assembler converts the mnemonics into executable binary code uploaded into the VM. The VM is written in C, so may be implemented in various CPUs.

Fig. 1. Processing of control programs

Basic functionalities of CPDev are practically the same as in other control environments. Automatic connections provided by the LD and FBD editors may be an exception (A* algorithm [19]). The HMI tool applies components from device-independent libraries. Data for interfacing communications and I/Os are generated by the compiler as an XML file.
The Model Driven Development extension translates SysML diagrams [40] into templates of programs or communication tasks [29]. Another extension supports unit testing of POUs (Program Organization Units) [27]. All CPDev modules have XML interfaces to enable agile and round-trip software development [28].
The extended runtime environment, besides the VM for control functions, involves an HMI runtime for the interface. The VM and HMI runtime are executed in parallel, exchanging data by means of global variables.
4. Intermediate VMASM and Compilation
4.1. Overview of VMASM
As an IEC 61131-3-oriented language, Virtual Machine Assembler accepts all elementary data types except WSTRING. The number of types required by a particular application may be restricted by parametrizing the compiler. VMASM instructions consist of functions and procedures. Functions are the same as in the standard, so ADD, OR, EQ, CONCAT, EXPT, etc. They admit
up to 16 operands where the first one denotes the result. Procedures shown in Table 1 (examples) are typical for machine languages, so they control program flow, call subprograms, copy memory, etc.

Tab. 1. Procedures of the VMASM language

JMP – Unconditional jump
JZ – Conditional jump
JR – Unconditional relative jump
CALB – Subroutine call
RETURN – Return from subroutine
MCD – Initialize data
MEMCP – Copy memory block
FPAT – Fill memory block
The syntax of VMASM instructions is the following:

[:label] instruction [operand1][,operand2][,operand3]...

Label is optional; instruction specifies the number of operands, i.e. variables, labels or constants. Since in case of functions operand1 denotes a variable (result), the syntax expresses a memory-to-memory type of operation. A question mark ? at the beginning indicates a label, or in the ?LR? prefix a temporary variable created by the compiler. The dot operator (.) selects components of arrays or structures.
Note that in the IL language, often used as intermediate (Sec. 2), the result of a command is kept in the Current Result (CR) register, equivalent to an accumulator. Later the CR is copied into a variable. By placing the result in operand1, VMASM combines these two steps into one. The notion of accumulator does not exist in VMASM.

4.2. Parametrized Compiler
To ensure portability of implementations one has to take primarily into account sizes of addresses required by CPUs, so typically 16 or 32 bits. Also some data types may not be needed. To provide such features the CPDev compiler is parametrized by means of an XML Library Configuration File (LCF) involving definitions of hardware resources, data types, instructions and conversions. Given the parametrized LCFs, dedicated compilers and VMs for particular platforms can be created from one general compiler and a generic VM.

<HARDWARE>
  <AddressSize>2</AddressSize>
  <MaxCodeAddress>0xFFFF</MaxCodeAddress>
  <MaxDataAddress>0xFFFF</MaxDataAddress>
</HARDWARE>
<TYPES>
  <type name="UINT" implement="alias">
    <alias name="WORD"/>
  </type>
  <deny-type name="LREAL" />
</TYPES>
Fig. 2. Specification of hardware and data types

Sample specification of <Hardware> resources in
the LCF file is shown in Fig. 2 (upper part) with AddressSize and MaxAddresses of code and data memories available for the VM. 2 as AddressSize means 2 bytes needed for 16-bit addressing of up to 64 kB memory (0xFFFF); 4 represents 32-bit addressing of 4 GB. Note that the AddressSize defines addressing for the VM only, so a 16-bit VM may be executed by a 32- or 64-bit CPU.
The <TYPES> part of the LCF file (Fig. 2) defines data types available in a particular implementation. By using deny-type one can remove not needed types, for instance LREAL. The deny-type option enables configuration of the environment for so-called made-to-measure PLCs, suitable particularly for IoT and Industry 4.0 applications.
Given the LCF file, the scanner and parser of the compiler translate the source program into a file with VMASM mnemonics. By using another set of keywords the ST scanner also processes IL, where ad hoc temporary variables replace the CR register. The parser applies top-down syntax-directed translation [8]. Basic components of the compiler are defined as classes in the C# language. Internal data are kept in lists.

4.3. Compilation Example
An ST program for switching OUT on-and-off in a 3+2 seconds cycle while IN is active, and the VMASM translation of the first TON1 call, are shown in Fig. 3. A warning light may be triggered in this way.
TON1(IN:=IN AND NOT TON2.Q, PT:=T#3s);
TON2(IN:=IN AND TON1.Q, PT:=T#2s);
OUT := IN AND NOT TON1.Q;

JZ IN, :?WARN?AN10
NOT ?LR?AN11, TON2.Q
JZ ?LR?AN11, :?WARN?AN10
MCD TON1.IN, #01, #01
JMP :?WARN?EA14
:?WARN?AN10
MCD TON1.IN, #01, #00
:?WARN?EA14
MCD TON1.PT, #04, #B80B0000
CALB TON1, :?IEC_61131.TON?CODE

Fig. 3. ST code and part of VMASM translation

The compiler translates Boolean expressions involving infix operations like IN AND TON2.Q by means of value testing and jumps. Hence the VMASM translation begins with jump JZ to the :?WARN?AN10 label if IN is zero. If not, NOT of TON2.Q is evaluated in the ?LR?AN11 temporary variable, followed by another JZ to the same label if the variable is zero. If not, the MCD instruction sets 1 byte (first #01) at TON1.IN to 1 (second #01), followed by unconditional JMP to :?WARN?EA14. At the :?WARN?AN10 label another MCD sets TON1.IN to 0 (#00). The third MCD at :?WARN?EA14 initializes TON1.PT (4 bytes, TIME) with #B80B0000, i.e. 3000 ms (little-endian form). The final CALB calls the code of the TON1 block from the IEC_61131 library. Value testing and jumps could be avoided by writing the Boolean expression in function form as AND(IN, TON2.Q). Translation of the TON2 call is similar. The OUT statement is translated in the same way as the expression for TON1.IN. Optimization of the mnemonic translation to minimize the number of temporary variables precedes generation of binary code.
5. Binary Code and Virtual Machine
5.1. Assembling the Mnemonics
To generate the binary code, VMASM instructions are defined in the LCF file by means of digital identifiers vmcode composed of group ig and type it sub-identifiers, as shown in Fig. 4a. In case of functions ig indicates the name, so ADD, MUL, AND, EQ, etc., whereas it refers to the data type, BOOL, INT, TIME and so on. In this way type-specific functions such as ADD:INT, AND:BOOL and others may be expressed in digital form. Fig. 4b (upper part) shows the definition of AND:BOOL, where ig=08 denotes AND and it=*0 indicates BOOL (0) and a varying number of inputs (*).
All VMASM procedures belong to one group ig=1C with it choosing the specific procedure. The definition of MCD, which initializes data memory beginning from the DST (destination) relative label (:rdlab) with imm1 bytes (immediate value) from the source imm2 of varying type (*), is also in Fig. 4b (lower part).

<function name="AND" vmcode="08*0" return="BOOL">
  <args>
    <arg no="*" name="arg*" type="BOOL"/>
  </args>
  <comment>Binary AND of BOOL operands</comment>
</function>

<sysproc name="MCD" vmcode="1C15">
  <args>
    <arg no="0" name="DST" type=":rdlabel"/>
    <arg no="1" name="imm1" type=":imm.BYTE"/>
    <arg no="2" name="imm2" type=":imm.*"/>
  </args>
  <comment>Initializes data at arg0 address with area size arg1 and source pattern arg2.</comment>
</sysproc>
Fig. 4. a) Structure of vmcode, b) specification of function and procedure

Having the file with VMASM mnemonics, all program modules are consolidated and assembled into binary code by replacing:
‑ instruction mnemonics by digital identifiers vmcode (ig and it)
‑ names of variables by addresses in data memory
‑ labels by addresses in code memory.
The binary code assembled from the first four VMASM instructions in Fig. 3 looks as follows:

1C02 0000 B900
0510 0200 2100
1C02 0200 B900
1C15 0800 0101

The vmcodes of the JZ, NOT and MCD instructions are 1C02, 0510, 1C15, respectively. IN is allocated to address 0 in data memory, TON2.Q to 21 (hexadecimal), the temporary ?LR?AN11 to 2, and TON1.IN to 8. The label :?WARN?AN10 indicates B9 in the code memory.

5.2. Virtual Machine
The general architecture of the CPDev virtual machine, involving code and data memories, the instruction processing module, and the target platform interface, is shown in Fig. 5. The module fetches instructions from code memory, executes them acquiring operands from data or code memories (variables or constants), and in case of functions stores the results in data memory. Procedures change the internal state of the VM. The target platform interface in the generic VM consists of specifications of low-level functions filled in by appropriate code while porting. VM components are implemented as 16 or 32-bit according to AddressSize.

Fig. 5. General architecture of the virtual machine

The generic part of the VM is written in standard C, so it suits any general purpose CPU. Note that the VMs mentioned in Section 2 are mostly written in C. The VM operates as an interpreter executing successive lines of code. Overloaded functions such as AND, MUL, GT, etc. accept all meaningful data types. A variable number of operands is handled by a for loop. A procedure indicated by ig=1C is selected by a switch(it) command.
Specifications of low-level functions included into the generic VM are expressed by function prototypes (empty), independent of CPU and hardware solution. The initial set of prototypes has been presented in [54] based on experience with multifunction instruments [7] and industrial controllers. Some of the prototypes are listed below:
‑ VMP_LoadConfiguration – loads task parameters (cycle, number of POUs, etc.), binary code and allocates memory for data
‑ VMP_PreRunConfiguration – initializes hardware and stores the initial state (time, parameters)
‑ VMP_PreCycle – updates program variables from external inputs before calculations, stores initial state of system clock
‑ VMP_PostCycle – updates external outputs with results of calculations; if time is left, triggers tests or other activities
‑ VMP_CurrentTime – returns current value of system clock to determine TIME variables.
In case of cycle time overrun, VMP_PostCycle sets a flag in the VM status. The VM checks the flag and takes appropriate steps (failure LED, safe outputs, etc.). By assumption, the code that fills the prototypes is written in C/C++ by the implementation engineer and consolidated with the generic VM. Some other hardware functions may be added.
As indicated in the introduction, the use of a VM lengthens the execution cycle as compared to direct translation to machine code or to C/C++. To evaluate the overhead, execution times of standard function blocks such as flip-flops, edge detectors, counters and timers have been measured, at first implemented in ST and then in C by means of so-called native blocks (Sec. 7.1). On average, the VM solution has turned out about four times slower than translation to C/C++ [16], [53]. So the use of native blocks reduces the overhead considerably. A minimal sketch of how the generic VM loop and the low-level prototypes fit together is given below.
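The following C sketch assembles the pieces described above into a scan-cycle runtime. Only the prototype names (VMP_*) and the ig/it decoding idea come from the paper; the vm_t struct, the byte layout of instructions and the empty stubs are our assumptions, included solely so the sketch is self-contained.

#include <stdint.h>

typedef struct {
    const uint8_t *code;   /* assembled binary code                */
    uint8_t       *data;   /* data memory for variables            */
    uint16_t       pc;     /* program counter (16-bit AddressSize) */
    uint16_t       len;    /* code length in bytes                 */
} vm_t;

/* Low-level prototypes (names from the paper); a real port fills
   these in with hardware-specific code. Empty stubs here. */
static void     VMP_LoadConfiguration(vm_t *vm)   { (void)vm; }
static void     VMP_PreRunConfiguration(vm_t *vm) { (void)vm; }
static void     VMP_PreCycle(vm_t *vm)            { (void)vm; }
static void     VMP_PostCycle(vm_t *vm)           { (void)vm; }
static uint32_t VMP_CurrentTime(void)             { return 0; }  /* for TIME FBs */

/* Interpreter skeleton: split vmcode into ig/it and dispatch.
   Operand decoding is omitted; byte order is an assumption. */
static void execute_program(vm_t *vm)
{
    vm->pc = 0;
    while (vm->pc < vm->len) {
        uint8_t ig = vm->code[vm->pc++];   /* group identifier, e.g. 0x1C */
        uint8_t it = vm->code[vm->pc++];   /* type / procedure identifier */
        if (ig == 0x1C) {
            switch (it) {                  /* procedures                  */
            case 0x02: /* JZ  */ break;
            case 0x15: /* MCD */ break;
            default:             break;
            }
        } else {
            /* overloaded function (ADD, AND, ...); a for loop would
               process the varying number of operands that follow */
        }
    }
}

int main(void)
{
    vm_t vm = {0};
    VMP_LoadConfiguration(&vm);      /* task parameters, code and data  */
    VMP_PreRunConfiguration(&vm);    /* hardware and initial state      */
    for (;;) {                       /* time-triggered scan cycle       */
        VMP_PreCycle(&vm);           /* external inputs -> variables    */
        execute_program(&vm);        /* interpret the VMASM binary code */
        VMP_PostCycle(&vm);          /* results -> outputs, overrun flag*/
    }
}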
5.3. Runtime for Complex Software
The generic VM described above executes a single project involving one task. However, deployment of a few VMs in a multi-core FPGA has been tested already [18], leading the way to development of a multi-project runtime needed for a supervisory computer or software processing station. To make the implementation relatively simple it has been assumed that the operating system of the computer will take care of scheduling or multitasking of the VMs.
This concept has been recently transferred into a runtime for an industrial PC with the Windows Embedded operating system. The runtime is called WinController and can execute complex software composed of projects run by corresponding VMs with different or the same cycles. The VMs exchange data through global variables. Field devices, namely PLCs, PACs, I/O cards etc., are interfaced by means of special I/O and Data adapters. The WinController is implemented as a service in the Windows service system functionality. It runs nonstop in the background and starts automatically when the computer is switched on.
The service is configured by WinController Manager with the relevant part of the main window shown in Fig. 6. Besides Service status, I/O and Data adapters, the window includes two VMs running AUTOPILOT and HMI projects. Data exchange is configured by I/O mappings.
6. Graphical Translators and HMI Tool 6.1. LD and FBD Translators to ST
Although LD and FBD diagrams look different, CP‑ Dev editors are similar partly due to the same A* Articles
Fig. 8. LD diagram and ST translation Fig. 6. Configuration window of WinController Manager 6.2. SFC Translation path�inding algorithm used for automatic connections [19]. Translation replaces the connections by auxili‑ ary local variables to display ’live” diagrams in the online mode. Actually the vertices of corresponding AOV graph represent diagram elements whereas vir‑ tual nodes denote connections. Each line of ST pro‑ gram re�lects the node. Three‑rung LD version of the running example fol‑ lowed by ST translation of the �irst rung is shown in Fig. 7. The auxiliary variables beginning with out_contact_ are declared automatically in VAR ... END_VAR section. Besides the connections the varia‑ bles represent branch points and outputs of function blocks or functions.
SFC diagram for the running example is shown in Fig. 9. Actions set or reset the OUT output. The diagrams composed of steps, transitions, jumps, se‑ quence selections and simultaneous sequences are kept in memory as trees. The �irst three elements cre‑ ate the last two. Actions bound to steps are de�ined by quali�iers such as N Normal/Non‑stored, S Set, R Reset, L time Limited, D Delayed and other [22].
Fig. 9. SFC diagram for the running example
Fig. 7. FBD diagram and ST translation
The FBD diagram and corresponding translation are shown in Fig. 8. Here the prefix out_ indicates auxiliary variables. Except for such variables the translation does not differ from the ST original in Fig. 3. To make the automatically created connections look more or less like drawn "by hand", the original A* algorithm has been extended to penalize path crossings and direction changes [13]. Note also that after each correction of the diagram the algorithm calculates all paths anew, so they may be a little different than before.
Translation of SFC to ST is supported by two system function blocks, SFC_STEP and SFC_ACTION_CONTROL, connected as in Fig. 10. The first one determines step activity according to the next_step_flag. The outputs X, T indicate activity and time elapsed since activation (see Fig. 9). By means of the A output the second block triggers the action (code) bound to the step according to the given qualifier. This is provided by connecting the output X of the first block to the input of the second block indicated by the qualifier (N, S, R, L, D, ...). ST translation of an SFC diagram begins with declarations of the instances of SFC_STEP and SFC_ACTION_CONTROL, and next_step_flags. The PROGRAM section consists of three parts:
‑ activation of step blocks by next_step_flags
‑ execution of actions bound to active steps
‑ setting next_step_flags by evaluation of transition conditions.

Fig. 10. Execution of step-action pair

The first two parts correspond to Fig. 10, while the last one creates the tree structure according to the SFC diagram. A simultaneous sequence may set/reset a few next_step_flags by a single preceding transition.
As explained in [4], SFC diagrams may exhibit unreachable and unsafe behaviour. Unreachability is partially restricted by the editor, but it is the designer's task to avoid unsafe diagrams.
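A compact C model of the step-action pair in Fig. 10 may help the reader: SFC_STEP latches activity from next_step_flag and counts elapsed time, while the action-control block derives the A trigger from qualifier inputs. This mirrors the description above only in outline; the real system blocks are ST/VM library POUs whose interfaces are not published here, so every name, field and the restriction to the N/S/R qualifiers below are our own illustrative choices.

#include <stdbool.h>
#include <stdint.h>

typedef struct { bool x; uint32_t t_ms; } sfc_step_t;

/* SFC_STEP analogue: activity X follows next_step_flag,
   T counts time since the step became active. */
static void sfc_step(sfc_step_t *s, bool next_step_flag, uint32_t cycle_ms)
{
    if (next_step_flag && !s->x) s->t_ms = 0;     /* step (re)activated  */
    else if (s->x)               s->t_ms += cycle_ms;
    s->x = next_step_flag;
}

/* SFC_ACTION_CONTROL analogue, N, S and R qualifiers only:
   the returned A flag triggers the action code (IF A THEN ... END_IF). */
static bool sfc_action_control(bool n_in, bool s_in, bool r_in, bool *stored)
{
    if (r_in) *stored = false;    /* R: reset the stored action  */
    if (s_in) *stored = true;     /* S: set (stored) action      */
    return n_in || *stored;       /* N: active while step active */
}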
6.3. Human Machine Interface
Small control systems based on PLCs and PACs often include graphical operator panels. The panels may be integrated with controllers, as in the case of Horner, Unitronix, Beckhoff and others.
HMI software for such panels consists of two parts related to visualization and behaviour. Visualization involves a set of displays built from graphic objects selected from libraries. Behaviour is determined by HMI programs that select displays and specify the appearance of the objects. To do so, relevant variables are exchanged between the controller and the panel. The variables are bound to attributes of the objects during configuration. If control and HMI programs belong to the same project, they access common global variables to exchange data.
The features outlined above are provided in the environment by the CPVis tool involving a dedicated graphic editor for defining displays, adding objects, moving them, etc. Attributes of the objects are determined by global variables or constants. XML data of the project converted into binary code are used in the runtime environment, where the VM and CPVis runtime jointly provide control and HMI functions. An example of a display for power management at a ship is shown in Fig. 11.
By splitting the HMI software into device-dependent and device-independent parts, as proposed in [30], visualizations created by CPVis can be displayed by different devices, i.e. TFT or LCD panels, monitors or tablets. The device-dependent parts are specified by common prototypes of low-level drawing functions called drawing primitives. Codes of graphic objects call the prototypes by name only, so they are device-independent, as depicted in Fig. 12. Two drawing primitives concerning a rectangle are shown below:
Fig. 11. TFT display for power management at a ship [43]

‑ DrawRectangle: position, size, border color and width
‑ FillRectangle: position, size, fill color.
By assigning a variable to FillRectangle's size, the two primitives may create a bargraph (a C sketch of this idea is given after Fig. 12). DrawArc, FillPie, FillTriangle, ProcessValue are examples of other primitives.
Fig. 12. Structure of the HMI tool

Implementation of CPVis for a particular display device requires filling the drawing primitives with appropriate code using the mechanisms and graphic libraries specific for the device. Custom graphic objects are created by writing the code in C with calls of drawing primitives. Such objects can be displayed by another device provided that the drawing primitives are prepared for it. Notice that the concept of drawing primitives is similar to the prototype functions specified for the target platform interface (Sec. 5.2).
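To make the split concrete, the sketch below shows a device-independent bargraph object coded against the two rectangle primitives named above. Only the names DrawRectangle and FillRectangle come from the text; their signatures, the DrawBargraph function and its parameters are our assumptions, and a port would fill the primitive bodies with, e.g., LCD driver or GDI calls.

#include <stdint.h>

typedef uint32_t color_t;

/* Device-dependent primitives: prototypes only, filled in while porting. */
void DrawRectangle(int x, int y, int w, int h, color_t border, int width);
void FillRectangle(int x, int y, int w, int h, color_t fill);

/* Device-independent bargraph: the fill height follows a process
   variable, exactly the idea described in the text above. */
void DrawBargraph(int x, int y, int w, int h,
                  double value, double vmin, double vmax,
                  color_t border, color_t fill, color_t bg)
{
    double ratio = (value - vmin) / (vmax - vmin);
    if (ratio < 0.0) ratio = 0.0;          /* clamp to the scale range */
    if (ratio > 1.0) ratio = 1.0;
    int bar = (int)(ratio * (h - 2));      /* inner fill height        */

    FillRectangle(x + 1, y + 1, w - 2, h - 2, bg);            /* clear */
    FillRectangle(x + 1, y + h - 1 - bar, w - 2, bar, fill);  /* level */
    DrawRectangle(x, y, w, h, border, 1);                     /* frame */
}

Because DrawBargraph calls the primitives by name only, the same object code can be reused on any display for which the two primitives have been implemented.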
7. Communication and Implementations
7.1. Structure of Communication Data
By the original assumptions, the implementation engineers themselves develop configuration tools while porting the environment to particular platforms (along with target platform interfaces). To support this, the compiler-generated XML file (DCP extension), besides binary code and VMASM mnemonics (for debugging), also contains relevant data on global variables. The data are kept in VAR entities, such as those
in Fig. 13 for IN and OUT variables from the running example. The successive entries denote: ‑ LName – local name inside current variable scope
‑ PName – physical name including project name scope
‑ Addr – hexadecimal address (16‑bit version in Fig. 13) ‑ AddrType – address type; :gdlabel means global data label ‑ Size – variable size in bytes ‑ Type – variable type
‑ PType – physical type; $Default scope involves system elements
‑ VarFlags – flags related to the variable.
Variables imported from libraries have LIBRARY.VARIABLE as physical name to avoid conflicts. Besides elementary types, AddrTypes include a data label relative to the current POU (:rdlabel), global or relative labels to code (:gclabel, :rclabel), arrays and structures indicated by ?ARRAY?NUM and ?STRUCT?NUM (unique NUM). Flags involve: 1 – retain, 4 – constant, 8 – variable with address defined by AT, 0x4000 – global, 0x800000 – data channel. Other flags may be defined.

<VAR LName="IN" PName="RevSwitchST.IN" Addr="0000"
  AdrType="gdlabel" Size="1" Type="BOOL"
  PType="$DEFAULT.BOOL" VarFlags="00004000" />
<VAR LName="OUT" PName="RevSwitchST.OUT" Addr="0001"
  AdrType="gdlabel" Size="1" Type="BOOL"
  PType="$DEFAULT.BOOL" VarFlags="00004000" />
Fig. 13. <VAR> entities in the DCP file
Depending on the software solution, local or physical names and addresses may be used by configuration tools to bind variables to interfaces. The original paper [7] presented the CPCon tool to interface remote I/Os by means of physical data. Local data sufficed to configure the SCADA InTouch for a supervisory PC. An OPC server can also be designed given the XML DCP file. In fact, a simple server adapted from the publicly available LightOPC [33] is built into the CPDev softcontroller.
Besides the DCP-based configuration tool, a few variables can be interfaced to dedicated external ports by using the native block concept (as in Java Native Interface or .NET Platform Invoke). It was originally described in [54] and applied for a GPS module with NMEA serial communication. Native blocks are function blocks written in C/C++ but, unlike typical function blocks, they are deployed on the target platform as components of the VM, similarly to the target platform interface. To some extent they resemble hardware blocks from multifunction instruments. Software algorithms implemented as C/C++ native blocks are executed much faster than blocks written in IEC 61131-3 languages (Sec. 5.2).
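As an illustration of the native block concept, the sketch below shows what a C-coded function block compiled into the VM might look like. The paper only states that native blocks are C/C++ function blocks deployed as components of the VM; the io struct, the pointer-based binding to data memory and all names here are invented for the example.

#include <stdbool.h>

/* Hypothetical native block: an RS flip-flop coded in C instead of ST.
   The VM would call nb_rs_ff() each scan with pointers that the
   configuration tool has bound to addresses in VM data memory. */
typedef struct {
    const bool *set;     /* input: set request    */
    const bool *reset;   /* input: reset request  */
    bool       *q;       /* output: latched state */
} nb_rs_io_t;

void nb_rs_ff(nb_rs_io_t *io)
{
    if (*io->reset)      *io->q = false;   /* reset dominant */
    else if (*io->set)   *io->q = true;
}

Since such a block is compiled together with the VM rather than interpreted, it runs at native speed, which is the source of the roughly four-fold speedup mentioned in Sec. 5.2.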
7.2. Characteristic Implementations
As may be concluded from the above, the implementation engineer can port the CPDev environment to a particular platform after development of:
‑ target platform interface
‑ communication and I/O configuration tool
‑ deployment module to upload the binary code and configuration data into the controller.
Upgrading existing devices such as dedicated controllers, distributed I/O units or data loggers with IEC 61131-3 capabilities has been the main purpose of the implementations so far. Therefore the engineers could use major portions of low-level software while porting. Deployment has involved Ymodem for serial, ftp or tftp for Ethernet and GSM/GPRS for wireless.
Power management, propulsion control, heading control and other subsystems belonging to the MegaGuard ship automation and navigation system from Praxis [43] are the most significant implementations of CPDev. Each subsystem involves a control processor, I/O unit (or units) and TFT touch-panel (as in Fig. 11), all of them equipped with ARM CPUs. CAN is applied for communication with I/O and Ethernet (redundant) for TFT. The processor for controlling a setup composed of a diesel engine, electric generator and circuit breaker is shown in Fig. 14. Praxis software is written in ST. TFT displays consist of components from user-defined libraries. The NMEA serial/Ethernet protocol is used by marine electronic devices.

Fig. 14. Power management processor [43]

Substation automation and medium voltage grid control is provided by Remote Telecontrol Units (RTUs) from iGrid T&D [24], shown in Fig. 15. RTUs consist of an ARM processor, I/O board and communication interfaces for a number of protocols, including
IEC 60870 and 61850 dedicated for power systems. Documentation generation with complicated FBD and LD diagrams is important.
Fig. 15. Remote Telecontrol Unit [24]

The Mini-DCS system from LUMEL described in [7] was the first domestic implementation of CPDev (also found in the Philippines [25]). The StTr-PLC controller from NiT [37] for control of communal facilities distributed over a large area, such as water supplies, heating stations or sewage pumps, is another one. StTr-PLCs are monitored by a GSM/GPRS integrated communication platform also used for mobile applications. The recently developed PLC1 controller from BartCom [3] with local or distributed I/Os (Modbus, Profibus) is dedicated for intelligent homes and general automation.
8. Conclusions
Technical aspects of the current features of the CPDev multi-platform engineering environment have been presented. The runtime is based on a virtual machine executing intermediate VMASM code into which ST and IL programs are compiled. Programs written in graphical languages are translated into ST. Parametrization of the compiler and VM for commonly used 16- and 32-bit addressing and for eventual restriction of data types is provided by the Library Configuration File. Implementations on diverse hardware platforms are supported by prototypes of low-level functions introduced into the generic VM. HMI projects split into device-independent and device-dependent parts may be displayed by different devices. A general purpose XML file with compiler-generated data on global variables supports configuration of communications and I/Os. Native blocks may considerably decrease the VM overhead. Implementations have shown that equipment manufacturers themselves are able to port the environment to relevant devices.
Object-oriented programming and moving the multi-project runtime into multi-core processors, with each core executing a single project, are currently under investigation.
AUTHORS
Dariusz Rzońca* – Department of Computer and Control Engineering, Rzeszow University of Technology, ul. W. Pola 2, 35-959 Rzeszow, Poland, e-mail: drzonca@kia.prz.edu.pl.
Jan Sadolewski – Department of Computer and Control Engineering, Rzeszow University of Technology, ul. W. Pola 2, 35-959 Rzeszow, Poland, e-mail: js@kia.prz.edu.pl.
Andrzej Stec – Department of Computer and Control Engineering, Rzeszow University of Technology, ul. W. Pola 2, 35-959 Rzeszow, Poland, e-mail: astec@kia.prz.edu.pl.
Zbigniew Świder – Department of Computer and Control Engineering, Rzeszow University of Technology, ul. W. Pola 2, 35-959 Rzeszow, Poland, e-mail: swiderzb@kia.prz.edu.pl.
Bartosz Trybus – Department of Computer and Control Engineering, Rzeszow University of Technology, ul. W. Pola 2, 35-959 Rzeszow, Poland, e-mail: btrybus@kia.prz.edu.pl.
Leszek Trybus – Department of Computer and Control Engineering, Rzeszow University of Technology, ul. W. Pola 2, 35-959 Rzeszow, Poland, e-mail: ltrybus@kia.prz.edu.pl.
* Corresponding author
ACKNOWLEDGEMENTS
Help from Marcin Jamro a few years ago is acknowledged.
This project is financed by the Minister of Science and Higher Education of the Republic of Poland within the "Regional Initiative of Excellence" program for years 2019–2022. Project number 027/RID/2018/19, total amount granted 11 999 900 PLN.
REFERENCES
[1] 3S‑Smart Software Solutions GmbH. “CODESYS”. www.codesys.com. Accessed on: 2020‑02‑28.
[2] Axel S.r.l. “Logiclab”. www.axelsoftware.it/en/. Accessed on: 2020-02-28.
[3] Bart-Com. www.bart-com.pl. Accessed on: 2020-02-28.
[4] N. Bauer, R. Huuck, B. Lukoschus, and S. Engell. “A Unifying Semantics for Sequential Function Charts”. In: H. Ehrig, W. Damm, J. Desel, M. Große-Rhode, W. Reif, E. Schnieder, and E. Westkämper, eds., Integration of Software Specification Techniques for Applications in Engineering: Priority Program SoftSpez of the German Research Foundation (DFG), Final Report, Lecture Notes in Computer Science, 400–418. Springer, Berlin, Heidelberg, 2004, 10.1007/978-3-540-27863-4_22.
[5] S. Cavalieri, G. Puglisi, M. S. Scroppo, and L. Galvagno, “Moving IEC 61131-3 applications to a computing framework based on CLR
Virtual Machine”. In: 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA), 2016, 1–8, 10.1109/ETFA.2016.7733632.
[6] M. Chmiel and E. Hrynkiewicz, “Concurrent operation of processors in the bit-byte CPU of a PLC”, Control and Cybernetics, vol. 39, no. 2, 2010, 559–579.
[7] J. Cisek, W. Mikluszka, Z. Swider, and L. Trybus, “A Low-Cost DCS with Multifunction Instruments and CAN Bus”, IFAC Proceedings Volumes, vol. 34, no. 29, 2001, 64–69, 10.1016/S1474-6670(17)32794-5.
[8] K. D. Cooper and L. Torczon, Engineering a Compiler, Morgan Kaufmann: Boston, 2012, 10.1016/C2009-0-27982-7.
[9] COPA-DATA France. “STRATON–PLC”. www.straton-plc.com. Accessed on: 2020-02-28.
[10] W. Dai, V. Vyatkin, J. H. Christensen, and V. N. Dubinin, “Bridging Service-Oriented Architecture and IEC 61499 for Flexibility and Interoperability”, IEEE Transactions on Industrial Informatics, vol. 11, no. 3, 2015, 771–781, 10.1109/TII.2015.2423495.
[11] G. De Tommasi and A. Pironti, “An educational open-source tool for the design of IEC 61131-3 compliant automation software”. In: 2008 International Symposium on Power Electronics, Electrical Drives, Automation and Motion, 2008, 486–491, 10.1109/SPEEDHAM.2008.4581144.
[12] G. Fen and W. Ning, “A Transformation Algorithm of Ladder Diagram into Instruction List Based on AOV Digraph and Binary Tree”. In: TENCON 2006 – 2006 IEEE Region 10 Conference, 2006, 1–4, 10.1109/TENCON.2006.343937.
[13] E. Ferreira, R. Paulo, C. D. Da, and P. Henriques, “Integration of the ST language in a model-based engineering environment for control systems: An approach for compiler implementation”, Computer Science and Information Systems, vol. 5, no. 2, 2008, 87–101, 10.2298/CSIS0802087F.
[14] J.-Y. Fiset, Human-machine interface design for process control, Instrumentation, Systems, and Automation Society: Research Triangle Park, NC, 2009.
[15] P. Gaj, J. Jasperneite, and M. Felser, “Computer Communication Within Industrial Distributed Environment—a Survey”, IEEE Transactions on Industrial Informatics, vol. 9, no. 1, 2013, 182–189, 10.1109/TII.2012.2209668.
[16] GEB Automation. “GEB IDE”. www.gebautomation.com. Accessed on: 2020-02-28.
[17] P. Gsellmann, M. Melik-Merkumians, and G. Schitter, “Comparison of Code Measures of IEC 61131-3 and 61499 Standards for Typical Automation Applications”. In: 2018 IEEE 23rd International Conference on Emerging Technologies and Factory Automation (ETFA), vol. 1, 2018, 1047–1050, 10.1109/ETFA.2018.8502464.
[18] Z. Hajduk, B. Trybus, and J. Sadolewski, “Architecture of FPGA Embedded Multiprocessor Programmable Controller”, IEEE Transactions on Industrial Electronics, vol. 62, no. 5, 2015, 2952–2961, 10.1109/TIE.2014.2362888.
[19] P. E. Hart, N. J. Nilsson, and B. Raphael, “A Formal Basis for the Heuristic Determination of Minimum Cost Paths”, IEEE Transactions on Systems Science and Cybernetics, vol. 4, no. 2, 1968, 100–107, 10.1109/TSSC.1968.300136.
[20] L. Huang, W. Liu, and Z. Liu, “Algorithm of transformation from PLC ladder diagram to structured text”. In: 2009 9th International Conference on Electronic Measurement Instruments, 2009, 4-778–4-782, 10.1109/ICEMI.2009.5274701.
[21] ICS Triplex. “ISaGRAF”. www.isagraf.com. Accessed on: 2020-02-28.
[22] IEC. “IEC 61131‑3 – Programmable controllers – Part 3: Programming languages”, 2013. [23] IEC. “IEC 61499 – Function Blocks”, 2012.
[24] iGrid T&D. www.igrid-td.com. Accessed on: 2020-02-28.
[25] Instrument Science Systems, Inc. www.issi.com.ph/. Accessed on: 2020-02-28.
[26] H. I. Inzunza Villagómez, B. Pérez Arce, S. I. Hernández Ruiz, and J. A. López Corella, “Design and implementation of a development environment on ladder diagram (HT-PLC) for Arduino with Ethernet connection”. In: 2018 IEEE International Conference on Automation/XXIII Congress of the Chilean Association of Automatic Control (ICA-ACCA), 2018, 1–6, 10.1109/ICA-ACCA.2018.8609850.
[27] M. Jamro, “POU-Oriented Unit Testing of IEC 61131-3 Control Software”, IEEE Transactions on Industrial Informatics, vol. 11, no. 5, 2015, 1119–1129, 10.1109/TII.2015.2469257.
[28] M. Jamro and D. Rzonca, “Agile and hierarchical round-trip engineering of IEC 61131-3 control software”, Computers in Industry, vol. 96, 2018, 1–9, 10.1016/j.compind.2018.01.004.
[29] M. Jamro, D. Rzonca, and W. Rząsa, “Testing communication tasks in distributed control systems with SysML and Timed Colored Petri Nets model”, Computers in Industry, vol. 71, 2015, 77–87, 10.1016/j.compind.2015.03.007.
[30] M. Jamro and B. Trybus, “IEC 61131-3 programmable human machine interfaces for control devices”. In: 2013 6th International Conference on Human System Interactions (HSI), 2013, 48–55, 10.1109/HSI.2013.6577801.
[31] H. S. Kim, J. Y. Lee, and W. H. Kwon, “A compiler design for IEC 1131-3 standard languages of programmable logic controllers”. In:
SICE ’99. Proceedings of the 38th SICE Annual Conference. International Session Papers (IEEE Cat. No.99TH8456), 1999, 1155–1160, 10.1109/SICE.1999.788715.
[32] Y. Lee, J. Jeong, and Y. Son, “Design and implementation of the secure compiler and virtual machine for developing secure IoT services”, Future Generation Computer Systems, vol. 76, 2017, 350–357, 10.1016/j.future.2016.03.014.
[33] Light OPC Server. www.ipi.ac.ru. Accessed on: 2020-02-28.
[34] D. Massonie, C. Hacker, and T. Sowa, “Modeling Graphical and Speech User Interfaces with Widgets and Spidgets”. In: Speech Communication; 11. ITG Symposium, 2014, 1–4.
[35] A. Milik, “Multiple-Core PLC CPU Implementation and Programming”, Journal of Circuits, Systems and Computers, vol. 27, no. 10, 2018, 1850162, 10.1142/S0218126618501621.
[36] A. Milik and E. Hrynkiewicz, “Distributed PLC Based on Multicore CPUs – Architecture and Programming”, IFAC-PapersOnLine, vol. 49, no. 25, 2016, 1–7, 10.1016/j.ifacol.2016.12.001.
[37] Nauka i Technika. www.nit.pl. Accessed on: 2020‑02‑28. [38] nxtControl GmbH. www.nxtcontrol.com/. Accessed on: 2020‑02‑28.
[39] M. Okabe, “Development of processor directly executing IEC 61131‑3 language”. In: 2008 SICE Annual Conference, 2008, 2215–2218, 10.1109/SICE.2008.4655032. [40] OMG. “OMG Systems Modeling Language, V1.3”, 2012.
[41] L. B. Palma, V. Brito, J. Rosas, and P. Gil, “WEB PLC simulator for ST programming”. In: 2017 4th Experiment@International Conference (exp.at’17), 2017, 303–308, 10.1109/EXPAT.2017.7984410.
[42] C. G. Penteado, E. D. Moreno, and F. D. Pereira, “A microcontroller multicore in FPGAs: detailed architecture and case studies of embedded critical applications”, International Journal of Grid and Utility Computing, vol. 8, no. 3, 2017, 169, 10.1504/IJGUC.2017.087815.
[43] Praxis Automation Technology B.V. www.praxis-automation.nl. Accessed on: 2020-02-28.
[44] L. Pérez Cáceres, F. Pagnozzi, A. Franzin, and T. Stützle, “Automatic Configuration of GCC Using Irace”. In: E. Lutton, P. Legrand, P. Parrend, N. Monmarché, and M. Schoenauer, eds., Artificial Evolution, Cham, 2018, 202–216, 10.1007/978-3-319-78133-4_15.
[45] J. Quiroga and F. Ortin, “SSA Transformations to Facilitate Type Inference in Dynamically Typed Code”, The Computer Journal, vol. 60, no. 9, 2017, 1300–1315, 10.1093/comjnl/bxw108.
[46] J. Richter, CLR via C#, Microsoft Press: Redmond, Washington, 2012.
[47] D. Rzońca, J. Sadolewski, A. Stec, Z. Świder, B. Trybus, and L. Trybus, “Mini-DCS system programming in IEC 61131-3 structured text”, Journal of Automation, Mobile Robotics and Intelligent Systems, vol. 2, no. 3, 2008, 48–54.
[48] S. Shin, M. Kwon, and S. Rho, “Whimori CDK: A Control Program Development Kit”. In: 2009 International Conference on Computing, Engineering and Information, 2009, 115–118, 10.1109/ICC.2009.33.
[49] H. Simon and S. Kowalewski, “Static analysis of Sequential Function Charts using abstract interpretation”. In: 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA), 2016, 1–4, 10.1109/ETFA.2016.7733648.
[50] T. Skripcak, P. Tanuska, U. Konrad, and N. Schmeisser, “Toward Nonconventional Human–Machine Interfaces for Supervisory Plant Process Monitoring”, IEEE Transactions on Human‑Machine Systems, vol. 43, no. 5, 2013, 437–450, 10.1109/THMS.2013.2279006.
[51] VDI/VDE 3699 Process control using display screens, 2015.
[52] K. Thramboulidis, “A cyber–physical system-based approach for industrial automation systems”, Computers in Industry, vol. 72, 2015, 92–102, 10.1016/j.compind.2015.04.006.
[53] E. Tisserant, L. Bessard, and M. de Sousa, “An Open Source IEC 61131-3 Integrated Development Environment”. In: 2007 5th IEEE International Conference on Industrial Informatics, vol. 1, 2007, 183–187, 10.1109/INDIN.2007.4384753.
[54] B. Trybus, “Development and Implementation of IEC 61131-3 Virtual Machine”, Theoretical and Applied Informatics, vol. 23, no. 1, 2011, 10.2478/v10179-011-0002-z.
[55] B. Venners, Inside the Java Virtual Machine, McGraw-Hill: New York, 1997.
[56] B. Vogel-Heuser, D. Schütz, T. Frank, and C. Legat, “Model-driven engineering of Manufacturing Automation Software Projects – A SysML-based approach”, Mechatronics, vol. 24, no. 7, 2014, 883–897, 10.1016/j.mechatronics.2014.05.003.
[57] M. Zhang, Y. Lu, and T. Xia, “The Design and Implementation of Virtual Machine System in Embedded SoftPLC System”. In: 2013 International Conference on Computer Sciences and Applications, 2013, 775–778, 10.1109/CSA.2013.185.
[58] C. Zhou and H. Chen, “Development of a PLC Virtual Machine Orienting IEC 61131-3 Standard”. In: 2009 International Conference on Measuring Technology and Mechatronics Automation, vol. 3, 2009, 374–379, 10.1109/ICMTMA.2009.422.
[59] R. Zurawski, ed., Industrial Communication Technology Handbook, Industrial Information Technology Series, CRC Press, Taylor & Francis Group: Boca Raton, London, New York, 2015.
APPLICATION OF UNINORMS TO AGGREGATE UNCERTAINTY FROM MANY CLASSIFIERS
Submitted: 29th November 2019; accepted: 30th January 2020
Paweł Drygaś, Jan G. Bazan, Piotr Pusz, Maksymilian Knap
DOI: 10.14313/JAMRIS/4-2019/41
Abstract: In this contribution we want to present the concept of the uncertainty area of classifiers and an algorithm that uses uninorms to minimize the area of uncertainty in the prediction of new objects by complex classifiers.
Keywords: aggregation function, decision making, classifier, uncertainty area
1. Introduction
The main task of classification, constituting one of the important methods of data mining, is the creation of models, called classifiers (also classifying algorithms or decision algorithms), describing dependencies between the given class (category) of objects and their characteristics. Discovered classification models are then used to classify new objects of unknown class membership (see e.g., [27]). We will consider a problem of approximation of concepts (classes) based on a finite set of observations containing examples of positive and negative concepts. This finite set of observations may be represented using data tables. In this representation individual observations correspond to rows of a given data table, and attributes to its columns. In this paper, we consider decision tables of the form $T = (U, A, d)$ in Pawlak's sense (cf. [23]) for representation of data tables, where $U$ is a set of objects (rows) in the data table, $A$ is a set of attributes (columns) in the data table, and $d$ is a distinguished attribute from the set $A$ called a decision attribute (in this paper, we consider problems for the case of a 2-class classification, e.g., for decision classes YES and NO or for decision classes 0 and 1, etc.).
To classify an object, the classifier assigns to it a certain weight (classification coefficient). For a set value of the threshold parameter $t \in (0,1)$, if the classification weight of the test object obtained from the classifier is greater than $t$, the object is classified into the main class (e.g., YES). However, if the weight is less than or equal to $t$, then the object is classified into a subordinate class (e.g., NO). In some neighborhood of the threshold $t$, however, very small differences in the classification weight can lead to opposing decisions. In order to avoid incorrect classification, we propose to introduce an uncertainty area: if the classifier returns a classification weight from a certain neighborhood of the threshold, it abstains from the decision. When classifying objects, we can construct different classifiers. Often the decisions obtained differ for some elements. Therefore, a conflict appears
between the classifiers that operate on the basis of different sources or parameters, which must be resolved in order to finally classify the test object. For this purpose we suggest aggregation of the values obtained by the individual classifiers using uninorms. As a result, we build a new compound classifier, which additionally reduces the measure of the uncertainty area.
The paper is structured as follows. In Section 2, notions connected with aggregation operators are recalled. In Section 3, the motivation to consider new versions of the classifier is provided, as well as a description of them.
2. Aggregation Operators
Firstly, we recall the definition of an aggregation function. More details can be found in [5, 7, 8, 19, 28].
Definition 1 (cf. [6]). A function $A\colon [0,1]^n \to [0,1]$, $n \in \mathbb{N}$, $n \geq 2$, which is increasing in each variable, i.e.
$$(\forall_{1 \leq i \leq n}\; s_i \leq t_i) \Rightarrow A(s_1, \ldots, s_n) \leq A(t_1, \ldots, t_n), \qquad (1)$$
for all $s_1, \ldots, s_n, t_1, \ldots, t_n \in [0,1]$, is called an aggregation function (aggregation operator) if $A(0, \ldots, 0) = 0$ and $A(1, \ldots, 1) = 1$.
Definition 2 ([6]). Let $n \in \mathbb{N}$, $n \geq 2$. $A\colon \mathbb{R}^n \to \mathbb{R}$ is a mean (average function) if it is increasing, i.e.
$$\forall_{s,t \in \mathbb{R}^n}\; (\forall_{1 \leq k \leq n}\; s_k \leq t_k) \Rightarrow A(s_1, \ldots, s_n) \leq A(t_1, \ldots, t_n),$$
and idempotent, i.e.
$$\forall_{t \in \mathbb{R}}\; A(t, \ldots, t) = t.$$
Lemma 1. For every mean $A$ we have
$$\forall_{t \in \mathbb{R}^n}\quad \min_{1 \leq k \leq n} t_k \leq A(t_1, \ldots, t_n) \leq \max_{1 \leq k \leq n} t_k. \qquad (2)$$
From the above lemma we see that the mean can be restricted to any interval. Our domain of interest is the interval $[0,1]$. In this case, the mean is an aggregation function.
Example 1. Let $\varphi\colon [0,1] \to [0,1]$ be an increasing bijection and $t = (t_1, \ldots, t_n)$. We recall here two important examples of aggregation functions: the quasi-arithmetic mean (cf. [1])
$$A(t_1, \ldots, t_n) = \varphi^{-1}\left(\frac{1}{n}\sum_{k=1}^{n}\varphi(t_k)\right),$$
and the generalized weighted average (cf. [6])
$$A(t_1, \ldots, t_n) = \varphi^{-1}\left(\sum_{k=1}^{n} w_k\,\varphi(t_k)\right).$$
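As a quick illustration of Example 1 (ours, not from the paper), the following Python sketch computes a quasi-arithmetic mean; the choice $\varphi = \ln$, $\varphi^{-1} = \exp$ is an assumption for the demo and yields the geometric mean:

```python
import math

def quasi_arithmetic_mean(values, phi, phi_inv):
    """phi_inv of the arithmetic mean of phi(t_k): the quasi-arithmetic mean."""
    return phi_inv(sum(phi(t) for t in values) / len(values))

# With phi = ln (strictly increasing on (0, 1]) the quasi-arithmetic mean
# becomes the geometric mean of the inputs.
print(quasi_arithmetic_mean([0.4, 0.9], math.log, math.exp))  # ~0.6 = sqrt(0.4 * 0.9)
```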
Uninorms are a special kind of aggregation functions that generalise both t-norms and t-conorms (see [18, 21]). Uninorms are increasing, commutative and associative binary operators on the unit interval having a neutral element $e \in [0,1]$. They appeared for the first time under the term uninorm in [29] (although the very related operators called Dombi's operators were already studied in [11]), with the idea of allowing certain kinds of aggregation operators combining the maximum and the minimum, depending on an element $e \in (0,1)$. This idea was studied more deeply in [18], where the structure of such operators was analysed and the two first classes of uninorms were introduced: uninorms in $\mathcal{U}_{\min}$ and $\mathcal{U}_{\max}$, and representable uninorms (extremely related with Dombi's operators introduced in [11]). We will assume the basic theory of t-norms and t-conorms. The definitions, notations and results on them can be found in [2, 21]. We will just give in this section some basic facts about uninorms. More details can be found in [10, 14, 18, 20, 25].
Definition 3 ([29]). An operation $U\colon [0,1]^2 \to [0,1]$ is called a uninorm if it is commutative, associative, increasing and has a neutral element $e \in [0,1]$. A uninorm with neutral element $e = 1$ is called a triangular norm and a uninorm with neutral element $e = 0$ is called a triangular conorm.
The general structure of a uninorm can be represented by the following theorems (cf. Figure 1).
Theorem 1 ([18]). If a uninorm $U$ has a neutral element $e \in (0,1)$, then there exist a triangular norm $T$ and a triangular conorm $S$ such that
$$U(x,y) = \begin{cases} e\,T\!\left(\dfrac{x}{e}, \dfrac{y}{e}\right) & \text{if } (x,y) \in [0,e]^2, \\[4pt] e + (1-e)\,S\!\left(\dfrac{x-e}{1-e}, \dfrac{y-e}{1-e}\right) & \text{if } (x,y) \in [e,1]^2. \end{cases}$$
Lemma 2 (cf. [18]). If $U$ is increasing and has a neutral element $e \in (0,1)$, then $\min \leq U \leq \max$ in $A(e) = [0,e) \times (e,1] \cup (e,1] \times [0,e)$. Furthermore, if $U$ is associative, then $U(0,1), U(1,0) \in \{0,1\}$.
Fig. 1. The structure of uninorms: $T^*$ (the rescaled t-norm) on $[0,e]^2$, $S^*$ (the rescaled t-conorm) on $[e,1]^2$, and $\min \leq U \leq \max$ on the remaining regions
The most studied classes of uninorms are:
• Uninorms in $\mathcal{U}_{\min}$ (respectively $\mathcal{U}_{\max}$), those given by minimum (respectively maximum) in $A(e)$, that were characterized in [18].
• Representable uninorms, those that have additive generators. They were firstly introduced in [18] and then they were characterized as those uninorms that are continuous in $[0,1]^2 \setminus \{(0,1),(1,0)\}$ in [24], and also as those uninorms that are strictly increasing and continuous in the open unit square in [17].
• Uninorms continuous in the open unit square $(0,1)^2$, that were characterized in [12, 20] and that clearly include the representable ones.
• Idempotent uninorms, those such that $U(x,x) = x$ for all $x \in [0,1]$. Their characterization was given in [25].
• Locally internal uninorms, those such that $U(x,y) \in \{x,y\}$ for all $(x,y) \in [0,1]^2$. This class has been studied in [12, 13, 15] and a recent characterization of uninorms in this class having continuous underlying operators has been given in [14]. This class includes all idempotent uninorms.
• Uninorms with continuous underlying operators. This class is characterized via the ordinal sum construction of Clifford [22]. Again it is clear that this class includes all the previous ones except for the case of uninorms in $\mathcal{U}_{\min}$ and $\mathcal{U}_{\max}$ and for the case of locally internal uninorms.
In what follows we recall some results about the structure of several classes of uninorms.
Theorem 2 ([18]). Let $U\colon [0,1]^2 \to [0,1]$ be a uninorm with neutral element $e \in (0,1)$. Then, the sections $x \mapsto U(x,1)$ and $x \mapsto U(x,0)$ are continuous at each point except perhaps at $e$ if and only if $U$ is given by one of the following formulas:
(a) If $U(0,1) = 0$, then
$$U(x,y) = \begin{cases} e\,T\!\left(\dfrac{x}{e}, \dfrac{y}{e}\right) & \text{if } (x,y) \in [0,e]^2, \\[4pt] e + (1-e)\,S\!\left(\dfrac{x-e}{1-e}, \dfrac{y-e}{1-e}\right) & \text{if } (x,y) \in [e,1]^2, \\[4pt] \min(x,y) & \text{otherwise,} \end{cases}$$
for some t-norm $T$ and some t-conorm $S$.
(b) If $U(0,1) = 1$, then the same structure holds, changing minimum by maximum in $A(e)$.
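To illustrate Theorem 2(a), the sketch below (our Python illustration; the function name and defaults are ours) assembles a conjunctive uninorm from a t-norm $T$ and a t-conorm $S$; with the defaults $T = \min$ and $S = \max$ it yields an idempotent uninorm:

```python
def uninorm_from_T_S(x, y, e, T=min, S=max):
    """Uninorm per Theorem 2(a): scaled T on [0,e]^2, scaled S on [e,1]^2, min elsewhere."""
    if x <= e and y <= e:
        return e * T(x / e, y / e)
    if x >= e and y >= e:
        return e + (1 - e) * S((x - e) / (1 - e), (y - e) / (1 - e))
    return min(x, y)  # mixed region A(e); use max here for case (b)

print(uninorm_from_T_S(0.2, 0.3, e=0.5))  # 0.2  (t-norm region)
print(uninorm_from_T_S(0.7, 0.9, e=0.5))  # 0.9  (t-conorm region)
print(uninorm_from_T_S(0.2, 0.9, e=0.5))  # 0.2  (min in A(e))
```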
Example 2. Let $e \in (0,1)$. The following operations are uninorms satisfying the conditions of Theorem 2:
$$U_1(x,y) = \begin{cases} \min(x,y) & \text{if } (x,y) \in [0,e]^2, \\ \max(x,y) & \text{otherwise,} \end{cases}$$
$$U_2(x,y) = \begin{cases} \max(x,y) & \text{if } (x,y) \in [e,1]^2, \\ \min(x,y) & \text{otherwise.} \end{cases}$$
The whole characterization of idempotent uninorms was definitively given in [25] as follows.
Theorem 3 ([25]). Consider $e \in (0,1)$. The following items are equivalent:
(i) $U$ is an idempotent uninorm with neutral element $e$.
(ii) There exists a decreasing function $g\colon [0,1] \to [0,1]$ with fixed point $e$, which is Id-symmetrical, such that $U$ is given as follows:
$$U(x,y) = \begin{cases} \min(x,y) & \text{if } y < g(x) \text{ or } (y = g(x) \text{ and } x < g(g(x))), \\ \max(x,y) & \text{if } y > g(x) \text{ or } (y = g(x) \text{ and } x > g(g(x))), \\ x \text{ or } y & \text{if } y = g(x) \text{ and } x = g(g(x)), \end{cases}$$
being commutative on the set of points $(x,y)$ such that $y = g(x)$ and $x = g(g(x))$.
Example 3. The operations $U_1$ and $U_2$ from Example 2 are idempotent uninorms.
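For a concrete instance of Theorem 3, one may take $g(x) = 1 - x$ (decreasing, Id-symmetrical, fixed point $e = 0.5$) and resolve the boundary case by the minimum; a minimal Python sketch (our illustration, not from the paper):

```python
def idempotent_uninorm(x, y):
    """Idempotent uninorm from Theorem 3 with g(x) = 1 - x, hence e = 0.5."""
    if x + y <= 1:          # y <= g(x): take the minimum (boundary resolved by min)
        return min(x, y)
    return max(x, y)        # y > g(x): take the maximum

print(idempotent_uninorm(0.5, 0.3))  # 0.3 -- e = 0.5 acts as the neutral element
print(idempotent_uninorm(0.7, 0.7))  # 0.7 -- idempotency
```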
As it turns out, there are no continuous uninorms with neutral element $e \in (0,1)$. Therefore, continuity is considered on some subset.
Theorem 4 ([18]). Let $U\colon [0,1]^2 \to [0,1]$ be a binary operation and $e \in (0,1)$. The following statements are equivalent:
(i) $U$ is a uninorm with neutral element $e$ that is strictly increasing on $]0,1[^2$ and continuous on $[0,1]^2 \setminus \{(0,1),(1,0)\}$.
(ii) There exists a strictly increasing bijection $u\colon [0,1] \to [-\infty,+\infty]$ with $u(e) = 0$ such that for all $(x,y) \in [0,1]^2$ it holds that
$$U(x,y) = u^{-1}(u(x) + u(y)),$$
where in case of a conjunctive uninorm $U$ we adopt the convention $(+\infty) + (-\infty) = -\infty$, while in case of a disjunctive uninorm we adopt the convention $(+\infty) + (-\infty) = +\infty$.
If this representation holds, then the function $u$ is uniquely determined by $U$ up to a positive multiplicative constant, and it is called an additive generator of the uninorm $U$.
Example 4. Let $e \in (0,1)$. The following operations are uninorms satisfying the conditions of the above theorem:
$$U_3(x,y) = \begin{cases} 0 & \text{if } (x,y) \in \{(0,1),(1,0)\}, \\[4pt] \dfrac{\left(\frac{1}{e}-1\right)^2 xy}{\left(\frac{1}{e}-1\right)^2 xy + \left(\frac{1}{e}-1\right)(1-x)(1-y)} & \text{otherwise,} \end{cases}$$
$$U_4(x,y) = \begin{cases} 1 & \text{if } (x,y) \in \{(0,1),(1,0)\}, \\[4pt] \dfrac{\left(\frac{1}{e}-1\right)^2 xy}{\left(\frac{1}{e}-1\right)^2 xy + \left(\frac{1}{e}-1\right)(1-x)(1-y)} & \text{otherwise.} \end{cases}$$
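Under our reading of Example 4, $U_3$ has additive generator $u(x) = \ln\!\big((\tfrac{1}{e}-1)\,\tfrac{x}{1-x}\big)$ (note $u(e) = 0$, as Theorem 4 requires). A minimal Python sketch (ours; the conjunctive corner convention reproduces $U_3$):

```python
import math

def make_representable_uninorm(e):
    """Representable uninorm via the generator u(x) = ln(c*x/(1-x)), c = 1/e - 1."""
    c = 1 / e - 1

    def u(x):              # additive generator, u(e) = 0
        return math.log(c * x / (1 - x))

    def u_inv(z):          # inverse generator: w = exp(z) / (c + exp(z))
        w = math.exp(z)
        return w / (c + w)

    def U(x, y):
        if 0.0 in (x, y):  # conjunctive convention: 0 wins at the corners, U(0,1) = 0
            return 0.0
        if 1.0 in (x, y):
            return 1.0
        return u_inv(u(x) + u(y))

    return U

U3 = make_representable_uninorm(e=0.5)
print(U3(0.5, 0.3))  # ~0.3: e = 0.5 is the neutral element
print(U3(0.8, 0.8))  # ~0.941: two high weights reinforce each other
```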
3. Classifiers
During classification, the classifier assigns a certain classification weight to the object. For a set value of the threshold parameter $t \in (0,1)$, the test objects are tested in such a way that if the classification weight of the test object obtained from the classifier is greater than $t$, the object is classified into the main class (e.g., YES). However, if the weight is less than or equal to $t$, then the object is classified into a subordinate class (e.g., NO). In this way, we obtain the decision value for the test object, which may be correct (consistent with
the actual decision in the test table) or incorrect (we make a mistake in the classification).
To calculate the global classification quality of a given classifier with the fixed parameter $t$, we use the accuracy of the classification, which is the quotient of the number of correct classifications to the number of all classifications. Accuracy calculated for the test objects from the main class is called sensitivity, and the accuracy calculated for the test objects from a subordinate class we call specificity.
If the sensitivity is unsatisfactory, e.g., in medicine, when trying to predict the occurrence of a disease, it may turn out that the sensitivity of the classification to the main class "sick" is too low, and we can balance between sensitivity and specificity, i.e. increase sensitivity at the expense of decreasing specificity. This approach leads to the concept of the ROC curve, where each point of the ROC curve corresponds to one setting of the classifier's performance (the parameter $t$). The ROC curve shows the dependence of sensitivity on the error of the first type (one minus specificity) during calibration of the classifier (at various threshold settings). For classifiers with this property (sensitivity and specificity regulation), the AUC parameter is used to assess their quality. AUC is a measure of the quality of a classifier which is the area under the ROC curve (cf. [16, 26]). The greater the AUC value, the better the classifier.
3.1. Uncertainty Area
As was mentioned above, in the method of sensitivity and specificity regulation, the sensitivity of the classifier can be increased at the expense of a decrease in specificity. This is not always an acceptable situation. It may happen that in the case of two decision classes, main (e.g., YES) and subordinate (e.g., NO), instead of obtaining a higher sensitivity for the main class and reducing the specificity for the subordinate class, we would prefer that the classifier does not classify some cases, agreeing to reduce the coverage of the entire test data.
In other words, in the case of classifying test objects, we introduce the so-called area of uncertainty, for which we abstain from the decision because we are not sure enough about it. Thanks to this, the classifier may make fewer mistakes while classifying, but from time to time, instead of the decision value, the classifier returns "I do not make decisions" or "I do not know". Since most errors of classification are made when the classification weight is close to the threshold parameter, we will refrain from the decision in this area. In this situation, the aforementioned ROC curve generation concept can be modified by introducing the uncertainty area. For this purpose, instead of the simple threshold parameter $t$, we consider a parameter $\varepsilon$ such that $\varepsilon \in (0, \min(t, 1-t))$. For the set values of the parameters $t$ and $\varepsilon$, classification of the test objects is performed in such a way that if the classification weight of the test object obtained from the classifier is greater than $t + \varepsilon$, then the object is classified into the main class (e.g., YES). On the other hand, if the classification
weight is less than or equal to $t - \varepsilon$, then the object is classified into a subordinate class (e.g., NO).
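The resulting three-way decision rule can be sketched as follows (a minimal Python illustration; the function name and class labels are ours):

```python
def decide(weight, t, eps):
    """Three-way decision with an uncertainty band of width 2*eps around t."""
    assert 0 < eps < min(t, 1 - t), "eps must satisfy 0 < eps < min(t, 1-t)"
    if weight > t + eps:
        return "main class"         # e.g., YES
    if weight <= t - eps:
        return "subordinate class"  # e.g., NO
    return "no decision"            # weight fell into the uncertainty area

print(decide(0.91, t=0.5, eps=0.1))  # main class
print(decide(0.55, t=0.5, eps=0.1))  # no decision
```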
Fig. 2. The uncertainty area: the band of weights between $t-\varepsilon$ and $t+\varepsilon$ is left undecided
Algorithm 1: Classification of a test object by the $M$ classifier
Input:
1) training data set represented by decision table $T = (U, A, d)$,
2) collection $C_1, \ldots, C_m$ of classifiers,
3) test object $u$,
4) aggregation $M$,
5) threshold parameters $t$ and $\varepsilon$.
Output: The membership of the object $u$ to the "main class" or to the "subordinate class" or "no decision"
begin
  for $i := 1$ to $m$ do
    Compute a certain weight ("main class" membership probability) for the given test object $u$ using the classifier $C_i$ and assign it to $p_i$
  end
  Determine the final weight $p$ for the object $u$ by aggregating (with a use of the mean $M$, e.g., arithmetic mean) the weights $p_1, \ldots, p_m$.
  if $p > t + \varepsilon$ then
    return $u$ belongs to the "main class"
  else if $p \leq t - \varepsilon$ then
    return $u$ belongs to the "subordinate class"
  else
    return we abstain from the decision
  end
end
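A direct transcription of Algorithm 1 into Python might look as follows (a sketch under our naming conventions; each classifier is modeled as a function returning the "main class" weight):

```python
def classify_M(classifiers, obj, mean, t, eps):
    """Compound classifier M: aggregate the individual weights with a mean."""
    weights = [C(obj) for C in classifiers]   # p_1, ..., p_m
    p = mean(weights)                         # final weight
    if p > t + eps:
        return "main class"
    if p <= t - eps:
        return "subordinate class"
    return "no decision"

arithmetic = lambda ws: sum(ws) / len(ws)
# two hypothetical classifiers whose weights straddle the threshold t = 0.5
clfs = [lambda obj: 0.52, lambda obj: 0.48]
print(classify_M(clfs, None, arithmetic, t=0.5, eps=0.05))  # no decision
```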
In other cases, the object is classified into the so-called uncertainty area (see Figure 2). Similarly as before, for the calculation of the global classification quality of the given classifier with the parameters $t$ and $\varepsilon$, we use classification accuracy for objects from the main class, called sensitivity, and accuracy calculated for the test objects from a subordinate class (specificity). In addition, for each experiment we obtain a third parameter, which we call a measure of the uncertainty of classification, which is the quotient of the number of test objects classified to the uncertainty area and the number of all test objects.
3.2. Two Versions of the Classifier
When classifying objects, we can construct different classifiers (based on different systems or based on different data sources, e.g., using several diagnostic devices – see [3, 4, 9]). Often the decisions obtained differ for a certain class of test elements. Therefore, a conflict appears between the classifiers that operate on the basis of different sources or parameters, which must be resolved in order to finally classify the test object. To get a final decision, we should create a new classifier that will take into account the previous results. For this purpose we suggest aggregation of the values obtained by the individual classifiers. As a result, we build a new compound classifier.
In this article, we suggest aggregating the classification weights obtained by the individual classifiers, and we propose two algorithms.
The first is when we use the means, denoted by Algorithm $M$ (Algorithm 1); in fact, we get a complex classifier. Unfortunately, if we apply a quality assessment method that takes into account the uncertainty area, then it turns out that the measure of the uncertainty area for the $M$ classifier is very high. This is due to Lemma 1, because, for example, for two classifiers with classification weights $p_1$ and $p_2$ that classify an object to the uncertainty area (weights $p_1$, $p_2$ belong to the interval $[t - \varepsilon, t + \varepsilon]$), our classifier $M$ will classify the object to the uncertainty area (the weight $p$ of the aggregated classification will belong to the same interval). In addition, if only one
of the weights belongs to the interval $[t - \varepsilon, t + \varepsilon]$, the weight of the final classification may belong to that interval. Thus, by creating a classifier in this way, we increase the measure of the uncertainty area, not necessarily significantly increasing the accuracy of classification of the new classifier $M$ in relation to the accuracy of the aggregated classifiers.
Therefore, in this paper, we also propose another method for aggregating classifiers, based on the so-called neutral element, being a value from the interval $(0,1)$. This method will be denoted by Algorithm $U$ (Algorithm 2).
We assume here that the neutral element $e$ of the uninorm $U$ will be equal to the threshold parameter $t$. Then, using Lemma 2 and Theorem 1, for example, for two classifiers with classification weights $p_1$ and $p_2$ belonging to the interval $[0, t)$, our classifier $U$ assigns a classification weight to the object which is less than or equal to $\min(p_1, p_2)$. That is, if at least one of the weights $p_1$, $p_2$ is less than $t - \varepsilon$, then the object will be classified to the subordinate class. In other
Algorithm 2: Classification of a test object by the $U$ classifier
Input:
1) training data set represented by decision table $T = (U, A, d)$,
2) collection $C_1, \ldots, C_m$ of classifiers,
3) test object $u$,
4) uninorm $U$,
5) threshold parameters $t$ and $\varepsilon$.
Output: The membership of the object $u$ to the "main class" or to the "subordinate class" or "no decision"
begin
  for $i := 1$ to $m$ do
    Compute a certain weight ("main class" membership probability) for the given test object $u$ using the classifier $C_i$ and assign it to $p_i$
  end
  Determine the final weight $p$ for the object $u$ by aggregating (with a use of the uninorm $U$, e.g., a representable uninorm) the weights $p_1, \ldots, p_m$.
  if $p > t + \varepsilon$ then
    return $u$ belongs to the "main class"
  else if $p \leq t - \varepsilon$ then
    return $u$ belongs to the "subordinate class"
  else
    return we abstain from the decision
  end
end
cases, the object can be classified to the subordinate class or to the uncertainty area.
If the weights $p_1$ and $p_2$ belong to the interval $[t, 1]$, then the classifier $U$ assigns a classification weight to the object which is greater than or equal to $\max(p_1, p_2)$. This means that if at least one of the weights $p_1$, $p_2$ is greater than $t + \varepsilon$, then this object will be assigned to the main class. In other cases, the object can be classified to the main class or to the uncertainty area.
In both cases the degree of membership of the object to the main class or the subordinate class is increased. If the object is classified to the uncertainty area with weights $p_1$ and $p_2$ such that $p_1 < t < p_2$, then the classifier $U$ will classify the object to the uncertainty area.
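Algorithm 2 differs from Algorithm 1 only in the aggregation step. The sketch below (ours) folds the weights with a uninorm whose neutral element is $e = t$, reusing make_representable_uninorm from the sketch after Example 4; note how two weights of 0.6, each inconclusive on its own, reinforce each other above $t + \varepsilon$, while their arithmetic mean would still abstain:

```python
from functools import reduce

def classify_U(classifiers, obj, uninorm, t, eps):
    """Compound classifier U: aggregate the weights with a uninorm whose e = t."""
    weights = [C(obj) for C in classifiers]
    p = reduce(uninorm, weights)          # associativity allows pairwise folding
    if p > t + eps:
        return "main class"
    if p <= t - eps:
        return "subordinate class"
    return "no decision"

U = make_representable_uninorm(e=0.5)     # representable uninorm, e = t = 0.5
clfs = [lambda obj: 0.6, lambda obj: 0.6] # two mild "yes" votes, each inconclusive alone
print(classify_U(clfs, None, U, t=0.5, eps=0.12))  # main class: U(0.6, 0.6) ~ 0.69 > 0.62
```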
4. Conclusion
In this paper, we presented the concept of the uncertainty area and the algorithm allowing the creation of a new classifier based on the known classifiers, which should significantly reduce the measure of the uncertainty
area. The next step will be to implement the algorithm and test the quality of the created classifier based on actual data.
AUTHORS
Paweł Drygaś∗ – University of Rzeszów, College of Natural Sciences, Institute of Information Technology, ul. Pigonia 1, 35-310 Rzeszów, Poland, e-mail: paweldrs@ur.edu.pl.
Jan G. Bazan – University of Rzeszów, College of Natural Sciences, Institute of Information Technology, ul. Pigonia 1, 35-310 Rzeszów, Poland, e-mail: bazan@ur.edu.pl.
Piotr Pusz – University of Rzeszów, College of Natural Sciences, Institute of Information Technology, ul. Pigonia 1, 35-310 Rzeszów, Poland, e-mail: ppusz@ur.edu.pl.
Maksymilian Knap – University of Rzeszów, College of Natural Sciences, Institute of Information Technology, ul. Pigonia 1, 35-310 Rzeszów, Poland, e-mail: mknap@ur.edu.pl.
∗Corresponding author
ACKNOWLEDGEMENTS
This work was partially supported by the Centre for Innovation and Transfer of Natural Sciences and Engineering Knowledge in Rzeszów, through Project Number RPPK.01.03.00-18-001/10.
REFERENCES
[1] J. Aczél, Lectures on Functional Equations and Their Applications, Academic Press: New York, 1966.
[2] C. Alsina, M. J. Frank, and B. Schweizer, Associative Functions: Triangular Norms and Copulas, World Scientific: New Jersey, 2006, 10.1142/6036.
[3] J. G. Bazan. “Hierarchical Classifiers for Complex Spatio-temporal Concepts”. In: J. F. Peters, A. Skowron, and H. Rybiński, eds., Transactions on Rough Sets IX, Lecture Notes in Computer Science, 474–750. Springer, Berlin, Heidelberg, 2008.
[4] J. G. Bazan and M. Szczuka, “The Rough Set Exploration System”. In: J. F. Peters and A. Skowron, eds., Transactions on Rough Sets III, Berlin, Heidelberg, 2005, 37–56, 10.1007/11427834_2.
[5] G. Beliakov, A. Pradera, and T. Calvo, Aggregation Functions: A Guide for Practitioners, Springer: Berlin, New York, 2008.
[6] T. Calvo, A. Kolesárová, M. Komorníková, and R. Mesiar. “Aggregation Operators: Properties, Classes and Construction Methods”. In: T. Calvo, G. Mayor, and R. Mesiar, eds., Aggregation Operators: New Trends and Applications, Studies in Fuzziness and Soft Computing, 3–104. Physica-Verlag HD, Heidelberg, 2002.
[7] T. Calvo and G. Mayor, “Remarks on two types of extended aggregation functions”, Tatra Moun‑ tains Mathematical Publications, vol. 16, 1999, 235–253.
[8] T. Calvo, G. Mayor, and R. Mesiar, eds., Aggrega‑ tion Operators: New Trends and Applications, Stu‑ dies in Fuzziness and Soft Computing, Physica‑ Verlag Heidelberg, 2002, 10.1007/978‑3‑7908‑ 1787‑4.
[9] T. Cover and P. Hart, “Nearest neighbor pat‑ tern classi�ication”, IEEE Transactions on Infor‑ mation Theory, vol. 13, no. 1, 1967, 21–27, 10.1109/TIT.1967.1053964, Conference Name: IEEE Transactions on Information Theory.
[10] B. De Baets, “Idempotent uninorms”, Euro‑ pean Journal of Operational Research, vol. 118, no. 3, 1999, 631–642, 10.1016/S0377‑ 2217(98)00325‑7. [11] J. Dombi, “Basic concepts for a theory of evalu‑ ation: The aggregative operator”, European Jour‑ nal of Operational Research, vol. 10, no. 3, 1982, 282–293, 10.1016/0377‑2217(82)90227‑2. [12] P. Drygaś , “On the structure of continuous uni‑ norms”, Kybernetika, vol. 43, no. 2, 2007, 183– 196.
[13] P. Drygaś , “On properties of uninorms with un‑ derlying t‑norm and t‑conorm given as ordinal sums”, Fuzzy Sets and Systems, vol. 161, no. 2, 2010, 149–157, 10.1016/j.fss.2009.09.017.
[14] P. Drygaś , D. Ruiz‑Aguilera, and J. Torrens, “A characterization of a class of uninorms with continuous underlying operators”, Fuzzy Sets and Systems, vol. 287, 2016, 137–153, 10.1016/j.fss.2015.07.015. [15] P. Drygas, “On monotonic operations which are locally internal on some subset of their dom‑ ain”. In: New Dimensions in Fuzzy Logic and Rela‑ ted Technologies, Proceedings of the 5th EUSFLAT Conference, 2007, 185–191. [16] T. Fawcett, “An introduction to ROC analysis”, Pattern Recognition Letters, vol. 27, no. 8, 2006, 861–874, 10.1016/j.patrec.2005.10.010.
[17] J. Fodor and B. De Baets, “A single‑point cha‑ racterization of representable uninorms”, Fuzzy Sets and Systems, vol. 202, 2012, 89–99, 10.1016/j.fss.2011.12.001. [18] J. C. Fodor, R. R. Yager, and A. Rybalov, “Struc‑ ture of Uninorms”, International Journal of Uncertainty, Fuzziness and Knowledge‑ Based Systems, vol. 5, no. 4, 1997, 411–427, 10.1142/S0218488597000312. [19] M. Grabisch, J.‑L. Marichal, R. Mesiar, and E. Pap, “”Aggregation Functions””. In: 2008 6th International Symposium on Intelli‑ gent Systems and Informatics, 2008, 1–7, 10.1109/SISY.2008.4664901. 90
90
[20] S. Hu and Z. Li, “The structure of continuous uni‑ norms”, Fuzzy Sets and Systems, vol. 124, no. 1, Articles
VOLUME N°44 2019 2019 VOLUME 13,13, N°
2001, 43–52, 10.1016/S0165‑0114(00)00044‑ 0.
[21] E. P. Klement, R. Mesiar, and E. Pap, Triangu‑ lar Norms, volume 8 of Trends in Logic, Springer Netherlands: Dordrecht, 2000, 10.1007/978‑94‑ 015‑9540‑7. [22] A. Mesiarová ‑Zemá nková , “Characterization of uninorms with continuous underlying t‑ norm and t‑conorm by means of the ordinal sum construction”, International Journal of Approximate Reasoning, vol. 83, 2017, 176–192, 10.1016/j.ijar.2017.01.007. [23] Z. Pawlak and A. Skowron, “Rudiments of rough sets”, Information Sciences, vol. 177, no. 1, 2007, 3–27, 10.1016/j.ins.2006.06.003.
[24] D. Ruiz and J. Torrens, “Distributivity and conditional distributivity of a uninorm and a continuous t‑conorm”, IEEE Transactions on Fuzzy Systems, vol. 14, no. 2, 2006, 180–190, 10.1109/TFUZZ.2005.864087.
[25] D. Ruiz‑Aguilera, J. Torrens, B. De Baets, and J. Fodor, “Some Remarks on the Characteriza‑ tion of Idempotent Uninorms”. In: E. Hü ller‑ meier, R. Kruse, and F. Hoffmann, eds., Com‑ putational Intelligence for Knowledge‑Based Sys‑ tems Design, Berlin, Heidelberg, 2010, 425–434, 10.1007/978‑3‑642‑14049‑5_44.
[26] J. A. Swets, “Measuring the accuracy of diagnostic systems”, Science, vol. 240, no. 4857, 1988, 1285– 1293, 10.1126/science.3287615. [27] C. C. Taylor, D. J. Spiegelhalter, and D. Michie, eds., Machine Learning, Neural and Statistical Classi�i‑ cation, Ellis Horwood Ltd.: New York, 1994.
[28] V. Torra and Y. Narukawa, Modeling Decisions: Information Fusion and Aggregation Operators, Cognitive Technologies, Springer: Berlin, Heidel‑ berg: Berlin, Heidelberg, 2007, 10.1007/978‑3‑ 540‑68791‑7.
[29] R. R. Yager and A. Rybalov, “Uninorm ag‑ gregation operators”, Fuzzy Sets and Systems, vol. 80, no. 1, 1996, 111–120, 10.1016/0165‑ 0114(95)00133‑6.
Some New Robotization Problems Related to the Introduction of Collaborative Robots into Industrial Practice
Submitted: 9th January 2019; accepted: 30th January 2020
Zbigniew Pilat, Wojciech Klimasara, Marek Pachuta, Marcin Słowikowski
DOI: 10.14313/JAMRIS/4-2019/42
Abstract: The appearance of collaborative robots is a natural stage in the development of industrial robotics. Research aimed at enabling robots and people to work in a common area (collaborative workspace) has been conducted for several decades, but only recently have the results been implemented in industrial conditions. This is mainly due to the introduction of lightweight, low-payload robots equipped with special construction solutions and control functions. Thanks to these, such devices can safely collaborate with people in a common workspace. The application of a collaborative robot in the robotization of an industrial process creates new problems, technical and otherwise. The robots themselves, as well as robotic installations, must continue to comply with the applicable safety requirements, in accordance with the relevant directives. The introduction of collaborative robots also creates new legal problems. The article presents the current state of development of collaborative robots in the technical, legal and standardization aspects. The presented material is based mainly on information from the Polish market.
Keywords: collaborative robots, robotization, industrial applications, safety, law, standards
1. Introduction
In the development of industrial robotics, there have been two directions from the beginning. On the one hand, people try to build devices that will replace humans; on the other hand, they pursue concepts of devices that will work alongside them. One of the most important priorities of the designers of the first robotic installations was to move the human away from the danger area [1]. The human was replaced by an industrial robot whose work area was protected by special fences equipped with locks and sensory systems. These protections prevented the robot from contacting people, except in special operating modes of the installation. The concepts of introducing robots into the human environment became real only in the 1980s [2]. This was mainly connected with the development of mobile systems that allowed robots to move. A separate group was created, called "service robots". From the beginning, it was planned that they would be used by people without technical preparation. At the same time, the tasks set before these robots require operation in the vicinity of people, often in direct contact with them. So they have to communicate with people by sending/receiving information in real time. They must also be safe in contact with a human, so that they do not pose a threat. Today, manufacturers offer a whole range of service robots that comply with these requirements, both for commercial and personal use. The dynamics of sales growth in this segment is much higher than for industrial robots, and most forecasts indicate that this trend will continue in the coming years.
The warm welcome of service robots by users, the rapid development of this group of devices, the steady increase in the value of their sales and the systematic expansion of markets have all encouraged attempts to transfer some solutions to industrial robotics. It can be said that the development of service robots has contributed to the realization of the idea of robots collaborating with humans in industrial environments. Of course, the main reason for the work aiming to introduce these concepts into industrial practice was the real need reported by users. In many production areas, there are processes and operations today that can be carried out faster, cheaper and with higher quality through closer collaboration between a human and a robot. However, the implementation of such shared work stations requires new technical solutions. Robots are designed with special features that make them much safer than traditional industrial robots. These are robots with small payloads (lifting capacities), whose bodies (casings) have no sharp edges and are sometimes covered with soft material. Their control systems implement special functions and mechanisms, e.g. enabling collision detection or limiting the movement dynamics in special situations. A new group, called "collaborative robots" or "cobots", has been created.
The term "cobot" was first proposed in the 1990s by a postdoctoral intern at Northwestern University (Illinois, USA). His idea for a new name was a response to a competition announced in a research laboratory to come up with a better name for the device on which he was working: a device supporting a human in moving and manipulating objects. The research directors J. E. Colgate and M. Peshkin filed a patent application
in 1997 for such a device and called it "Cobot" (US Patent 5,952,796). At that time, they also published several articles presenting the idea of robot-human collaboration [3]. To this day, these constitute an inspiration for many new projects regarding collaborative robots.
At the same time, technical and organizational solutions are being developed that will enable a human and a traditional industrial robot to share a workspace. For this purpose, robots are equipped with sensory systems that recognize human presence in their working range and the potential possibility of collision. New methods of robot programming are being developed (e.g. lead-through programming, also called hand guiding of the tool), and new, more effective ways of human-robot communication are being introduced.
The collaboration of humans and industrial robots in a shared working area creates new opportunities, but also poses new threats. A chance to limit those threats is to increase the level of knowledge about safety in robotic installations among employees of production plants. Disseminating knowledge and good practices in this regard among current and potential future industrial robot users, from pupils through students, teachers/coaches and production workers to representatives of technical and management staff, is as important as the introduction of new technical solutions improving safety.
2. State of the Art – Robot Manufacturers
Although the idea of robots collaborating directly with humans has been known since the beginning of robotics, its practical implementation had to wait until the beginning of the 21st century. The precursor was Universal Robots, founded in 2005 in Denmark. Four years later, the company introduced the UR5 model into the market, advertised as the world's first collaborative robot. Initially, it met with many skeptical forecasts as to the chances of developing this new concept. However, the market success of UR robots quickly resolved all doubts. Following the Danes, all of the world's major robot manufacturers began to present their own models adapted to collaboration with humans.
The UR5 robot has 6 degrees of freedom, a payload of 5 kg and a working range of 85 cm, with a total weight of 18 kg. From the very beginning, it was very well received in the market, mainly by small companies. The first products were sold in Denmark and Germany, but the company quickly expanded its activity to other European countries. In 2012, UR entered the Chinese market, and since 2013 it has been present in the USA. In the same year, the premiere of the new UR10 model, with a payload of 10 kg and a range of 130 cm, took place. In 2015, Universal Robots launched the UR3, its lightest model, with a payload of 3 kg. It was also the first model specifically designed for human-robot collaboration, mainly for light assembly tasks. In 2015, Universal Robots was bought by the Teradyne corporation, a supplier of, among others, equipment for
industrial installations of automated production. All three robot models are still offered under the Universal Robots brand.
KUKA offers the LBR iiwa collaborative robot, designed mainly for the assembly of light parts requiring high precision. Its manipulator has 7 degrees of freedom (rotary joints) and a kinematic system modelled on the human arm. The KUKA LBR iiwa is available in two versions, with payloads of 7 and 14 kg and ranges of 800 and 820 mm, respectively. The modern KUKA Sunrise Cabinet control of the LBR iiwa enables quick start-up of the robot itself as well as of the entire robotic installation. It also provides effective communication between the operator and the robot, and very simple operation and programming. These robots have extensive sensory equipment, with sensitive torque sensors built into all seven axes. Thanks to them, the LBR iiwa faultlessly and quickly recognizes collisions, and the control system immediately takes appropriate action, for example, reducing force and speed. These sensors allow the most delicate assembly tasks to be performed without damaging the parts. In its technical brochures, KUKA emphasizes that LBR iiwa robots have very good precision of movement; their construction ensures a positioning repeatability of ±0.1 mm. KUKA LBR iiwa manipulators have rounded shapes without sharp edges, which improves human safety in case of possible contact. The smaller model weighs 22 kg, and the larger one 30 kg. They can be mounted on the floor, on a wall or on the ceiling.
FANUC presented its first collaborative robot in 2015. It was the CR-35iA model, with a payload of 35 kg, making it, at that time, the world's strongest robot of this type. In the first half of 2016, FANUC presented another, smaller collaborative robot, the CR-7iA, with a payload of 7 kg. The manipulators of both models are painted in a characteristic green color. They have built-in intelligent sensors that immediately stop the robot after it touches a human or another object. Depending on the needs, the collaborative robots can be equipped with a 2D or 3D vision sensor that allows the positions of workpieces to be detected.
The first two-arm collaborative robot was Baxter, presented in 2012 by Rethink Robotics. Its manipulator (body) consists of a head, a torso and two arms. On the robot's head there is a sonar with a 360° scan range and a camera that can be used to detect objects, including people around the robot. Each of the arms has 7 degrees of freedom. Both arms are able to work independently, carrying out various activities, or to collaborate on one task. Force/torque sensors are installed in the manipulator's joints. Baxter implements functions that allow recalculation of the trajectory and generation of a new path when changes in its environment make it impossible to execute previously taught movements in a collision-free way. Thanks to this function, the robot can react to objects appearing in its operating area, including a human entering this area. In 2015, Rethink Robotics launched the
Sawyer™ model on the market: smaller, faster, and designed for high-precision tasks. The company also offered a special Intera® software package, which integrates the user interface with modules facilitating and accelerating the robot's installation and its adaptation to new tasks, which in turn reduces the costs of robot implementation. Unfortunately, in 2018, Rethink Robotics declared bankruptcy. All patents and trademarks were acquired by the Hahn Group [14], a global specialist in the field of automation and robotics and, until recently, one of the distributors of the Baxter and Sawyer robots. Part of the Rethink Robotics staff (more than 20 employees) was employed by Universal Robots in its American branch, located in the same city, Boston. This was the first collapse of a robot manufacturer in many years. The fact that it was a collaborative robot specialist shows that the market for these devices is not as easy as was sometimes foreseen.
In 2015, Kawasaki launched the "duAro" model on the American market: a robot designed to collaborate with a human, whose manipulator has two arms in the form of SCARA modules. Each arm works independently, but their movements can be synchronized in a simple way, because they are controlled from a common controller. This allows the robot to be used in tasks requiring careful two-handed carrying of delicate parts and subassemblies (e.g. electronics), precision assembly or quality control. The construction, dimensions and method of programming have been designed so that the robot can easily replace manual work without unnecessary modification of the workplace and without the need for protective covers.
ABB has developed the two-arm YuMi® robot, the IRB 14000, which collaborates with a human and is intended for the assembly of small parts (payload 500 g, repeatability ±0.02 mm). The robot's arms are flexible, which limits the possibility of injuries in contact with a human. ABB additionally offers a specially designed set of grippers for this robot and a system for locating parts, using a camera integrated with the gripper.
The range of the robot's arms (590 mm) is similar to the range of human arms. The robot is relatively lightweight, weighing approx. 38 kg. This is important when planning space in the production plant and makes it possible to install YuMi® at existing workplaces where people currently work. An example of a workplace with this robot has been presented by ABB at trade fairs since 2014 (Fig. 1).
Fig. 1. Workplace with the YuMi® – IRB 14000 robot during the Automatica 2014 fair
A two-arm robot was also presented by Epson. At the Automatica 2018 fair, it showed the WorkSense W-01 model (Fig. 2). The payload of each of the robot's arms is 3 kg; by synchronizing the movements of both arms, the robot can lift an object weighing as much as 6 kg.
Fig. 2. WorkSense W-01 – Epson's dual-arm robot (Epson Automatica 2018 Press Release)
In recent years, many companies have been introducing collaborative robots into their offer. They include both large producers of industrial robots (in addition to the abovementioned ABB, KUKA and FANUC, also Yaskawa, Kawasaki, Nachi, Stäubli, Denso and others) and new, often start-up type companies that are trying to enter the market. At the same time, manufacturers of standard industrial robots put a lot of effort into enabling their products to work in a shared area with humans, like a typical collaborative robot.
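The collision handling described for several of these robots (torque sensing in the joints followed by an immediate reduction of speed and force) can be sketched as a simple supervision loop. This is a minimal illustration under stated assumptions, not any manufacturer's control code: the per-joint limits, the callback names and the loop rate are hypothetical.

```python
import time

# Hypothetical per-joint limits (Nm) on the external torque that
# may be attributed to unexpected contact; a real controller would
# derive the expected torques from a dynamic model of the arm.
EXT_TORQUE_LIMITS_NM = [15.0, 15.0, 10.0, 10.0, 5.0, 5.0, 5.0]  # 7 axes

def external_torques(measured_nm, predicted_nm):
    """External (contact) torque estimate: measured joint torques
    minus the torques predicted by the robot's dynamic model."""
    return [m - p for m, p in zip(measured_nm, predicted_nm)]

def collision_detected(ext_nm, limits=EXT_TORQUE_LIMITS_NM):
    """Contact is assumed as soon as any joint exceeds its limit."""
    return any(abs(t) > lim for t, lim in zip(ext_nm, limits))

def supervise(read_torques, predict_torques, reduce_dynamics,
              period_s=0.001):
    """1 kHz supervision loop: on detected contact the robot is
    switched to a reduced speed/force mode instead of continuing
    the programmed motion at full dynamics."""
    while True:
        ext = external_torques(read_torques(), predict_torques())
        if collision_detected(ext):
            reduce_dynamics()
        time.sleep(period_s)
```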
3. From a Standard Industrial Robot to a Collaborative Robot
In many centers, work is being carried out on equipping traditional industrial robot designs with equipment that ensures safe human work in their vicinity. This work is aimed at giving robots senses similar to those with which nature provided living organisms, including humans. The implementation of solutions that perform the functions of appropriate receptors and feedback-signal generators will in the future enable new forms of human-robot communication, which will directly improve the safety of people who interact with robots. This applies to solutions in the fields of both industrial and service robotics.
Among the areas of research related to artificial senses, tactile human-robot interactions occupy a significant place. Work in this field is carried out in many centers, most often with the support of governmental or international programs. Different concepts are being developed; however, studies often end at the stage of laboratory tests [10], [11]. Only a few results of research projects are developed further towards practical use. One such solution is the so-called "artificial skin" in which the robot is "dressed". The role of such an "artificial skin" may be, for example, to feel the touch of a foreign object and to stop the robot's movement quickly once this touch is detected. Work on such a solution was already carried out at KUKA ten years ago; however, it did not end with the creation of a commercial product. KUKA moved towards the creation of its own cobot, the LBR iiwa, which has been offered on the market for several years. At the same time, similar research was conducted by the Austrian company Blue Danube Robotics GmbH from Vienna [13]. In this case, the work on "artificial skin" for traditional industrial robots was from the beginning focused on obtaining a commercial product. As a result, this company, numbering fewer than 30 employees, has created an innovative technology under the name AIRSKIN: a soft, pressure-sensitive so-called "safety skin".
The coating made from AIRSKIN consists of several parts connected together (so-called pads), which are mounted directly on the robot arms (Fig. 3). Up to 15 pads placed on the robot can be electrically connected in series. The pads are made of thermoplastic polyurethane (TPU) and are designed individually for each type of robot and each of its arms, as well as for end-of-arm tooling (EoAT), such as grippers or flanges. The company also offers several standard pad models for self-installation by the applicator. Connections are made using external cables.
Fig. 3. AIRSKIN pads mounted on the robot wrist in the PIAP lab
Each pad is equipped with an electronic safety system consisting of a micropump, a filter, a pressure sensor and an electronic control system. Air under slight positive pressure (about 400 Pa) is pumped into each pad, so it is not necessary to supply pressurized air to each pad through a separate duct. The sensor's task is to continuously measure the pressure inside the pad. Any change in this pressure caused by deformation of the coating through touch is immediately recognized. This state is passed to the robot safety system over two secure channels and should cause a safe stop of the robot. The response time of the safety system in the "safety skin" is 9 ms. The forces that occur during a collision between a robot equipped with the "safety skin" and an encountered obstacle, e.g. a human, are damped by the softness of this skin. The braking distance of the robot after receiving the collision signal should be absorbed by the thickness of the "safety skin"; this is why the thickness parameter is important and must be taken into account when designing the skin for a given robot/application. Each pad is also equipped with a connection module (ACM – AIRSKIN Connection Module) used for connection with other pads or with the robot safety system.
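The supervision principle described above can be summarized in a short sketch. The 400 Pa overpressure and the 9 ms response time come from the text; the trigger threshold and the crude speed bound derived from the skin-thickness argument are illustrative assumptions.

```python
NOMINAL_PRESSURE_PA = 400.0   # slight overpressure kept in each pad
TRIGGER_DELTA_PA = 50.0       # assumed deviation threshold
RESPONSE_TIME_S = 0.009       # 9 ms safety-system response time

def pad_triggered(pressure_pa,
                  nominal=NOMINAL_PRESSURE_PA,
                  delta=TRIGGER_DELTA_PA):
    """A pad reports contact when deformation changes its internal
    pressure by more than the configured threshold."""
    return abs(pressure_pa - nominal) > delta

def safe_stop_required(pad_pressures_pa):
    """Any single triggered pad must cause a safe stop (the two-channel
    signalling to the robot safety system is abstracted away here)."""
    return any(pad_triggered(p) for p in pad_pressures_pa)

def max_speed_for_thickness(thickness_m, braking_distance_m):
    """Crude bound on robot speed (m/s): the skin thickness should
    absorb the distance travelled during the response time plus the
    braking distance (deformation dynamics ignored, an assumption):
        thickness >= v * RESPONSE_TIME_S + braking_distance."""
    margin = thickness_m - braking_distance_m
    return max(0.0, margin / RESPONSE_TIME_S)

if __name__ == "__main__":
    print(safe_stop_required([400.0, 403.0, 471.0]))        # True: third pad deformed
    print(round(max_speed_for_thickness(0.020, 0.010), 2))  # ~1.11 m/s
```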
Fig. 4. AIRSKIN presentation during the Automatica 2018 fair
The AIRSKIN "safety skin" by Blue Danube Robotics GmbH has obtained a safety certificate according to ISO 13849 (PLe/Cat. 3) and is officially offered on commercial terms. The company has developed coatings for several models of popular industrial robots, including ABB, UR and KUKA (Fig. 4).
4. Legal Status
Real problems of human-robot coexistence began to increase along with the progress of computer science and the development of artificial intelligence. An
intelligent robot becomes a device with a large degree of autonomy, able to react or to modify its actions on the basis of the information received and the recognized state of the environment, without human intervention. This type of behavior is also characteristic of collaborative robots operating in production installations. It will also be important for robots that will work in domestic and public environments. These independent, sometimes creative activities will be the source of new legal problems.
It should be noted that the very understanding of the word "robot" is changing before our eyes. A robot is a driverless car moving along a public road, a flying unmanned transport vehicle, or a human-shaped device serving people in a hotel or restaurant. The emergence of these new, generally intelligent robots raises further legal and ethical problems. There is a need to determine the principles of the creation and operation of robots, as well as the responsibility for the effects of their activities. The problem is how to establish legal principles, and within them the limits of robot autonomy; how compliance with these limits should be controlled; and how to proceed when these limits are overstepped. The basic principles of civil law will have to be fundamentally changed. It seems highly probable that it will be necessary to create a new category of law addressed to a new entity called an "autonomous robot", or simply a "robot". Already, the first works are being undertaken to develop legal norms specifically for these devices, norms that would regulate robots' behavior and enable the assessment of situations created by them or in connection with their activity. Scientists, lawyers and politicians alike are convinced of the necessity of these pre-emptive actions.
At the beginning of 2015, a group of Members of the European Parliament (EP) called for the development of robots' rights and prepared a report to this end, proposing definitions, the scope and principles of the regulation of robots, ethical standards, and principles of responsibility for accidents involving robots, including, for example, automatic vehicles. The EP law committee adopted the report, followed by a resolution containing recommendations for the European Commission on civil law provisions on robotics [4]. The adopted resolution is only the beginning of possible legislative work, and its further fate is not certain. A few months after its adoption, an open letter from a group of experts in the field of artificial intelligence and robotics appeared in the electronic media [9]. The signatories generally question the sense of the work that the European Commission intends to initiate and express their concerns about the negative consequences that may arise from the creation and application of a special law for robots. Special concerns are raised regarding the creation of the legal status of an "electronic person" for autonomous, self-learning robots. The authors of the letter believe that the legal status of a robot cannot arise either from nature or from legal provisions.
It seems that creating a law regarding robots, in this form or another, will eventually become a necessity. Its development is and will be a difficult task, requiring the inclusion of many issues not only in the fields of device design and construction, manufacturing engineering and economics, but above all in the fields of ethics, medicine and sociology. It seems certain that this law cannot simply be transferred from legal regulations applicable to persons.
5. Applicable Standardization Regulations
European Union legislation on the safety of products, contained in the New Approach Directives, applies to large groups of products. One such group is machines, and matters related to their safety are regulated by the Machinery Directive (MD) [5]. Its provisions regarding the so-called essential requirements have a significant impact on projects of automated and robotic industrial installations. It also takes into account the electrical requirements for machines. A feature of devices powered by electricity is the ability to emit signals that disturb electromagnetic fields and thus interfere with other devices; at the same time, these devices are sensitive to signals (disturbances) emitted by other devices. Therefore, robotic installations are also subject to the so-called EMC Directive on electromagnetic compatibility [6]. These two directives, MD and EMC, form the basic set for the assessment of equipment and of entire robotic and automated installations. Depending on the specific application area, other directives may also need to be considered, regarding, for example, pyrotechnic articles (2013/29/EU), medical devices (93/42/EEC), or devices used in potentially explosive atmospheres (ATEX 2014/34/EU).
The essential requirements of EU directives (and regulations) are detailed in the so-called harmonized standards. Standards are not part of European law and their application is voluntary, but fulfilling the requirements of these standards gives the certainty that a particular machine complies with the New Approach Directives (principle of presumption of conformity). In the light of EU law on the safety of products, robots belong to machines, but constitute a separate group (category). So, on the one hand, there is a group of norms strictly referring to robots; on the other, there are also provisions in many general machine standards that include robots. There are also standards dedicated to robotic systems. Therefore, the word "machine" below also covers robots and robotic systems. Below are the most important norms strictly related to robots and the environments in which they work.
• ISO 12100 Safety of machinery – General principles for design – Risk assessment and risk reduction. • ISO 13850:2016-03 Safety of machinery – Emergency stop function – Principles for design. • ISO 13855:2010 Safety of machinery – Positioning of safeguards with respect to the approach speeds of parts of the human body. • IEC 60204-1:2010 Safety of machinery – Electrical equipment of machines – Part 1: General requirements. • IEC 62046: Safety of machinery – Application of protective equipment to detect the presence of persons. This is an IEC standard that defines the requirements for the selection, location, configuration and commissioning of protective equipment designed to detect temporary or continuous presence of persons, in order to protect those persons against dangerous part (dangerous parts) of the machine in industrial applications. • EN ISO 11161:2007/A1:2010E, Safety of machinery – Integrated production systems – Basic requirements In 2016, the ISO/TS 15066 Robots and robotic devices – Collaborative robots document was published. It has the status of so-called technical specification, and therefore a set of guidelines and recommendations relating to collaborative robots. It can be said that they complement the requirements given in ISO 10218-1 and ISO 10218-2 standards for industrial systems of collaborative robots and their working environment. Although this document is not a standard, it seems that its application is justified during the design, construction, implementation and operation of installations with collaborative robots. It is currently the only document recognized by ISO, which covers these problems. The importance which the international standardization organization gives to the problems of robotics, including the safety in human-robot contact is confirmed by the fact that in 2016 a new technical committee ISO/TC299 [12] dedicated exclusively to robotics was established. It is currently responsible for ongoing work, including further processing of the ISO/TS 15066 document and initiating new directions in the robotics area. It should be clearly emphasized that the implementation of collaborative robots, by assumption safer in contact with humans, does not change the required procedures for assessing robotic installations in terms of safety. A person placing a robotic installation on the market (sale, commissioning) must carry out an analysis of the existing risk and provide appropriate measures to reduce hazards. It is only for a safe machine or application that an EC Declaration of Conformity can be issued and the marking can be applied [7].
6. Conclusion
The number of industrial robots working in industrial plants is constantly growing. This means that each day more and more people, employees of these plants, have contact with robots. Until recently, it was mainly contact through tight, solid fences that secured robotic cells. Currently, due to interest in the concept of collaborative robots, ideas of robots that will collaborate side by side with humans, without fences or special safeguards, appear more and more often. This is certainly a path towards the natural integration of the human worker and the robot worker. It must be remembered, however, that human safety is the most important thing, above all the safety of the person working in close proximity to the robot.
Currently, there is a kind of chaos on the market in the area of collaborative robots. This applies to the documents and devices offered by manufacturers, to the applied solutions and to the implemented applications. In the field of standardization, ISO/TS 15066 "Robots and robotic devices – Collaborative robots" is an attempt to bring order to these matters. However, it should be remembered that this document currently has the status of a technical specification: it contains recommendations that determine the technical requirements a robot collaborating with a human should fulfill, but it is not an obligatory document. At the same time, on the market of robot and robotic system manufacturers, there is a competitive battle for the new area related to collaborative robots. In many cases, the information presented on websites and in the press as sponsored articles is of a marketing nature: numerous advantages and benefits of using these solutions are indicated, and no significant restrictions are mentioned [8].
These observations concern both the Polish market, known to the authors from their daily activities in automation and robotics, and, more broadly, the European Union market. In the common EU market, the same laws and standards apply in all countries. Therefore, the final conclusions of the considerations presented concern the entire common economic area. The basic issue that future users of collaborative robots should be clear about is the need to carry out the normal procedure for assessing the whole robotic installation in terms of safety and to issue the EC declaration of conformity.
A special problem on the Polish market is the limited access to standardization documents, especially those introduced into the PN (Polish Norms) collection by the method of recognition and published in the original language version. These standards are in general very expensive, especially for small businesses, and the argument sometimes made that potential users of these documents certainly know English perfectly
well and understand what they mean is a misunderstanding. It is of the utmost importance to disseminate reliable information on collaborative robots, including good practices and examples of their successful applications in various technologies and industry sectors. The Polish Committee for Standardization does not undertake such activities. This is an appropriate task for scientific and research units, and also for universities, but, as often happens in such situations, there is the problem of financing. It is very likely that similar troubles occur in other EU countries, especially among the so-called new members.
ACKNOWLEDGEMENTS
The article is based on the results of the IV stage of the multi-annual program "Improving safety and working conditions", financed in 2017–2019 in the field of scientific research and development works by the Polish Ministry of Science and Higher Education / National Center for Research and Development. Program coordinator: Central Institute for Labour Protection – National Research Institute.
AUTHORS
Zbigniew Pilat* – ŁUKASIEWICZ Research Network – Industrial Research Institute for Automation and Measurements PIAP, Al. Jerozolimskie 202, 02-486 Warsaw, Poland, E-mail: zpilat@piap.pl.
Wojciech Klimasara – ŁUKASIEWICZ Research Network – Industrial Research Institute for Automation and Measurements PIAP, Al. Jerozolimskie 202, 02-486 Warsaw, Poland.
Marek Pachuta – ŁUKASIEWICZ Research Network – Industrial Research Institute for Automation and Measurements PIAP, Al. Jerozolimskie 202, 02-486 Warsaw, Poland.
Marcin Słowikowski – ŁUKASIEWICZ Research Network – Industrial Research Institute for Automation and Measurements PIAP, Al. Jerozolimskie 202, 02-486 Warsaw, Poland.
* Corresponding author
REFERENCES
[1] M. Hägele, K. Nilsson, J. N. Pires, "Industrial Robotics". In: B. Siciliano, O. Khatib (eds), Springer Handbook of Robotics, 2008, 963–986, 10.1007/978-3-540-30301-5_43.
[2] J. F. Engelberger, Robotics in Service, MIT Press, 1989.
[3] M. Peshkin, J. E. Colgate, "Cobots", Industrial Robot: An International Journal, vol. 26, no. 5, 1999, 335–341, 10.1108/01439919910283722.
[4] "European Parliament resolution of 16 February 2017 with recommendations to the
Commission on Civil Law Rules on Robotics (2015/2103(INL))". European Parliament, www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html. Accessed on: 2020-03-03.
[5] "Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery (Machinery Directive)". European Parliament, https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2006:157:0024:0086:EN:PDF. Accessed on: 2020-03-03.
[6] "Directive 2014/30/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to electromagnetic compatibility (EMC Directive)". European Parliament, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32014L0030&from=EN. Accessed on: 2020-03-03.
[7] M. Głowicki, "Coboty – zagadnienia bezpieczeństwa przy integracji robotów współpracujących (eng. Cobots – security issues in the integration of collaborative robots)", Napędy i Sterowanie, vol. 19, no. 4, 2017, 30–34 (in Polish).
[8] Z. Pilat, W. Klimasara, M. Pachuta, M. Słowikowski, M. Smater, J. Zieliński, "Possibilities of practical introduction of collaborative robots in various manufacturing technologies implemented in an industrial environment", Pomiary Automatyka Robotyka, vol. 22, no. 1, 2018, 59–65, 10.14313/PAR_227/59.
[9] "Open letter to the European Commission". www.robotics-openletter.eu. Accessed on: 2020-03-03.
[10] "Perfect skin: more touchy-feely robots". European Commission, https://cordis.europa.eu/article/id/91025-feature-stories-perfect-skin-more-touchyfeely-robots. Accessed on: 2020-03-03.
[11] "Kaspar the social robot". University of Hertfordshire, www.herts.ac.uk/kaspar. Accessed on: 2020-03-03.
[12] "About [ISO/TC 299 Robotics]". International Organization for Standardization, https://committee.iso.org/home/tc299. Accessed on: 2020-03-03.
[13] "Home". Blue Danube Robotics GmbH, www.bluedanuberobotics.com. Accessed on: 2020-03-03.
[14] "About us". HAHN Robotics, www.hahnrobotics.com/en/about-us. Accessed on: 2020-03-03.