Journal of Automation, Mobile Robotics and Intelligent Systems, vol. 13, no. 1 (2019)

www.jamris.org • pISSN 1897-8649 (print) / eISSN 2080-2145 (online) • Volume 13, N° 1, 2019 • Indexed in SCOPUS

Publisher: ŁUKASIEWICZ Research Network – Industrial Research Institute for Automation and Measurements PIAP


Journal of Automation, Mobile Robotics and Intelligent Systems

A peer-reviewed quarterly focusing on new achievements in the following fields: fundamentals of automation and robotics • applied automatics • mobile robots control • distributed systems • navigation • mechatronic systems in robotics • sensors and actuators • data transmission • biomechatronics • mobile computing

Editor-in-Chief: Janusz Kacprzyk (Polish Academy of Sciences, PIAP, Poland)

Advisory Board: Dimitar Filev (Research & Advanced Engineering, Ford Motor Company, USA), Kaoru Hirota (Japan Society for the Promotion of Science, Beijing Office), Witold Pedrycz (ECERF, University of Alberta, Canada)

Co-Editors: Roman Szewczyk (PIAP, Warsaw University of Technology), Oscar Castillo (Tijuana Institute of Technology, Mexico), Marek Zaremba (University of Quebec, Canada)

Executive Editor: Katarzyna Rzeplińska-Rykała (PIAP, Poland), e-mail: office@jamris.org

Associate Editor: Piotr Skrzypczyński (Poznań University of Technology, Poland)

Statistical Editor: Małgorzata Kaliczyńska (PIAP, Poland)

Typesetting: PanDawer, www.pandawer.pl

Webmaster: Piotr Ryszawa, PIAP

Editorial Office: ŁUKASIEWICZ Research Network – Industrial Research Institute for Automation and Measurements PIAP, Al. Jerozolimskie 202, 02-486 Warsaw, Poland, tel. +48-22-8740109, e-mail: office@jamris.org

The reference version of the journal is the e-version. Printed in 100 copies. Articles are reviewed, excluding advertisements and descriptions of products. If in doubt about the proper edition of contributions, or for copyright and reprint permissions, please contact the Executive Editor.

Publishing of "Journal of Automation, Mobile Robotics and Intelligent Systems" – the task financed under contract 907/P-DUN/2019 from funds of the Ministry of Science and Higher Education of the Republic of Poland allocated to science dissemination activities.

Editorial Board:
Chairman – Janusz Kacprzyk (Polish Academy of Sciences, PIAP, Poland), Plamen Angelov (Lancaster University, UK), Adam Borkowski (Polish Academy of Sciences, Poland), Wolfgang Borutzky (Fachhochschule Bonn-Rhein-Sieg, Germany), Bice Cavallo (University of Naples Federico II, Napoli, Italy), Chin Chen Chang (Feng Chia University, Taiwan), Jorge Manuel Miranda Dias (University of Coimbra, Portugal), Andries Engelbrecht (University of Pretoria, Republic of South Africa), Pablo Estévez (University of Chile), Bogdan Gabrys (Bournemouth University, UK), Fernando Gomide (University of Campinas, São Paulo, Brazil), Aboul Ella Hassanien (Cairo University, Egypt), Joachim Hertzberg (Osnabrück University, Germany), Evangelos V. Hristoforou (National Technical University of Athens, Greece), Ryszard Jachowicz (Warsaw University of Technology, Poland), Tadeusz Kaczorek (Bialystok University of Technology, Poland), Nikola Kasabov (Auckland University of Technology, New Zealand), Marian P. Kaźmierkowski (Warsaw University of Technology, Poland), Laszlo T. Kóczy (Szechenyi Istvan University, Gyor and Budapest University of Technology and Economics, Hungary), Józef Korbicz (University of Zielona Góra, Poland), Krzysztof Kozłowski (Poznań University of Technology, Poland), Eckart Kramer (Fachhochschule Eberswalde, Germany), Rudolf Kruse (Otto-von-Guericke-Universität, Magdeburg, Germany), Ching-Teng Lin (National Chiao-Tung University, Taiwan), Piotr Kulczycki (AGH University of Science and Technology, Cracow, Poland), Andrew Kusiak (University of Iowa, USA), Mark Last (Ben-Gurion University, Israel), Anthony Maciejewski (Colorado State University, USA), Krzysztof Malinowski (Warsaw University of Technology, Poland), Andrzej Masłowski (Warsaw University of Technology, Poland), Patricia Melin (Tijuana Institute of Technology, Mexico), Fazel Naghdy (University of Wollongong, Australia), Zbigniew Nahorski (Polish Academy of Sciences, Poland), Nadia Nedjah (State University of Rio de Janeiro, Brazil), Dmitry A. Novikov (Institute of Control Sciences, Russian Academy of Sciences, Moscow, Russia), Duc Truong Pham (Birmingham University, UK), Lech Polkowski (University of Warmia and Mazury, Olsztyn, Poland), Alain Pruski (University of Metz, France), Rita Ribeiro (UNINOVA, Instituto de Desenvolvimento de Novas Tecnologias, Caparica, Portugal), Imre Rudas (Óbuda University, Hungary), Leszek Rutkowski (Czestochowa University of Technology, Poland), Alessandro Saffiotti (Örebro University, Sweden), Klaus Schilling (Julius-Maximilians-University Wuerzburg, Germany), Vassil Sgurev (Bulgarian Academy of Sciences, Department of Intelligent Systems, Bulgaria), Helena Szczerbicka (Leibniz Universität, Hannover, Germany), Ryszard Tadeusiewicz (AGH University of Science and Technology in Cracow, Poland), Stanisław Tarasiewicz (University of Laval, Canada), Piotr Tatjewski (Warsaw University of Technology, Poland), Rene Wamkeue (University of Quebec, Canada), Janusz Zalewski (Florida Gulf Coast University, USA), Teresa Zielińska (Warsaw University of Technology, Poland)

Publisher: ŁUKASIEWICZ Research Network – Industrial Research Institute for Automation and Measurements PIAP. All rights reserved ©



Journal of Automation, Mobile Robotics and Intelligent Systems Volume 13, N° 1, 2019 DOI: 10.14313/JAMRIS_1-2019

Contents

3 – Self-Supervised Learning of Motion-Induced Acoustic Noise Awareness in Social Robots
João Andrade, Pedro Santana, Alexandre P. Almeida
DOI: 10.14313/JAMRIS_1-2019/1

15 – Damage Recovery for Simulated Modular Robots Through Joint Evolution of Morphologies and Controllers
Djouher Akrour, NourEddine Djedi
DOI: 10.14313/JAMRIS_1-2019/2

20 – Modern Measures of Risk Reduction in Industrial Processes
Jan Maciej Kościelny, Michał Syfert, Bartłomiej Fajdek
DOI: 10.14313/JAMRIS_1-2019/3

30 – Design and Analysis of a Soft Pneumatic Actuator to Develop Modular Soft Robotic Systems
Ahmad Mahmood Tahir, Matteo Zoppi
DOI: 10.14313/JAMRIS_1-2019/4

37 – Development and Optimization of an Automated Irrigation System
Lanre Daniyan, Ezechi Nwachukwu, Ilesanmi Daniyan, Okere Bonaventure
DOI: 10.14313/JAMRIS_1-2019/5

46 – A Statistical Approach to Simulate Instances of Archeological Findings Fragments
Fabrizio Renno, Antonio Lanzotti, Stefano Papa
DOI: 10.14313/JAMRIS_1-2019/6

65 – One DOF Robot Manipulator Control Through Type-2 Fuzzy Robust Adaptive Controller
Amir Naderolasli, Abbas Chatraei
DOI: 10.14313/JAMRIS_1-2019/7

71 – Preface to Special Issue of the Journal of Automation, Mobile Robotics and Intelligent Systems on Recent Advances in Information Technology II
DOI: 10.14313/JAMRIS_1-2019/8

73 – The Design of Digital Audio Filter System used in Tomatis Method Stimulation
Krzysztof Jóźwiak, Michał Bujacz, Aleksandra Królak
DOI: 10.14313/JAMRIS_1-2019/9

79 – New Approach to Typified Microservice Composition and Discovery
Nikita Gerasimov
DOI: 10.14313/JAMRIS_1-2019/10

84 – Terrain classification using static and dynamic texture features by UAV downwash effect
João Pedro Carvalho, José Manuel Fonseca, André Damas Mora
DOI: 10.14313/JAMRIS_1-2019/11


Self-Supervised Learning of Motion-Induced Acoustic Noise Awareness in Social Robots

Submitted: 20th February 2019; accepted: 2nd April 2019

João Andrade, Pedro Santana, Alexandre P. Almeida

DOI: 10.14313/JAMRIS_1-2019/1

Abstract: With the growing presence of robots in human-populated environments, it becomes necessary to render their presence natural, rather than invasive. To do that, robots need to make sure the acoustic noise induced by their motion does not disturb people nearby. In this line, this paper proposes a method that allows the robot to learn how to control the amount of noise it produces, taking into account the environmental context and the robot's mechanical characteristics. Concretely, the robot adapts its motion to a speed that allows it to produce less noise than the environment's background noise, hence avoiding disturbing nearby humans. For that, before executing any given task in the environment, the robot learns how much acoustic noise it produces at different speeds in that environment by gathering acoustic information through a microphone. The proposed method was successfully validated in various environments with various background noises. In addition, a PIR sensor was installed on the robot in order to test the robot's ability to trigger the noise-aware speed control procedure when a person enters the sensor's field of view. The use of such a simple sensor aims at demonstrating the ability of the proposed system to be deployed in minimalistic robots, such as micro unmanned aerial vehicles.

Keywords: Social Robots, Acoustic Noise, Motion Control, Self-Supervised Learning

1. Introduction

Robot safe navigation in human-populated environments has been one of the most studied topics in the field of robotics since its early days, having reached a point in which self-driving cars became a reality [21]. To be well accepted in environments like offices, households, and factories, robots need to navigate among people in a predictable, non-disturbing way. Socially-aware robot navigation is a relatively new field that aims at solving exactly this problem by including in the robot's navigation system explicit knowledge of human behaviour, including cultural preferences [14, 25]. Social awareness in robots should include geometric aspects of motion planning (e.g., avoiding the invasion of personal spaces) but also more subjective aspects related to the acoustic impact robots have on the environment. This is in line with current knowledge about the relevance of adequate noise prevention and mitigation strategies for public health [6]. Hence, depending on the context in which the robot is immersed,

e.g., library versus cafeteria, the robot should be allowed to induce more or less acoustic noise in the environment. To control the amount of acoustic noise induced in the environment, robots may change their motion accordingly. To that purpose, robots should be provided with a forward model that can predict how much noise will be induced in the environment if a given speed is chosen. With that model, the robot should be able to select the speed that induces an acoustic noise level that better trades off the navigation goal and the comfort of the humans sharing the same environment. This paper proposes a method that allows robots to learn and use these forward models in a way that the robot induces a limited level of acoustic noise, trading off noise level and the desired speed criteria. Fig. 1 illustrates a typical use-case of the proposed system.

In the proposed system, learning takes place by having the robot perform a set of predefined motor actions to actively induce acoustic noise in the environment. The outcome of these controlled interactions is a set of context-action-sensation tuples that the robot accumulates in an associative memory to learn how to predict its motion-induced noise, given a motor action and an environment context. With the knowledge acquired with this self-supervised active learning strategy, the robot can then select, at each moment, the maximum velocity possible (up to a reference desired velocity) that induces less acoustic noise than the environment's background acoustic noise. Fig. 2 illustrates the basic principles of operation of the proposed system.

To validate the proposed method, a small-sized ROS-enabled [24] research-oriented wheeled robot, TurtleBot2, has been equipped with a microphone and a simple PIR sensor capable of binary detection of a person in its field of view. The simplicity of the sensory apparatus aims at matching the one that would be available if a smaller robot were considered, such as a micro unmanned aerial vehicle. With this approach we intend to demonstrate that the proposed method could be used on such small-sized robots, which are expected to populate our environments, possibly organised as swarms (refer to [7] for a survey on swarm robotics). In fact, self-supervised learning in micro air vehicles has been demonstrated in a different context [29]. This article is an extended and improved version of a previously published conference paper [1], providing a more detailed description of the proposed system alongside a deeper analysis of the experimental




Fig. 1. A cartoon representation of the proposed method's use case. Both top-left and bottom-left images represent the initial state, where the robot is idle in an environment where some people are having conversations. The difference is that in the top images the robot does not use any noise controller, while in the bottom images the robot uses the proposed noise controller. In the top-right image, the robot moved near the humans while making considerable noise, causing discomfort and forcing people to speak louder to continue their conversations. In the bottom-right image, because the robot is moving slower, it does not make more noise than the people and executes its task while people continue to talk to each other normally. The squared object with two semi-circles at the sides is the robot. The humans are represented by a circle with a flatter circle, somewhat similar to a plus sign. The dialogue balloons represent the conversations; the bigger the balloon and the letters, the louder the people are talking. The dotted lines represent the path of the robot and the nearby semi-circles represent the noise the robot is producing, where a higher number of semi-circles means more noise

Fig. 2. A cartoon representation of the proposed method's major steps. The red dot in the robot's middle compartment represents the microphone. The blue curved lines represent the robot's speed: the more and thicker the lines, the faster the robot is moving. The musical notes represent the environment's background noise: the more and bigger the notes, the higher the volume. In the top-left image, the robot is not using the proposed method, and so it moves at any speed without consideration of the noise around it. When using the system, the robot needs to be idle and listen to the background noise, as depicted in the top-right image. Then, the lower the background noise (bottom-left image), the slower the robot moves. If there is a high background noise (bottom-right image), the robot is allowed to move faster




results. This paper is organized as follows. Section 2 describes the related work. Section 3 gives an introduction to some theory about sound that inspired this work. Section 4 describes the proposed method and how it can be implemented. Section 5 describes the developed system. In Section 6, a set of experimental results is presented. Finally, conclusions and future research avenues are presented in Section 7.

2. Related Work

The ability of robots to navigate safely in human-populated environments has been extended in recent years to also encompass human safety, which means that these robots' navigation systems need to be socially-aware. In fact, socially-aware robot navigation has been demonstrated in office environments, houses, and museums [14, 25] and more recently in factories [17, 20]. In addition to ensuring the safety of people and goods near robots, it is also important to foster comfort in human-robot interactions, as prescribed in the theory of proxemics for human-human interactions [9]. This theory predicts that comfort is a function of the distance between the interacting agents, as well as their relationships, cultures, and intentions. Contemporary socially-aware robot navigation has included these concepts to promote more natural human-robot interactions [16, 25, 27].

Acoustic pollution induced by robots also affects human comfort. A strategy to reduce the acoustic impact robots may have in people-populated environments is to compel these robots to select paths that move closer to sound sources present in the environment [18]. By doing this, the robot masks its acoustic signature with those of the sound sources distributed throughout the environment. To handle several acoustic noise sources, acoustic maps can be created and updated by the robot by actively moving in the environment [12, 19]. These maps indicate the location of the several sound sources, which can then be used to hide the robot's acoustic signature. Another application based on the sound captured by a robot's microphones is the detection of obstacles that are outside of its field of vision [13]. This paper contributes to the state of the art by proposing a solution that does not require the explicit mapping of sound sources, thus reducing computational complexity and learning time.

The wide variety of environments and robot mechanical structures makes it difficult to design by hand a set of rules that helps the robot control its acoustic signature in a way that people do not feel disturbed by its presence. An alternative to hand-crafting these rules is to allow the robot to learn them in a self-supervised way as a function of the environment and motion speed. Self-supervised learning has been attracting considerable attention, in particular in the safe navigation domain, which requires the robot to autonomously learn classifiers for terrain assessment from images and point clouds [3, 4, 10, 22, 23, 29, 30]. In general, in this


previous work the robot is asked to learn which perceptual features better predict a given robot-terrain interaction, provided ground-truth labels produced by an active perception process. For instance, by manipulating an object, the robot is able to obtain ground-truth regarding how traversable that object is [4]. The learned associative mapping can then be used to predict future robot-terrain interactions, given sensory feedback. In this paper we address a similar problem: to learn the acoustic noise induced by the robot in a given environment by engaging in pre-defined motor actions to generate sufficient ground-truth labels for the learning process to take place. With a strong connection to the ideas of active perception [2, 5], the self-supervised learning concept follows the affordance principle studied by Gibson for the animal kingdom [8]. The concept of affordances links the ability of a subject, through its actions, to the features of the environment; hence, to learn an affordance the agent needs to interact with the environment. This idea has been deeply studied in humans [15, 26] and more recently in robotics [11], including for safe navigation purposes [28]. In this paper we address the problem of learning what acoustic noise level is afforded by the environment, given the environment's and the robot's characteristics.

3. Preliminaries on Acoustics

From a physics perspective, sound is a vibration that typically propagates as an audible wave of pressure. This wave propagates through a transmission medium such as a gas, liquid or solid. Our human ears detect changes in sound pressure. It is well known that the sound level decreases non-linearly as the distance from the sound source increases. Moreover, the characteristics of the environment, e.g., the design of the room (shape, furnishing, surface finishes, etc.), influence the extent to which the sound level decreases with distance.

Sound pressure level (SPL), or acoustic pressure level, is a logarithmic measure of the effective pressure of a sound relative to a reference value. Sound pressure level, denoted $L_p$ and measured in dB, is defined by

$$L_p = 20 \log \frac{p}{p_0}\ [\mathrm{dB}], \qquad (1)$$

where $p$ is the root mean square sound pressure and $p_0$ is the reference sound pressure (normally the lowest threshold of human hearing, $20\,\mu$Pa). Since microphones have a transfer factor or sensitivity given by some value in mV/Pa, in a particular context or environment we can relate the sample amplitudes acquired by the microphone to the strength of the acoustic signal (pressure level). This can be represented by the simplified (un-weighted) linear sound pressure level (LSPL), given by

$$LSPL = \frac{1}{N} \sum_{i=1}^{N} \left| x_i - \bar{x} \right|, \qquad (2)$$

where $N$ is the number of samples per second, $x_i$ is the sampled amplitude, and $\bar{x}$ is the sample mean value.
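To make Eq. (2) concrete, here is a minimal sketch (not the authors' implementation) of how the un-weighted LSPL could be computed from a one-second buffer of microphone samples; the sample rate and the synthetic buffer are assumptions for illustration.

```python
import numpy as np

def linear_spl(samples):
    """Un-weighted linear sound pressure level (Eq. 2):
    the mean absolute deviation of the sampled amplitudes."""
    samples = np.asarray(samples, dtype=float)
    return np.mean(np.abs(samples - samples.mean()))

# Hypothetical one-second buffer sampled at 8 kHz (stand-in for microphone data).
rng = np.random.default_rng(0)
buffer = 0.02 * rng.standard_normal(8000)
print(linear_spl(buffer))
```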




4. Proposed Method

4.1. Sensing

Since every environment is unique, the noise induced by the robot is also different when navigating in each of those environments. Therefore, the robot needs to select its speed according to the context in which it is. The set of possible contexts the robot may operate in is defined as

$$C = \{c_1, c_2, \ldots\}. \qquad (3)$$

The robot's speed also affects the noise it produces. Depending on the robot's characteristics, the robot produces different noise levels. Hence, the robot needs to learn the impact of each speed in each context. However, it is not always possible to test all speeds in all environments. Let us define the set of speeds that have been tried by the robot in a given context $c \in C$ as

$$S^{[c]} = \left\{ s_1^{[c]}, s_2^{[c]}, \ldots \right\}. \qquad (4)$$

As will be described below, each speed in each environment is tested multiple times (for robustness purposes) by performing a set of fixed action patterns. These fixed action patterns produce noise, whose magnitude is measured with the robot's on-board microphone, resulting in a time series associated with the context $c \in C$ and speed $s \in S^{[c]}$ in question:

$$X^{[c][s]} = \left\{ x^{[c][s]}[0],\, x^{[c][s]}[1],\, \ldots,\, x^{[c][s]}\!\left[n^{[c][s]}\right] \right\}. \qquad (5)$$

4.2. Learning

Learning occurs by storing in an associative memory the average noise level, $\mu^{[c][s]}$, and a conservative measure of the noise level variation (dispersion), $\sigma_+^{[c][s]}$, hereafter the conservative noise level variation, observed while performing each assessed speed $s \in S^{[c]}$ in context $c \in C$:

$$M = \left\{ \left( \mu^{[c][s]}, \sigma_+^{[c][s]} \right),\ \forall c \in C,\ \forall s \in S^{[c]} \right\}, \qquad (6)$$

where the average noise level for speed $s \in S^{[c]}$ in context $c \in C$ is given by

$$\mu^{[c][s]} = \frac{1}{n^{[c][s]}} \sum_{i=0}^{n^{[c][s]}} x^{[c][s]}[i]. \qquad (7)$$

The conservative noise level variation measure is given by the sum of the standard deviation with the standard error of the mean, allowing the uncertainty that emerges from the sample size to be taken into account:

$$\sigma_+^{[c][s]} = \sigma^{[c][s]} + \sigma_-^{[c][s]}, \qquad (8)$$

where the standard deviation of the noise level for speed $s \in S^{[c]}$ in context $c \in C$ is given by

$$\sigma^{[c][s]} = \sqrt{ \frac{ \sum_{i=0}^{n^{[c][s]}} \left( x^{[c][s]}[i] - \mu^{[c][s]} \right)^2 }{ n^{[c][s]} - 1 } }, \qquad (9)$$

and the standard error of the mean of the noise level for speed $s \in S^{[c]}$ in context $c \in C$ is given by

$$\sigma_-^{[c][s]} = \frac{\sigma^{[c][s]}}{\sqrt{n^{[c][s]}}}. \qquad (10)$$
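As an illustration of Eqs. (6)–(10), the snippet below is a minimal sketch (not the authors' code) of how the associative memory could be built from the per-context, per-speed sample sets; the dictionary layout and the toy numbers are assumptions.

```python
import numpy as np

def conservative_stats(samples):
    """Return (mean, sigma_plus) for one context-speed sample set, following
    Eqs. (7)-(10): sigma_plus = sample std dev + standard error of the mean."""
    x = np.asarray(samples, dtype=float)
    n = len(x)
    mu = x.mean()
    sigma = x.std(ddof=1)             # sample standard deviation, Eq. (9)
    sigma_minus = sigma / np.sqrt(n)  # standard error of the mean, Eq. (10)
    return mu, sigma + sigma_minus    # Eq. (8)

# Associative memory M: {context: {speed: (mu, sigma_plus)}}, Eq. (6).
M = {}
X = {("tiles", 0.2): [14.1, 14.3, 14.2],   # toy sample sets, not measured data
     ("tiles", 0.4): [15.0, 15.4, 15.2]}
for (context, speed), samples in X.items():
    M.setdefault(context, {})[speed] = conservative_stats(samples)
```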

4.3. Memory Recall

The associative memory allows the robot to know how much acoustic noise it induces in the environment at different speeds and in different contexts. This information is then used by the robot to adapt its speed in order to avoid producing noise whose magnitude is higher than that of the environment's background noise. Let us assume the robot needs to perform a given task which requires it to move at a given desired speed $s_r$. Then, the robot needs to determine whether it produces less noise than the environment at speed $s_r$ and, if not, what its maximum speed should be in order to fulfil that condition. To perform this analysis the robot needs to consult its memory.

Let us imagine the robot needs to know the expected conservative noise level variation if travelling at a given speed $s_r$ in a given context $c$. If that speed has been experienced by the robot, then its memory can be used through a direct recall process. However, if that speed has never been experienced, then the robot needs to linearly interpolate from the two closest speeds stored in memory. Formally, in those cases where $s_r \in S^{[c]}$, the conservative noise level variation is obtained with

$$\sigma_r(c, s_r) = \sigma_+^{[c][s_r]}, \qquad (11)$$

whereas in those cases where $s_r \notin S^{[c]}$, the conservative noise level variation is instead obtained with

$$\sigma_r(c, s_r) = \psi\!\left( s_r,\, s^-,\, \sigma^{[c][s^-]},\, s^+,\, \sigma^{[c][s^+]} \right), \qquad (12)$$

with

$$\psi(x, x_0, y_0, x_1, y_1) = \frac{y_0 (x_1 - x) + y_1 (x - x_0)}{x_1 - x_0}, \qquad (13)$$

where the speeds memorised in $M$ for context $c \in C$ that are immediately above and below $s_r$ are given by

$$s^{+[c]} = \arg\min_{s \in S^{[c]},\ s > s_r} (s - s_r), \qquad (14)$$

$$s^{-[c]} = \arg\min_{s \in S^{[c]},\ s_r > s} (s_r - s). \qquad (15)$$

The robot can also consult its memory to obtain the expected average noise level if travelling at speed $s_r$. Instead of consulting the memory directly, the robot uses a model learned from the data stored in memory. A set of tests (see below) shows that for low speeds a second-degree polynomial fits the data well, whereas for high speeds a simpler linear model is sufficient. Hence, the expected average noise level is given by

$$\mu_r(c, s_r) = \begin{cases} a s_r^2 + b s_r + c & \text{if } s_r \le s_d \\ d s_r + e & \text{if } s_r > s_d \end{cases} \qquad (16)$$



where $a, \ldots, e$ are parameters learned by regression from a set of data points corresponding to speed–noise tuples:

$$D^{[c]} = \left\{ \left( s, \mu^{[c][s]} \right),\ \forall s \in S^{[c]} \right\}. \qquad (17)$$
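The following is a minimal sketch of how the piecewise model of Eq. (16) could be fitted to the speed–noise pairs of Eq. (17) by least squares; it is not the authors' implementation, and the split speed $s_d = 0.4$ m/s and the toy data are assumptions.

```python
import numpy as np

def fit_noise_model(speeds, mean_noise, s_d=0.4):
    """Fit Eq. (16): a quadratic for speeds <= s_d, a line for speeds >= s_d.
    Returns a callable mu_r(s) for one context."""
    speeds = np.asarray(speeds, dtype=float)
    mean_noise = np.asarray(mean_noise, dtype=float)
    low, high = speeds <= s_d, speeds >= s_d
    quad = np.polyfit(speeds[low], mean_noise[low], 2)   # coefficients a, b, c
    lin = np.polyfit(speeds[high], mean_noise[high], 1)  # coefficients d, e

    def mu_r(s):
        return np.polyval(quad, s) if s <= s_d else np.polyval(lin, s)

    return mu_r

# Toy data for one context (illustrative values only).
speeds = np.arange(0.0, 0.9, 0.1)
noise = np.array([11.0, 11.2, 11.9, 12.8, 13.6, 14.1, 14.6, 15.1, 15.6])
mu_r = fit_noise_model(speeds, noise)
print(mu_r(0.25), mu_r(0.65))
```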

4.4. Motion Control

Algorithm 1 outlines the process used by the proposed system to select which speed is sent to the actuators in order to best take into account the robot's induced noise level, its memory, and the desired speed.

Data: desired speed, $s_r$ (input)
Data: environment context, $c$ (input)
Data: speed search step, $\alpha$ (input)
Result: noise-aware robot speed control
1: Set robot's speed to 0
2: Store environment noise level for $\delta$ seconds in $E$
3: Compute: $\mu_e = \sum_{e \in E} e / |E|$
4: Initialise $x$: $x \leftarrow \mu_r(c, s_r) + \sigma_r(c, s_r)$
5: Initialise selected speed: $s \leftarrow s_r$
6: while $x > \mu_e \wedge s > \alpha$ do
7:     Decrement selected speed: $s \leftarrow s - \alpha$
8:     Update $x$: $x \leftarrow \mu_r(c, s) + \sigma_r(c, s)$
9: end
10: Set robot's speed to $s$

Algorithm 1: Motion controller

Fig. 3. Diagram showing the connections between the microphone and the Raspberry Pi 2. It is possible to see that the 'Vref' (reference voltage) is different from the 'VDD' on the ADC in order to amplify the signal a bit

The algorithm receives the robot's desired speed, $s_r$, i.e., the speed the robot would use if noise level were not a concern. This speed is often task-oriented. The algorithm also assumes that the environment context, $c$, is known, for instance using vision (not the focus of this article). The output of this algorithm is, if possible, the highest speed, up to $s_r$, at which the robot can move without producing more noise than the environment. First, the robot is asked to stop (Step 1). This way, the robot's induced noise does not interfere with the next step. With the microphone, the robot gathers the environment's noise levels, $E$, for a determined number of seconds, $\delta$ (Step 2). These noise levels are then used to calculate an average background noise level, $\mu_e$ (Step 3). Then, the robot predicts, in a conservative manner, the expected noise it would produce at the desired speed $s_r$ (Step 4). The selected speed, $s$, is initially set to the desired speed $s_r$ (Step 5), because the desired speed is the highest speed that may be selected. Then, a small cycle (Steps 6–9) is performed to find the best speed. While the predicted robot's noise is higher than the environment's background noise and the selected speed is higher than a given speed search step $\alpha$ (Step 6), the selected speed is decremented (Step 7) and the predicted robot's noise at that speed is computed (Step 8). When the robot finds a speed which is expected to produce less noise than the environment's, the cycle ends and the robot's actuator speed is set to the selected speed $s$ (Step 10). If the selected speed gets lower than the speed search step, then it is assumed the robot cannot produce less noise than the environment and the robot's speed is still set to the selected speed. Hence, the speed search step $\alpha$ should be set to the minimum speed at which the robot can perform the task at hand.
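The controller loop of Algorithm 1 could be sketched in Python roughly as follows (a paraphrase, not the authors' code); the per-context `memory` dictionary, the fitted `mu_r` model from the earlier sketch, and the out-of-range fallback inside `sigma_r` are assumptions.

```python
import numpy as np

def sigma_r(memory, speed):
    """Conservative noise level variation at `speed` (Eqs. 11-15).
    `memory` maps tried speeds to (mu, sigma_plus) for one context."""
    if speed in memory:
        return memory[speed][1]                  # direct recall, Eq. (11)
    lower = [s for s in memory if s < speed]
    upper = [s for s in memory if s > speed]
    if not lower or not upper:                   # outside the tried range:
        nearest = min(memory, key=lambda s: abs(s - speed))
        return memory[nearest][1]                # fall back to nearest speed
    s0, s1 = max(lower), min(upper)              # Eqs. (15) and (14)
    y0, y1 = memory[s0][1], memory[s1][1]
    return (y0 * (s1 - speed) + y1 * (speed - s0)) / (s1 - s0)  # Eq. (13)

def select_speed(memory, mu_r, env_samples, s_r, alpha=0.05):
    """Algorithm 1: highest speed <= s_r whose predicted noise stays below
    the measured background noise; falls back towards alpha otherwise."""
    mu_e = np.mean(env_samples)                  # Steps 2-3
    s = s_r                                      # Step 5
    x = mu_r(s) + sigma_r(memory, s)             # Step 4
    while x > mu_e and s > alpha:                # Steps 6-9
        s -= alpha
        x = mu_r(s) + sigma_r(memory, s)
    return s                                     # Step 10
```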

5. Experimental Setup

The proposed method has been devised to allow complex and minimalist robots to adjust their speed in order to control their induced noise when navigating near people (e.g., a micro aerial vehicle flying in an office environment). However, the system could easily be adapted to perform the opposite operation, that is, to render the robot salient in the acoustic landscape. This could be interesting for tasks in which it is important to attract people's attention to the robot. This section presents the instantiation of the proposed method on a small-sized ground robot, a TurtleBot 2.0, equipped with a microphone and a Raspberry Pi 2 Model B for data acquisition and transmission to the robot's main processing unit. Fig. 3 shows a diagram with the connections between the microphone and the Raspberry Pi, whereas Fig. 4 depicts the robot used.

5.1. Microphone Position

The robot has three compartments in which the microphone can be placed (top compartment, middle compartment and bottom compartment), as depicted in Fig. 5. In addition to selecting the most appropriate compartment for microphone placement, it is necessary to verify whether it should be placed at the front or at the back of the robot, making a total of six different possible positions. In order to determine the best position, an experiment was performed. In all tested positions, the microphone is pointing downwards so




Fig. 4. Turtlebot 2.0. The robot used in this work

Fig. 5. Robot with its top, middle, and bottom compartments


as to be highly influenced by the wheels' and motors' acoustic noise. Fig. 6 plots the acoustic noise level recorded by the robot's microphone when placed in each of the six positions while travelling at 0.2 m/s. Fig. 6 shows that there was no major difference between having the microphone at the back or at the front of the robot in any compartment, although performance is slightly better at the back. Regarding the different compartments, the top compartment provides the worst performance: it is possible to differentiate the robot's noise, but not as well as in the other compartments. Both the middle and bottom compartments are good for distinguishing the robot's noise, with the bottom having an advantage because the noise is more perceptible there. As the robot can potentially move at higher speeds than the tested 0.2 m/s, for example in cases where there is more noise in the environment, a second experiment with the robot moving at 0.5 m/s on a concrete floor was performed. Fig. 7 shows the results of that experiment. It is noticeable in Fig. 7 how much more the noise saturates when the microphone is in the bottom compartment. Saturation occurs due to the closeness to the motors and due to the high mechanical impact induced by the rough terrain on the robot's structure. For this reason, the back of the middle compartment was selected as the most appropriate place for the microphone.

Fig. 6. Acoustic noise level induced by the robot while moving at 0.2 m/s on top of ceramic tiles, with the microphone on different compartments and located at the front and back of the robot. It is possible to check the environment's noise right at the beginning. The spike that follows represents the robot's motion onset. Then the noise stabilizes because the robot is constantly moving at the target speed. The spike that occurs in the middle represents the moment when the robot passes from one ceramic tile to another, which causes some mechanical impact that results in a higher acoustic noise. The plots are ordered sequentially from top to bottom: 1 – bottom compartment, back position; 2 – bottom compartment, front position; 3 – middle compartment, back position; 4 – middle compartment, front position; 5 – top compartment, back position; 6 – top compartment, front position. The vertical axis represents the acoustic noise level, and the horizontal axis represents the different samples from the microphone

The remainder of the article assumes the microphone to be located in this selected position.



Fig. 7. Acoustic noise level induced by the robot while moving at 0.5 m/s on top of rough concrete floor, with the microphone on different compartments located at the back of the robot. Like in Fig. 6, the vertical axis represents the acoustic noise level, and the horizontal axis represents the different samples from the microphone. The top image plots the samples acquired with the microphone on the middle compartment and the bottom image plots the samples acquired with the microphone on the bottom compartment

5.2. Learning Phase

With the microphone position established, it is possible to gather the robot's noise. To test whether the system works in various scenarios, four different contexts (environments) were selected. Since the robot moves on wheels, the major difference between the contexts is the floor (tiles, cement, carpet and wood). The set of tested contexts is:

$$C = \{\text{tiles}, \text{wood}, \text{cement}, \text{carpet}\}. \qquad (18)$$

A robot moving alongside humans cannot move too fast, as that can be uncomfortable for them. For that reason, the set of speeds ranges from 0.0 m/s to 0.8 m/s in 0.1 m/s steps. Gathering data when the robot is stopped (0.0 m/s) can be important in situations where the robot's noise is not easily distinguishable from the environment's noise, as it gives the robot a reference of the environment's noise. The set of tested speeds in each environment $c \in C$ is:

$$S^{[c]} = \{0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8\}. \qquad (19)$$

The robot moves at each speed in each context for about 2 seconds. This is repeated 15 times so as to have a good amount of samples. For each context, the conditions are the same: the robot always starts at the same place, to make each repetition as similar as possible, and the environment needs to be as silent as possible. Every time there is some kind of noise that is not from the robot's motion, that run is discarded and a new one is performed until all 15 similar repetitions are completed. The microphone data gathered for each context and speed compose the set of noise levels ($X^{[c][s]}$), which allows storing the set of tuples ($M$) containing the average noise level and the conservative noise level variation, and fitting the regression equations for each context ($\mu_r(c, s_r)$).

5.3. Motion Controller

Placing the microphone in a good position and learning the robot's noise are the two steps required before the robot can make less noise than the environment. With those two steps done, it is possible to develop the motion controller, which the robot uses to decide which speed to use. This motion controller is based on the algorithm presented in the previous section. Before starting, the robot must be idle, so as not to pollute the microphone data with its own noise, which would otherwise require an extra element in the algorithm to filter it out. The selected speed ($s$) is first initialised to the desired speed ($s_r$) and the search step ($\alpha$) is set. The robot starts by gathering the environment's noise levels for $\delta = 1$ second with the microphone, from which the average value ($\mu_e$) is computed and taken as the maximum acoustic noise the robot can produce. After that second, a small cycle is performed. With the help of the data gathered during the learning phase, the average noise level ($\mu_r$) and the conservative noise level variation ($\sigma_r$) are estimated. If the environment noise is lower than the average noise plus the conservative noise level variation ($\mu_e < \mu_r + \sigma_r$), $s$ is decremented by 0.1 and the cycle is repeated until either $\mu_e > \mu_r + \sigma_r$, in which case the best velocity has been found, or $s$ becomes lower than $\alpha$, in which case the robot would be moving too slowly to execute a task efficiently or in a reasonable amount of time, and it assumes it cannot hide its own acoustic noise.

6. Experimental Results

6.1. Learning Phase

To better understand the different contexts, Fig. 9 shows the average noise values and the expected noise level variation produced by the robot for each speed–context pair. The figure shows that, because the concrete floor is the hardest floor type, the robot produces more acoustic noise on it than on the other contexts. Predictably, the carpet floor, the softest floor type, makes the robot produce less acoustic noise than the other contexts. As can be seen, up until 0.4 m/s there is a higher variation between the noise values than at higher velocities, so it makes sense to have two different regression equations, as described in Equation (16): one of the equations covers the velocities between 0 m/s and 0.4 m/s, whereas the other covers the velocities between 0.4 m/s and 0.8 m/s.




Fig. 8. Different contexts tested. Top left: carpet floor; top right: tiles floor; bottom left: cement floor; bottom right: wood floor

Fig. 9. Learning phase results showing the robot's average noise level and conservative noise level variation for the different contexts. Contexts from top to bottom: cement, tiles, wood and carpet. The horizontal axis represents the speed of the robot in m/s, and the vertical axis represents the average noise of the robot. The vertical lines at each velocity represent the conservative noise level variation

6.2. Motion Controller


To validate the motion controller, a simple yet effective experiment was performed. The robot was placed, idle, in each of the four contexts (tiles, cement, wood and carpet). Then, a set of sound clips started playing. Three different sound clips with two intensity levels (high volume and low volume) were used. Since this is a controlled experiment, the sounds were played by speakers placed in the environment: (a) a sound clip from a crowded area, to simulate an environment where there are people inducing some background

noise; (b) a vacuum cleaner, to simulate an environment where there is a constant background noise, like an air ventilation system; and (c) a jazz song, to simulate an environment where the volume of the background noise is dynamic, meaning it oscillates based on the song's characteristics. While each sound clip was playing, the motion controller's algorithm was executed to find the ideal velocity $s$. The speed search step $\alpha$ was set to 0.05 and the desired speed $s_r$ was set to a high value of 2.0 m/s to force the algorithm to search for the best velocity. Table 1 shows the results obtained from that experiment. The table shows that the predicted acoustic noise the robot should produce is not higher than the environment's background noise, with similar values, which means that people nearby should not be disturbed by the robot's noise. Notice that the noise is measured by the microphone on the robot, so the farther away from the robot a person is, the less impactful the robot is to that person.

6.3. Testing with a Person Detector

To further test the system, a second experiment was conducted. This time, the robot was equipped with a PIR sensor to detect the presence of a person, and the experiment was performed on a tiled floor. The sound clips used in this experiment are the same as in the previous experiment (crowd sound, vacuum cleaner and a jazz song), except that this time there were no volume variations and all sound clips produced similar noise levels. The PIR sensor installed is a Motion Sensor Module IM120628009, which has a range of 7 m and a field of view of 110 degrees, and was installed at the front of the robot's top compartment.
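As an illustration of how such a PIR-triggered stop-and-adapt behaviour could be wired up on a ROS robot like the TurtleBot, here is a minimal hedged sketch (not the authors' code); the topic names, the Bool-typed PIR message, and the zero-argument `select_speed` callback (e.g., a wrapper around the Algorithm 1 sketch that records the background noise and queries the memory) are assumptions.

```python
import rospy
from std_msgs.msg import Bool
from geometry_msgs.msg import Twist

class NoiseAwareMover:
    """Drive forward at the desired speed; when the PIR fires, stop,
    listen to the background noise, and resume at the selected speed."""

    def __init__(self, select_speed, desired_speed=0.5):
        self.select_speed = select_speed          # e.g., Algorithm 1 wrapper
        self.speed = desired_speed
        self.cmd_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("pir_detection", Bool, self.on_pir)  # hypothetical topic

    def on_pir(self, msg):
        if msg.data:                              # person entered the field of view
            self.cmd_pub.publish(Twist())         # stop while listening (Step 1)
            self.speed = self.select_speed()      # run the motion controller

    def spin(self):
        rate = rospy.Rate(10)
        while not rospy.is_shutdown():
            cmd = Twist()
            cmd.linear.x = self.speed
            self.cmd_pub.publish(cmd)
            rate.sleep()

# Usage (assumed node setup):
# rospy.init_node("noise_aware_mover")
# NoiseAwareMover(select_speed=lambda: 0.2).spin()
```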



Tab. 1. Results from the motion controller implementation. The headers are, from left to right: ambient sound, floor type, environment background noise, predicted robot induced noise, noise difference and noise ratio

| Ambient sound | Floor type | Env. back noise (δ) | Pred. rob. ind. noise (μr) | Noise difference (μr − δ) | Noise ratio [%] (1 − μr/δ) |
|---|---|---|---|---|---|
| Jazz, high volume | concrete | 14.24 | 14.20 | -0.04 | 0.30% |
| Jazz, high volume | tiles | 14.28 | 14.27 | -0.01 | 0.05% |
| Jazz, high volume | wood | 13.81 | 13.76 | -0.05 | 0.34% |
| Jazz, high volume | carpet | 12.81 | 12.79 | -0.02 | 0.14% |
| Jazz, low volume | concrete | 15.64 | 15.59 | -0.05 | 0.35% |
| Jazz, low volume | tiles | 13.91 | 13.84 | -0.07 | 0.51% |
| Jazz, low volume | wood | 12.99 | 12.95 | -0.04 | 0.28% |
| Jazz, low volume | carpet | 11.80 | 11.79 | -0.01 | 0.09% |
| Crowd, high volume | concrete | 16.61 | 16.59 | -0.02 | 0.10% |
| Crowd, high volume | tiles | 16.63 | 16.62 | -0.01 | 0.03% |
| Crowd, high volume | wood | 16.53 | 16.51 | -0.02 | 0.15% |
| Crowd, high volume | carpet | 16.03 | 16.01 | -0.02 | 0.10% |
| Crowd, low volume | concrete | 16.61 | 16.59 | -0.02 | 0.11% |
| Crowd, low volume | tiles | 16.40 | 16.38 | -0.02 | 0.14% |
| Crowd, low volume | wood | 16.04 | 16.02 | -0.02 | 0.12% |
| Crowd, low volume | carpet | 15.94 | 15.93 | -0.01 | 0.08% |
| Vacuum cleaner | concrete | 15.29 | 15.25 | -0.04 | 0.27% |
| Vacuum cleaner | tiles | 16.15 | 16.13 | -0.02 | 0.13% |
| Vacuum cleaner | wood | 15.51 | 15.50 | -0.01 | 0.04% |
| Vacuum cleaner | carpet | 14.75 | 14.74 | -0.01 | 0.07% |

The robot starts by moving forward at a desired speed $s_r = 0.5$ m/s until the PIR sensor detects a person. When a person passes through the PIR's field of view, the robot stops and performs the motion controller's algorithm to find the ideal speed $s$, and then starts moving at that speed. Since this is a controlled experiment, the person always appears at approximately 1 m in front of the robot. This test is useful to understand whether the robot is capable of performing a task with this motion controller, where the objective is for the robot to stop when a person is nearby and adapt its velocity so as not to cause discomfort to that person. For example, an autonomous vacuum cleaner could clean a room more slowly when a person is in the same room. Table 2 shows the results obtained from this experiment. Similar to the previous experiment, the robot does not produce more acoustic noise than the background environment's noise, with the average difference between the robot's noise and the background's noise being 1.24%, which shows that the motion controller can be integrated into a more complex system to perform different types of tasks.

7. Conclusion

Regardless of the robot's activity, the acoustic noise it induces in the environment can be uncomfortable or annoying to people who might be in the same environment. Therefore, it is necessary to limit the amount of acoustic noise produced by the robot so that it becomes unobtrusive.

By setting a microphone on a robot, it is possible to learn the amount of acoustic noise any robot makes while moving at any speed in any context. In this work, a system was developed to enable different kinds of robots (big, small, aerial or ground-based) to adapt their motion when around humans by moving at a speed that does not produce more acoustic noise than that already present in the environment's background and, consequently, does not cause discomfort to people because of the robot's noise.

To that purpose, the robot needs to have a notion of how much acoustic noise it produces. This is accomplished through a learning phase in the different contexts where the robot is expected to perform its tasks. This learning phase creates a relation between a tuple (average and conservative variation of the noise level) and the different speeds that the robot may use. This is the only task needed before being able to use the developed system. The developed system is a motion controller that, at any moment, allows the robot to adapt its speed to a value that does not produce more acoustic noise than that already existing in the environment. This allows for a greater acceptance of autonomous systems in our society because, by not disturbing nearby people with its noise, the robot lets people carry on with whatever they may be doing regardless of the robots nearby.

Although the results suggest that the proposed method works in a set of disparate contexts, there are some contexts where performing a task without causing some discomfort to nearby humans is almost impossible. In quiet places, like a library, where there is not much background noise, the robot may not be able to execute its tasks at a reasonable speed. In those situations, the robot or the user controlling it will have that knowledge and may move at the lowest speed possible or postpone the completion of the task until there are no people nearby.




Tab. 2. Results from the second experiment, where the robot stops performing a task in the presence of a person and adapts its velocity so as not to disturb the person. The headers are, from left to right: ambient sound, predicted robot induced noise, environment background noise, robot selected speed, noise difference and noise ratio

| Ambient sound | Pred. rob. ind. noise (μr) | Env. back noise (δ) | Robot selected speed (s) | Noise difference (μr − δ) | Noise ratio [%] (1 − μr/δ) |
|---|---|---|---|---|---|
| Jazz | 14.57 | 14.79 | 0.395 | -0.22 | 1.49% |
| Vac. cleaner | 14.57 | 14.89 | 0.395 | -0.32 | 2.12% |
| Crowd | 14.94 | 14.96 | 0.405 | -0.02 | 0.13% |

To test the proposed method, a couple of experiments were performed. The first experiment involved four different contexts (cement floor, tiled floor, wooden floor and carpet floor) and three different sound clips: a jazz song, crowd noise and a vacuum cleaner. Each sound clip had a different purpose: the jazz song simulated an environment where there are variations in the background's noise volume, the crowd noise simulated an environment with multiple people nearby, and the vacuum cleaner simulated an environment with a constant background noise. The robot was placed in each context and executed the motion controller to find the highest speed at which it could move without making more acoustic noise than the background noise. The experimental results showed that the proposed method properly handled the various situations by selecting speeds that allowed the robot not to produce more noise than the environment. It is worth noting that the noise values are from the robot's perspective, meaning that a person hears the robot's noise more or less depending on their distance from the robot. To further validate the method, a second experiment was performed. The robot, equipped with a PIR sensor, was placed in only one context, and the objective was to test the ability of the robot to select a proper speed as soon as someone appeared in the PIR's field of view. The obtained results show that the robot did not move at a speed that would make it produce more noise than the environment's background noise.

Despite the overall positive results, the proposed method presents some limitations to be handled in future work. For example, the experimental results were obtained with the robot performing simple forward motions; future work needs to address a more diverse set of motion primitives. Although tested in different contexts, all were indoors with flat terrains; future work should assess the system in a wider range of environments (e.g., rough outdoor environments). It would also be valuable to assess the system on robots with different morphologies (e.g., small unmanned aerial vehicles).

AUTHORS

João Andrade – Instituto de Telecomunicações and ISCTE-Instituto Universitário de Lisboa, Lisboa, PORTUGAL, e-mail: joaopcandrade@gmail.com.

Pedro Santana∗ – Instituto de Telecomunicações and ISCTE-Instituto Universitário de Lisboa, Lisboa, PORTUGAL, e-mail: pedro.santana@iscte-iul.pt.

Alexandre P. Almeida – Instituto de Telecomunicações and ISCTE-Instituto Universitário de Lisboa, Lisboa, PORTUGAL, e-mail: alexandre.almeida@iscte-iul.pt.

∗Corresponding author

REFERENCES

[1] J. Andrade, P. Santana, and A. Almeida, “Motion-induced acoustic noise awareness for socially-aware robot navigation”. In: Proceedings of the IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), 2018, 24–29, 10.1109/ICARSC.2018.8374155. [2] R. Bajcsy, “Active perception”, Proceedings of the IEEE, vol. 76, no. 8, 1988, 966–1005, 10.1109/5.5968. [3] M. Bajracharya, A. Howard, L. H. Matthies, B. Tang, and M. Turmon, “Autonomous off-road navigation with end-to-end learning for the LAGR program”, Journal of Field Robotics, vol. 26, no. 1, 2009, 3–25, 10.1002/rob.20269.

[4] J. Baleia, P. Santana, and J. Barata, “On exploiting haptic cues for self-supervised learning of depth-based robot navigation affordances”, Journal of Intelligent & Robotic Systems, vol. 80, no. 3-4, 2015, 455–474, 10.1007/s10846-015-0184-4. [5] D. H. Ballard, “Animate vision”, Artificial Intelligence, vol. 48, no. 1, 1991, 57–86, 10.1016/0004-3702(91)90080-4. [6] M. Basner, W. Babisch, A. Davis, M. Brink, C. Clark, S. Janssen, and S. Stansfeld, “Auditory and non-auditory effects of noise on health”, The Lancet, vol. 383, no. 9925, 2014, 1325–1332, 10.1016/S0140-6736(13)61613-X. [7] M. Brambilla, E. Ferrante, M. Birattari, and M. Dorigo, “Swarm robotics: a review from the



swarm engineering perspective”, Swarm Intelligence, vol. 7, no. 1, 2013, 1–41, 10.1007/s11721-012-0075-2.

[8] J. Gibson, “The concept of affordances”, Perceiving, acting, and knowing, 1977, 67–82.

[9] E. Hall, The Hidden Dimension: man's use of space in public and in private, 1969, 217.

[10] H. Heidarsson and G. Sukhatme, “Obstacle detection from overhead imagery using self-supervised learning for autonomous surface vehicles”. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2011, 3160–3165, 10.1109/IROS.2011.6094610. [11] L. Jamone, E. Ugur, A. Cangelosi, L. Fadiga, A. Bernardino, J. Piater, and J. Santos-Victor, “Affordances in psychology, neuroscience and robotics: a survey”, IEEE Transactions on Cognitive and Developmental Systems, 2016, 10.1109/TCDS.2016.2594134. [12] N. Kallakuri, J. Even, Y. Morales, C. Ishi, and N. Hagita, “Probabilistic approach for building auditory maps with a mobile microphone array”. In: Robotics and Automation (ICRA), 2013 IEEE International Conference on, 2013, 2270–2275, 10.1109/ICRA.2013.6630884. [13] N. Kallakuri, J. Even, Y. Morales, C. Ishi, and N. Hagita, “Using sound reflections to detect moving entities out of the field of view”. In: Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on, 2013, 5201–5206, 10.1109/IROS.2013.6697108.

[14] T. Kruse, A. K. Pandey, R. Alami, and A. Kirsch, “Human-aware robot navigation: A survey”, Robotics and Autonomous Systems, vol. 61, no. 12, 2013, 1726–1743, 10.1016/j.robot.2013.05.007.

[15] S. Lacey, J. Hall, and K. Sathian, “Are surface properties integrated into visuohaptic object representations?”, European Journal of Neuroscience, vol. 31, no. 10, 2010, 1882–1888, 10.1111/j.1460-9568.2010.07204.x. [16] M. Luber, L. Spinello, J. Silva, and K. O. Arras, “Socially-aware robot navigation: A learning approach”. In: 2012 IEEE/RSJ international conference on Intelligent robots and systems (IROS), 2012, 902–907, 10.1109/IROS.2012.6385716.

[17] F. Marques, D. Gonçalves, J. Barata, and P. Santana, “Human-aware navigation for autonomous mobile robots for intra-factory logistics”. In: Proceedings of the 6th International Workshop on Symbiotic Interaction, 2017, 10.1007/978-3319-91593-7_9. [18] E. Martinson, “Hiding the acoustic signature of a mobile robot”. In: Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on, 2007, 985–990, 10.1109/IROS.2007.4399264.


[19] E. Martinson and A. Schultz, “Auditory evidence grids”. In: Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, 2006, 1139–1144, 10.1109/IROS.2006.281843.

[20] G. Michalos, S. Makris, P. Tsarouchi, T. Guasch, D. Kontovrakis, and G. Chryssolouris, “Design considerations for safe human-robot collaborative workplaces”, Procedia CIrP, vol. 37, 2015, 248–253, 10.1016/j.procir.2015.08.014. [21] B. Paden, M. Čáp, S. Z. Yong, D. Yershov, and E. Frazzoli, “A survey of motion planning and control techniques for self-driving urban vehicles”, IEEE Transactions on Intelligent Vehicles, vol. 1, no. 1, 2016, 33–55, 10.1109/TIV.2016.2578706. [22] E. Pinto, F. Marques, R. Mendonça, A. Lourenço, P. Santana, and J. Barata, “An autonomous surface-aerial marsupial robotic team for riverine environmental monitoring: Benefiting from coordinated aerial, underwater, and surface level perception”. In: Robotics and Biomimetics (ROBIO), 2014 IEEE international conference on, 2014, 443–450. [23] R. Pombeiro, R. Mendonça, P. Rodrigues, F. Marques, A. Lourenço, E. Pinto, P. Santana, and J. Barata, “Water detection from downwash-induced optical flow for a multirotor UAV”. In: OCEANS'15 MTS/IEEE Washington, 2015, 1–6, 10.23919/OCEANS.2015.7404458.

[24] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, “ROS: an open-source robot operating system”. In: ICRA workshop on open source software, vol. 3, no. 3.2, 2009, 5.

[25] J. Rios-Martinez, A. Spalanzani, and C. Laugier, “From proxemics theory to socially-aware navigation: A survey”, International Journal of Social Robotics, vol. 7, no. 2, 2015, 137–153, 10.1007/s12369-014-0251-1. [26] J. Schwenkler, “Do things look the way they feel?”, Analysis, vol. 73, no. 1, 2013, 86–96.

[27] L. Takayama and C. Pantofaru, “Influences on proxemic behaviors in human-robot interaction”. In: IEEE/RSJ international conference on Intelligent robots and systems, 2009. IROS, 2009, 5495–5502, 10.1109/IROS.2009.5354145.

[28] E. Uğur and E. Şahin, “Traversability: A case study for learning and perceiving affordances in robots”, Adaptive Behavior, vol. 18, no. 3-4, 2010, 258–284, 10.1177/1059712310370625. [29] K. van Hecke, G. de Croon, L. van der Maaten, D. Hennes, and D. Izzo, “Persistent self-supervised learning: From stereo to monocular vision for obstacle avoidance”, International Journal of Micro Air Vehicles, vol. 10, no. 2, 2018, 186–206, 10.1177/1756829318756355. [30] K. M. Wurm, H. Kretzschmar, R. Kümmerle, C. Stachniss, and W. Burgard, “Identifying




vegetation from laser data in structured outdoor environments”, Robotics and Autonomous Systems, vol. 62, no. 5, 2012, 675–684, 10.1016/j.robot.2012.10.003.





Damage Recovery for Simulated Modular Robots Through Joint Evolution of Morphologies and Controllers

Submitted: 3rd September 2018; accepted: 22nd January 2019

Djouher Akrour, NourEddine Djedi

DOI: 10.14313/JAMRIS_1-2019/2 Abstract: In order to be fully autonomous, robots have to be resilient so that they can recover from damages and operate for a long period of time with no human assistance. To be resilient, existing approaches propose to change the robots’ behavior using a different control system when a hardware fault or damage occurs. These approaches are used for robots which have fixed morphologies. However, we cannot assume which morphology would be optimal for a given problem and which morphology allows resilience. In the present paper, we introduce a new approach that generates resilient artificial modular robots by evolving the robot morphology along with its controller. We used a multi-objective evolutionary algorithm to optimize two objectives at a time, which are the traveled distance of a damage-free robot and the traveled distance of the same robot with damaged parts. The result of preliminary experiments demonstrates that during evaluation, when robots are deliberately faced to motor failures, the evolution process can optimize and generate new morphologies for which the robot’s behavior is less affected by damage. This makes the robot capable to recover its ability to move forward. Keywords: Artificial life, Controller, Evolutionary robotics, Modular robots, Resilience

1. Introduction

Building resilient robots capable of recovering from damage is a central and challenging question [1, 2]. Such robots would be able to sustain their ability to pursue their missions when a hardware degradation occurs, with no human assistance. In order to obtain such robots, researchers propose to change the robot's behavior using new control systems. Some approaches let the robot use evolutionary algorithms to learn a new compensatory behavior online after detecting the damage. In this case, learning is done either on the physical robot with an embedded learning process [3, 4], or using a self-modeling approach, which is based on transferring learned behaviors between a physical simulator and the robot [2]. This approach has shown its capacity to decrease the learning time. Recently, Cully et al. [1] proposed to create an offline behavior repertoire.

The latter is used during the mission to perform an online search of the best suitable controller for the current situation, therefore speeding up the adaptation-to-damage process. While these approaches are used on robots with fixed morphologies, we propose to evolve the controller along with the morphology. It has been shown that co-evolving morphologies with their behavior can produce more adaptable and evolvable robots [5, 6]. In the early 90's, Sims [7] pioneered the field by co-evolving the morphology and the control of artificial creatures. Since then, many methodologies for evolving, learning and generating artificial creatures have been studied, either in simulation [8] or embedded in real-world robots [9]. However, it remains a challenging task [6]. By evolving the morphology, we seek to generate robot body plans that allow resilience without changing the controller each time a damage occurs. In other words, we want to optimize and produce robots whose morphologies lower the impact of damage on their locomotion. Robots from the literature, whether snake-like [4] or multi-legged [1], have critical joints or limbs that play a central role in the motion efficiency. When a failure affects one of these parts, the robot's capacity to move is strongly compromised. In order to overcome this limitation, we aim at evolving robots with no such parts, focusing instead on robots containing joints and limbs that collaborate to compensate possible failures with the lowest possible effect on their objective. With this aim in mind, we use a multi-objective evolutionary algorithm to evolve robots maximizing both the traveled distance of a damage-free robot and the traveled distance of the same robot with damaged parts. The robots used in this work are modular. Their morphologies and controllers are encoded in genomes using oriented graphs. In order to evaluate the performance of our robots, we use Gazebo, a realistic multi-robot simulator often used to simulate large robots (humanoids, wheeled robots, etc.). Gazebo uses simulated sensors that produce a data stream which closely matches data from real-world physical sensors, and it provides numerous physical attributes of the environment and the simulated objects that we can set up with realistic values. The results of the preliminary experiments described in this paper show that, by intentionally damaging the robot's motors during the evaluation, the evolution process can generate robots that can adapt their motions each time a failure occurs.


The paper is structured as follows: in Section 2, we introduce the virtual modular robots and show how the morphology and the controller are co-generated. In Section 3, we present how resilience can be part of the evolutionary process. Section 4 describes the experimental results. Section 5 contains a discussion. Finally, Section 6 concludes with a summary of the work and some possible perspectives.

2. Virtual Robots

The robots used in the present work are modular [13]. They are composed of a set of cuboid modules that are linked by joints of equal sizes. With the aim of making simulations more realistic, interpenetration between neighboring modules is forbidden during runtime. In this work, the robots' morphologies and controllers are encoded in genomes using oriented graphs [7], where each node contains the phenotypic parameters of one module, such as its size, the available sensors and a local controller that controls one of the joints connected to it. This representation allows modularity and symmetry in the generated morphologies. All details of the genetic representation are provided in [11]. Each robot has two levels of controllers: a local controller included in every single module, which controls the connecting joints, and a global controller able to control all the robot's joints to globally modify the behavior of the robot when necessary. All controllers are Multi-Layer Perceptrons (MLPs) [14] using a hyperbolic tangent activation function. They contain at most 3 hidden layers, each of which can have from 3 to 10 neurons. The input layer contains 10 neurons, receiving data from sensors and from communication. The output layer contains 3 neurons: one for communication, while the two others are used to compute the torque that will be applied to the joint. The resulting value is then normalized to fit the range of allowed torque. Communication neurons are directly connected from the output of a module to an input of another module, to leave open the possibility of a communication protocol emerging within the robot. An example of the distributed control system is illustrated in Fig. 1.
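The forward pass of such a module controller can be sketched as follows (our own illustration, not the authors' code); the layer sizes come from the description above, while the weights, which in the paper are decoded from the evolved genome, are random placeholders here:

```python
import numpy as np

class ModuleController:
    """Local controller of one module; in the real system it is decoded from the genome."""
    def __init__(self, hidden_sizes, rng=None):
        assert 1 <= len(hidden_sizes) <= 3 and all(3 <= h <= 10 for h in hidden_sizes)
        rng = rng if rng is not None else np.random.default_rng(0)
        sizes = [10] + list(hidden_sizes) + [3]        # 10 inputs, 3 outputs
        # Placeholder random weights; the paper evolves them as part of the genome.
        self.weights = [rng.normal(0.0, 0.5, (n_in, n_out))
                        for n_in, n_out in zip(sizes[:-1], sizes[1:])]
        self.biases = [rng.normal(0.0, 0.5, n_out) for n_out in sizes[1:]]

    def forward(self, inputs):
        """Map 10 sensor/communication inputs to (communication output, 2 torque parameters)."""
        x = np.asarray(inputs, dtype=float)
        for w, b in zip(self.weights, self.biases):
            x = np.tanh(x @ w + b)                     # hyperbolic tangent activation
        return x[0], x[1:]                             # 1 comm neuron, 2 torque neurons

# Example: a controller with two hidden layers of 6 and 4 neurons.
ctrl = ModuleController([6, 4])
comm_out, torque_params = ctrl.forward(np.zeros(10))
```

The communication output of one module would be wired directly to an input neuron of a neighboring module, as described above.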


Fig. 2. A typical example of a genotype. The outer graph describes the morphology (assembly and features) while the inner graph on each node is the controller [11]

2.1. Sensors and Effectors

Sensors are used to inform robots about both the external world and the internal state. In our work, sensing capabilities are restricted to the joint inner state (angle, torque), the module inner state (force, orientation, linear and angular velocity) and contact sensors. Effectors apply torques onto the robot's joints. The torque applied to each joint is a sinusoidal function. The first neuron of the output layer codes for the amplitude and the second for the phase.
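As a rough illustration (ours, not the authors' implementation), the two torque-related controller outputs can be mapped to a sinusoidal joint torque command as follows; the oscillation frequency is an assumption, since the paper does not state it, and the torque limit is the one quoted in Section 2.2:

```python
import numpy as np

MAX_TORQUE = 1.75   # N*m, joint torque limit used in the simulations (Section 2.2)
FREQ_HZ = 1.0       # assumed oscillation frequency (not stated in the paper)

def joint_torque(torque_params, t):
    """Sinusoidal joint torque from the two torque-related controller outputs."""
    amplitude, phase = torque_params               # both in [-1, 1] (tanh outputs)
    raw = amplitude * MAX_TORQUE * np.sin(2.0 * np.pi * FREQ_HZ * t + np.pi * phase)
    return float(np.clip(raw, -MAX_TORQUE, MAX_TORQUE))
```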

2.2. Simulator and Realism

In order to obtain robots whose behavior is as close as possible to that of real physical robots, the simulations use Gazebo with interpenetration disabled. Gazebo is a popular framework to simulate robots. It allows the production of a data stream which closely matches data from real-world physical sensors. In addition, it provides numerous physical attributes for the environment and the simulated objects. They are set up as follows:
• Module dimensions: they range from 5 cm to 10 cm.
• Module mass: the mass is calculated from the volume of the module and its density. Modules have the density of water.
• Module moment of inertia: it determines the difficulty of making an object rotate. It is the most influential parameter on the realism of the behavior.
• Friction: when applied on the module surface, the opposing frictional force can prevent the robot from sliding, and when applied in joints it avoids unrealistic vibration moves. We set the joint and module surface friction respectively to 0.2 and 0.5.
• Joint damping: depending on the joint velocity, damping allows energy dissipation. This can avoid bouncing movements. It is set to 0.02.
• Joint torque: it is the force required to make a joint rotate and raise all the modules attached to it without breaking the joint. We set a torque limitation of 1.75 Nm.
• Joint velocity: the maximum velocity allowed for a joint is 5 rad/s.
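As a numerical illustration of these settings (ours, not taken from the paper), the mass and moment of inertia of a single cuboid module can be derived from its dimensions and the density of water; the inertia expression is the standard formula for a solid cuboid:

```python
WATER_DENSITY = 1000.0    # kg/m^3 - modules have the density of water
JOINT_FRICTION = 0.2
SURFACE_FRICTION = 0.5
JOINT_DAMPING = 0.02      # energy dissipation proportional to joint velocity
MAX_JOINT_TORQUE = 1.75   # N*m
MAX_JOINT_VELOCITY = 5.0  # rad/s

def module_mass_and_inertia(sx, sy, sz):
    """Mass and principal moments of inertia of a solid cuboid module.

    sx, sy, sz are the module dimensions in metres (0.05 m to 0.10 m here)."""
    m = WATER_DENSITY * sx * sy * sz
    # Standard solid-cuboid inertia about its centre of mass.
    ixx = m * (sy ** 2 + sz ** 2) / 12.0
    iyy = m * (sx ** 2 + sz ** 2) / 12.0
    izz = m * (sx ** 2 + sy ** 2) / 12.0
    return m, (ixx, iyy, izz)

print(module_mass_and_inertia(0.05, 0.05, 0.10))   # a 5 x 5 x 10 cm module
```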

Fig. 1. The phenotype generated from the genotype shown in Fig. 2. Blue, black and red lines represent respectively the communication circuit, torque and effectors. Grey module is the root. It contains the global controller [11]

3. Resilience Approach


After technical failures, resilient robots must keep moving forward. Changing the direction of motion after damage may produce undesirable results and lead to a task failure, such as in robot surveillance, where robots have to maintain specific trajectories. Therefore, in our opinion, efficacy and velocity of the



locomotion are of lower importance than keeping a given direction. Robots can endure several kinds of damage, such as motor failures, missing or broken limbs (legs), etc. In order to reduce the complexity of the recovery process, we focus in this study on motor failures, i.e. when a joint is disabled. We also limit the evaluation to one joint failure at a time.

3.1. Objectives

The objective is to produce either robots whose morphologies allow for resilience or robots capable of finding new locomotion strategies to recover from damage. In other words, we aim at evolving robots whose locomotion efficiency is not determined by specific joints and limbs, so that the failure of any motor, which leaves limbs uncontrolled and inactive, will not substantially affect the locomotion efficiency.

3.2. Performance Function

Robots are evaluated in the environment with and without faulty joints. Each evaluation takes 10 s of simulation time. The initial point is taken after the stabilization of the robot (after 2 s). So, the overall evaluation time of a robot depends on the number of its joints. To evolve robots on which faults will not affect the locomotion, especially its direction, robots are evaluated on two objectives: (1) maximize the traveled distance with no failure, in any direction (Eq. 1); (2) maximize the traveled distance with a faulty joint in the direction given by objective (1). The second objective can be rewritten as the minimization of the distance between the point reached by the damaged robot and a target point set in the direction given by objective (1). Among all motor-failure evaluations, we choose to take the highest remaining distance, as presented in Eq. 2. We note that the goal of the robot is to maintain the same direction of locomotion, not to reach the target. So the remaining distance is a measure that lets us identify robots that sustain the direction of locomotion and keep moving forward.


dis = √((fx – ix)² + (fy – iy)²)   (1)

rem = max_j √((ox – fx^j)² + (oy – fy^j)²),  j – index of the faulty joint   (2)

where i and f are respectively the initial and the final positions of the robot during the evaluation, and o is the target that should be reached by the robot with a faulty joint. It is 1 m away from the starting point (Eq. 3). In that way we ensure that the locomotion of the robot is forward even if there are failures.

ox = ix + (fx – ix)/dis,  oy = iy + (fy – iy)/dis   (3)

3.3. Joint Evolution

In order to test and evaluate the efficiency of our system, we conducted a set of experiments. Each experiment starts with a random generation of 80 robots. Experiments ran for 400 generations. The crossover and mutation rates were respectively 35% and 75%. The robot behavior we are aiming at requires optimizing more than one objective. Having several objectives to satisfy in a single run requires the use of a Multi-Objective Evolutionary Algorithm (MOEA). Instead of searching for one optimal solution, we produce a set of multiple trade-off solutions, from which one solution can be selected by the decision maker. In this context, we use the Non-dominated Sorting Genetic Algorithm (NSGA-II) [15], which has been successfully applied to solve multi-objective problems. We thus optimize the traveled distance of a damage-free robot and the traveled distance of the same robot with damaged parts. However, we noticed that it is difficult to find robots that go forward after damage and also travel a long distance. In order to overcome this problem, we introduce a penalty that creates a pressure favoring the emergence of appropriate robot morphologies and controllers. In other words, we reduce the resilience score of robots that have a fitness of less than 20 cm (traveled distance when no motor failure occurs): we multiply their value of resilience (the remaining distance) by 1.5. In addition, when we introduce damage during the evaluation, we inactivate all the joints of the robot successively and assess each motor failure separately. This is necessary because, if we selected only some of them, the evolution process would favor robots for which the joints selected for damage are not important, for instance joints such as a neck or fingers. This is due to the fact that the sole purpose of evolution is to satisfy the objective function, even if it does not meet the expectations and the final goal of the engineers. Therefore, if a motor that controls a joint with an important role failed during a mission, the robot would remain in place.
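A minimal sketch of this evaluation (ours, not the authors' implementation) is given below; evaluate_gait is a hypothetical helper standing in for a 10 s Gazebo rollout that returns the robot's initial and final planar positions, optionally with one joint disabled:

```python
import math

def objectives(evaluate_gait, n_joints):
    """Return (dis, rem) for one genome; evaluate_gait runs a 10 s simulation."""
    # Objective 1, Eq. (1): distance traveled by the intact robot.
    (ix, iy), (fx, fy) = evaluate_gait(disabled_joint=None)
    dis = math.hypot(fx - ix, fy - iy)
    if dis < 1e-9:                       # degenerate case: the robot did not move
        return 0.0, 1.0                  # remaining distance ~ the 1 m target offset

    # Eq. (3): target point 1 m from the start, in the direction of the intact run.
    ox = ix + (fx - ix) / dis
    oy = iy + (fy - iy) / dis

    # Objective 2, Eq. (2): worst remaining distance to the target when each
    # joint is disabled in turn (every motor failure is assessed separately).
    rem = 0.0
    for j in range(n_joints):
        _, (fxj, fyj) = evaluate_gait(disabled_joint=j)
        rem = max(rem, math.hypot(ox - fxj, oy - fyj))

    # Penalty described above: robots traveling less than 20 cm when intact
    # have their remaining distance inflated by 1.5.
    if dis < 0.20:
        rem *= 1.5
    return dis, rem    # NSGA-II maximizes dis and minimizes rem
```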

4. Experimental Results

Graphs in Fig. 3 illustrate the convergence curves of the evolutionary process. The plots show the median, the interquartile range (dark band), and the min and max fitness (light band) of 25 independent evolutionary runs. The graph in Fig. 3a represents the best solutions according to the first objective (robots that can travel a long distance when there are no failures), while Fig. 3b represents the best solutions according to the second objective (robots that can travel a long distance toward the target when there are failures). The graphs show the distance traveled by robots with and without failure at each generation. Fig. 3a shows that, on average, robots can travel up to 70 cm, but in counterpart, when a damage occurs, they either remain in place or keep moving in another direction (negative distance) for about 25 cm. Concerning the solutions taken from the other extremity of the Pareto front (Fig. 3b), intact robots travel on average 25 cm and, when damage occurs, robots travel on average 15 cm. However, the displacement is toward the target, as shown in Fig. 3b.


Fig. 3. Solutions taken from the extremities of the Pareto front: the best resilient robots (b) and the less resilient ones (a), which can travel long distances without damage. The blue curve represents the traveled distance without faulty joints; the red curve represents the traveled distance with a faulty joint

Fig. 4 illustrates the last-generation Pareto fronts of the 25 evolutionary runs. Robots that are on the left side are the more resilient ones. When a failure occurs, they always maintain the same direction of locomotion, whereas others do not. We can notice that robots that travel from 20 cm to 40 cm, when damaged, have the capacity to maintain the same direction of locomotion and can almost achieve half the distance in the same period of time.

Fig. 4. Individuals from the Pareto front of the last generation of the 25 evolutionary runs

5. Discussion

We can see that robots that travel slowly can easily be resilient, while those that go fast and travel more distance are not. These robots have difficulties maintaining the direction of displacement when damage occurs. The generated morphologies in this case are interesting and show good strategies of locomotion. Nevertheless, there are always one or two joints that play the crucial role in producing effective moves. Therefore, the robot can either remain in place or completely change the direction of locomotion. Concerning robots that go slowly and travel less distance, they are able to keep the same direction of displacement after damage. We have noticed that the failure of any motor does not drastically affect the manner of locomotion. The performance, in the worst case, drops to 40%. The control system succeeds at making all the joints of a robot cooperate when there is a faulty joint. This is because none of the joints plays a crucial role in producing the effective moves. However, in return, these robots travel slowly.

Fig. 5. Examples of modular robots that are resilient

The obtained morphologies of these resilient robots (Fig. 5) look in general the same in each run. We can notice that they are snake-like, but with different joint rotation axes. Having this kind of joints allows robots to develop a crawling behavior. Each time one joint is inactivated, the other joints can compensate and continue the displacement. The resulting robots are capable of displacement along the same line and direction with and without damage, even though their performance (velocity) can be slightly reduced. However, if these robots had more evaluation time, they might reach the same positions as when there was no damage.



6. Conclusion

This paper seeks to investigate whether resilience can emerge by evolving the robot's morphology. We deliberately caused damage to robot motors during the evaluation time. We have noticed that evolution can generate morphologies that are not considerably affected by the damage. Our approach was able to generate interesting robot locomotion as well as an ability to recover from damage and keep moving forward. The same controller was then able to control both the intact robot and the robot with damage. Since robots can be affected by several kinds of damage, in future work we will induce more extreme damage, such as missing or broken limbs.

AUTHORS

Djouher Akrour* – Department of Computer Science, Biskra University, 07000, Algeria, e-mail: akrour_djouher@yahoo.fr.

NourEddine Djedi – Department of Computer Science, Biskra University, 07000, Algeria, e-mail: noureddine.djedi@gmail.com.

*Corresponding author

REFERENCES

[1] A. Cully, J. Clune, D. Tarapore and J.-B. Mouret, "Robots that can adapt like animals", Nature, vol. 521, no. 7553, 2015, 503–507, DOI: 10.1038/nature14422.
[2] J. Bongard, V. Zykov and H. Lipson, "Resilient Machines Through Continuous Self-Modeling", Science, vol. 314, no. 5802, 2006, 1118–1121, DOI: 10.1126/science.1133687.
[3] D. Berenson, N. Estevez and H. Lipson, "Hardware evolution of analog circuits for in-situ robotic fault-recovery". In: 2005 NASA/DoD Conference on Evolvable Hardware (EH'05), 2005, 12–19, DOI: 10.1109/EH.2005.30.
[4] S. H. Mahdavi and P. J. Bentley, "Innately adaptive robotics through embodied evolution", Autonomous Robots, vol. 20, no. 2, 2006, 149–163, DOI: 10.1007/s10514-006-5941-6.
[5] J. C. Bongard, A. Bernatskiy, K. Livingston, N. Livingston, J. Long and M. Smith, "Evolving Robot Morphology Facilitates the Evolution of Neural Modularity and Evolvability". In: Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, New York, NY, USA, 2015, 129–136, DOI: 10.1145/2739480.2754750.
[6] N. Cheney, J. Bongard, V. SunSpiral and H. Lipson, "Scalable co-optimization of morphology and control in embodied machines", Journal of The Royal Society Interface, vol. 15, no. 143, 2018, DOI: 10.1098/rsif.2017.0937.
[7] K. Sims, "Evolving Virtual Creatures". In: Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, USA, 1994, 15–22, DOI: 10.1145/192161.192167.


[8] N. Lassabe, H. Luga and Y. Duthen, "A New Step for Artificial Creatures". In: 2007 IEEE Symposium on Artificial Life, 2007, 243–250, DOI: 10.1109/ALIFE.2007.367803.
[9] A. E. Eiben and J. Smith, "From evolutionary computation to the evolution of things", Nature, vol. 521, no. 7553, 2015, 476–482, DOI: 10.1038/nature14544.
[10] C. C. Coello, G. B. Lamont and D. A. van Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems, Genetic and Evolutionary Computation Series, Springer US, 2007, DOI: 10.1007/978-0-387-36797-2.
[11] D. Akrour, S. Cussat-Blanc, S. Sanchez, N. Djedi and H. Luga, "Joint evolution of morphologies and controllers for realistic modular robots". In: 22nd Symposium on Artificial Life and Robotics (AROB 2017), Beppu, Japan, 2017, 57–62.
[12] N. Koenig and A. Howard, "Design and use paradigms for Gazebo, an open-source multi-robot simulator". In: 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2004, 2149–2154, DOI: 10.1109/IROS.2004.1389727.
[13] M. Yim, W. Shen, B. Salemi, D. Rus, M. Moll, H. Lipson, E. Klavins and G. S. Chirikjian, "Modular Self-Reconfigurable Robot Systems [Grand Challenges of Robotics]", IEEE Robotics & Automation Magazine, vol. 14, no. 1, 2007, 43–52, DOI: 10.1109/MRA.2007.339623.
[14] M. T. Hagan, H. B. Demuth and M. H. Beale, Neural Network Design, Boston: PWS Pub., 1996.
[15] K. Deb, A. Pratap, S. Agarwal and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II", IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, 2002, 182–197, DOI: 10.1109/4235.996017.




Modern Measures of Risk Reduction in Industrial Processes Submitted: 8th October 2018; accepted: 7th February 2019

Jan Maciej Kościelny, Michał Syfert, Bartłomiej Fajdek

DOI: 10.14313/JAMRIS_1-2019/3

Abstract: The article describes the standard safety-protection layers according to the EN 61511 standard. Their aim is to reduce risk, i.e. to decrease the frequency of occurrence of threatening incidents and/or the consequences of such incidents. The aim of the article is to present currently developed means of increasing process safety which are not included in the standards. The following are described: advisory diagnostic systems, fault tolerant control systems, process simulators for operator training and IT systems supporting safety. Such components can be treated as additional layers of process protection. A simple example comparing the operation of alarm and diagnostic systems, as well as an example of a fault tolerant control system for the drum level in a boiler in a sugar factory, are given.

Keywords: risk reduction, layers of protection, real-time diagnostic systems, fault tolerant control systems, process simulators

1. Introduction

Technical safety is considered in two ways: safety, as a problem of prevention against serious industrial accidents caused by the unreliability of components of the technological installation, such as pipeline breaks, faults of the elements of control systems and human errors; and security, understood as the issue of protection against intentional, hostile external attacks, e.g. hacker attacks on control systems [11, 2, 13], and internal sabotage actions [17]. This paper analyses only the first aspect of safety. The risk of serious industrial accidents is at the moment one of the most important threats occurring in highly developed countries [35, 33, 23, 21]. A significant element of preventing such accidents is the early detection and elimination of the sources of potential risks. For all installations posing a risk to human life or health, as well as to the environment and property, the existing legal regulations and technical standards introduce a requirement of ensuring an appropriate level of safety, i.e. reducing the risk to an acceptable level [17, 22, 14, 27, 31]. A direct impulse for developing and adopting directives on preventing serious industrial accidents was the Seveso disaster, which in consequence led to the development of three further directives on risk control and reduction. An important element of technical safety is functional safety, referring to all actions in the operating cycle of systems made of electric and/or electronic and/or programmable electronic components. International standards aimed at ensuring safety are defined for the following areas: general rules of functional safety – EN 61508 [38], industrial processes – EN 61511 [39], machines – EN 62061 [40] and nuclear power – EN 61513 [41]. The aim of this paper is a brief characterization of currently developed measures increasing the safety of processes which are not included in the above-mentioned standards. We present diagnostic expert systems, fault tolerant control systems, process simulators for operator training and computer systems supporting safety. These elements can be treated as additional layers of protection for the processes.

2. Standard Layers of Protection

The aim of safety systems is risk reduction, i.e. minimizing the frequency of risk-posing incidents and/or reducing their consequences. The structure of the applied safety systems is layered (Fig. 1).

Fig. 1. Typical layers of protection

The first layer is the process installation, which should be resistant to internal and external disturbances. The second layer is a Basic Process Control System, such as a DCS (Distributed Control System), where control and monitoring are integrated, or a system made of SCADA (Supervisory Control and Data



Acquisition) and Programmable Logic Controllers or Programmable Automation Controllers. The third layer is a separate system of critical alarms and interventions of process operators. Safety Instrumented Systems (SIS) constitute the fourth layer. These four layers are responsible for preventing the occurrence of accidents. The fifth layer consists of engineering safety systems, such as safety valves, curtains, safety barriers, housings etc., which are supposed to limit the consequences of accidents. The higher layers are internal and external procedures and technical means aimed at minimizing human and material losses.

2.1. Process Installation

The first layer is the process installation, which should be resistant to internal and external disturbances. When designing technical installations, one should aim to eliminate or reduce possible emergency scenarios. The object should be characterized by intrinsic safety [4], i.e. safety embedded in its construction. As an example, there are efforts to introduce new technologies of nuclear reactors which will provide maximum operational safety and minimize the effects of a possible accident. Their aim is to eliminate the possibility of a core meltdown and of the release of nuclear fission products outside the reactor. Unfortunately, in most cases it is not possible to design an installation in such a way as to eliminate all potential risks, thus the other layers of protection are necessary.

2.2. Basic Process Control System

The second layer is the Basic Process Control System, in the form of a DCS (Distributed Control System), where control and monitoring are integrated [15, 30, 32], or a system made of SCADA (Supervisory Control and Data Acquisition) and Programmable Logic Controllers or Programmable Automation Controllers. Their aim is to sustain the process in a normal condition in all its stages – start-up, normal operation and shutdown. Stabilization of pressures, flows, levels, temperatures etc. usually does not lower the risk, because the set points of the control loops are selected within the area of safe states. In the case of optimal control, the risk usually increases due to the fact that the process is conducted close to the safety limitations, where the optimal operating points are usually located (Fig. 2).

Fig. 2. The operation areas of PID and optimization algorithms (axes: process variable 1 and process variable 2; regions bounded by limitations 1–3, showing the PID control area, the optimal control area and the economic optimum)

However, it should be emphasized that control loops are not resistant to the faults of controllers, measuring devices and actuators. Redundancy is mostly used for controllers, which are damaged least often. If we consider only accidents caused by control systems, then, according to data from the ABB and Emerson companies and the ASM (Abnormal Situation Management) consortium, about 50% of them are caused by damage to the actuators, 40% by measuring devices and only 10% by control units. Damage to actuators, as well as to measuring circuits, has been the cause of serious industrial accidents, e.g. in Buncefield in England in December 2005 [43]. The breakdown of a level sensor in an oil storage facility caused an overflow, and then an explosion. This was the biggest fire in Europe; 40 people were injured and the material losses were estimated at 5 billion pounds. We have to stress one more feature of control loops: the operation of the negative feedback loop results in masking the symptoms of faults. As an example, a leak of a toxic substance from a tank with an active fixed-set-point level control loop may not be detected by the alarm system, due to the increased inflow of the medium to the tank through the controller [15]. The situation is presented in Fig. 6.

2.3. Alarm Systems and Operator Actions

In SCADA and DCS systems, as well as in SIS, an alarm system (AS) is used for detecting abnormal and emergency states. The basic method of fault detection used in AS is limit checking [9, 16, 15, 18]. With this method, violations of the absolute and relative (referring to the set point) limits of process variables are detected. The limits concern the value, and also the allowed rate of signal changes. In a well-designed AS, every alarm should be useful and significant for the operator, i.e. it should warn, inform and indicate the appropriate reaction. In practice, this requirement is hardly ever fulfilled. The basic disadvantage of AS is the excess of generated alarms. From the data gathered by EEMUA [7] it turns out that the average daily number of alarms in the petrochemical industry is 1,500 and in the power industry – 2,000, whereas, according to the recommendations, it should not be higher than 144. The causes of this situation are [1, 7, 42]:
• the easiness of defining alarms in the design stage, with simultaneous problems with removing them (expert arrangements required),
• the occurrence of a large number of alarms in a short period of time in states with serious faults,
• a large number of alarms resulting from a common, single cause (even over 500 alarms),
• repeated alarms (process variables fluctuating close to the alarm threshold),
• the lack of proper mechanisms of alarm filtration in the system.
Other disadvantages of AS are long delays in detection, the masking of symptoms by control loops signalled earlier, and the lack of automatic fault


location. Process operators are then responsible for this task. The interpretation of a huge number of alarms arising in a short time is a serious problem for the operators, all the more so as the occurrence of each of the alarms may be caused by various reasons. Here we deal with the phenomenon of information overload and, as a result, stress. In such conditions operators are not able to formulate a proper diagnosis, i.e. to recognize the existing risks. This increases the probability of improper protective reactions, the consequences of which, together with the previous faults, result in serious accidents. The mechanism of such an unfavourable positive feedback was the cause of numerous severe accidents in nuclear and conventional power stations and chemical plants. The excess of alarms was a cause of the accident at Texaco Milford Haven in 1994. During the 11 minutes preceding the explosion, 2 operators had to recognize, confirm and properly react to 275 alarms [44]. Nowadays, alarm systems are being developed in order to reduce their defects. First of all, they provide mechanisms allowing for a reduction of the number of alarms, such as: filtering the alarms, alarm hiding, alarm shelving and grouping the alarms caused by a common reason. Different algorithms of alarm analysis are also introduced. However, all of these solutions are not able to eliminate all the inconveniences resulting from the use of the simplest, but highly deficient, method of fault detection, namely limit checking.

2.4. Safety Instrumented Systems (SIS)

SIS is used for the implementation of adequate process safety functions, which allows a proper level of safety integrity to be achieved [36, 37]. It implements interlocking and automatic protection algorithms, the aim of which is to bring the process to a safe state. Its signals may, for example, cut off the power supply or the inflow of materials, block actuators in a safe position, activate cut-off valves, or set a safe state of operation of engines, pumps, ventilators etc. Usually, SIS operation is connected with stopping the whole process or a part of it, which results in economic losses. As regards measurements and control action, SIS should be functionally independent from the BPCS control system. This means that the safety functions are realized with the use of devices (usually operating in a redundant structure) other than those used for control tasks (Fig. 3).

Fig. 3. Separation of BPCS and SIS systems

Integration of the control and safety systems is permitted at the level of process visualization and configuration tools. In SIS, a high redundancy level (2oo3, 2oo4D) is often used, allowing the appropriate SIL level to be achieved.
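For illustration (ours, not taken from the standard), a 2oo3 voting element reduces to a simple majority check over three redundant channels, which tolerates a single failed channel:

```python
def vote_2oo3(ch1: bool, ch2: bool, ch3: bool) -> bool:
    """Trip the safety function only if at least two of the three channels demand it."""
    return (ch1 + ch2 + ch3) >= 2

assert vote_2oo3(True, True, False)   # one failed (stuck-low) channel is tolerated
```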

2.5. Higher Layers

The four above-mentioned layers aim at preventing accidents. The fifth layer consists of engineering protection systems, such as discharge valves, curtains, safety barriers etc., which are only supposed to limit the consequences of emergencies. The higher layers are protection measures minimizing the results of releases (dikes, safety housings), as well as internal and external procedures and technical means, the aim of which is to minimize human and material losses [17].

3. Non-Standard Layers of Protection

Alarm systems, due to their deficiencies, are not an effective tool for the early detection of emergency states. This impedes effective protective actions undertaken by the operators. On the other hand, SIS operation is connected with stopping the whole process or a part of it, leading to economic losses. That is why it is advisable to apply solutions which guarantee the elimination of risks at their initial stage and do not allow SIS activation and emergency system shutdown to become necessary. The methods of risk reduction that do not cause stopping of the process are:
• expert real-time diagnostic systems,
• FTCS – Fault Tolerant Control Systems,
• operator training, especially with the use of process simulators which can run emergency scenarios,
• computer systems for supporting safety – preventing accidents.
These methods are a subject of scientific research and pilot implementations and at this stage are not covered by international regulations.

3.1. Expert Diagnostic Systems

The deficiencies of alarm systems are the reason for the development of diagnostic systems (DS) for industrial processes. The aim of a DS is the early detection and recognition of faults (understood as all types of incidents influencing the process in a destructive way), including mainly faults of technological installation components, measuring devices and actuators [6, 12, 15, 16, 18, 29, 28, 24, 25]. Such systems may realize the following functions: fault detection and location, archiving of data describing faults, generation of diagnostic reports, visualization and justification of diagnoses, and support of operator decisions in emergency states. In DS intended for industrial processes, detection methods using partial models are of basic significance; these are models designed for the normal state of the process. In Fig. 4 we present


a diagram of diagnosis with the use of the process models.

Fig. 4. Diagram of diagnosis with the use of partial models of the process (the process inputs U and outputs Y feed partial models – analytical, neural, fuzzy; the residuals ri are evaluated into diagnostic signals si, which, together with rules about faults, drive fault location and the final diagnosis)

On the basis of the measured signals and the signals calculated from the models, residuals are generated, carrying information on the symptoms of faults (residual values diverging from 0) or their absence (residual values fluctuating around 0). As a result of the evaluation of the residual values, diagnostic signals arise (binary or multiple-valued, crisp or fuzzy), which form the input of a fault location algorithm. Detection methods based on models allow for the early detection of small faults, before their negative consequences appear. Different types of object models can be applied: analytical, neural, fuzzy, statistical [4–15, 18, 28, 6]. Fault location is conducted on the basis of the diagnostic signals generated by the detection algorithms. The result of location is a diagnosis, i.e. a hypothesis on the observed fault (faults). For fault location it is necessary to know the relation between the values of the diagnostic signals S and the faults F. Among the methods of fault location we can distinguish classification methods and automatic inference methods [20]. In fact, in industrial processes there are practically no measurement data for fault states. This restricts the applicability of classification methods, which require training data for the particular states of the process. Fault location should then be conducted on the basis of automatic inference, and the knowledge of the fault–symptom relation should be determined on the basis of expert knowledge. The choice of fault detection and location methods is significant for the reliable functioning of a diagnostic system. The automatic execution of diagnostic actions in the course of system operation considerably reduces the time of detection and location of a fault in comparison with diagnostics realized by the alarm system and the operator. Diagnoses precisely indicate the observed faults. On this basis, the system can additionally advise the personnel by giving instructions to be followed in abnormal and emergency states. Due to that, they can undertake quick and effective protective actions, which should bring the process back to a normal state (Fig. 5). As a result, the SIS is not activated and neither the whole technological process nor its part is stopped. Thus considerable economic losses are avoided.

Fig. 5. Courses of the process in the system with and without diagnostics

The beneficial influence of diagnostics performed in real time on the reliability and safety of the system may be demonstrated by analysing the indicators characterizing these properties. For repairable systems it is common to use the availability factor of the system. It is expressed by the following formula:

A = Tλ / (Tλ + Tµ) = MTTF / MTBF   (1)

where:
• MTTF – Mean Time To Failure – Tλ,
• MTBF – Mean Time Between Failures, MTBF = MTTF + MTTR,
• MTTR – Mean Time To Repair, i.e. the mean time from the moment of diagnosing a failure to the moment of repairing the damaged equipment – Tµ,
• TD – Mean Time To Diagnose,
• TN – Mean Time To Renewal, i.e. repairing the damaged equipment or replacing it with a suitable one, together with the reconstruction of the system after the repair/replacement.

The Mean Time To Repair is the sum of the Mean Times To Diagnose and To Renewal of the device:

Tµ = TD + TN   (2)

Shortening the diagnosis time reduces the time Tµ (MTTR), thus increasing the value of the availability factor of the system (1). In practice, the time of diagnosis realized automatically is close to 0: TD ≈ 0. The failure intensity is the reciprocal of the mean time to failure:

λ = 1 / Tλ   (3)

The total intensity (probability) of faults λ is the sum of the intensities of dangerous detectable faults λDD, dangerous undetectable faults λDU, safe detectable faults λSD and safe undetectable faults λSU [36]:

λ = λDD + λDU + λSD + λSU   (4)

The Safety Integrity Level (SIL) specified in the EN 61508 standard [36] depends on: the average probability of failure of the safety function on demand (PFDSYS) – for safety


systems operating on demand, or on the mean probability of a dangerous failure per hour (PFHSYS) – for safety systems operating in the continuous mode. The values of these probabilities depend, inter alia, on the diagnostic coverage DC, which is defined as follows:

DC = λDD / (λDD + λDU)   (5)

Formula (5) shows that covering all dangerous faults by on-line diagnostics allows the diagnostic coverage factor to be increased to the value of 1. This will reduce the risk, i.e. limit PFDSYS or PFHSYS. Another factor exemplifying the influence of diagnostics is the SFF (Safe Failure Fraction) factor, determining the contribution of safe faults:

SFF = (λSD + λSU + λDD) / (λSD + λSU + λDD + λDU) = (λ − λDU) / λ   (6)

The more faults are detected, the higher the value of this factor; with full detectability it is equal to 1. This factor is of crucial importance for the SIL verification of an E/E/PE system made on the basis of data on the tolerance of equipment faults.
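The following sketch (ours, with illustrative numbers only) shows how Eqs. (1)–(6) can be evaluated and how automatic diagnostics (TD ≈ 0, a larger λDD share) improves the indicators:

```python
def availability(mttf_h, t_diagnose_h, t_renewal_h):
    mttr = t_diagnose_h + t_renewal_h          # Eq. (2): T_mu = T_D + T_N
    return mttf_h / (mttf_h + mttr)            # Eq. (1): A = MTTF / MTBF

def diagnostic_coverage(lam_dd, lam_du):
    return lam_dd / (lam_dd + lam_du)          # Eq. (5)

def safe_failure_fraction(lam_sd, lam_su, lam_dd, lam_du):
    lam = lam_sd + lam_su + lam_dd + lam_du    # Eq. (4)
    return (lam - lam_du) / lam                # Eq. (6)

# Illustrative values: shrinking the diagnosis time towards zero and moving
# dangerous-undetectable failures into the detectable class raises A, DC and SFF.
print(availability(mttf_h=8760.0, t_diagnose_h=4.0, t_renewal_h=8.0))
print(availability(mttf_h=8760.0, t_diagnose_h=0.0, t_renewal_h=8.0))   # T_D ~ 0
print(diagnostic_coverage(lam_dd=9e-6, lam_du=1e-6))
print(safe_failure_fraction(lam_sd=5e-6, lam_su=5e-6, lam_dd=9e-6, lam_du=1e-6))
```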

Example 1. Comparison of the operation of alarm and diagnostic systems

Below, an example of an alarm and a diagnostic system is given for a simple installation: a buffer tank of a toxic substance. The diagram of the object, together with the level control system, is presented in Fig. 6.

Fig. 6. Diagram of the object – buffer tank of a toxic substance and level control system

In the alarm system, the alarms L1LO, L1HI, L2LO, L2HI and FLO are detected. On the basis of the exceeded lower limits of the medium level in the tank, with a normal value of the F flow, the operator may infer that the fault f1 has occurred – a leak in the tank and a release of the toxic medium. The inference rule is as follows:

If (L1LO ∧ L2LO ∧ ¬FLO) Then f1   (7)

The diagnostic system for the analysed object performs the following four tests:

r1 = F − α12·S·√(2·g·L1) − A·dL1/dt   (8)
r2 = F − α12·S·√(2·g·L2) − A·dL2/dt   (9)
r3 = F − F̂ = F − f(CV)   (10)
r4 = L1 − L2   (11)

Test 1 and test 2 detect faults on the basis of non-compliance of the balance in the tank. Test 3 checks the consistency of the measured flow with the flow calculated from the model of the control valve, F̂ = f(CV). The model of the water flow through the control valve has only one input – the CV signal from the controller. In the general case, such a flow also depends on the pressure difference across the valve. In this case, however, it can be assumed that this pressure drop is approximately constant. A well-adjusted pump should ensure the stability of the pressure upstream of the valve in the whole range of flow changes, whereas downstream of the valve the liquid flows out freely and at the end of the pipeline the pressure is equal to the atmospheric pressure. Therefore, the pressure difference across the valve is constant. Test 4 verifies the compatibility of the redundant measurements of the medium level in the tank. Table 1 summarizes the possible faults.

Tab. 1. List of faults

Symbol   Fault
f1       leak of a toxic substance
f2       damage to the F measurement chain
f3       damage to the control valve
f4       damage to the L1 measurement chain
f5       damage to the L2 measurement chain

The sensitivity of the particular tests to the faults is given in the binary diagnostic matrix (Tab. 2).

Tab. 2. Binary diagnostic matrix for the buffer tank

S/F   f1   f2   f3   f4   f5
s1    1    1         1
s2    1    1              1
s3         1    1
s4                   1    1

According to the binary diagnostic matrix, all faults are detectable in the designed diagnostic system. The leak of the toxic substance is detected on the basis of the following rule:

If (s1 = 1) ∧ (s2 = 1) ∧ (s3 = 0) ∧ (s4 = 0) Then f1   (12)
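A minimal sketch of this diagnostic logic (ours, not the authors' implementation) thresholds the residuals (8)–(11) into binary diagnostic signals and matches them against the fault signatures of Tab. 2; the threshold value, the gravity constant and the helper names are illustrative assumptions:

```python
import math

THRESHOLD = 0.05   # assumed residual threshold

# Columns of Tab. 2: fault signatures over (s1, s2, s3, s4).
SIGNATURES = {
    "f1 leak":                (1, 1, 0, 0),
    "f2 F measurement chain": (1, 1, 1, 0),
    "f3 control valve":       (0, 0, 1, 0),
    "f4 L1 measurement":      (1, 0, 0, 1),
    "f5 L2 measurement":      (0, 1, 0, 1),
}

def residuals(F, L1, L2, dL1_dt, dL2_dt, CV, alpha12, S, A, f_of_cv, g=9.81):
    r1 = F - alpha12 * S * math.sqrt(2 * g * L1) - A * dL1_dt   # Eq. (8)
    r2 = F - alpha12 * S * math.sqrt(2 * g * L2) - A * dL2_dt   # Eq. (9)
    r3 = F - f_of_cv(CV)                                        # Eq. (10)
    r4 = L1 - L2                                                # Eq. (11)
    return r1, r2, r3, r4

def diagnose(r):
    s = tuple(int(abs(ri) > THRESHOLD) for ri in r)   # diagnostic signals s1..s4
    return [fault for fault, sig in SIGNATURES.items() if sig == s]
```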


Journal of Automation, Mobile Robotics and Intelligent Systems

Comparing the operation of the alarm system with the diagnostic system, one can state that:
• in the alarm system the leak may stay undetected due to the masking effect of the control circuit – detection is uncertain,
• inferring the causes of the alarms, according to rule (7), is conducted by the operator, which may cause additional delays; the delay in inferring may also depend on the difference between the value of the medium level at the moment of the failure and the lower limit LO LIMIT,
• the time to recognize the leak of the toxic substance is definitely shorter in the case of the diagnostic system. It depends on the adopted residual thresholds (they can be crisp or fuzzy), but the automatic recognition of the fault is fast and reliable.

3.2. Fault Tolerant Control Systems

Diagnostics realized in real time is also the basis for the realization of Fault Tolerant Control (FTC) systems [4, 9, 16, 18, 34, 10, 3]. FTC systems are currently one of the most important directions of research and development in automatic control. The first works in this area concerned the aviation industry. Nowadays, however, in addition to applications in aircraft, FTC systems are designed for industrial processes. The idea of active FTC system design consists in the realization of on-line diagnostics and real-time reconfiguration of the hardware or software structure of the system in fault states. Instead of operator intervention, restoring the system's ability to function is automatic. Therefore, these are systems of variable structure. The general diagram of a Fault Tolerant Control system is presented in Fig. 7.


Fig. 7. The diagram of an FTC system (u – control signals, y – outputs, f – faults)

The concept of FTC systems itself coincides with the structure of dynamic redundancy. The specific nature of FTC systems is the use of software redundancy instead of hardware redundancy. When designing FTC systems, mainly faults of measurement and actuator devices are taken into consideration. To recognize faults of the elements of the control system, diagnostic methods based on process models are applied. In complex control systems, even with no hardware redundancy, it is usually possible to reconfigure the automatic control system in such a way as to eliminate or reduce the unfavourable influence of damage to the measuring circuits on the functioning


of the process. In order to reproduce the values of the signals whose measuring circuits are damaged, virtual sensors are usually used, which calculate the value of the signal on the basis of a model, using other measurement signals. Dynamic substitution of the signal values from the damaged measuring circuits by equivalent signals is also possible. It is much more difficult to design systems resistant to faults of actuators. In the case of multidimensional objects with numerous control inputs, the inability to change the value of one of the inputs may in some cases be neutralized by an appropriate setting of the remaining inputs. As an example, damage to a single engine in a plane may be compensated by changes of the flight configuration, deflecting flaps and a skilful power distribution among the other engines. However, in most cases redundancy of these devices is indispensable. Developing a Fault Tolerant Control system requires designing, for each of the faults, an algorithm of automatic system functioning in the state of this fault, and a procedure of bumpless switching from the normal-state control to the reserve control. The condition for making such changes is, however, an adequately quick detection and an unambiguous location of the faults. Research works in the field of FTC systems are focused on advanced control systems, while more than 90% of all applications are systems with PID controllers [26]. This is the cause of the delay of the current state of the art in relation to the progress in scientific research. Current control systems only marginally allow for designing fault-tolerant systems, as they are not equipped with the appropriate diagnostic and reconfiguration software. FTC system applications have the character of research and pilot implementations.

Example 2. Level control system of a boiler drum in a sugar refinery tolerating faults of measurement chains

The control of the water level in a boiler drum of a sugar refinery is realized in a cascade structure, where the main controlled variable is the level, and a supporting role is given to the inflow of feed water and the steam outflow. Tolerance of faults of the L1, F1, F2 measurement chains is realized with the use of virtual sensors of these physical quantities, implemented as neural networks. The virtual sensor of the water level, of the structure L̂3 = f(F1, F2, P2), is only required to provide a reliable diagnosis of the faults of the L1 and L2 measurement chains. When the L1 chain is damaged, switching to the redundant L2 measurement chain is performed, provided that it is healthy. When the diagnostic system detects a fault in the measurement chain of the F1 water flow or the F2 steam flow, the appropriate virtual sensor is used: F̂1 = f(Y1, P1, P2) or F̂2 = f(P2, P3, Y2). For the detection and localization of faults, the diagnostic system uses all three aforementioned models and other simple relations between the process variables. In such fault-tolerant control systems, both the time of the automatically realized diagnostics, TD ≈ 0, and the time of the automatically executed reconfiguration, TN ≈ 0, are close to zero. This increases the system's availability: A ≈ 1.
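The measurement reconfiguration described above can be sketched as follows (our simplified illustration, not the authors' implementation); the lambda functions are dummy stand-ins for the trained neural virtual sensors:

```python
def select_level(l1, l2, l1_ok, l2_ok):
    """Water level fed to the controller: primary chain L1, redundant chain L2."""
    if l1_ok:
        return l1
    if l2_ok:
        return l2
    raise RuntimeError("no healthy level measurement available")

def select_flow(measured_value, chain_ok, virtual_sensor, *model_inputs):
    """F1 or F2: fall back to the virtual sensor F^ = f(...) on a diagnosed fault."""
    return measured_value if chain_ok else virtual_sensor(*model_inputs)

# Dummy stand-ins for the trained models F1^ = f(Y1, P1, P2) and F2^ = f(P2, P3, Y2).
F1_hat = lambda y1, p1, p2: 0.8 * y1 * max(p1 - p2, 0.0) ** 0.5
F2_hat = lambda p2, p3, y2: 0.8 * y2 * max(p2 - p3, 0.0) ** 0.5
```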


Fig. 8. The structure of the level control system of a boiler drum in a sugar refinery tolerating faults of measurement chains. Symbols: L1, L2 – measurements of the water level in the boiler drum, F1 – inflow of water supplying the boiler, F2 – steam outflow, P1 – pressure of the supplying water, P2 – pressure in the boiler drum, P3 – steam pressure at the outlet (behind the control valve), Y1 – position of the piston rod on the water inflow, Y2 – position of the piston rod on the steam outflow

The PFHSYS probability for the entire FTC control system is determined by the following formula:

PFHSYS = PFHI + PFHC + PFHA   (13)

where the particular terms correspond to the mean probability of a dangerous failure per hour for the measurements, the controller and the actuator, respectively. Owing to the tolerance of faults of all measurement chains, PFHI ≈ 0, and thus the value of PFHSYS for the entire system decreases. The above example shows that the implementation of fault-tolerant control systems significantly contributes to the improvement of the safety and reliability of control systems and controlled processes.
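For illustration only (our numbers, not from the paper), Eq. (13) can be evaluated to show how PFHI ≈ 0 leaves the controller and actuator terms dominant:

```python
# Illustrative values only.
PFH_I, PFH_C, PFH_A = 0.0, 2.0e-7, 5.0e-7
PFH_SYS = PFH_I + PFH_C + PFH_A        # Eq. (13)
print(f"PFH_SYS = {PFH_SYS:.1e} 1/h")
```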


3.3. Process Simulators

Statistical information on the causes of all types of damage from different sources is rather similar and indicates human errors as the most frequent cause (about 40%). The basic method of reducing human errors is training. Process operators play a particularly significant role: they analyse the alarms and make crucial decisions on the way of conducting the process in abnormal states. The requirement of an adequate preparation of operators is particularly difficult to fulfil, since we notice that, together with the increase of the level of automation and reliability of process installations, operators' qualifications in the area of undertaking proper actions in unusual and emergency situations are getting worse. They rarely conduct the process in the manual control mode and thus they do not get a feel for its dynamics. Moreover, training of operators on a real, operating installation is ineffective, mainly due to the inability to train reactions in abnormal and emergency states. Even during the longest period of training, all possible states of installation operation hardly ever happen, which sooner or later results in a situation when an operator has to work in conditions he was not prepared for (for example, a fault which did not occur during the training).

Avoiding all the mentioned inconveniences is possible due to the application of software process simulators coupled with the real control system. Simulators are realized on the basis of equations describing the physical phenomena occurring in the object. The differential equations describing the object are in many cases very complex; highly non-linear relations with distributed parameters etc. can occur. Solving such a set of equations with the use of numerical methods requires high computational costs and does not guarantee the realization of the calculations in real time. That is why approximation models are commonly used for building simulators, which have to reproduce the functioning of the real object with proper accuracy [15]. The requirements placed on process simulators are very strict. They have to, among others:
• reproduce different states of the process, particularly: different operating points, load changes, start-up and shutdown,
• ensure the possibility of simulating emergency states,
• ensure the possibility of tuning parameters in order to obtain a satisfactory compliance with the real process,
• cooperate with the physical control system or emulate this system together with its protection system.
A simulator of the process developed for the purposes of operator training may later be used for process diagnostics, testing new control strategies and optimization of the object operation.

3.4. Safety Support Systems – Preventing Accidents

A significant element of supporting safety in facilities of increased or high risk may be computer systems for accident prevention. Their aim is to gather in a database the documentation concerning safety, as well as to monitor and supervise actions connected with counteracting threats. The need for their realization results from legal regulations imposing a number of obligations on industrial plants, as well as on the supervisory bodies. These are: the Environmental Protection Law and Directive 2012/18/EU of the European Parliament and of the Council on the control of major-accident hazards involving dangerous substances. According to these regulations, one of the most important procedures of the system of counteracting serious accidents is the system of safety management (SSM), which implements a program of accident prevention (PAP) in the facility. An exemplary solution of such a system is presented in Section 4.

4. Intelligent Accident Prevention System IAPS

The development of this system was undertaken by the Institute of Automatic Control and Robotics of the Warsaw University of Technology. The aim of the project is the realization of IAPS – an Intelligent Accident Prevention System. This is a computer system supporting the introduction and monitoring of safety systems in facilities posing a risk of a serious industrial accident. The operation of the system consists in gathering digital


Journal of Automation, Mobile Robotics and Intelligent Systems

documentation connected with safety in the plant, monitoring and supervising tasks connected with the implementation and realization of SSM and PAP, as well as supporting the preparation and gathering of data in the field of risk analysis, including HAZOP (Hazard and Operability Study). The system enables gathering, in digital form, and providing to the authorized people in a plant, as well as to external entities such as State Fire Service units, the Chief Environmental Protection Inspectorate and the Office of Technical Inspection, any data and documentation concerning safety in the plant, among others a list of hazardous substances that can be found in the plant, the results of risk and emergency situation analyses, operational and rescue plans etc. The use of the system means introducing a new generation of tools supporting safety systems in facilities posing a risk of serious industrial accidents. Such a solution should contribute to an increase of the safety and competitiveness of all companies which implement the system. It will be an additional safety layer, the aim of which is prevention activities, securing against the occurrence of a serious industrial accident. Such actions are of an organizational character and include gathering and organizing the documentation connected with safety in the plant, as well as monitoring the current tasks concerning counteracting serious industrial accidents. The system also enables remote supervision of the plant by the authorized institutions.

5. Conclusion

The paper points out that not only the SIS influences the safety of the process; risk reduction may also be achieved by the use of:
• systems of on-line diagnostics of the process and object devices,
• fault tolerant control systems,
• process simulators for the purposes of operator training,
• computer systems of accident prevention.
Diagnostic systems, together with the operators' interventions, as well as FTC systems, may constitute an additional layer of protection for the process [19]. It reduces the process risk by eliminating threats at their early stage. At the same time, it reduces the economic losses in fault states, because it does not lead to stopping of the process and thus of the production. Diagnostic systems for industrial processes, as well as fault tolerant control systems, are not yet sufficiently widespread in industry, but they are at the stage of pilot research and first implementations. We can expect that in a short time there will be an intense development of control systems equipped with software for process diagnostics and the realization of FTC systems, as well as an increase in the number of industrial applications in this area. A certain obstacle impeding industrial applications is the lack of specialists in this field. However, at the moment these issues are included in the curricula of many universities, and young engineers will be prepared for the realization of innovative control system solutions. An additional argument supporting their application is the striving

VOLUME 13,

N° 1

2019

for the reduction of economic losses connected with the faults and the ability to reduce the insurance costs of technological installation. Simulators are commonly used for the training of pilots, captains and operators in nuclear power stations. They are more and more frequently used also in conventional power sector and petrochemical and chemical industry. We may expect a rapid growth of their applications, despite high costs of their construction. Increasing number of companies offer simulators dedicated to specific processes. There are also program packages which are the basis for realization of the simulators. Introducing training on the simulators will result in significant increase of safety of the process and reduction of economic losses caused by the operators errors. Computer systems of accident prevention have a significant impact on the organizational part in the facilities of increased or high risk. Gathering and organizing the documentation connected with safety in a plant in a digital form and monitoring of current tasks concerning counteracting serious industrial accidents significantly influences the quality of organizational activities, human competences and increase of safety culture. Such system also allows for manual supervision of the plant by the authorized institutions.

Acknowledgements

Developed on the basis of the results of the 3rd stage of the long-term program entitled “Improving the safety and working conditions” funded by the Ministry of Science and Higher Education/National Centre for Research and Development in the years 2014-2016 in the field of research and development. Coordinator of the program: Central Institute for Labour Protection – National Research Institute.

AUTHOR

Jan Maciej Kościelny – Warsaw University of Technology, Faculty of Mechatronics, Institute of Automatic Control and Robotics, ul. Św. Andrzeja Boboli 8, 02-525 Warsaw, Poland, E-mail: jmk@mchtr.pw.edu.pl.
Michał Syfert – Warsaw University of Technology, Faculty of Mechatronics, Institute of Automatic Control and Robotics, ul. Św. Andrzeja Boboli 8, 02-525 Warsaw, Poland, E-mail: m.syfert@mchtr.pw.edu.pl.
Bartłomiej Fajdek* – Warsaw University of Technology, Faculty of Mechatronics, Institute of Automatic Control and Robotics, ul. Św. Andrzeja Boboli 8, 02-525 Warsaw, Poland, E-mail: b.fajdek@mchtr.pw.edu.pl.
*Corresponding author

REFERENCES

[1] J. Errington, D. V. Reising, C. Burns, and ASM Joint R & D Consortium, Effective alarm management practices, ASM Consortium: Phoenix, 2009.
[2] S. Bajpai and J. P. Gupta, "Terror-Proofing Chemical Process Industries", Process Safety and Environmental Protection, vol. 85, no. 6, 2007, 559–565, DOI: 10.1205/psep06046.
[3] M. Blanke, C. Frei, F. Kraus, R. J. Patton, and M. Staroswiecki, "Fault-tolerant Control Systems". In: K. Åström, P. Albertos, M. Blanke, A. Isidori, W. Schaufelberger, and R. Sanz, eds., Control of Complex Systems, 165–189, Springer London, 2001.
[4] M. Blanke, M. Kinnaert, J. Lunze, and M. Staroswiecki, Diagnosis and Fault-Tolerant Control, Springer-Verlag: Berlin Heidelberg, 2006.
[5] J. Chen and R. J. Patton, Robust Model-Based Fault Diagnosis for Dynamic Systems, The International Series on Asian Studies in Computer and Information Science, Springer US, 1999.
[6] J. Chen and R. Patton, Robust Model-Based Fault Diagnosis for Dynamic Systems, Springer, 2012.
[7] Engineering Equipment and Materials Users Association, EEMUA Publication 191: Alarm Systems – A Guide to Design, Management & Procurement, London, 2007.
[8] J. Gertler, Fault detection and diagnosis in engineering systems, Marcel Dekker: New York, 1998.
[9] R. Isermann, Fault-diagnosis systems: an introduction from fault detection to fault tolerance, Springer: Berlin; New York, 2006.
[10] J. Jiang and X. Yu, "Fault-tolerant control systems: A comparative study between active and passive approaches", Annual Reviews in Control, vol. 36, no. 1, 2012, 60–72, DOI: 10.1016/j.arcontrol.2012.03.005.
[11] C. Jochum, "Can Chemical Plants be Protected Against Terrorist Attacks?", Process Safety and Environmental Protection, vol. 83, no. 5, 2005, 459–462, DOI: 10.1205/psep.04189.
[12] S. Kabir, M. Walker, Y. Papadopoulos, E. Rüde, and P. Securius, "Fuzzy temporal fault tree analysis of dynamic systems", International Journal of Approximate Reasoning, vol. 77, 2016, 20–37, DOI: 10.1016/j.ijar.2016.05.006.
[13] S. Karnouskos, "Stuxnet worm impact on industrial cyber-physical system security". In: IECON 2011 – 37th Annual Conference of the IEEE Industrial Electronics Society, Melbourne, 2011, 4490–4494, DOI: 10.1109/IECON.2011.6120048.
[14] F. Khan, S. Rathnayaka, and S. Ahmed, "Methods and models in process safety and risk management: Past, present and future", Process Safety and Environmental Protection, vol. 98, 2015, 116–147, DOI: 10.1016/j.psep.2015.07.005.
[15] J. Korbicz and J. M. Kościelny, eds., Modeling, diagnostics and process control: implementation in the DiaSter system, Springer-Verlag: Berlin, Heidelberg, 2010.
[16] J. Korbicz, J. M. Kościelny, Z. Kowalczuk, and W. Cholewa, eds., Diagnostyka procesów: modele, metody sztucznej inteligencji, zastosowania (Diagnostics of processes: models, artificial intelligence methods, applications), Wydawnictwa Naukowo-Techniczne: Warszawa, 2002 (in Polish).
[17] K. T. Kosmowski, ed., Podstawy bezpieczeństwa funkcjonalnego (Basics of functional safety), Wydawnictwo Politechniki Gdańskiej: Gdańsk, 2016 (in Polish).
[18] J. M. Kościelny, Diagnostyka zautomatyzowanych procesów przemysłowych (Diagnostics of automated industrial processes), Akademicka Oficyna Wydawnicza EXIT: Warszawa, 2001 (in Polish).
[19] J. M. Kościelny and M. Bartyś, "The Requirements for a New Layer in the Industrial Safety Systems". In: IFAC-PapersOnLine, vol. 48, Paris, France, 2015, 1333–1338, DOI: 10.1016/j.ifacol.2015.09.710.
[20] S. Leonhardt and M. Ayoubi, "Methods of fault diagnosis", Control Engineering Practice, vol. 5, no. 5, 1997, 683–692, DOI: 10.1016/S0967-0661(97)00050-6.
[21] E. K. Mihailidou, K. D. Antoniadis, and M. J. Assael, "The 319 Major Industrial Accidents Since 1917", International Review of Chemical Engineering, vol. 4, no. 6, 2012, 529–540.
[22] T. Missala, Analiza wymagań i metod postępowania przy ocenie ryzyka i określaniu wymaganego poziomu nienaruszalności bezpieczeństwa (Analysis of requirements and proceeding methods for risk evaluation and determining the required safety integrity level), Oficyna Wydawnicza PIAP: Warszawa, 2009 (in Polish).
[23] P. Okoh and S. Haugen, "A study of maintenance-related major accident cases in the 21st century", Process Safety and Environmental Protection, vol. 92, no. 4, 2014, 346–356, DOI: 10.1016/j.psep.2014.03.001.
[24] Y. Papadopoulos, "Model-based system monitoring and diagnosis of failures using statecharts and fault trees", Reliability Engineering & System Safety, vol. 81, no. 3, 2003, 325–341, DOI: 10.1016/S0951-8320(03)00095-4.
[25] R. J. Patton, P. M. Frank, and R. N. Clark, eds., Issues of Fault Diagnosis for Dynamic Systems, Springer-Verlag: London, 2000.
[26] M. Pawlak, J. M. Kościelny, and P. Wasiewicz, "Method of increasing the reliability and safety of the processes through the use of fault tolerant control systems", Eksploatacja i Niezawodnosc – Maintenance and Reliability, vol. 17, no. 3, 2015, 398–407, DOI: 10.17531/ein.2015.3.10.
[27] E. Piesik, M. Śliwiński, and T. Barnert, "Determining and verifying the safety integrity level of the safety instrumented systems with the uncertainty and security aspects", Reliability Engineering & System Safety, vol. 152, 2016, 259–272, DOI: 10.1016/j.ress.2016.03.018.
[28] S. Simani, C. Fantuzzi, and R. J. Patton, Model-based Fault Diagnosis in Dynamic Systems Using Identification Techniques, Advances in Industrial Control, Springer-Verlag: London, 2003.
[29] M. Syfert, P. Wnuk, and J. M. Kościelny, "DiaSter – Intelligent system for diagnostics and automatic control support of industrial processes", Journal of Automation, Mobile Robotics and Intelligent Systems, vol. 5, no. 4, 2011, 41–46.
[30] P. Tatjewski, J. M. Kościelny, W. Nagórko, and L. Trybus, "Wybrane układy i systemy automatyki przemysłowej: systemy sterowania, sterowanie zaawansowane, diagnostyka, zarządzanie alarmami (Selected schemes and control systems of industrial automation: control systems, advanced control, diagnostics, alarm management)". In: K. Malinowski and R. Dindorf, eds., Postępy automatyki i robotyki, volume 2, 350–382, Wydawnictwo Politechniki Świętokrzyskiej, Kielce, 2011 (in Polish).
[31] T. Barnert, K. Kosmowski and M. Śliwiński, "Security Aspects in Verification of the Safety Integrity Level of Distributed Control and Protection Systems", Journal of KONBiN, vol. 6, no. 3, 2008, 25–40, DOI: 10.2478/v10040-008-0056-0.
[32] L. Urbas, A. Krause, and J. Ziegler, Process control systems engineering, Oldenbourg Industrieverl.: München, 2012.
[33] H.-J. Uth, "Trends in major industrial accidents in Germany", Journal of Loss Prevention in the Process Industries, vol. 12, no. 1, 1999, 69–73, DOI: 10.1016/S0950-4230(98)00039-4.
[34] M. Mahmoud, J. Jiang, and Y. Zhang, Active Fault Tolerant Control Systems: Stochastic Analysis and Synthesis, Lecture Notes in Control and Information Sciences, Springer-Verlag: Berlin Heidelberg, 2003.
[35] E. Zio and T. Aven, "Industrial disasters: Extreme events, extremely rare. Some reflections on the treatment of uncertainties in the assessment of the associated risks", Process Safety and Environmental Protection, vol. 91, no. 1, 2013, 31–45, DOI: 10.1016/j.psep.2012.01.004.
[36] Technical Standard: "PN-EN 61508: Bezpieczeństwo funkcjonalne elektrycznych/elektronicznych/programowalnych elektronicznych systemów związanych z bezpieczeństwem (Functional safety of electrical/electronic/programmable electronic safety-related systems)", PKN, Warszawa, 2003 (in Polish).
[37] Technical Standard: "PN-EN 61511: Bezpieczeństwo funkcjonalne. Przyrządowe systemy bezpieczeństwa do sektora przemysłu procesowego (Functional safety. Safety instrumented systems for the sector of process industry)", PKN, Warszawa, 2005 (in Polish).
[38] Technical Standard: "IEC 61508, Functional safety of electrical/electronic/programmable electronic safety-related systems", International Electrotechnical Commission, 1998.
[39] Technical Standard: "IEC 61511, Functional safety – Safety instrumented systems for the process industry sector", International Electrotechnical Commission, 2003.
[40] Technical Standard: "IEC 62061, Safety of machinery – Functional safety of safety-related electrical, electronic and programmable electronic control systems", International Electrotechnical Commission, 2005.
[41] Technical Standard: "IEC 61513, Nuclear power plants – Instrumentation and control for systems important to safety – General requirements for systems", International Electrotechnical Commission, 2001.
[42] "ANSI/ISA-18.2, Management of Alarm Systems for the Process Industries", ISA 18 Committee, 2009.
[43] F. Crescenzi et al., "Vessel and In-Vessel Components Design Upgrade of the FAST Machine", Fusion Engineering and Design, vol. 88, no. 9–10, 2013, 2048–2051.
[44] Health and Safety Executive, The explosion and fire at the Texaco refinery, Milford Haven, 24 July 1994: a report of the investigation by the Health and Safety Executive into the explosion and fires on the Pembroke Cracking Company Plant at the Texaco Refinery, Milford Haven on 24 July 1994, HSE Books: Sudbury, 1997.

Design and Analysis of a Soft Pneumatic Actuator to Develop Modular Soft Robotic Systems
Submitted: 19 January 2019; accepted: 19 March 2019

Ahmad Mahmood Tahir, Matteo Zoppi

DOI: 10.14313/JAMRIS_1-2019/4
Abstract: In this paper, we describe the design and analysis of a Soft Cubic Module (SCM) with a single internal pneumatically actuated chamber. The actuation chamber's shape, size and orientation have been evaluated to realize a soft robotic actuator which can be further employed for the development of modular soft robotic systems. The SCM can be easily manufactured through a molding process and is composed of a single soft material, a silicone polymer. Its external shape allows this module to be used as a single-block actuator and also makes it easy to combine multiple SCM modules to build multi-unit soft robotic systems. We consider it our first tool to investigate whether the SCM scheme is sufficient to build soft robots able to perform given tasks in various configurations, such as a soft gripper, a bio-mimetic crawling mechanism or a multi-axis manipulator. So far, the results obtained are encouraging for further developing and employing the SCM design scheme, focusing on its further geometrical optimization both for the standalone configuration and for assemblies of multiple modules, in order to realize novel, economic and easy-to-fabricate soft robotic systems.
Keywords: soft actuator, pneumatic actuation, soft robotic mechanism, modularity, scalability

1. Introduction


Soft pneumatic actuators (SPAs) have gained huge popularity and have been employed in the field of soft robotics during the last decade to realize a variety of hyper-elastic robotic innovations. In the developed soft robots [1], SPAs not only provide the actuation means for the system but also form the main body of its robotic structure. Frequently, such actuators are made of at least two parts having different stiffness characteristics: materials with different elasticity are employed in a certain combination to limit and utilize the interacting strains in an optimized manner, in order to achieve the desired mechanical response and produce actuation or manipulation. One of the best-known SPAs is described in [2]. This actuator is made of two parts: a pneumatic chamber composed of EcoFlex 00-30 and a layer of polydimethylsiloxane (PDMS). The latter is stiffer than the former and provides longitudinal inextensibility to one of the faces of the actuator. As a result, the actuator performs planar bending and can be used as a finger to grasp an object or as a leg for a crawling robot [3]. Based on this principle, soft hands and orthoses (e.g., [4, 5]) have been developed; other examples of SPAs with relatively inextensible layers can be found in [6, 7]. In some works, rather than using such layers (or, in some cases, in addition to them), the strain of part of the elastomeric matrix is limited by recurring to fibers: in [4, 8], sewing thread is wound along a double helix on the soft body of the SPA to avoid the ballooning effect. In [9] the fiber plays a major role: it is wound on the external surface following a helicoidal path whose angle determines the mechanical behavior of the actuator; as the authors demonstrate, SPAs having different fiber angles can be properly combined in series to build a soft snake able to move through a pipeline. In another work [10] the SPA consists of a soft cylindrical component made of EcoFlex 00-30, with three longitudinal channels. In order to reduce the ballooning effect, an accordion-like structure is embedded in the body of the actuator. Such a structure is made of a silicone rubber whose hardness is higher than that of EcoFlex 00-30 and therefore provides additional stiffness. So far, the combination of several materials to build SPAs has turned out to be a winning choice; however, it introduces complications in the fabrication process. Another convenient approach is the development of blocks or units to build modular soft robots: in [11], inflatable and non-inflatable units are provided with screw-thread connectors to allow easy assembling and disassembling of soft mechanisms. The units described are made of several materials. Soft pneumatic actuators have been recognized as one of the basic building blocks in the field of soft robotics in the last decades. Electro-pneumatic and electro-hydraulic elastomeric actuators were initially employed in the 1980s to realize biomimetic mechanisms [11-14]. Pneumatic artificial muscles (PAMs), or McKibben muscles, were also used to develop soft prosthetic and rehabilitation systems [15-21]. Some of the latest designs for bio-inspired soft mechanisms have been reported in [22-25]. SPAs facilitate actuation as well as serve as a part or the body of the main actuator or of the soft robotic structure [26].
In the work presented in this paper, we also adopt a modular approach, but we aim at developing SPAs made of only one material. We introduce a soft cubic module, the 'SCM', having a single internal chamber, and we show its deformation under actuation. The point of interest on the actuating face of the SCM is the point where the maximum deformation occurs, which is to be utilized in the application of the SCM. The SCM has been evaluated as a single-unit soft actuator and has been employed as a soft vacuum gripper [27]. Furthermore, thanks to the external cubical shape of the SCM, several cubes can be arranged to configure multi-unit soft systems. At the current stage, we present the design of the SCM and its internal actuation chamber. A discussion on the potential of the presented module follows, as well as a critical analysis of the limitations of the current work. We conclude by briefly introducing future work aimed at the development of more efficient and effective modules and modular soft robots employing this scheme.

2. Soft Cubic Module (SCM)

2.1. Geometrical Design of the SCM and its Internal Actuation Chamber
The SCM is the fundamental building block of this design scheme, with an internal pneumatic actuation chamber. As the name suggests, the SCM has a cubic shape, while its actuation chamber or cavity resides beneath one of the surfaces of the cube, which is the actuating face of the SCM. To design the internal actuation cavity, different shapes have been considered and analyzed at various orientations inside the silicone cube, in order to achieve an effective deformation and the respective resultant forces. The produced deformation can be further utilized to achieve the required actuation of a particular SCM, which may be employed as a soft system in a standalone configuration or in a combination of two or more SCM blocks. Spherical, ellipsoidal and cylindrical profiles of the internal actuation cavity (Fig. 1), with varied positions and orientations, have been simulated for the static hyper-elastic behavior of the actuated SCM in Creo Parametric 3.0 M130, to validate their respective performance in terms of effective deformation and the resulting von Mises stress, so as to ascertain the stress distribution on the module. Although the SCM has proven scalability characteristics, a 30×30×30 mm cube is considered here for the purpose of demonstration and analysis. The spherical chamber, with a 12 mm diameter, touches the center of the cube under the actuating face with a minimum 2 mm surface thickness. The ellipsoidal chamber, with major and minor diameters of 24 mm and 12 mm respectively, is oriented between the faces of the SCM orthogonal to the actuating surface, at 45° to the z-axis. The cylindrical chamber is positioned under the actuating face with a minimum 2 mm surface thickness around and at the top of the chamber. Increasing trends of the output load set, the output stress and the produced deformation against the applied pneumatic pressure have been observed. A selected set of results for 1 kPa to 3 kPa applied pressure is presented in Fig. 2 and Fig. 3. It is evident from the results that increasing the effective actuated area, against the selected surface being actuated, increases the output stress and the deformation. This scenario is best achieved employing a cylindrical profile with the highest actuation chamber surface area affecting the actuating surface of the SCM. In the case of the spherical and ellipsoidal shapes, either the produced stress is absorbed by the material itself, or the large deformations on all surfaces of the cube at higher pressures affect the stability of the SCM.

Fig. 1. SCM with cylindrical, spherical and ellipsoidal actuation chambers. For load testing, the SCM has a fully constrained bottom plate, whereas a rigid plate is simply attached to the top actuating surface of the SCM

Fig. 2. Selection of actuation chamber shape: von Mises stress (kPa) against applied load set of 1 kPa to 3 kPa


Fig. 3. Selection of actuation chamber shape: output deformation (mm) against applied load set of 1 kPa to 3 kPa

2.2. Design Studies and Analysis of Actuation Cavities: Deformation Against Applied Pressure
The SCM is designed to be a standalone, stable unit which can further be integrated into multi-unit configurations. Furthermore, to generate actuation, the SCM needs to be capable of producing an effective deformation on at least one surface of the cube. This deformation, for example along the z-axis orthogonal to the actuating surface, should be sufficient to exert forces on the interacting surface, whether that is an external body or another integrated SCM unit. Based on the considered profiles and orientations of the internal actuation chambers of the SCM, the output load set and produced deformation suggest that the chamber should have a maximum interacting surface area with the corresponding actuating face of the SCM in order to achieve the maximum deformation on the target actuating surface. Furthermore, since the SCM is composed of a hyper-elastic material, the spherical and ellipsoidal shapes give a variable thickness of the actuating surface due to their curvature. In addition, the soft material around the actuated chamber absorbs the pressurization effect, which affects the propagation of the effective deformation and output loads at the required face of the SCM. These evaluations of the simulations suggest that the actuation cavity needs to be in the proximity of the target face of the SCM and should have the maximum possible chamber surface area in contact with the actuating surface, to impart maximum deformation and forces. As already stated, another aspect of optimizing the size and placement of the actuation chamber is the stability and portability of the SCM unit. This refers to the ability of the cubic module to rest on flat surfaces and to provide a convenient approach for developing multi-unit configurations by joining the required faces of the cubes. Spherical and ellipsoidal shapes with various dimensions and orientations eventually affect all the faces of the cube. This would result in instability of the cube and make it difficult to utilize the SCM effectively in a modular multi-unit configuration, whereas the cylindrical chamber deforms the nearest surface of the cube in such a way that the opposite face remains in its normal state, providing at least one surface for stability. This behavior is helpful in utilizing the SCM in the majority of its multi-unit configurations. In this purview, the cylindrical actuation chamber, as shown in Fig. 4, has been found to be the most appropriate profile satisfying the design and required output performance for the development of the SCM. The cylindrical profile design is discussed here in further detail.

2.3. Cylindrical Actuation Chamber Configuration
The cylindrical actuation cavity was selected and validated on the basis of the simulation analysis and the developed silicone polymer modules. The flat surface of this cylindrical actuation chamber is oriented parallel to a face of the SCM, underneath the 2 mm thick outer surface, which is the actuating surface. The central axis of the cylindrical chamber is coincident with the Z-axis, which is one of the principal axes of the cube; from now on, this axis will be denoted as axis ζ. This orientation of the actuation cavity has been found to be the most appropriate against the applied pneumatic pressure. An outline of the side view of the SCM is shown in Fig. 4. The dimensions are reported in Tab. 1.

2.4. Material
A low-cost silicone polymer "SILICON MIX", provided by "ITALGESSI Srl", has been utilized to develop the designed SCM. This silicone can be used at room temperature. It has two components that are mixed together: silicone (A) and catalyst (B). For the experimentation in the current study, a brick-red colored material with Shore A hardness 4 has been employed. The current modules have been developed with a 4:1 mix ratio of components A and B. The material usually takes 2 to 3 hours to cure at room temperature.

The material, silicone polymer with Shore A hardness 4, has been tested for its tensile strength. The test results have been further used in the Ansys Workbench 17.1® environment in order to obtain the material behavior using the Arruda-Boyce model, with an initial shear modulus μ = 12.37 kPa, limiting network stretch λL = 1.602 and incompressibility parameter D1 = 0.

Fig. 4. Side view of the SCM: the soft actuator with its internal actuation chamber of cylindrical shape, with height h and diameter D

Tab. 1. Dimensions of the SCM with cylindrical actuation chamber

Dimension   Description                   Value (mm)
L           edge of the cube              30
D           diameter of the chamber       27
h           height of the chamber         6
t           thickness of the top layer    2

2.5. Fabrication of the SCM
The SCM module is developed in two parts: the major part, which includes the main body volume and the actuation cavity, and a covering square layer for the chamber. 3D-printed molds made of PLA thermoplastic have been used for molding the silicone polymer (Fig. 5).

Fig. 5. Molds and respective molded parts used to build the SCM: 3D-printed molds and the molded silicone

2.6. Actuated Configuration
The deformation of a single actuated SCM with the cylindrical actuation chamber, as described, appears on the chamber's corresponding face and, secondarily, on the edges of the side faces (Fig. 6).

Fig. 6. SCM actuation: normal (a) and actuated (b) configuration

The point on the actuated face with the maximum displacement, denoted by dt, lies along axis ζ. Along the edges of the four side faces adjacent to the actuated face, a smaller deformation takes place, which depends upon the height of the internal chamber. This maximum displacement (ds) occurs orthogonal to ζ and to the face, and is due to the minimum wall thickness at the center of the edge. The points undergoing the displacements dt and ds are shown in Fig. 7 and Fig. 8, respectively.

Fig. 7. Directional deformation (dt): dt along ζ, coincident with the Z axis in the reference system of the simulation environment. A quarter of the SCM is shown

Fig. 8. Directional deformation (ds): ds orthogonal to the Z axis in the reference system of the simulation environment. A quarter of the SCM is shown

Due to the symmetry conditions, the computation is performed on a quarter of the SCM. The mesh is entirely tetrahedral; the load applied is a uniform pressure equal to 3 kPa; as a constraint condition, the bottom face of the SCM (which stores no strain energy, being far from the actuated zone) is fully constrained. All the simulations performed on the SCM take into account both material non-linearity and the effect of large displacements.
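For completeness, the Arruda-Boyce strain-energy density used for the hyper-elastic material model is recalled below in the five-term form commonly implemented in FE packages such as Ansys; the truncation to five terms is an assumption about the solver's implementation, while the parameter values are those fitted above:

$W = \mu \left[ \frac{1}{2}(\bar{I}_1 - 3) + \frac{1}{20\lambda_L^2}(\bar{I}_1^2 - 9) + \frac{11}{1050\lambda_L^4}(\bar{I}_1^3 - 27) + \frac{19}{7000\lambda_L^6}(\bar{I}_1^4 - 81) + \frac{519}{673750\lambda_L^8}(\bar{I}_1^5 - 243) \right] + \frac{1}{D_1}\left( \frac{J^2 - 1}{2} - \ln J \right)$

with $\mu = 12.37$ kPa and $\lambda_L = 1.602$; setting $D_1 = 0$ flags full incompressibility, so the volumetric term is not evaluated explicitly.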


By means of the finite element simulations, we have observed that the ratio r = dt /ds depends on the value of the applied pressure. Both dt and ds increase when pressure increases (see Fig. 9); however, their ratio is not constant and presents its maximum between 1kPa and 1.25kPa, as reported in Fig. 10. This means that increasing the pressure over 1.25kPa results in an enhanced effect of the deformation at the sides of the module, although the main displacement is provided along ζ at any pressure value.

Fig. 9. Directional deformations against applied pressure. ds (solid line) and dt (dashed line) vs. internal pressure inside the chamber of the SCM

Fig. 10. Initial-phase relationship r = dt/ds between the directional deformations

3. SCM Design Evaluation Results

This current effort is a preliminary presentation of a targeted work to realize a scheme which will be based entirely on design optimization. The aim is to make this scheme modular and economical. The initial results accomplished by the SCM support the notion: the cubic shape makes it easy to be modular in any orientation; the single cylindrical inner actuation chamber is easy to mold; the connection of the pneumatic line through the soft material is simple; pneumatic actuation is simple; and the material is convenient for molding and capable of soft actuation. A supplemental advantage of this design is the use of a single material for the SCMs, which is easy to handle, to use and to discard without heavy contamination of the environment; this latter problem has also been raised by other authors (see e.g., [28]). As previously explained, the molding process and the maintenance of the SCM are convenient as well. The two-part silicone mix can simply be poured into the mold and left to cure, without adding additional layers of fabric, fibers, or any other content. The same mix is useful for joining parts of the SCMs, modifying assemblies, and repairing fractured or punctured blocks. With proper repair, the blocks have the same actuation strength and output stress-bearing capacity. Another advantage, with essential care, is that joined SCMs can be separated back into their basic molded shapes and then reused for new system development. This re-usability makes the approach cost-effective and environment friendly. The current SCM design uses only one surface for effective actuation, while the remaining block is thick flexible rubber. This structure is useful for the stability of the SCM, individually as well as in combination with others, by maintaining firm contact with the ground surface. This configuration is potentially useful for designing multi-degree-of-freedom robotic systems and manipulators. This design scheme is considered as the first tool to investigate its capacity to perform certain given tasks in various configurations. Alongside its application as a single-unit gripper, PASCAV: a Pneumatically Actuated Soft Cubic Archetypal Vacuum gripper [27], and a two-unit bio-mimetic crawling mechanism, PASCAR: a Pneumatically Actuated Soft Cubic Archetypal Robot, this soft actuator has been employed to realize a four-degree-of-freedom robotic mechanism, PASCAM: a Pneumatically Actuated Soft Chewing Articulation Mechanism (Fig. 11). The formation of this primitive soft robotic four-axis mechanism is being further considered to develop an equivalent mechanism similar to the well-known Stewart platform, with the advantages of compactness, simpler kinematic design, easier control, and lower cost.

Fig. 11. The SCM and the applications developed with it: PASCAV, PASCAR and PASCAM



4. Conclusion
This study presents the preliminary evaluation of the geometrical configuration of the actuation cavity, or chamber, of a Soft Cubic Module (SCM) which is under consideration to realize a pneumatically actuated soft robotic actuator. The purpose of this scheme is to design a soft actuator that exhibits modularity and scalability, in order to further develop soft actuation mechanisms. The overall analysis and experimental results encourage further exploration and implementation of the SCM to realize soft robotic mechanisms. It is anticipated that further studies on the SCM will allow finding the optimum value of the ratio r for a specific soft robot assembled using SCMs. While performing size optimization (and eventually shape optimization) of the internal chamber, the cubic shape of the module will be maintained, for the reasons that we have mentioned. The cylindrical chamber profile is under further evaluation for modification to achieve improved results. The modified profile would potentially be transformed into a convex disk shape, which can generate more force at the actuated surface of the SCM cube than the cylindrical actuation chamber. Furthermore, ellipsoidal and arched horn-shaped actuation chambers are also under consideration, to achieve two-axis deformations: one linear and the other torsional. Overall, this SCM design scheme is helpful in realizing a simple and cost-effective soft pneumatic actuator which is modular and scalable. Another important point of the work will be the use of a single material. The single block has a wide application range, from a simple push button to the formation of a bio-mimicking robotic mechanism. Archetypal arrangements of the SCM have suggested a wide range of possible mechanisms which are under consideration for further design analysis and development. The targeted soft systems, which are under consideration for development based on this scheme, employ single or multiple SCM units and include customized actuation and manipulation systems as well as some bio-mimetic configurations.

AUTHOR

Ahmad Mahmood Tahir* – DIME–PMAR Robotics Group of the University of Genoa, 16145 Genoa, Italy, E-mail: tahir@dimec.unige.it.
Matteo Zoppi – DIME–PMAR Robotics Group of the University of Genoa, 16145 Genoa, Italy. ASME and IEEE Member. E-mail: zoppi@dimec.unige.it.
*Corresponding author

REFERENCES

[1] A. M. Tahir, G. A. Naselli, and M. Zoppi, "Soft robotics: A solid prospect for robotizing the natural organisms", Advances in Robotics Research, vol. 2, no. 1, 2018, 69–97, DOI: 10.12989/arr.2018.2.1.069.
[2] F. Ilievski, A. D. Mazzeo, R. F. Shepherd, X. Chen, and G. M. Whitesides, "Soft Robotics for Chemists", Angewandte Chemie, vol. 123, no. 8, 2011, 1930–1935, DOI: 10.1002/ange.201006464.
[3] R. F. Shepherd, F. Ilievski, W. Choi, S. A. Morin, A. A. Stokes, A. D. Mazzeo, X. Chen, M. Wang, and G. M. Whitesides, "Multigait soft robot", Proceedings of the National Academy of Sciences, vol. 108, no. 51, 2011, 20400–20403, DOI: 10.1073/pnas.1116564108.
[4] R. Deimel and O. Brock, "A novel type of compliant and underactuated robotic hand for dexterous grasping", International Journal of Robotics Research, vol. 35, no. 1-3, 2016, 161–185, DOI: 10.1177/0278364915592961.
[5] H. Zhao, J. Jalving, R. Huang, R. Knepper, A. Ruina, and R. Shepherd, "A Helping Hand: Soft Orthosis with Integrated Optical Strain Sensors and EMG Control", IEEE Robotics & Automation Magazine, vol. 23, no. 3, 2016, 55–64, DOI: 10.1109/MRA.2016.2582216.
[6] A. D. Marchese, C. D. Onal, and D. Rus, "Autonomous Soft Robotic Fish Capable of Escape Maneuvers Using Fluidic Elastomer Actuators", Soft Robotics, vol. 1, no. 1, 2014, 75–87, DOI: 10.1089/soro.2013.0009.
[7] C. D. Onal, X. Chen, G. M. Whitesides, and D. Rus, "Soft Mobile Robots with On-Board Chemical Pressure Generation". In: H. I. Christensen and O. Khatib, eds., Robotics Research, Springer Tracts in Advanced Robotics, Springer, Cham, 2017, 525–540, DOI: 10.1007/978-3-319-29363-9_30.
[8] R. Deimel and O. Brock, "A compliant hand based on a novel pneumatic actuator". In: 2013 IEEE International Conference on Robotics and Automation, 2013, 2047–2053, DOI: 10.1109/ICRA.2013.6630851.
[9] F. Connolly, P. Polygerinos, C. J. Walsh, and K. Bertoldi, "Mechanical Programming of Soft Actuators by Varying Fiber Angle", Soft Robotics, vol. 2, no. 1, 2015, 26–32, DOI: 10.1089/soro.2015.0001.
[10] Y. Elsayed, A. Vincensi, C. Lekakou, T. Geng, C. M. Saaj, T. Ranzani, M. Cianchetti, and A. Menciassi, "Finite Element Analysis and Design Optimization of a Pneumatically Actuating Silicone Module for Robotic Surgery Applications", Soft Robotics, vol. 1, no. 4, 2014, 255–262, DOI: 10.1089/soro.2014.0016.
[11] J. Lee, W. Kim, W. Choi, and K. Cho, "Soft Robotic Blocks: Introducing SoBL, a Fast-Build Modularized Design Block", IEEE Robotics & Automation Magazine, vol. 23, no. 3, 2016, 30–41, DOI: 10.1109/MRA.2016.2580479.
[12] K. Suzumori, S. Iikura, and H. Tanaka, "Development of flexible microactuator and its applications to robotic mechanisms". In: 1991 IEEE International Conference on Robotics and Automation, 1991, 1622–1627, DOI: 10.1109/ROBOT.1991.131850.
[13] K. Suzumori, "Flexible Microactuator: 1st Report, Static Characteristics of 3 DOF Actuator", Transactions of the Japan Society of Mechanical Engineers Series C, vol. 55, no. 518, 1989, 2547–2552, DOI: 10.1299/kikaic.55.2547.
[14] K. Suzumori, "Flexible Microactuator: 2nd Report, Dynamic Characteristics of 3 DOF Actuator", Transactions of the Japan Society of Mechanical Engineers Series C, vol. 56, no. 527, 1990, 1887–1893, DOI: 10.1299/kikaic.56.1887.
[15] K. Suzumori, S. Iikura, and H. Tanaka, "Flexible microactuator for miniature robots". In: IEEE Micro Electro Mechanical Systems, 1991, 204–209, DOI: 10.1109/MEMSYS.1991.114797.
[16] R. S. Caines, "Robotic fluid-actuated muscle analogue", U.S. Patent 5,021,064, issued June 4, 1991.
[17] R. T. Pack and M. Iskarous, "The use of the soft arm for rehabilitation and prosthetic", Proceedings of the Annual Conference RESNA, 1994, 472–475.
[18] M. Hamerlain, "An anthropomorphic robot arm driven by artificial muscles using a variable structure control". In: Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human Robot Interaction and Cooperative Robots, vol. 1, 1995, 550–555, DOI: 10.1109/IROS.1995.525851.
[19] P. van der Smagt, F. Groen, and K. Schulten, "Analysis and control of a rubbertuator arm", Biological Cybernetics, vol. 75, no. 5, 1996, 433–440, DOI: 10.1007/s004220050308.
[20] A. Alford, D. M. Wilkes, K. Kawamura, and R. T. Pack, "Flexible human integration for holonic manufacturing systems". In: Proceedings of the World Manufacturing Congress, 1997, 53–62.
[21] D. M. Wilkes, R. T. Pack, A. Alford and K. Kawamura, "HuDL, A Design Philosophy for Socially Intelligent Service Robots". In: Technical Report FS-97-02, The AAAI Press, Menlo Park, California, 1997, 140–145.
[22] M. E. Cambron, R. A. Peters II, D. M. Wilkes, J. L. Christopher and K. Kawamura, "Human-Centered Robot Design and the Problem of Grasping", Proceedings of the 3rd International Conference on Advanced Mechatronics ICAM'98 – Innovative Mechatronics for the 21st Century, August 3–6, Okayama, Japan, 1998, 191–196.
[23] G. Udupa, P. Sreedharan, and K. Aditya, "Robotic gripper driven by flexible microactuator based on an innovative technique". In: 2010 IEEE Workshop on Advanced Robotics and its Social Impacts, 2010, 111–116, DOI: 10.1109/ARSO.2010.5680040.
[24] M. Cianchetti, A. Arienti, M. Follador, B. Mazzolai, P. Dario, and C. Laschi, "Design concept and validation of a robotic arm inspired by the octopus", Materials Science and Engineering: C, vol. 31, no. 6, 2011, 1230–1239, DOI: 10.1016/j.msec.2010.12.004.
[25] M. O. Obaji and S. Zhang, "Investigation into the force distribution mechanism of a soft robot gripper modeled for picking complex objects using embedded shape memory alloy actuators". In: 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM), 2013, 84–90, DOI: 10.1109/RAM.2013.6758564.
[26] D. Sasaki, T. Noritsugu, M. Takaiwa, and Y. Kataoka, "Development of Pneumatic Wearable Power Assist Device for Human Arm 'ASSIST'", Proceedings of the JFPS International Symposium on Fluid Power, vol. 2005, no. 6, 2005, 202–207, DOI: 10.5739/isfp.2005.202.
[27] A. M. Tahir, M. Zoppi, and G. A. Naselli, "PASCAV Gripper: a Pneumatically Actuated Soft Cubical Vacuum Gripper". In: International Conference on Reconfigurable Mechanisms and Robots (ReMAR), 2018, 1–6, DOI: 10.1109/REMAR.2018.8449863.
[28] J. Shintake, H. Sonar, E. Piskarev, J. Paik, and D. Floreano, "Soft pneumatic gelatin actuator for edible robotics". In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, 2017, 6221–6226, DOI: 10.1109/IROS.2017.8206525.



Development and Optimization of an Automated Irrigation System
Submitted: 15 January 2019; accepted: 23 February 2019

Lanre Daniyan, Ezechi Nwachukwu, Ilesanmi Daniyan, Okere Bonaventure

DOI: 10.14313/JAMRIS_1-2019/5
Abstract: The deployment of appropriate technologies to enhance modern agricultural practices and improve crop yields is imperative for sustainability. This paper presents the development of a standalone automated irrigation system. The system design features good automation and control, achieved using an electronic timing system, a soil feedback sensor and a wireless communication system. Autonomous irrigation events are based on the states of the timing system, the soil feedback system and the wireless communication system. The control and automation of these subsystems is performed by an AVR microcontroller, which is programmed to trigger intelligent and independent farm irrigation through a water pump attached to the system. The system also operates remotely via SMS commands from a mobile device and sends operational status feedback via SMS to pre-programmed mobile user(s). It also sends the soil moisture condition to a remote user upon query. The system package was produced using an additive manufacturing technique. The power supply was implemented using a solar power system in order to achieve the standalone, autonomous and reliable supply necessary for independent operation. The performance evaluation of the developed system shows an impressive response time, good reliability and excellent stability. Furthermore, the numerical experiment conducted using the Response Surface Methodology (RSM) produced a mathematical model for the optimization of the irrigation process for optimum performance and cost effectiveness.
Keywords: additive manufacturing, automation, irrigation, intelligent system, microcontroller

1. Introduction
The development of appropriate technologies in modern agricultural practices has not only improved crop yields but has also increased the ease with which farming practices are carried out, especially with the advent of robotic tools. With the increasing awareness and advocacy for round-the-year farming, and the need to meet the challenge of global food demand, deploying appropriate technologies to increase productivity and reduce the burden often associated with farm practices has become inevitable. Since the role of agriculture in the economic development of any nation cannot be overemphasized, there is a need to boost agricultural productivity with the use of intelligent systems for increased food production and revenue generation [1–2]. One of the areas of agricultural practice that has posed a considerable burden and wasted useful time and resources is farm irrigation. Irrigation is the artificial application of water to the land or soil, which assists in the growing of agricultural crops, the maintenance of landscapes, and the revegetation of soils in dry areas as well as during periods of inadequate rainfall [3–5]. There are two main types of irrigation systems, namely sprinkler [6–7] and drip irrigation systems [8–9]. A sprinkler irrigation system sprays water into the air with the aid of a pumping system and a sprinkler, so that it falls on the crops in the form of small drops, like rain. It finds suitable application in farm rows or fields, either uniform or undulating [10]. Drip irrigation, on the other hand, involves the passage of water directly to the roots of plants from or below the soil surface using pipes or tubes. This form of irrigation is highly effective and conservative, and is most suitable in areas with an acute shortage of water supply [11]. With regard to the challenges posed by conventional irrigation practices, the use of intelligent systems for irrigation is better appreciated, particularly in large-scale farming. Conventional methods of irrigation are labour-intensive, time-consuming and relatively ineffective, due to poor water distribution and a lack of monitoring and smart control mechanisms [12–13]. Their over-reliance on human control makes them unsuitable for large-scale farming [14–16]. Automatic irrigation systems, on the contrary, are neither laborious nor time-consuming and can help meet the ever-increasing demand for food production all year round [17–18]. In addition, the process is cost-effective in the long run, as the initial capital invested will be offset via large-scale farming to meet the increasing demand for food production [19–20]. The burden the automated irrigation system seeks to resolve is the ability to intelligently and independently irrigate the farm environment. As such, farmers can commit the time hitherto wasted on prolonged and laborious manual irrigation to other productive tasks, while the system independently executes the irrigation process. According to Ganturi [21] and Curtis [22], the requirements for a smart irrigation system include the effective application and distribution of water as well as good control and monitoring of the irrigation process with appropriate feedback mechanisms. Karim et al. [23] developed a sensor-based M2M agriculture monitoring system for developing countries, citing a high level of automation and control as one of the challenges to be addressed. Alagupandi et al. [24] developed a smart irrigation system for outdoor environments using TinyOS, while Gutierrez [25] developed an automated irrigation system using a wireless sensor network and a GPRS module. These works provided a convenient platform for the automation, control and tracking of irrigation activities. Mrinmayi et al. [26] developed a smart irrigation system using the Internet of Things, while Roy and Ansari [27] developed an autonomous irrigation system that uses everyday climate criteria for irrigation purposes. This helps in saving a considerable amount of irrigation water via the use of a Programmable Logic Controller (PLC). The issue of water loss due to precipitation has been identified as one of the major challenges during the irrigation process. In a bid to address this challenge, Kumar et al. [28] developed an automatic irrigation system that uses field sensing and forecasting to control irrigation based on soil moisture for the sustainable irrigation of crops. For the effective distribution and control of irrigation water, Bai and Liang [29] as well as Li-Fang [30] developed an optimal model for water conservation. The parameters that are essential in order to facilitate good monitoring and control are temperature, air humidity and soil moisture [30–35]. The design objectives of the developed automated irrigation system are to achieve operational intelligence, automation and independence, thereby easing the stress often associated with conventional irrigation practices. The novelty of this work lies in the fact that the ease of operation of the developed automated system comes with its multiple modes of operation, which afford the user convenient choices of operation. The modes include a full automation mode, whereby the system operates autonomously, using the states of the timing system and the soil feedback sensor to take intelligent decisions on the appropriate time and extent of the irrigation activities to be implemented. In addition, a user can send a command remotely via mobile phone to the field system to start or stop irrigation. Furthermore, users have the privilege of querying the system remotely in order to get feedback on the real-time soil conditions. Upon receiving such a command via the Short Message Service (SMS), the system queries the soil feedback sensor and sends the soil moisture readings to the authorized user(s). The system can also be made to operate in a manual mode, relying on human effort, or semi-autonomously, relying on human control during certain periods, such as start-up or shut-down, depending on the environment and the nature of the irrigation activities to be carried out. The aforementioned peculiarities of the developed automated irrigation system have not been sufficiently reported in the existing literature.

2. Materials and Method


The materials employed in the implementation of this system are: a real-time clock module, a YL-69 sensor, a SIM800L module, an ATmega328 microcontroller, a DC water pump, a PV module, an LM6009 module, an LM2596 module, LiPo cells, an LCD, switches, a relay, LEDs, resistors, capacitors, a crystal oscillator, an active SIM card, filament for additive manufacturing, and a metal frame for the PV panel and control unit. The capabilities of the automated irrigation system include the ability to take intelligent decisions on its state based on operational and soil conditions. The solar power provision enables it to sustain continuous operation at all times and to achieve high reliability in its operation. The need for irrigation tends to increase with increasing solar energy, usually accompanied by soil moisture loss; hence, the solar energy can be captured to power the autonomous system.

2.1. System Architecture
The architecture of the developed system is presented in Figure 1.

Fig. 1. Architecture of the automated irrigation system

2.2. Automation and Control
Generally, irrigation is carried out in the morning or evening hours for the sake of the health of the farm crops. The pre-programmed irrigation times were set and stored on the microcontroller; hence, the microcontroller routinely checks this schedule against the environmental conditions. A DS1307 timing system, also known as a real-time clock, was incorporated to keep accurate track of the date and time of the irrigation activities (Figure 2).

Fig. 2. DS1307 timing system

According to the programming of the microcontroller, there is a need for irrigation only if the soil moisture goes below 25%, and the irrigation operation is halted once the farm is irrigated up to 50%. Therefore, the microcontroller keeps checking the environmental conditions with respect to the pre-set time, especially when the soil moisture drops below 25%. Whenever these two conditions are active, a relay is activated to switch the irrigation pump ON. A Short Message Service (SMS) alert is sent to the authorized user on the state of the pump and the active mode of control: the authorized user receives the message "Pump ON", and Control mode: "Auto". At any condition outside the aforementioned operational limits, the pump state is kept "OFF" and the state indicates "Idle". These actions take place without the user's input. In addition, users can send the message "ON" from a mobile phone to the SIM number in the system. Upon receiving this message, the system activates the relay, which in turn activates the irrigation pump. The state changes to "Pump ON" and the control mode changes to "Remote"; this state is sent as feedback to the remote operator's mobile phone. Upon receiving the message "OFF" from the user, the pump goes off and the state returns to "Idle", which is equally sent as feedback to the user's mobile phone. The user also has the privilege to query the system via a simple SMS command to get the real-time soil moisture reading, which is sent as a message from the system to the user's mobile phone. In the event of an emergency need for water on site, such as for washing harvested farm crops, a manual mini switch for the pump is provided on-site. The Light Emitting Diode (LED) is "OFF" in the idle state, "BLUE" in the remote control state, "GREEN" in the timing or soil automation state and "RED" in the manual operation mode. One of the peculiarities of this system is that users have the privilege of querying the system remotely in order to get feedback on the real-time soil conditions: upon receiving such a command via SMS, the system queries the soil feedback sensor and sends the soil moisture readings to the authorized user(s). For the purpose of this work, only one moisture sensor was used; it is shown in Figure 3. The features of the soil moisture sensor include: power supply (3.3–3.5 V), output voltage (0–4.2 V), current (3.5 mA) and size (60 × 20 × 5 mm). It comprises two probes which pass a current through the soil; the sensor then reads the electrical resistance so as to determine the moisture level. High moisture content in the soil gives high electrical conductivity of the soil, and vice versa. The output voltage from the soil moisture sensor is amplified and sent to the microcontroller, where it is converted to a digital value using the Analogue to Digital Converter (ADC). On the controller, the voltage is compared to the pre-set threshold value; if the measured voltage is less than the threshold value, the microcontroller activates the relay, which turns ON the pump for irrigation. On the other hand, if the measured value exceeds or is equal to the threshold, there is no need for irrigation, and the system remains in the idle state with the LED indicating "OFF".
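As a concrete illustration of the control logic described above, the sketch below shows a minimal Arduino-style (ATmega328) loop implementing the 25%/50% moisture hysteresis, the relay switching and the SMS notification. The pin assignments, the moisture-to-percent calibration, the phone number and the message handling are illustrative assumptions, not the authors' actual firmware; the real-time-clock window check and the remote "ON"/"OFF" command parsing are omitted for brevity, and the SIM800L interaction is reduced to standard AT commands over a software serial port.

```cpp
#include <Arduino.h>
#include <SoftwareSerial.h>

// Illustrative pin assignments (assumptions, not the actual wiring)
const int PIN_SOIL  = A0;   // YL-69 analogue output
const int PIN_RELAY = 7;    // relay driving the DC pump
const int PIN_LED_G = 5;    // green LED: automatic irrigation active
const int START_PCT = 25;   // start irrigating below this moisture level
const int STOP_PCT  = 50;   // stop irrigating at this moisture level

SoftwareSerial sim800(10, 11);   // RX, TX to the SIM800L (assumed pins)
bool pumpOn = false;

int readMoisturePercent() {
  // Map the raw ADC reading (0..1023) to 0..100 %.
  // The inversion assumes a higher ADC value for drier soil (typical YL-69 wiring).
  int raw = analogRead(PIN_SOIL);
  return map(raw, 1023, 0, 0, 100);
}

void sendSms(const char *text) {
  // Plain AT command sequence for the SIM800L; the phone number is a placeholder.
  sim800.println("AT+CMGF=1");
  delay(200);
  sim800.println("AT+CMGS=\"+48000000000\"");
  delay(200);
  sim800.print(text);
  sim800.write(26);              // Ctrl+Z terminates the message
}

void setup() {
  pinMode(PIN_RELAY, OUTPUT);
  pinMode(PIN_LED_G, OUTPUT);
  sim800.begin(9600);
}

void loop() {
  int moisture = readMoisturePercent();

  if (!pumpOn && moisture < START_PCT) {        // soil too dry: start irrigation
    pumpOn = true;
    digitalWrite(PIN_RELAY, HIGH);
    digitalWrite(PIN_LED_G, HIGH);
    sendSms("Pump ON, Control mode: Auto");
  } else if (pumpOn && moisture >= STOP_PCT) {  // target reached: stop irrigation
    pumpOn = false;
    digitalWrite(PIN_RELAY, LOW);
    digitalWrite(PIN_LED_G, LOW);
    sendSms("Pump OFF, State: Idle");
  }
  delay(10000);                                 // re-check every 10 s
}
```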

2.3. Power Provisioning
For continuous and reliable operation of this system, an independent power supply is essential. As such, a PV module, scaled relative to the power requirement of the DC-operated irrigation pump, was deployed. The control unit is equipped with a 10,500 mAh lithium-ion battery backup to keep the system active at night. The power supply unit of the system is designed to also take power from the PV installation and to provision power according to the power requirements of the control sub-systems, as shown in Figure 4.
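As a rough orientation for the backup sizing, the snippet below estimates how long the 10,500 mAh battery could keep the control electronics alive overnight; the average electronics current draw and usable depth of discharge are assumptions for illustration only and are not reported in the paper.

```cpp
#include <cstdio>

int main() {
    const double batteryCapacity_mAh = 10500.0;  // control-unit backup battery (from the text)
    const double electronicsDraw_mA  = 120.0;    // assumed average draw of MCU + SIM800L + sensors
    const double usableFraction      = 0.8;      // assumed usable depth of discharge

    double hours = batteryCapacity_mAh * usableFraction / electronicsDraw_mA;
    printf("Estimated night-time autonomy: %.1f h\n", hours);  // = 70 h with these assumptions
    return 0;
}
```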

Fig. 4. Proteus design model of the automated irrigation system

2.4. Packaging and Product Outlook
The product outlook was first designed in a Computer Aided Design (CAD) environment (Figure 5) and produced with the aid of an additive manufacturing technique (Figure 6).

Fig. 5. CAD of the system

Fig. 3. The soil moisture sensor

For simplified installation or ease of decommissioning, interfaces were created at the side of the system unit where the PV module, the DC water pump and the soil moisture feedback sensor were wired onto the system.


Fig. 6. The automated irrigation system

2.5. Design Calculations
The volume of water pumped was 750 L/hr, which is adequate for the small experimental farm; hence the volumetric flow rate is Q = 2.08 × 10⁻⁴ m³/s. For a pumping operation expected to last 1 hour (3600 s), the total volume of water pumped is calculated from Equation 1:

Q = V/t    (1)

2.08 × 10⁻⁴ = V/3600, hence V = 0.750 m³ (750 L).

Using a 25 mm (0.025 m) diameter pipe to convey the water, the cross-sectional area of the pipe is calculated from Equation 2:

A = πd²/4    (2)

which gives A = 4.90 × 10⁻⁴ m². Since the volumetric flow rate is the product of the velocity and the cross-sectional area, Equation 3 holds:

Q = vA    (3)

2.08 × 10⁻⁴ = v × 4.90 × 10⁻⁴

where Q is the volumetric flow rate (750 L/hr, i.e. 2.08 × 10⁻⁴ m³/s) and v is the fluid velocity (m/s). Hence, from Equation 3, v = 0.424 m/s.

The Reynolds number, which determines the nature of the flow, is expressed as Equation 4:

Re = ρvd/μ    (4)

where v is the fluid velocity (0.424 m/s), ρ is the water density (1000 kg/m³), d is the pipe diameter (0.025 m) and μ is the coefficient of dynamic viscosity of water at 25 °C (8.9 × 10⁻⁴ Ns/m²).

Re = (1000 × 0.424 × 0.025)/(8.9 × 10⁻⁴) = 11910.11

Since Re > 4000, the flow of water for the irrigation purpose is turbulent. The advantage of turbulent water flow for irrigation is that water is conveyed quickly through the pipe at high velocity and flow rate, thereby driving the irrigation process to quick completion. Neglecting the minor losses due to pipe orientation, the friction factor for losses in a flexible rubber pipe with a smooth bore is expressed by Equation 5 [35]:

f = [0.0076 · (3170/Re)^0.165] / [1 + (3170/Re)^7.0] + 16/Re    (5)

f = [0.0076 · (3170/11910.11)^0.165] / [1 + (3170/11910.11)^7.0] + 16/11910.11

f = 0.0075

The major head loss H due to this frictional effect is expressed by Equation 6 [36]:

H = 4flv² / (2gD)    (6)

where f is the friction factor (0.0075), l is the length of the pipe (1000 m), v is the fluid velocity (0.424 m/s), g is the acceleration due to gravity (9.81 m/s²) and D is the pipe diameter (0.025 m).

H = (4 × 0.0075 × 1000 × 0.424²) / (0.025 × 2 × 9.81)

H = 10.99 m of water

Equation 7 expresses the density of a fluid:

density (ρ) = mass/volume    (7)

With a water density of 1000 kg/m³ and a volume V of 0.750 m³, the mass of the fluid is calculated from Equation 7 as 750 kg. According to Rajput (2008), the power required to pump the fluid is given by Equation 8:

P = ρgQH    (8)

where ρ is the water density (1000 kg/m³), g is the acceleration due to gravity (9.81 m/s²), Q is the volumetric flow rate (2.08 × 10⁻⁴ m³/s) and H is the pumping head (10.99 m):

P = 1000 × 9.81 × 2.08 × 10⁻⁴ × 10.99 = 22.5 W

The power required to pump the fluid is calculated from Equation 8 as 22.5 W, and a 50 W (0.067 hp) pump is selected using a safety factor of about 2.2. The system is energy efficient in that it needs only 50 W of power to pump the fluid. The pump will increase the temperature of the fluid stream as given in Equations 9 and 10.



q = m c_p ΔT    (9)

P t_p = m c_p ΔT    (10)

where q is the heat input (kJ), P is the required power of the pump (kW), t_p is the time required to run the pump (s), m is the mass of water (kg), c_p is the specific heat capacity (kJ/kgK), and ΔT is the change in temperature (K). For a pump whose required power is 25 W, running for 3600 s with a water mass of 750 kg and a specific heat capacity of 4200 J/kgK, the change in temperature is obtained using Equation 9:

25 × 3600 = 750 × 4200 × ∆T

ΔT = 0.0287 °C

At equilibrium, the temperature of the water equals the room temperature of 25 °C. From Equation 10, the pump will increase the temperature by 0.0287 °C, hence the final temperature of the water for irrigation is 25.0287 °C. This falls within the safe limit of water temperature for irrigation. The current demand of the pump is expressed by Equation 10:

P = IV    (10)

where P is the power required by the pump (50 W), I is the current required (A) and V is the applied voltage (24 V):

50 = I × 24, hence I = 2.083 A
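For reference, the design calculations of Section 2.5 can be reproduced with a short script. This is only a numerical check of the equations quoted in the text; small differences from the rounded values in the paper come from carrying full precision here.

```python
"""Numerical check of the design calculations (Equations 1-10)."""
import math

Q   = 750e-3 / 3600                  # volumetric flow rate, m^3/s  (~2.08e-4)
d   = 0.025                          # pipe diameter, m
A   = math.pi * d**2 / 4             # pipe area, m^2               (~4.90e-4)
v   = Q / A                          # fluid velocity, m/s          (~0.424)
rho, mu, g = 1000.0, 8.9e-4, 9.81    # water density, viscosity, gravity
Re  = rho * v * d / mu               # Reynolds number              (~11910)

# Morrison correlation for the friction factor, Eq. 5
f = 0.0076 * (3170/Re)**0.165 / (1 + (3170/Re)**7.0) + 16/Re      # ~0.0075

L   = 1000.0                         # pipe length, m
H   = 4 * f * L * v**2 / (2 * g * d) # major head loss, m           (~11.0)
P   = rho * g * Q * H                # pumping power, W             (~22.5)

m, cp, t = 750.0, 4200.0, 3600.0     # water mass, specific heat, run time
dT  = 25.0 * t / (m * cp)            # temperature rise, deg C      (~0.029)
I   = 50.0 / 24.0                    # pump current at 24 V, A      (~2.08)
print(round(Re, 1), round(f, 4), round(H, 2), round(P, 1), round(dT, 4), round(I, 2))
```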

2.6. Performance Evaluation of the Developed Automated Irrigation System
For large farms, pump sizing can easily be done by estimating the water volume requirements of the farm. This would also entail appropriate sizing of the solar power system to deliver the required power to the irrigation pump. Also, the ampere rating of the switching relay in the control system will be matched adequately with the current demand of the pump. The operational indices show that it takes the system about 25 s to boot up when powered ON and about 5 s to receive a message instruction for pump activation. The solar power system delivers about 300 W, the feedback response time from the soil moisture sensor is about 0.5 s, and the volume of water pumped is approximately 750 L/hr, which is adequate for the small experimental farm. The smart irrigation system is embedded with a soil moisture sensor to sense and obtain the moisture values for any location to be irrigated. This is a critical factor which provides information about the condition of the soil to be irrigated, so as to determine the need for irrigation or otherwise, as well as the quantity of water needed for irrigation. The system also provides irrigation data in terms of the volume of water used for irrigation, the period of irrigation and the total time spent on irrigation; hence the developed smart irrigation system provides design data for prediction and forecasting, as well as secondary data for scaling its development or subsequent redesign.


2.7. Numerical Experiment
In order to develop a predictive model for the determination of the volume of water required for irrigation, and to study the effect of the three critical factors that influence the water requirement of the soil, namely soil moisture, ambient air temperature and humidity, dynamic modelling and simulation was carried out using the Response Surface Methodology (RSM). While the ambient air temperature accounts for the overall temperature of the outdoor air where the irrigation activity is to be performed, humidity represents the amount of water vapour present in the air. A multifunctional temperature and humidity measurement device (PCE-THA 10-ICA), whose temperature measurement ranges from –15 to 50 °C and whose humidity measurement ranges between 5–95%, was employed for measuring both the temperature and the humidity of the environment where the irrigation activity is to be performed. The ranges of values of the three critical parameters, namely moisture content (20–40%), ambient temperature (15–35 °C) and humidity (10–30%), were used as input parameters into the Central Composite Design (CCD) and Response Surface Methodology (RSM) to develop a predictive model that correlates the volume of water required for irrigation as a function of the three critical parameters. The essence of the optimization is to provide an optimum solution for irrigation: the optimum solution includes the determination of the need for irrigation and the right volume of water required. This will save water, time and cost, thereby promoting an effective irrigation process.
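As an illustration of the experimental set-up (not the authors' actual design matrix, which would come from whatever CCD/RSM software was used), the mapping between coded factor levels and the physical ranges quoted above can be sketched as follows:

```python
"""Sketch: mapping coded CCD levels onto the factor ranges of the numerical
experiment (soil moisture 20-40 %, ambient temperature 15-35 degC,
humidity 10-30 %). Only the eight factorial corner points are shown."""
import numpy as np

ranges = {"moisture_%": (20, 40), "temperature_C": (15, 35), "humidity_%": (10, 30)}

def decode(coded_row):
    """Map a row of coded levels in [-1, +1] to physical factor values."""
    out = {}
    for coded, (name, (lo, hi)) in zip(coded_row, ranges.items()):
        centre, half = (lo + hi) / 2, (hi - lo) / 2
        out[name] = centre + coded * half
    return out

corners = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
for row in corners:
    print(decode(row))
```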

3. Results and Discussion

The irrigation of a small farm land of cross section 100 m × 100 m was carried out for three different soil samples, namely sandy, loamy and clay soil, at five different locations between 8:00–8:59 am. The data collected from the soil moisture sensor, the multifunctional temperature and humidity measurement device and the microcontroller are presented in Table 1. Figure 7 is a plot of the volume of water used and the time spent on irrigation for the three soil samples. Three critical factors, namely soil moisture, ambient temperature and humidity, determined the volume of water required for irrigation. The water requirement was highest in sandy soil and least in clay soil. This is due to the fact that the moisture content is highest in clay and least in sandy soil. Clay soil has a high water retention ability due to its structure, hence the high value of its moisture content, followed by loamy soil and then sandy soil. The water requirement of the soil was observed to increase as the soil moisture decreases. The equation of the predictive model is expressed by Equation 11:

Volume = 759.43 − 284.97A − 7.69B − 0.88C + 31.25A·B + 21.25A·C + 83.44A² − 3.18B² + 4.78C²    (11)

where A is the percent soil moisture, B is the ambient temperature (°C) and C is the percent humidity.
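Equation 11 can be transcribed directly as a function. Note that the text does not state explicitly whether the factors enter in coded CCD units or in physical units, so the example below only evaluates the polynomial at the design centre (A = B = C = 0):

```python
"""Equation 11 as a plain Python function (volume of water required, litres)."""

def predicted_volume(A, B, C):
    # A: soil moisture, B: ambient temperature, C: humidity
    # (coded or physical units, per the remark above)
    return (759.43 - 284.97*A - 7.69*B - 0.88*C
            + 31.25*A*B + 21.25*A*C
            + 83.44*A**2 - 3.18*B**2 + 4.78*C**2)

print(predicted_volume(0.0, 0.0, 0.0))   # design centre -> 759.43
```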




Tab. 1. Results obtained for five different locations (columns: S/N; soil type; soil sample; initial moisture content (%); ambient temperature (°C); humidity (%); volume of water used for irrigation (L); total time spent for irrigation (sec)). Fifteen measurements were taken, covering clay, loamy and sandy soil samples at locations A–E; for example, clay sample B had an initial moisture content of 21%, an ambient temperature of 24.9 °C and a humidity of 16.5%, and required 200 L of water over 720 s. The recorded irrigation volumes ranged from about 140 L to 275 L and the irrigation times from about 600 s to 980 s.

Fig. 7. Volume of water required and time for irrigation


The developed model was validated using Analysis of Variance (ANOVA). It was found to be highly adequate for the prediction of the volume of water needed for irrigation, as the regression model was found to be highly significant at the 95% confidence level. The correlation coefficients, namely the R-squared (0.9695), adjusted R-squared (0.9533) and predicted R-squared (0.9785), were within the same range and very close to 1 for the input parameters. The closer the correlation coefficients are to 1, the more efficient and reliable the predictive ability of the developed model. Figure 8 is a 3D plot that studies the effect of the interaction of humidity and temperature on the need for irrigation as well as the volume of water required. Humidity is the amount of water vapour present in the air; it is high when the amount of water vapour in the air is high, while the ambient temperature is the degree of hotness or coldness of the air in the environment. Humidity increases as the amount of water vapour in the air increases. From Figure 8, the relationship between the humidity and the temperature is inversely proportional.

Fig. 8. Cross effect of humidity and temperature

Keeping the percent moisture content constant at 23.02%, the percent humidity increases as the ambient temperature decreases. This is due to the fact that air tends to hold more water molecules as the ambient temperature increases, hence the relative humidity decreases. The optimum volume of water required for irrigation is 1025 litres. Figure 9 is a 3D plot that studies the effect of the interaction of soil moisture and temperature on the need for irrigation as well as the volume of water required. As temperature increases, the percent moisture content decreases. This is due to the fact that the rate of evaporation increases with increasing temperature, with an attendant decrease in the moisture content. The optimum value of the volume required for irrigation is 1060.9 litres.



AUTHORS Lanre Daniyan* – NASRDA Centre for Basic Space Science, University of Nigeria, Nsukka, Nigeria. Ezechi Nwachukwu – NASRDA Centre for Basic Space Science, University of Nigeria, Nsukka, Nigeria. Ilesanmi Daniyan* – Department of Industrial Engineering, Tshwane University of Technology, Pretoria, South Africa, E-mail: afolabiilesanmi@yahoo.com. Okere Bonaventure – NASRDA Centre for Basic Space Science, University of Nigeria, Nsukka, Nigeria. *Corresponding author

REFERENCES

Fig. 9. Cross effect of temperature and moisture content

4. Conclusion
The development of the automated irrigation system was successfully carried out in a CAD environment and the system was produced with the aid of an additive manufacturing technique. It offers the benefits of improved farming practices via automation, which enhances agricultural productivity and encourages round-the-year farming, efficient water distribution and management, real-time monitoring and control, and proper scheduling of irrigation activities, while reducing the drudgery of manual labour, and it provides multiple modes of operation with a query and feedback mechanism. The work is an improvement over existing work in that ease of operation comes with its multiple modes of operation, which afford the user convenient choices of operation. Also, a predictive model that correlates the volume of water required as a function of the soil moisture, air temperature and humidity was developed. This will assist in predicting the time necessary for irrigation as well as the volume of water required for the irrigation process.

PUBLIC INTEREST STATEMENT

This work is in line with the quest to improve food security via the deployment of appropriate technology for farm irrigation, in order to improve crop yields and ease human drudgery. With the increasing awareness of and advocacy for round-the-year farming, and the need to meet the global challenge of food demand, deploying appropriate technologies to increase productivity and to reduce the burden often associated with farm practices has become inevitable. This work therefore provides a standalone automated irrigation system featuring automation and control achieved using an array of an electronic timing system, a soil feedback sensor and a wireless communication system.

[1] C. Angel and S. Ansha, “A Study on Developing a Smart Environment in Agricultural Irrigation Technique”, The International Journal of Ambient Systems and Applications, vol. 3, no. 2/3, 2015, 11–17 DOI: 10.5121/ijasa.2015.3302. [2] P. S. Bains, R. K. Jindal, and H. K. Channi, “Modeling and Designing of Automatic Plant Watering System Using Arduino”, International Journal of Scientific Research in Science and Technology (IJSRST), vol. 3, no. 7, 2017, 676–680. [3] D. S. Pavithra and M. S. Srinath, “GSM based Automatic Irrigation Control System for Efficient Use of Resources and Crop Planning by Using an Android Mobile”, IOSR Journal of Mechanical and Civil Engineering, vol. 11, no. 4, 2014, 49–55 DOI: 10.9790/ 1684-11414955. [4] A. M. Rasyid, N. Shahidan, M. O. Omar, N. Hazwani, and C. J. Choo, “Design and Development of Irrigation System for Planting Part 1”, 2nd Integrated Design Project Conference (IDPC), 2015. [5] B. D. Kumar, P. Srivastava, R. Agrawal, and V. Tiwari, “Microcontroller Based Automatic Plant Irrigation System”, International Research Journal of Engineering and Technology, vol. 4, no. 5, 2017, 1436–1439. [6] M. H. Razali, M. N. Masrek, and S. Roslan, “Microcomputer Application for Instrumentation Development in Drip Irrigation System”, Journal of Computer Sciences and Applications, vol. 1, no. 3, 2013, 39–42 DOI: 10.12691/jcsa-1-3-2. [7] D. Kissoon, H. Deerpaul, and A. Mungur, “A Smart Irrigation and Monitoring System”, International Journal of Computer Applications, vol. 163, no. 8, 2017, 39–45. [8] J. M. Moreira Barradas, S. Matula, and F. Dolezal, “A Decision Support System-Fertigation Simulator (DSS-FS) for design and optimization of sprinkler and drip irrigation systems”, Computers and Electronics in Agriculture, vol. 86, 2012, 111–119 DOI: 10.1016/j.compag.2012.02.015. [9] J. Kumar, S. Mishra, A. Hansdah, and R. Mahato, “Design of Automated Irrigation System based on Field Sensing and Forecasting”, InternationArticles




al Journal of Computer Applications, vol. 146, no. 15, 2016, 17–21 DOI: 10.5120/ijca2016910938. [10] S. Jadhav and S. Hambarde, “Android based Automated Irrigation System using Raspberry Pi”, International Journal of Science and Research (IJSR), vol. 5, no. 6, 2016, 2345–2351 DOI: 10.21275/ v5i6.NOV164836. [11] M. Dursun and S. Ozden, “A wireless application of drip irrigation automation supported by soil moisture sensors”, Scientific Research and Essays, vol. 6, no. 7, 2011, 1573–1582. [12] G. Nisha and J. Megala, “Wireless sensor Network based automated irrigation and crop field monitoring system”. In: 6th International Conference on Advanced Computing (ICoAC), 2014, 189–194 DOI: 10.1109/ICoAC.2014.7229707. [13] A. R. Al-Ali, M. Qasaimeh, M. Al-Mardini, S. Radder, and I. A. Zualkernan, “ZigBee-based irrigation system for home gardens”. In: International Conference on Communications, Signal Processing, and their Applications (ICCSPA’15), Sharjah, 2015, 1–5 DOI: 10.1109/ICCSPA.2015.7081305. [14] J. Haule, and K. Michael, “Designing and Simulation of an Automated Irrigation Management System Deployed by using Wireless Sensor Networks (WSN)”, IOSR Journal of Electronics and Communication Engineering, vol. 9, no. 5, 2014, 67–73 DOI: 10.9790/2834-09526773. [15] A. F. Agbetuyi, H. E. Orovwode, A. A. Awelewa, S. T. Wara, and T. Oyediran, “Design and implementation of an automatic irrigation system based on monitoring soil moisture”, Journal of Electrical Engineering, vol. 16, no. 2, 2016, 206–215. [16] M. Ojha, S. Mohite, S. Kathole, and D. Tarware, “Microcontroller based automatic plant watering system”, International Journal of Computer Science and Engineering, vol. 5, no. 3, 2016, 25–36. [17] D. Dharrao, L. Kolape, S. Pawar, and A. Patange, “Automated Irrigation System using WSN”, Asian Journal of Engineering and Technology Innovation, vol. 3, no. 6, 2015, 18–21. [18] S. Malge and K. Bhole, “Novel, low cost remotely operated smart irrigation system”. In: International Conference on Industrial Instrumentation and Control (ICIC), 2015, 1501–1505 DOI: 10.1109/IIC.2015.7150987. [19] P. Archana and R. Priya, “Design and Implementation of Automatic Plant Watering System”, International Journal of Advanced Engineering and Global Technology, vol. 4, no. 1, 2016, 1567–1570. [20] M. S. Manoj and B. Hemalatha, “Automatic irrigation using microcontroller basing on pressure”, International Journal of Pure and Applied Mathematics, vol. 116, no. 20, 2017, 349–353. [21] V. N. R. Gunturi, “Micro Controller Based Automatic Plant Irrigation System”, International Journal of Advancements in Research & Technology, vol. 2, no. 4, 2013, 194–198. [22] A. Curtis, “Smart irrigation”. In: L.D. Currie and L.L Burkitt, eds., Moving farm systems to improved attenuation, Occasional Report No. 28. Fertilizer Articles


and Lime Research Centre, Massey University, Palmerston North, New Zealand, 2015, http:// flrc.massey.ac.nz/publications.html. [23] L. Karim, A. Anpalagan, N. Nasser, and J. Almhana, “Sensor-based M2M Agriculture Monitoring Systems for Developing Countries: State and Challenges”, Network Protocols and Algorithms, vol. 5, no. 3, 2013, 68–86 DOI: 10.5296/npa.v5i3.3787. [24] P. Alagupandi, R. Ramesh, and S. Gayathri, “Smart irrigation system for outdoor environment using Tiny OS”. In: International Conference on Computation of Power, Energy, Information and Communication (ICCPEIC), 2014, 104–108 DOI: 10.1109/ICCPEIC.2014.6915348. [25] J. Gutiérrez, J. F. Villa-Medina, A. Nieto-Garibay, and M. Á. Porta-Gándara, “Automated Irrigation System Using a Wireless Sensor Network and GPRS Module”, IEEE Transactions on Instrumentation and Measurement, vol. 63, no. 1, 2014, 166–176 DOI: 10.1109/TIM.2013.2276487. [26] M. S. Gavali, B. J. Dhus, and A. B. Vitekar, “A Smart Irrigation System for Agriculture Based on Wireless Sensors”, International Journal of Innovative Research in Science, Engineering and Technology, vol. 5, no. 5, 2016, 6893–6899. [27] D. K. Roy and M. H. Ansari, “Smart Irrigation Control System”, International Journal of Environmental Research and Development (IJERD), vol. 4, no. 4, 2014, 371–374. [28] Dan Bai and Wei Liang, “Optimal planning model of the regional water saving irrigation and its application”. In: International Symposium on Geomatics for Integrated Water Resource Management, 2012, 1–4 DOI: 10.1109/GIWRM.2012. 6349622. [29] T. Li-Fang, “Application of autocontrol technology in water-saving garden irrigation”. In: International Conference on Computer Science and Information Processing (CSIP), 2012, 1311–1314 DOI: 10.1109/CSIP.2012.6309103. [30] S. Lucksman, P. Subramaniyam, H. Suntharalingam, S. G. S. Fernando, and C. D. Manawadu, “Ralapanawa RND – Automated water management system for Irrigation Department, Sri Lanka”. In: 8th International Conference on Computer Science Education, 2013, 213–217 DOI: 10.1109/ICCSE.2013. 6553912. [31] L. Gao, M. Zhang, and G. Chen, “An Intelligent Irrigation System Based on Wireless Sensor Network and Fuzzy Control”, Journal of Networks, vol. 8, no. 5, 2013, 1080–1087 DOI: 10.4304/jnw.8.5.1080-1087. [32] S. V. Devika, S. Khamuruddeen, S. Khamurunnisa, J. Thota, and K. Shaik, “Arduino Based Automatic Plant Watering System”, International Journal of Advanced Research in Computer Science and Software Engineering, vol. 4, no. 10, 2014, 449–456. [33] K. Kansara, V. Zaveri, S. Shah, S. Delwadkar, and K. Jani, “Sensor based Automated Irrigation System with IOT: A Technical Review”, Internation-



al Journal of Computer Science and Information Technologies, vol. 6, no. 6, 2015, 5331–5333. [34] D. Rane, P. R. Indurkar, and D. M. Khatri, “Review paper based on automatic irrigation system based on RF module”, International Journal of Advanced Information and Communication Technology, vol. 1, no. 9, 2015, 736–738. [35] F. A. Morrison, An introduction to fluid mechanics, Cambridge University Press: Cambridge; New York, 2013. [36] R. K. Rajput, A Textbook of Fluid Mechanics, S. Chand Limited, India, 2008,




A Statistical Approach to Simulate Instances of Archeological Findings Fragments
Submitted: 26th June 2018; accepted: 15th January 2019

Fabrizio Renno, Antonio Lanzotti, Stefano Papa

DOI: 10.14313/JAMRIS_1-2019/6

Abstract: The first aim of this paper is to describe a methodology developed to create virtual fragments of archeological archetypes in a CAD (Computer Aided Design) environment. A simple Reverse Engineering (RE) technique was adopted to reconstruct the shape of vases, allowing archeologists, and so CAD-inexpert personnel, to use it. Another relevant aspect is the definition of a procedure to simulate shape errors on the virtual prototypes in order to make the results more realistic. The characteristics of the fragments to be reproduced were selected by means of Design of Experiments (DOE) techniques. An algorithm was then implemented to simulate the shape error, related to the working operations, which represents the typical noise for the feature recognition of archeological findings. Furthermore, this algorithm can make the hypotheses related to the Gaussian model of error simulation more complex and can adapt the value of the shape error (i.e. increasing it) according to the data gathered in the archaeological excavation. The case study was based on the definition of a catalogue of archetypes of the black Campanian vases studied and classified by the archeologist J.P. Morel. The procedure conceived was applied to five (among one hundred) vases of the virtual catalogue, obtaining forty instances of fragments affected by errors and so creating virtual mock-ups of typical pieces which may be found in the archeological site considered for the case study.

Keywords: Archetype, Profile Reconstruction, Geometric Modeling, Design of Experiments, Simulation of the shape and recognition errors

1. Introduction


The algorithms of recognition, reconstruction and classification of fragments in virtual environments have to be validated and optimized by means of wide test campaigns. Only in this way, in fact, is it possible to demonstrate that the automated techniques can simplify the manual operations linked to the recognition and classification of thousands of fragments coming from an archeological site. Moreover, it is interesting to evaluate the benefits that can derive, for the study of the 3D shapes of fragments initially not classifiable, from the use of automatic reconstruction and

classification procedures [2–4]. The last ones can be included among the Virtual Prototyping Techniques widely spread from engineering to humanities fields [5–17]. To recognize and classify the fragments in virtual environments the archetypes to be used as references are needed. So, first step of the procedure, for the development of the cases studies, is the creation of the virtual catalogue and the CAD modeling of the archeological findings starting from the example of the Morel Catalogue [1]. Second step is the definition and the planning of the fragments to be simulated in order to use in the best way the information coming from results of the tests based on the recognition and classification techniques. Furthermore, after the data gathered from the first phase of tests, the successive and more evolved experimentation can be planned [18]. Therefore, the paper is based on the following sections: 1. creation of the archetypes catalogue; 2. planning of the features of the virtual fragments; 3. simulation of the virtual fragments.

2. Creation of the Archetypes Catalogue

The archetypes of the virtual catalogue significantly represent the shapes of the black Campanian vases, both closed and open, found at the site of the “Santuario di Hera alla foce del Sele”. In particular, the following forms were chosen from the Morel Catalogue (1981): convex and concave cups, inset lip, skyphoi, pitchers, lekythoi and situlae (Appendix A). Moreover, the shape of a so-called “Standard vase” used as a case study was defined according to the typical black Campanian vases of the Paestum area.

2.1. Semiautomatic Vectorizing of the Profile The reconstruction of the profile of the vase is grounded on two phases: I. acquisition by means of 2D scanner of the image of the vase or of the generic archaeological finding; II. vectorizing of the acquired image. The acquisition allows to get the digital image in a common raster (graphic) format (i.e. bmp, jpeg, tiff). In Figure 1 an example of 2D acquisition of a vase profile is depicted. The vectorizing is a process that, starting from a raster image, allows the definition of lines, arcs and geometrical shapes that can be modified in CAD



environment. These elements are called “Vector Drawing” [5]. It is possible to obtain them by means of specific software with “raster to vector” tools. In these environments, the typical CAD drawing elements such as point, line, spline, polygon, rectangle, square, circle and text are available. Therefore, the obtainable results are vectors and so 2D entities. Usually, before vectorizing an image, a pre-processing phase is needed to avoid errors. The scanning process, in fact, can also acquire noise such as specks of dust, or colour lacks can be generated (see Figure 1), making the profile reconstruction harder and more complex by defining wrong and unwanted geometric features. In particular, in this paper the vectorizing phase by means of a specific freeware software (Algolab Photo Vector) is used and analysed. This kind of software allows one to obtain the boundaries of the case study as a set of spline arcs. In Figure 2 the vectorizing example of the boundary of a vase is shown. A preliminary optimization, and so a “cleaning operation” of the image, has to be done to reduce the number of possible errors during the vectorizing phase. In the end, in Figure 3, the vectorized image of the vase, then imported into the CAD environment, is shown. In particular, the CAD tools allow one to analyse the curvature and, if needed, to simplify the geometry, reducing the number of control points and of the cubic spline arcs that form the B-Spline.

Fig. 2. Result of the vectorizing of the Figure 1 by means of the R2V software

Fig. 3. Analysis of the B-Spline curve realized in the CAD environment

2.2. Manual Vectorizing for the Reconstruction of the Profile
An alternative technique for the reconstruction of the profile uses the (scanned) raster image as a reference directly in the CAD environment. In this case, however, knowledge of the main geometric rules on curves and surfaces is required, and in particular expertise in the use of the specific CAD software adopted for the reconstruction process. Once the image has been imported and located with respect to the reference system, it can be useful to plot auxiliary lines to better control and draw the profile shape. For instance, these lines can define the start and the end points of the profile.

Fig. 4. Auxiliary lines used on the raster image imported in the CAD environment

Starting from the raster image it is possible to draw an overlapped control point curve defining the main points that the curve has to approximate (Figure 4). The adequate number of points can influence the continuity of the profile and is determined by the CAD user on the basis of his expertise (Figure 5). The resolution of the image can affect the goodness of the process.

Fig. 5. External profile approximated by means of a B-Spline and several control points

Fig. 6. Modeling of the profile by means of faired curves




Afterwards, it is possible to model the curve, reducing the degree of curvature, the number of arcs and the number of points to be approximated. This allows the needed profile to be obtained by means of a faired curve [16].
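A minimal sketch of this fitting step, using SciPy's smoothing B-spline as a stand-in for the CAD tool's spline fit (the profile points below are made up for illustration):

```python
"""Approximating digitized profile points with a smoothed cubic B-spline."""
import numpy as np
from scipy import interpolate

# hypothetical (x, y) points picked on the raster image of the profile
x = np.array([0.0, 5.0, 10.0, 14.0, 16.0, 17.0, 18.0, 20.0])
y = np.array([0.0, 2.0,  6.0, 12.0, 20.0, 30.0, 42.0, 60.0])

# s controls the fairing: larger s -> fewer spline pieces, smoother profile
tck, u = interpolate.splprep([x, y], s=1.0, k=3)
xs, ys = interpolate.splev(np.linspace(0, 1, 200), tck)   # densely sampled curve
print(len(xs), xs[0], ys[-1])
```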

2.3. Geometric Modeling
The vectorized profiles of the virtual database are shown in Appendix B. Starting from the 2D profiles it is possible to realize the 3D models of the vases by means of simple operations and features in the CAD environment. The virtual reconstruction of the complete database can allow the setting up of a virtual museum available on the web. The first step for the virtual reconstruction of the whole vase is the definition of the symmetry axis for the revolve of the profile. Then, the profile is drawn. Figure 7a shows the result of the image acquisition from the virtual catalogue, whereas Figure 7b points out the vectorized profile and the symmetry axis used as the main reference.

Fig. 7. (a) Raster image of a vase; (b) vectorized profile (extrados and intrados) and symmetry axis

Fig. 8. 3D CAD solid model of the full vase


In Figure 8 the solid (CAD) model of the full vase, obtained by means of a simple revolve feature, is depicted. So, considering the internal or external profile or both, it is possible to get the corresponding surface


of the vase. Otherwise, starting from the full (closed) profile that represents the section of the vase, it is possible to realize the 3D CAD reproduction of the vase by means of the solid modeling tools. Moreover, this allows the CAD system to provide detailed information about the volume, the mass and the other main mechanical characteristics of the final model.
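A minimal sketch of the revolve operation, generating a 3D point cloud by sweeping an assumed 2D profile (x along the symmetry axis, r the radius) around the axis:

```python
"""Surface of revolution as a point cloud; profile values are illustrative only."""
import numpy as np

profile_x = np.linspace(0.0, 100.0, 50)            # mm along the axis (assumed)
profile_r = 30.0 + 10.0 * np.sin(profile_x / 20)   # mm radius (assumed shape)

theta = np.linspace(0.0, 2 * np.pi, 72, endpoint=False)   # 5-degree angular step
# every profile point is rotated to every angular position
X = np.repeat(profile_x, theta.size)
R = np.repeat(profile_r, theta.size)
T = np.tile(theta, profile_x.size)
cloud = np.column_stack([R * np.cos(T), R * np.sin(T), X])  # (N, 3) points
print(cloud.shape)   # (3600, 3)
```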

3. Planning of the Characteristics of Virtual Fragments

To identify the characteristics of the fragments to be simulated, the DOE (Design of Experiments) techniques were employed. The advantage of this approach is widely documented [18-21]. Experimentation using experimental design techniques is articulated in seven basic steps: 1. definition of the problem; 2. choice of factors and levels; 3. choice of the response variable; 4. choice of the experimental plan; 5. conduct of experiments; 6. data analysis; 7. conclusions and planning of following experimental phases.

3.1. Problem Definition The objective of the planning of the tests is the creation of simulated case studies, which allow the verification of the goodness of recognition and classification algorithms managing to define: – the characteristics (or factors) that influence recognition and classification; – the essential advantages of recognition and classification using 3D models compared to traditional manual techniques. Then, knowing from which archetype the fragment analysed was extracted, it is possible to evaluate, for example, the percentage of correct/incorrect classification of the fragments.

3.2. Choice of Factors and Levels
The characteristics of the fragments taken into consideration are: A. size of the fragment, B. position on the vase, C. orientation on the vase, D. aspect ratio, E. production error. These characteristics are identified as the factors of the experimentation and for each of them the values to be tested, called levels, are chosen (Table 1). The size of the fragment is chosen equal to 0.5% or 2% of the whole vase in order to obtain fragments that are small and not immediately “speaking”. As a consequence, starting from the total number of points that constitute a complete scan of good quality of the vase, the number of points relative to each of the two levels is obtained. The position on the vase is defined on the basis of the area of the profile with a small or large radius of curvature. With the choice of factors, the



presence of particular geometric features that would make the recognition univocal (e.g. the foot or the upper edge) was excluded. The orientation on the vase defines the arrangement of the long side of the fragment along a meridian or a parallel. The aspect ratio is the theoretical ratio between base and height, having chosen rectangular fragments, and can assume values close to one or two times the golden ratio. With regard to the choice of the rectangular shape of the point clouds, the simplifying hypothesis is made of considering the maximum rectangle inscribed in the real geometry with an irregular contour. This assumption is generally made to verify the reconstruction algorithms. The production error is related to the deviation of the real geometry from the reference archetype of the vase. In our hypothesis the reference archetype (that is, the nominal geometry) is reported in [1]. Since data on the variability of production of black-paint Paestum vases in the reference period are not available, two natural tolerance values of 0.5 mm and 2 mm are assumed.


Tab. 1. Choice of factors and levels for fragment generation

Factor | Level 0 | Level 1
A – Fragment size (percentage of vase) | 0.5% | 2%
B – Position on the vase | High curvature area | Small curvature area
C – Orientation on the vase | Along a meridian | Along a parallel
D – Aspect ratio | ~1.6 | ~3.2
E – Production error | 0.5 mm | 2 mm

Tab. 2. Fractional factorial plan 2^(5−2) (resolution III)

Experiment | A | B | C | D | E
1 | 0 | 0 | 0 | 0 | 0
2 | 0 | 1 | 1 | 0 | 0
3 | 0 | 0 | 0 | 1 | 1
4 | 0 | 1 | 1 | 1 | 1
5 | 1 | 0 | 1 | 0 | 1
6 | 1 | 0 | 1 | 1 | 0
7 | 1 | 1 | 0 | 0 | 1
8 | 1 | 1 | 0 | 1 | 0

3.3. Choice of the Response Variable
The “phenomenon” [19] under study is the ability of an algorithm to correctly assign to a class of vases a fragment reconstructed starting from a cloud of points acquired through a laser digitizer. Since the information contained in the fragment is incomplete, the possible responses studied concern the adequacy of the reconstruction of the fragment geometry and the correctness of the recognition of the archetype.

Fig. 9. From the fragment to the archetype through automated techniques

3.4. Choice of the Experimental Plan
For the choice of the experimental plan, a fractional factorial plan 2^(5−2) was used to reduce the number of experiments from 32 to 8 (Table 2). The plan is of resolution III [17]. In this way 8 experiments are planned for each of the chosen vases (Figure 10) among the 100 that make up the virtual catalogue, defining the 40 fragments proposed as case studies.

7224a1

Standard

2685b1

2122a1

1531e1

Fig. 10. Vases chosen as case studies

For example, by applying the fractional plan to the vase named “Standard”, the characteristic levels of each of the eight experiments are obtained, and hence the consequent selection of the fragments. The profile of this vase is made up of 170 points, one of which belongs to the axis of revolution, for a total of 60841 points for the whole vase. In this case, the percentage value of the dimension (0.5% or 2%) gives the theoretical number of points of each fragment: 304 or 1217 points. The level of the Aspect Ratio (theoretical B/H ratio close to one or two times the golden ratio) defines the lengths of the two sides. The other factors are univocally defined, except for the position, whose final definition is left to the investigator. Figure 11 shows the eight fragments corresponding to the experimental plan shown in Table 3 for the case study of the standard vase.




Tab. 3. Experiments corresponding to the eight fragments

Exp. | Dimension (% of vase) | Position (radius) | Orientation on the vase | Aspect Ratio (B/H) | Error
1 | 0.5% | High | Merid. | 1.618 | 0.05
2 | 0.5% | Small | Paral. | 1.618 | 0.05
3 | 0.5% | High | Merid. | 2 × 1.618 | 2
4 | 0.5% | Small | Paral. | 2 × 1.618 | 2
5 | 2% | High | Paral. | 1.618 | 2
6 | 2% | High | Paral. | 2 × 1.618 | 0.05
7 | 2% | Small | Merid. | 1.618 | 2
8 | 2% | Small | Merid. | 2 × 1.618 | 0.05

Fig. 11. Fragments of the standard vase

4. Simulation of the Shape Error
One of the factors considered in the simulation of fragments using DOE techniques is the shape error inevitably present in real products. To the shape error due to production, the measurement error caused by the survey technique used is added. The use of laser digitizers makes the acquisition error negligible compared to that of production. The technique of producing small series of artisanal vases on the lathe requires the realization of the desired shape by manually imposing the profile in the radial direction (y), moving along the vertical direction (x). The simplified hypotheses made at the base of the simulation of this type of error are: 1. generation of the error in the radial direction, in the diametric plane xy; 2. constancy of the error along the circumference on the surface of the fragment.

4.1. Nominal Profile and Simulated Profile
Indicate with yn(x) the curve that explicitly represents the nominal profile of the vase and lies in the xy plane. Then, under the simplified hypotheses referred to in the previous paragraph, the simulated profile affected by error, ys(x), can be defined as follows:

ys(x) = yn(x) + ε(x),  with x ∈ Dx    (1)

where ε(x) represents a stationary normal process with variance σε². It is known that, conventionally, the variance σε² is linked to the natural tolerance of production by the relation:

Tn = 6σε    (2)

from which, for example, for Tn = 2 mm a value of σε equal to 0.33 mm is obtained. The hypothesis of independence between the sections of the process cannot be formulated, due to the production technology. Therefore, in general, it is possible to derive the joint probability density function of any two sections ε(x) and ε(x + Δx), distant Δx from each other, as follows [20]:

f(x, x + Δx) = 1 / (2π Kεε(0) √(1 − ρ0²)) · exp[ −(x² − 2ρ0 x(x + Δx) + (x + Δx)²) / (2(1 − ρ0²) Kεε(0)) ]

where ρ0 = Kεε(Δx)/Kεε(0), with Kεε(Δx) = E{ε(x)·ε(x + Δx)}.

4.2. Procedure for Generating Instances of Simulated Profiles
In the study carried out, the continuous curve of the profile, obtained by means of vectorization, was discretized into a finite set of points identified through their x and y coordinates. In this way the acquisition process using laser digitizers, which in fact provides a cloud composed of a finite number of points (proportional to the resolution), was simulated. Under the hypotheses that:
a. the nominal profile is discretized into n + 1 points yn(i), equi-spaced along the x axis (so as to divide the interval Dx into n intervals of length Δx);
b. the error ε(x) is represented by the simplest autoregressive model, AR(1) [21]:

εi = ρ εi−1 + vi,  i = 1, …, n    (3)

where ρ is the coefficient of autocorrelation (−1 ≤ ρ ≤ 1) and vi is the so-called white noise or pure error;
c. the pure error vi is a Normal random variable with mean 0 and variance σv² equal to:

σv² = (1 − ρ²) σε²    (4)

the points of the simulated profile ys(i) are obtained starting from (1):

ys(i) = yn(i) + εi,  i = 1, …, n,  with ys(0) = yn(0) + v0    (5)



For the considered model, the error at each point of the profile is given by the sum of two terms: the first directly proportional to the error at the previous point and the other purely random.
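A minimal sketch of this error-generation procedure (equations (3)-(5)), using the parameters of the straight-line example discussed below (Tn = 0.5 mm, ρ = 0.95, five replications obtained by changing the pseudo-random seed):

```python
"""AR(1) shape-error generation on a straight 10 mm test profile (dx = 1 mm)."""
import numpy as np

Tn, rho = 0.5, 0.95
sigma_eps = Tn / 6.0                                # eq. (2): natural tolerance
sigma_v = np.sqrt(1.0 - rho**2) * sigma_eps         # eq. (4): ~0.026 mm

yn = np.zeros(11)                                   # nominal straight profile

def simulated_profile(seed):
    rng = np.random.default_rng(seed)
    v = rng.normal(0.0, sigma_v, size=yn.size)      # pure error (white noise)
    eps = np.empty_like(yn)
    eps[0] = v[0]                                   # ys(0) = yn(0) + v0
    for i in range(1, yn.size):
        eps[i] = rho * eps[i - 1] + v[i]            # eq. (3): AR(1) error
    return yn + eps                                 # eq. (5): simulated profile

replications = [simulated_profile(seed) for seed in range(5)]
print(np.round(replications[0], 4))
```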

Fig. 12. Instances of the simulated profiles obtained by means of the Monte Carlo method


To generate the pure error vi values, the Monte Carlo method was used, through which it is possible to simulate a random experiment on the computer, generating the results in a pseudo-random manner [19]. The result of the extraction of each set of pseudo-random numbers depends on the seed that is set. By varying the seed, it is possible to generate a predetermined number of replications of the same fragment, simulating its extraction from vases which, although belonging to the same type/class, differ from each other due to the production error alone. In this way, by fixing a sufficient number of replications, it is possible to evaluate the robustness of the classification algorithms in identifying a nominal profile based on detected data that deviates from it (depending on the supposed error). The generation of the shape error, thanks to the extraction of pseudo-random numbers, provides, by means of (3) and (5), five different replications of the profile. If, for example, this method is applied to a rectilinear profile of 10 mm, divided into 10 parts with a pitch of Δx = 1 mm, the points yn(i) = 0 with i = 0, …, 10 are obtained. Figure 12 shows the five replications of simulated profiles ys(i), i = 0, …, 10, obtained under the hypothesis that the natural tolerance assumes the value 0.5 mm (corresponding to level 0 of the “production error” factor); therefore, from (2) and (4), the standard deviation of the pure error is equal to 0.026 mm, having fixed the autocorrelation coefficient equal to 0.95. In the same way, for the fragments, the procedure for extracting pseudo-random numbers was repeated with five different seeds, obtaining five profiles of the considered fragment corresponding to the same experiment of the factorial plan and, therefore, characterized by the same factor levels.

4.3. Simulation of Virtual Fragments
The phase of acquisition of the geometry of a fragment provides a cloud of points to be processed using the reconstruction and classification algorithms (Figure 13a). In the simulation of fragments, once the characteristics of the test plan are fixed, the point clouds are generated directly or obtained from the solid model or from the surface obtained by revolution (Figure 13b). For the purposes of three-dimensional modeling, the position of the fragment on the vase (arranged along a profile area that has a large or small curvature radius) and the orientation on the vase (with the longer side along a meridian or along a parallel) are of interest: from these elements it is possible to identify the profile of the fragment that must rotate around the axis. Consider, for example, the fragment corresponding to experiment 5 of the test plan (Table 3). It is defined by a cloud of 1217 points. Figure 14a shows the whole profile that defines the shape of the vase. Starting from the points, shown in Figure 14b, extracted from the whole profile, the fragment identified by the factorial plan is reconstructed. By applying the procedure for generating simulated profiles, defined by


equation (5), the profile affected by error is obtained at the points of the nominal profile. In Figure 15 the comparison between the nominal discretized profile and the profile simulated by means of the Monte Carlo method is shown. Starting from the simulated profile, the revolution operation around the axis is then carried out, defining an appropriate angular revolution step (Figure 16). By means of a software tool for surface reconstruction from point clouds, the fragment obtained can be visualized as a polygonal model (Figure 17). The program connects all the points of the cloud through segments and therefore builds triangles, which allows a more realistic image of the fragment to be obtained, with a more or less smooth surface.

Fig. 13. From the fragment to the recognition of the archetype: (a) real case, (b) virtual case

Fig. 14. (a) Discretized profile and (b) detail of the fragment with high curvature radius

Fig. 17. Representation of the simulated fragment by means of the best fitting surface

5. Conclusion

Fig. 15. Comparison between Nominal and Simulated (▪) Profiles

The DOE techniques allow the problem of fragment simulation to be dealt with in a scientific way. The approach makes it possible to assess the robustness of automated recognition and classification algorithms and to set threshold values for fragment characteristics that make the use of Virtual Prototyping advantageous. The algorithm can make the hypotheses related to the Gaussian model of error simulation more complex and can adapt the value of the shape error (i.e. increasing it) according to the data gathered in the archaeological excavation. The study of the results of the first experiments may allow the planning of successive phases of experimentation in a virtual environment, dedicated to the characteristics that are most influential on the reconstruction, recognition and classification of archaeological finds. Finally, the results of the vectorizing phase for the case studies of the Morel Catalogue are shown in Appendix B and are available to academics and students for research purposes on request to the authors.

AUTHORS

Fig. 16. Example of a Point Cloud of a simulated instance


Fabrizio Renno* – Fraunhofer JL Ideas, Dipartimento di Ingegneria Industriale, Università degli Studi di Napoli Federico II, 80125 Naples, Italy, e-mail: fabrizio.renno@unina.it.



Antonio Lanzotti – Fraunhofer JL Ideas, Dipartimento di Ingegneria Industriale, Università degli Studi di Napoli Federico II, 80125 Naples, Italy. Stefano Papa – Fraunhofer JL Ideas, Dipartimento di Ingegneria Industriale, Università degli Studi di Napoli Federico II, 80125 Naples, Italy. *Corresponding author

Acknowledgments The authors thank Dr. Maria Letizia Busiello for the archaeological consultancy and Eng. Giuseppe Sanso for the technical collaboration provided in the development of the simulation code.

REFERENCES

[1] J. P. M. Morel, Céramique campanienne: les formes, École Française de Rome, Palais Farnèse, 1981. [2] C. Neamtu, S. Popescu, D. Popescu, and R. Mateescu, “Using reverse engineering in archaeology: ceramic pottery reconstruction”, Journal of Automation Mobile Robotics and Intelligent Systems, vol. 6, no. 2, 2012, 55–59. [3] F. Bruno, S. Bruno, G. D. Sensi, M.-L. Luchi, S. Mancuso, and M. Muzzupappa, “From 3D reconstruction to virtual reality: A complete methodology for digital archaeological exhibition”, Journal of Cultural Heritage, vol. 11, no. 1, 2010, 42–49. [4] G. Artese, L. D. Napoli, and S. Artese, “T.O.F. Laser Scanner for the Surveying of Statues: A Test on a Real Case”. In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XL-5-W2, 2013, 67–72 DOI: 10.5194/ isprsarchives-XL-5-W2-67-2013. [5] J. D. Foley, A. van Dam, S. K. Feiner, and J. F. Hughes, Computer Graphics: Principles and Practice, Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 1990. [6] S. Gerbino, D. M. Del Giudice, G. Staiano, A. Lanzotti, and M. Martorelli, “On the influence of scanning factors on the laser scanner-based 3D inspection process”, International Journal of Advanced Manufacturing Technology, vol. 84, no. 9, 2016, 1787–1799 DOI: 10.1007/s00170-015-7830-7. [7] A. Lanzotti, M. Grasso, G. Staiano and M. Martorelli, “The impact of process parameters on mechanical properties of parts fabricated in PLA with an open-source 3-D printer”, Rapid Prototyping Journal, vol. 21, no. 5, 2015, 604–617 DOI: 10.1108/RPJ-09-2014-0135. [8] S. Patalano, A. Lanzotti, D. M. Del Giudice, F. Vitolo, and S. Gerbino, “On the usability assessment of the graphical user interface related to a digital pattern software tool”, International Journal on Interactive Design and Manufacturing, vol. 11, no. 3, 2017, 457–469 DOI: 10.1007 /s12008-015-0287-y.

VOLUME 13,

N° 1

2019

[9] A. Lanzotti, D. M. Del Giudice, A. Lepore, G. Staiano, and M. Martorelli, “On the Geometric Accuracy of RepRap Open-Source Three-Dimensional Printer”, Journal of Mechanical Design, vol. 137, no. 10, 2015 DOI: 10.1115/1.4031298. [10] E. Martelli, et al., “Advancements in DEMO WCLL breeding blanket design and integration”, International Journal of Energy Research, vol. 42, no. 1, 2018, 27–52 DOI: 10.1002/er.3750. [11] G. M. Perri, M. Bräunig, G. Di Gironimo, M. Putz, A. Tarallo, and V. Wittstock, “Numerical modelling and analysis of the influence of an air cooling system on a milling machine in virtual environment”, International Journal of Advanced Manufacturing Technology, vol. 86, no. 5, 2016, 1853–1864 DOI: 10.1007/s00170-015-8322-5. [12] G. Di Gironimo, A. Lanzotti, D. Marzullo, G. Esposito, D. Carfora, and M. Siuko, “Iterative and Participative Axiomatic Design Process in complex mechanical assemblies: case study on fusion engineering”, International Journal on Interactive Design and Manufacturing, vol. 9, no. 4, 2015, 325–338 DOI: 10.1007 /s12008-015-0270-7. [13] Lanzotti A., Carbone F., Di Gironimo G., Papa S., Renno F., Tarallo A., D’Angelo R., On the usability of augmented reality devices for interactive risk assessment, International Journal of Safety and Security Engineering, Volume 8 (2018), Issue 1 DOI: 10.2495/SAFE-V8-N1-132-138. [14] S. Patalano, F. Vitolo, and A. Lanzotti, “A Digital Pattern Approach to 3D CAD Modelling of Automotive Car Door Assembly by Using Directed Graphs”. In: S. Zawiślak and J. Rysiński, eds., Graph-Based Modelling in Engineering, Mechanisms and Machine Science, 175–185. Springer, Cham, 2017 DOI: 10.1007/ 978-3-319-39020-8_13. [15] F. Vitolo, S. Patalano, A. Lanzotti, F. Timpone, and M. De Martino, “Window shape effect in a single bowden power window system”. In: 2017 IEEE International Systems Engineering Symposium (ISSE), 2017, 1–5 DOI: 10.1109/SysEng.2017.8088308. [16] F. Renno and S. Papa, “Direct Modeling Approach to Improve Virtual Prototyping and FEM Analyses of Bicycle Frames”, Engineering Letters, vol. 23, no. 4, 2015, 333–341. [17] Crescenzi F. et al.,”Vessel and In-Vessel Components Design Upgrade of the FAST Machine”, Fusion Engineering and Design 88 (9-10), pp. 2048-2051, 2013. [18] G. E. P. Box, W. G. Hunter, and J. S. Hunter, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, John Wiley & Sons: New York, 1978. [19] P. Erto, La Qualità Totale, CUEN: Napoli, 1995. [20] P. Erto, Probabilità e statistica per le scienze e l’ingegneria, 2nd ed., McGraw-Hill: Milano, 2004. Articles




[21] U.N. Bhat, Elements of Applied Stochastic Processes, John Wiley & Sons: New York, 1984.




APPENDIX A
Virtual Catalogue of archetypes¹ (source: Morel 1981)

1531e1

2122a1

2421a1

2421b1

2421c1

2421d1

2422a1

2423a1

2423b1

2423c1

2423d1

2424a1

2424b1

2424c1

2424d1

2424e1

2431a1

2432a1

2433a1

2433b1

2433c1

2433d1

2433e1

2433f1

¹ The Archetypes are not shown with the same scale of representation for editorial needs.

Articles

55


Journal of Automation, Mobile Robotics and Intelligent Systems

56

Articles

VOLUME 13,

2433g1

2433h1

2434a1

2435a1

2435b1

2435c1

2435d1

2436a1

2437a1

2437b1

2437c1

2437d1

2441a1

2441b1

2441c1

2442a1

2442b1

2525c1

2685b1

2911a1

2911b1

2912a1

2913a1

2913b1




2913c1

2914a1

2914b1

2914c1

4271a1

4272a1

4272a2

4272a3

4311a1

4311a2

4311b1

4311b2

4311c1

4312a1

4313a1

4313a2

4313b1

4313b2

N° 1

2019

Articles

57



5341a1

5342a1

5343a1

5343b1

5343c1

5411a1

5411b1

5411d1

5412a1

5413b1

5413c1

5414c1

5415a1

5415b1

5416a1




5416b1

5416c1

5416d1

5416e1

5416f1

5417a1

5417b1

5418b1

5418c1

5419b1

6521a1

6521b1

6522a1

6522b1

6523a1

N° 1

2019

Articles

59



6531a1


6531b1

7224a1

APPENDIX B Virtual Catalogue of the vectorized profiles


1531e1

2122a1

2421a1

2421b1

2421c1

2421d1

2422a1

2423a1

2423b1

2423c1

2423d1

2424a1

2424b1

2424c1

2424d1




2424e1

2431a1

2432a1

2433a1

2433b1

2433c1

2433d1

2433e1

2433f1

2433g1

2433h1

2434a1

2435a1

2435b1

2435c1

2435d1

2436a1

2437a1

2437b1

2437c1

2437d1

2441a1

2441b1

2441c1

2442a1

2442b1

2525a1




2525c1

2685b1

2911a1

2911b1

2912a1

2913a1

2913b1

2913c1

2914a1

2914b1

2914c1

4271a1

4272a1

4272a2

4272a3

4311a1

4311a2

4311b1

4311b2

4311c1

4312a1

4313a1

4313a2

4313b1




4313b2

5341a1

5342a1

5343a1

5343b1

5343c1

5411a1

5411b1

5411d1

5412a1

5413b1

5413c1

5414c1

5415a1

5415b1

5416a1

5416b1

5416c1

N° 1

2019

Articles

63



5416d1

5416e1

5416f1

5417a1

5417b1

5418b1

5418c1

5419b1

6521b1

6522a1

6522b1

6523a1

6531a1

6531b1

7224a1




One DOF Robot Manipulator Control Through Type-2 Fuzzy Robust Adaptive Controller
Submitted: 17th November 2017; accepted: 3rd March 2019

Amir Naderolasli, Abbas Chatraei

DOI: 10.14313/JAMRIS_1-2019/7

Abstract: In this article, the one DOF robot manipulator control is assessed through a second type robust fuzzy-adaptive controller. The objective is to obtain a tracking path with appropriate accuracy. The stability of the closed-loop system is verified through Lyapunov stability theory and the efficiency of tracking is analyzed subject to the constraints and uncertainty. In order to design the fuzzy controller, a set of if-then fuzzy rules is considered which describes the system input-output behavior. Simulation and the results of the experiments on the one DOF robot indicate the effectiveness of the proposed methods.

Keywords: Robot manipulator, Lyapunov function, One-DOF robot, Adaptive-fuzzy controller, Robust

1. Introduction
The robot manipulator control system has made advances in various areas like physical health, welding services, soldering, packing and working in hard environments. In order to apply the robots in an appropriate manner, their respective equations must be obtained, which may lead to more appropriate control. In this control system, the effects due to uncertainty and external disturbance must be considered in the controller design. Most manipulator systems are fixed on a platform. The most essential challenge in this context is the estimation of the robot manipulator parameters. Due to the high volume of the computations and their complexity, controlling robots is a difficult task. Flexibility in joints is a factor leading to the complexity of the robotic systems. Most of the robot control methods are based on the robot joints’ voltage control strategy; hence, these control methods are complex. In order to overcome this challenge, the robot motor torque control strategy is applied. Different structures of adaptive controllers are utilized to control the one DOF robot manipulator due to the existing parameter uncertainty. In [5], an adaptive control of a mechanical manipulator with parametric uncertainty and displacement constraints is suggested. Here, control and acceleration constraints are combined and converted into common input constraints. In [12], an adaptive-neural control is developed to obtain tracking when the issue of displacement constraints is addressed by considering the normal input saturation effect. The boundedness of all closed-loop signals is shown through Lyapunov stability analysis. In [13], a combining adaptive-fuzzy output feedback control design approach is proposed for a specific class of non-

linear multi-input multi-output systems with variable delays and an input saturation constraint. This proposed control scheme does not need the extraction of the dynamic equation of the robot. In [8], robust-adaptive control rules are proposed in the intended schemes to reduce the effect of the error caused by approximation. Thus, an adaptive impedance control for an n-link robot manipulator with input saturation is developed through neural networks. An adaptive fuzzy logic is designed to estimate the sliding mode control parameters in order to prevent the chattering effect [3]. Real-virtual control inputs are inferred by solving a set of dynamic equations. In [2, 18], an adaptive backstepping control is applied to an n-degree-of-freedom robotic manipulator. This suggested controller promises system stability against nonlinearities and uncertainties. Thus, an adaptive backstepping controller is suggested based on a least-squares support vector machine for accurate positioning of a robot manipulator, which can compensate for the static effect and the uncertainty present in the dynamics [9]. An essential capability is the universal approximation of the fuzzy system. In order to approximate the uncertainty of the system, a radial-basis saturated input is applied through the design of an auxiliary system. In [4], controlling the fuzzy logic as a design viewpoint is developed as a model-free control. A new robust control is suggested for a mechanical manipulator through an adaptive uncertainty estimator based on a Taylor first-order series. In this case, the controller does not need a constraint function. One of the drawbacks of this method could be the inappropriate tracking in the presence of external disturbances. Due to specific features like time-varying behaviour, the cross-coupling effect and the high order of robot models, control is a complex issue in the given systems. Intelligent methods like combined fuzzy methods are an appropriate choice for these systems. In [6], a fuzzy sliding mode with a nonlinear observer for the robot manipulator is suggested, which leads to an accurate tracking path in the presence of disturbances. It is obvious that not all fuzzy systems are applicable in fuzzy controllers; this depends on the capability and efficiency of the fuzzy system. An appropriate observer is provided to estimate the accurate disturbances through a recursive algorithm. Thus, a modified fuzzy algorithm with an extra integral agent is introduced to eliminate the static error state in the manipulator [17]. Also, this approach using supervisory fuzzy control is utilized to


In [14], a sliding mode controller is designed for robot systems through wavelet networks, with training algorithms that adjust the network parameters (weights) online. In [7], a neural-network tracking controller is suggested for a manipulator with input dead-zone and output constraints. In [11], a robust-adaptive position and force control scheme is proposed for an n-link robot manipulator. In [10], fuzzy switching is applied to multiple model adaptive control of a manipulator robot. In [15], a sliding mode control with a robust control scheme is implemented for the joint angular position control of a 6-DOF PUMA robot. In [1], a fractional-order controller with a fuzzy type-2 compensator is utilized for a two-DOF robot manipulator. Moreover, an adaptive fractional-order sliding mode controller is applied to a two-DOF gimbal system [16]. An adaptive neural network control is applied to robot manipulators with nonlinear uncertainties and environment disturbances [22] and [23]. In [20], a fuzzy neural network control based on a dynamical model is employed for a single-link flexible robot manipulator. In [21], a robot control scheme is implemented to identify the unknown kinematic and dynamic parameters of the robot manipulator with a finite-time convergence rate. The remainder of this paper is organized as follows. Section 2 describes the dynamics of the one-DOF robot manipulator. The proposed robust second-type fuzzy-adaptive controller is presented in Section 3. The performance of the proposed robust second-type fuzzy-adaptive controller is demonstrated by the simulations given in Section 4. Finally, Section 5 concludes the paper.

2. One-DOF Robot Manipulator Dynamics

Robotic systems are by nature highly nonlinear, uncertain, coupled, multi-input multi-output dynamic systems. In order to control such systems, the isolation (independent joint) control method is adopted, which converts a multi-input multi-output system into single input-single output ones. This control method is used in many modern robots because it simplifies the computations and reduces software costs. In order to control the robot, each joint must be controlled separately. In this method, all the effects caused by cross-coupling are treated as uncertainty, which is compensated by the controllers. The simplified schematic of the one-DOF robot manipulator is illustrated in Figure 1.

Fig. 1. Schematic of one DOF robot manipulator


The state space representation of the one-DOF robot manipulator together with the actuator dynamics is as follows:

\dot{x}_1 = x_2, \quad \dot{x}_2 = x_3, \quad \dot{x}_3 = x_4,
\dot{x}_4 = -\Big(\frac{K}{I} + \frac{K}{J} + \frac{Mgl}{I}\cos x_1\Big)x_3 + \frac{Mgl}{I}\Big(x_2^2 - \frac{K}{J}\Big)\sin x_1 + \frac{K}{IJ}\,u   (1)

where x1, x2 and x3 are the position, velocity and acceleration of the robot manipulator, respectively, x4 is the motor current and u is the controlling voltage. M is the body mass, l is the length of the robot arm, g is the gravity constant, and I and J are the moments of inertia. Regarding these equations, this fourth-order state space model can be written in terms of a state vector and output as follows:

y^{(4)}(t) = f(x) + bu, \qquad X = [x_1, x_2, x_3, x_4]^T, \qquad y = x_1   (2)

where f(x) and b are an unknown function and parameter, respectively.
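For readers who want to reproduce the open-loop behaviour, the following short Python sketch integrates the state-space model (1) numerically. It only illustrates the equations above as reconstructed here; the stiffness K and the input u(t) are illustrative placeholders (assumptions), not values taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# I, J, M, g, l follow the values listed in Section 4;
# K and the input voltage u(t) are illustrative assumptions.
I, J, M, g, l, K = 0.01, 0.05, 1.0, 9.8, 1.0, 1.0

def u(t):
    return 0.1 * np.sin(t)          # placeholder control input

def dynamics(t, x):
    x1, x2, x3, x4 = x
    x4_dot = (-(K / I + K / J + (M * g * l / I) * np.cos(x1)) * x3
              + (M * g * l / I) * (x2 ** 2 - K / J) * np.sin(x1)
              + (K / (I * J)) * u(t))
    return [x2, x3, x4, x4_dot]

sol = solve_ivp(dynamics, (0.0, 5.0), [0.1, 0.0, 0.0, 0.0], max_step=1e-3)
print(sol.y[0, -1])                 # position x1 at t = 5 s
```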

3. Second-Type Adaptive Fuzzy Controller Design

Fuzzy logic systems are applied to approximate the unknown nonlinear functions in the system. Hence, a smooth function is applied for the approximation of the input saturation and an adaptive-fuzzy observer is designed to solve the problem of unmeasured states. By applying the adaptive fuzzy dynamic level control technique and the previous error between the observer model and the parallel series-estimation model, a new fuzzy control with adaptive rules for combining parameters is adopted based on a Lyapunov approach. Fuzzy controllers are nonlinear controllers designed according to fuzzy logic. Model-based fuzzy control guarantees closed-loop stability with the least computation and allows the robustness and efficiency of the closed-loop system to be analyzed. Robot performance in a structured environment is subject to constraints such as acceleration and position limits. In order to reduce the number of fuzzy rules, the dynamic features of the robot and an analysis of the uncertainty function are taken into account. The membership functions of a type-1 fuzzy system are selected according to human knowledge. In a type-2 fuzzy system, the membership degree is itself a fuzzy quantity, which gives more capability and flexibility compared to a type-1 fuzzy system. A type-2 fuzzy set is, in general, described as follows:

\tilde{A} = \int_{x \in X}\Big[\int_{u \in U_x} f_x(u)/u\Big]\Big/ x   (3)

where Ã is a type-2 fuzzy set, U_x is the set of membership degrees (the primary membership of x), x is the primary variable, u is the secondary variable and f_x(u) is the secondary membership function. A



fuzzy logic system consists of a fuzzy rule base, a fuzzy inference engine and a defuzzification mechanism. Type-2 fuzzy rules can be written as follows, where x is the input vector, µ̃_j^i (j = 1, ..., n) are type-2 fuzzy sets and y^i is the center of a type-2 membership function.

if x_1 is \tilde{\mu}_1^i, ..., x_n is \tilde{\mu}_n^i then y is y^i,  (i = 1, ..., m)   (4)

A type-2 fuzzy system structure is similar to a type-1 fuzzy system, but it has one extra order-reduction block. This block is a modified defuzzification of the type-1 case which converts the type-2 fuzzy output of the inference engine into a type-1 fuzzy set:

y(x) = [y_l, y_r] = \bigcup_{f^i \in F^i(x)} \frac{\sum_{i=1}^{M} f^i y^i}{\sum_{i=1}^{M} f^i}   (5)

where y_r and y_l are the output values for the right and left functions, respectively, and \underline{f}^i(x) and \bar{f}^i(x) are the lower and upper limits of the output (firing) function, defined as follows [20]:

\underline{f}^i(x) = \underline{\mu}_{\tilde{x}_1^i}(x_1) \times \cdots \times \underline{\mu}_{\tilde{x}_p^i}(x_p)   (6)

\bar{f}^i(x) = \bar{\mu}_{\tilde{x}_1^i}(x_1) \times \cdots \times \bar{\mu}_{\tilde{x}_p^i}(x_p)   (7)

where \bar{f}(x) and \underline{f}(x) are the upper and lower limits of the membership function, respectively. y_l and y_r in the above equations are calculated through equations (8) and (9):

y_l = \frac{\sum_{i=1}^{l}\bar{f}^i y_l^i + \sum_{i=l+1}^{M}\underline{f}^i y_l^i}{\sum_{i=1}^{l}\bar{f}^i + \sum_{i=l+1}^{M}\underline{f}^i}   (8)

y_r = \frac{\sum_{i=1}^{r}\underline{f}^i y_r^i + \sum_{i=r+1}^{M}\bar{f}^i y_r^i}{\sum_{i=1}^{r}\underline{f}^i + \sum_{i=r+1}^{M}\bar{f}^i}   (9)

where y_r and y_l are obtained through the Karnik-Mendel approach [12]:

y_r = \sum_{i=1}^{R} \underline{q}_r^i y_r^i + \sum_{i=R+1}^{M} \bar{q}_r^i y_r^i = \theta_r^T \xi_r   (10)

y_l = \sum_{i=1}^{l} \bar{q}_l^i y_l^i + \sum_{i=l+1}^{M} \underline{q}_l^i y_l^i = \theta_l^T \xi_l   (11)

The order-reduction block output is then computed by the defuzzifier block:

y(x) = 0.5\,(y_l + y_r) = 0.5\,(\theta_l^T \xi_l + \theta_r^T \xi_r)   (12)
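As a concrete illustration of equations (6)-(12), the Python sketch below evaluates the lower and upper firing strengths of M interval type-2 rules with Gaussian membership functions and forms the reduced output y = 0.5(y_l + y_r). The switch points l and r are assumed to be known here; in practice they are found with the Karnik-Mendel iterations. All numeric values are illustrative assumptions.

```python
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def it2_output(x, centers, sigma_in, sigma_out, y_bar, l, r):
    """Interval type-2 reduced output for a scalar input x.

    centers:   centers of the primary membership functions (one per rule)
    sigma_in/sigma_out: spreads of the inner (lower) and outer (upper) MFs
    y_bar:     consequent centers y^i
    l, r:      assumed switch points used in eqs. (8)-(9)
    """
    f_low = gauss(x, centers, sigma_in)      # lower firing strengths
    f_up  = gauss(x, centers, sigma_out)     # upper firing strengths
    yl = (np.sum(f_up[:l] * y_bar[:l]) + np.sum(f_low[l:] * y_bar[l:])) / \
         (np.sum(f_up[:l]) + np.sum(f_low[l:]))                     # eq. (8)
    yr = (np.sum(f_low[:r] * y_bar[:r]) + np.sum(f_up[r:] * y_bar[r:])) / \
         (np.sum(f_low[:r]) + np.sum(f_up[r:]))                     # eq. (9)
    return 0.5 * (yl + yr)                                          # eq. (12)

centers = np.linspace(-1.0, 1.0, 5)
print(it2_output(0.2, centers, 0.3, 0.5, np.linspace(-2, 2, 5), l=2, r=3))
```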

Fuzzy logic systems can be utilized to approximate uncertain nonlinear functions. In adaptive-fuzzy control system design, the objective is to design a feedback controller U = u(x|θ) based on the fuzzy system, together with an adaptation rule to adjust the parameter vector θ, so

that the system output (y) tracks the desired output (y_m) while all signals and their time derivatives remain bounded. The block diagram of the second-type fuzzy system is illustrated in Figure 2.

Fig. 2. The block diagram of the second-type fuzzy system

Accordingly, the following feedback control law can be considered:

U = [−f (X) + ym + K T e]

(13)

where y_m is the output of the reference model, K is the parameter vector and e is the error signal. To make the system robust against external disturbances and uncertainties, the H∞ robust term u_c can be added to the control law (13) as follows:

U = [-f(X) + y_m + K^T e + u_c]

(14)

where the robust term u_c is calculated as follows [19]:

u_c = -\frac{1}{r} B^T P e

(15)

where e denotes the output error (e = y - y_m). For a given positive definite matrix Q, there exists a positive definite matrix P which is the solution of the following matrix equation [24]:

A^T P + P A = -Q

(16)
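A minimal numeric illustration of (15)-(16): assuming the error dynamics are written as ė = Ae + B(·), with A the companion matrix built from the feedback gains k_i of (17) and B = [0, …, 0, 1]^T (an assumption consistent with the error equation), P can be obtained with a standard Lyapunov solver and the robust term evaluated directly. The gains follow Section 4; the dimensions and the constant r are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

k = [10.0, 15.0]                       # feedback gains k1, k2 from Section 4
n = len(k)
A = np.zeros((n, n))                   # companion matrix of e'' + k1 e' + k2 e = 0
A[:-1, 1:] = np.eye(n - 1)
A[-1, :] = -np.array(k[::-1])          # last row: [-k2, -k1]
B = np.zeros((n, 1)); B[-1, 0] = 1.0
Q = np.eye(n)

# Solve A^T P + P A = -Q  (eq. (16)).
P = solve_continuous_lyapunov(A.T, -Q)

def robust_term(e, r=0.1):             # eq. (15); r is a design constant (assumed)
    return float(-(1.0 / r) * (B.T @ P @ e))

print(robust_term(np.array([[0.05], [0.0]])))
```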

The error dynamics of the system can be written as follows:

e^{(n)} + k_1 e^{(n-1)} + \cdots + k_n e = 0

(17)

The center-average defuzzified output is obtained using a product inference engine, a singleton fuzzifier and the defuzzifier. Suppose \bar{y}_f^{l_1 \ldots l_n} are free parameters collected in the vector \theta_f \in \mathbb{R}^{\prod_{i=1}^{n} p_i}, so that the approximator can be rewritten as follows. The system knowledge (fuzzy if-then rules) is incorporated into the design of \hat{f}(X|\theta_f^*) through the initial parameters \theta_f(0).

fˆ( X| θ) = θfT ξ(X)

(18)

where the parameters in θ_f are adjusted online; their initial values are regarded as primary. The modification rule for θ_f is designed so that the tracking error e is reduced. By substituting the control law (13) into the system dynamics (1), the closed-loop fuzzy control system is obtained as follows:

\dot{e} = -K^T e + [\hat{f}(X|\theta_f) - f(X)]\,u_I

(19)


The block diagram of the adaptive-fuzzy control is illustrated in Figure 3. In this block diagram, y_m, u_I, e_I, θ_f and θ_f(0) are the reference input, control signal, error signal, estimated parameter vector and initial condition for the parameter estimation, respectively. To guarantee

Also, the design parameters are chosen as k_1 = 10, k_2 = 15 and the desired input is y_r = sin(t). Taking Q in equation (16) as the identity matrix, the matrix P is obtained as:

P = \begin{bmatrix} 1.2 & 2.6 & 0.7 & 0.9 \\ 2.2 & 0.1 & 0.2 & 4.3 \\ 3.5 & 9.7 & 1.3 & 5.6 \\ 1.9 & 0 & 2.1 & 22 \end{bmatrix}   (24)

To evaluate and compare the type-2 adaptive-fuzzy and the robust type-2 adaptive-fuzzy controllers, the following error performance index is considered:

IP = \int e^2\, dt   (25)

In the designed robot control system, the state responses to the input are shown in Figures 4-7.

Fig. 3. Adaptive fuzzy control block diagram

the system stability and derive the adaptation laws, the following Lyapunov function candidate is considered:

V = \frac{1}{2} e^T P e + \frac{1}{2\gamma}(\theta_f - \theta_f^*)^T(\theta_f - \theta_f^*)   (20)

where γ is a positive constant and P is a positive definite matrix. The time derivative of V is

Fig. 4. Trajectory of x1

\dot{V} = -\frac{1}{2} e^T P e + \frac{1}{\gamma}(\theta_f - \theta_f^*)^T\big[\dot{\theta}_f + \gamma\, e^T P b\, \xi(x)\big]   (21)

In order to minimize the tracking error e and the parameter error θ_f - θ_f^*, the adaptation rule must be selected so that V̇ is negative definite. Since -(1/2)e^T P e is negative, the fuzzy system may be selected so that the approximation error is minimized. The closed-loop system is obtained by applying the control law; thus, with an appropriate choice of parameters, the error signal tends to zero (e(t) → 0 as t → ∞) and the system output converges to the desired output asymptotically. An appropriate adaptation rule is therefore provided as follows:

\dot{\theta}_f = -\gamma\, e^T P b\, \xi(x)

Fig. 5. Trajectory of x2

(22)

The convergence speed of the adaptation (γ) is determined through the fuzzy control rules. The proposed approach can be applied to linear or nonlinear models and is robust to uncertainties and disturbances.
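The adaptation mechanism of (18) and (22) can be summarised in a few lines of Python: a regressor ξ(x) is built from normalized rule firing strengths, f̂ = θ_f^T ξ(x) approximates the unknown dynamics, and θ_f is integrated with the gradient-type law θ̇_f = -γ e^T P b ξ(x). The sketch below uses simple type-1 Gaussian memberships and an Euler step; it is an outline of the update under assumed values of P, b, γ and the rule centers, not the authors' full implementation.

```python
import numpy as np

centers = np.linspace(-1.0, 1.0, 7)          # rule centers (assumed)
sigma = 0.4

def xi(x):
    """Fuzzy basis functions: normalized Gaussian firing strengths."""
    w = np.exp(-0.5 * ((x - centers) / sigma) ** 2)
    return w / np.sum(w)

def f_hat(x, theta):
    return float(theta @ xi(x))              # eq. (18): f_hat(x|θf) = θf^T ξ(x)

def adapt(theta, e, x, P, b, gamma=5.0, dt=1e-3):
    """One Euler step of the adaptation law (22): dθf/dt = -γ e^T P b ξ(x)."""
    theta_dot = -gamma * float(e.T @ P @ b) * xi(x)
    return theta + dt * theta_dot

# Example usage with illustrative values for P, b and the tracking error e.
theta = np.zeros_like(centers)
P = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([[0.0], [1.0]])
e = np.array([[0.05], [0.01]])
theta = adapt(theta, e, x=0.2, P=P, b=b)
print(f_hat(0.2, theta))
```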

4. Simulation Results

After designing the controller, the control system is implemented in Matlab-Simulink and the proposed controller is simulated. The system parameters are considered as follows: I = 0.01 kg·m², J = 0.05 kg·m², M = 1 kg, g = 9.8 N/kg, l = 1 m


(23)

Fig. 6. Trajectory of x3

The control signal applied to the system is illustrated in Figure 8. The path tracking error for every state



5. Conclusion

In this paper, a type-2 fuzzy robust adaptive controller for a one-DOF robot manipulator is proposed. The proposed method is designed and simulated on the robot manipulator. In comparison with previous methods, it is simpler, requires less computation and achieves higher efficiency, while guaranteeing stability and robustness against parameter changes and uncertainty.

AUTHORS

Fig. 7. Trajectory of x4

Amir Naderolasli – Department of Electrical Engineering, Najaf Abad Branch, Islamic Azad University, Isfahan, Iran, e-mail: amir.naderolasli@gmail.com, www: http://researchgate.net/amir_naderolasli/. Abbas Chatraei∗ – Department of Electrical Engineering, Najaf Abad Branch, Islamic Azad University, Isfahan, Iran, e-mail: abbas.chatraei@gmail.com, www: http://research.iaun.ac.ir/pd/abbas.chatraei/. ∗

Corresponding author

Fig. 8. Applied control signal to the system

REFERENCES

[1] A. T. Azar and F. E. Serrano, “Fractional Order Two Degree of Freedom PID Controller for a Robotic Manipulator with a Fuzzy Type-2 Compensator”. In: A. E. Hassanien, M. F. Tolba, K. Shaalan, and A. T. Azar, eds., Proceedings of the International Conference on Advanced Intelligent Systems and Informatics 2018, 2019, 77–88. [2] Byung Kook Yoo and Woon Chul Ham, “Adaptive control of robot manipulator using fuzzy compensator”, IEEE Transactions on Fuzzy Systems, vol. 8, no. 2, 2000, 186–199, 10.1109/91.842152.

Fig. 9. Error signals in proposed control system

is shown in Figure 9. The values of the performance index (25) without (Test 1) and with (Test 2) the robust control term are listed in Table 1.

Tab. 1. Performance index values without (Test 1) and with (Test 2) the robust control term

Parameter | Test 1 | Test 2
IP1       | 0.64   | 0.23
IP2       | 1.53   | 0.95
IP3       | 4.32   | 2.12

Controlling the tracking of robots at high acceleration and accuracy is an essential issue in the realm of control. The fuzzy-adaptive controller addresses this issue by combining the features of fuzzy systems and adaptive controllers. Lyapunov theory is applied as an efficient method in the fuzzy-adaptive controller design.

[3] W. Chang, S. Tong, and Y. Li, “Adaptive fuzzy backstepping output constraint control of flexible manipulator with actuator saturation”, Neural Computing and Applications, vol. 28, no. 1, 2017, 1165–1175, 10.1007/s00521-016-2425-2. [4] M. J. Er and S. Mandal, “A Survey of Adaptive Fuzzy Controllers: Nonlinearities and Classifications”, IEEE Transactions on Fuzzy Systems, vol. 24, no. 5, 2016, 1095–1107, 10.1109/TFUZZ.2015.2501439.

[5] M. M. Fateh and M. Souzanchikashani, “Indirect adaptive fuzzy control for flexible-joint robot manipulators using voltage control strategy”, Journal of Intelligent & Fuzzy Systems, vol. 28, no. 3, 2015, 1451–1459, 10.3233/IFS-141430.

[6] J. He, M. Luo, Q. Zhang, J. Zhao, and L. Xu, “Adaptive Fuzzy Sliding Mode Controller with Nonlinear Observer for Redundant Manipulators Handling Varying External Force”, Journal of Bionic Engineering, vol. 13, no. 4, 2016, 600–611, 10.1016/S1672-6529(16)60331-1.

[7] W. He, A. O. David, Z. Yin, and C. Sun, “Neural Network Control of a Robotic Manipulator




With Input Deadzone and Output Constraint”, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 46, no. 6, 2016, 759–770, 10.1109/TSMC.2015.2466194.

[8] W. He, Y. Dong, and C. Sun, “Adaptive Neural Impedance Control of a Robotic Manipulator with Input Saturation”, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 46, no. 3, 2016, 334–344, 10.1109/TSMC.2015.2429555.

[9] X. Huang, H. Gao, J. Li, R. Mao, and J. Wen, “Adaptive back-stepping tracking control of robot manipulators considering actuator dynamic”. In: 2016 IEEE International Conference on Advanced Intelligent Mechatronics (AIM), 2016, 941–946, 10.1109/AIM.2016.7576890.

[10] B. Kharabian, H. Bolandi, S. M. Smailzadeh, and S. K. Mousavi Mashhadi, “Fuzzy switching for multiple model adaptive control in manipulator robot”, Journal of Automation, Mobile Robotics and Intelligent Systems, vol. 11, no. 1, 2017, 10.14313/JAMRIS_1-2017/7.

[11] C.-H. Lee and W.-C. Wang, “Robust adaptive position and force controller design of robot manipulator using fuzzy neural networks”, Nonlinear Dynamics, vol. 85, no. 1, 2016, 343–354, 10.1007/s11071-016-2689-1. [12] M. Li, Y. Li, S. S. Ge, and T. H. Lee, “Adaptive Control of Robotic Manipulators With Unified Motion Constraints”, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 1, 2017, 184–194, 10.1109/TSMC.2016.2608969.

[13] Y. Li, S. Tong, and T. Li, “Hybrid Fuzzy Adaptive Output Feedback Control Design for Uncertain MIMO Nonlinear Systems With TimeVarying Delays and Input Saturation”, IEEE Transactions on Fuzzy Systems, vol. 24, no. 4, 2016, 841–853, 10.1109/TFUZZ.2015.2486811.

[14] C. Lin, “Nonsingular Terminal Sliding Mode Control of Robot Manipulators Using Fuzzy Wavelet Networks”, IEEE Transactions on Fuzzy Systems, vol. 14, no. 6, 2006, 849–859, 10.1109/TFUZZ.2006.879982. [15] A. Medjebouri and L. Mehennaoui, “Adaptive Neuro-Sliding Mode Control of PUMA 560 Robot Manipulator”, Journal of Automation, Mobile Robotics and Intelligent Systems, vol. 10, no. 4, 2016, 10.14313/JAMRIS_4-2016/27.

[16] A. Naderolasli and M. Tabatabaei, “Stabilization of the Two-Axis Gimbal System Based on an Adaptive Fractional-Order Sliding-Mode Controller”, IETE Journal of Research, vol. 63, no. 1, 2017, 124–133, 10.1080/03772063.2016.1229581.


[17] S. R. Naghibi, A. A. Pirmohamadi, and S. A. A. Moosavian, “Fuzzy MTEJ controller with integrator for control of underactuated manipulators”, Robotics and Computer-Integrated Manufacturing, vol. 48, 2017, 93–101, 10.1016/j.rcim.2017.03.006.


[18] N. Nikdel, M. A. Badamchizadeh, V. Azimirad, and M. A. Nazari, “Adaptive backstepping control for an n-degree of freedom robotic manipulator based on combined state augmentation”, Robotics and Computer-Integrated Manufacturing, vol. 44, 2017, 129–143, 10.1016/j.rcim.2016.08.007.

[19] G. G. Rigatos, “Adaptive fuzzy control of DC motors using state and output feedback”, Electric Power Systems Research, vol. 79, no. 11, 2009, 1579–1592, 10.1016/j.epsr.2009.06.007.

[20] C. Sun, H. Gao, W. He, and Y. Yu, “Fuzzy Neural Network Control of a Flexible Robotic Manipulator Using Assumed Mode Method”, IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 11, 2018, 5214–5227, 10.1109/TNNLS.2017.2743103. [21] C. Yang, Y. Jiang, W. He, J. Na, Z. Li, and B. Xu, “Adaptive Parameter Estimation and Control Design for Robot Manipulators With Finite-Time Convergence”, IEEE Transactions on Industrial Electronics, vol. 65, no. 10, 2018, 8112–8123, 10.1109/TIE.2018.2803773.

[22] S. Zhang, Y. Dong, Y. Ouyang, Z. Yin, and K. Peng, “Adaptive Neural Control for Robotic Manipulators With Output Constraints and Uncertainties”, IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 11, 2018, 5554–5564, 10.1109/TNNLS.2018.2803827. [23] Q. Zhou, S. Zhao, H. Li, R. Lu, and C. Wu, “Adaptive Neural Network Tracking Control for Robotic Manipulators With Dead Zone”, IEEE Transactions on Neural Networks and Learning Systems, 2019, 1–10, 10.1109/TNNLS.2018.2869375.

[24] M. M. Zirkohi, M. M. Fateh, and M. A. Shoorehdeli, “Type-2 Fuzzy Control for a Flexible-joint Robot Using Voltage Control Strategy”, International Journal of Automation and Computing, vol. 10, no. 3, 2013, 242–255, 10.1007/s11633-013-0717-x.



Preface to Special Issue of the Journal of Automation, Mobile Robotics and Intelligent Systems on Recent Advances in Information Technology II

DOI: 10.14313/JAMRIS_1-2019/8 This issue of the Journal of Automation, Mobile Robotics and Intelligent Systems is devoted to selected aspects of current studies in the area of Information Technology - as presented by young talented contributors working in this field of research. This special issue is already the second edition of this series. Among included papers, one can find contributions dealing with the digital audio filter designing problem, building service-oriented systems, neural networks, modelling and classification. The idea of creating this special issue was born as a result of broad and interesting discussions during the Fifth Doctoral Symposium on Recent Advances in Information Technology (DS-RAIT 2018), held in Poznań (Poland) on September 9-12, 2018 as a satellite event of the Federated Conference on Computer Science and Information Systems (FedCSIS 2018). The aim of this meeting was to provide a platform for the exchange of ideas between early-stage researchers in Computer Science (PhD students in particular). Furthermore, the Symposium was to provide all participants an opportunity to obtain feedback on their ideas and explorations from the vastly experienced members of the IT research community who had been invited to chair all DS-RAIT thematic sessions. Therefore, submission of research proposals with limited preliminary results was strongly encouraged. Here, we would like to individually mention the contribution entitled “UAV downwash dynamic texture features for terrain classification on autonomous navigation” written by João Pedro Carvalho, José Manuel Fonseca, and André Damas Mora. This contribution has received the Best Paper Award at DS-RAIT 2018. This issue contains the following DS-RAIT papers in their special, extended versions. The first paper, entitled The Design of Digital Audio Filter System used in Tomatis Method Stimulation, and authored by Krzysztof Jóźwiak, Michał Bujacz, and Aleksandra Królak, is a multidisciplinary research in which the authors merge the Tomatis Method (which is a rehabilitation technique used in psychology) with digital Electronic Ear systems, using an STM32F4 family micro-controller and ADC/DAC integrated circuits. This contribution is reviewing the Tomatis Method, as well as the main functions of the Electronic Ear, and describes the designed system with comparative measurements with the analogue original. In the opinion of editors and reviewers, this work apart from its practical values, might have extraordinary influence on the development of contemporary medicine. Nikita Gerasimov, in his work entitled New approach to typified microservice composition and discovery, addresses the problems related to building service-oriented systems. One can name, among the aforementioned problems, the following issues: lack of static typing, lack of inter-service data type checking, as well as high services connectivity. In this contribution, the Author proposes a strong and static polymorphic type system and a type check algorithm. The latter technique was tested in a real service system, which demonstrated its reliability. The last paper entitled Terrain classification using static and dynamic texture features by UAV downwash effect, was written by the team consisting of João Pedro Carvalho, José Manuel Fonseca, and André Damas Mora. In this paper, the Authors deal with identifying terrain type, in particular, they address this issue in the fields of autonomous navigation, mapping, decision-making and choice of emergency landing sites. 
The range of work and the variety of techniques used to solve this problem are impressive. The authors start by acquiring real data originating from a drone, then analyze the received images and finally perform the classification. In the opinion of the members of the Program Committee of DS-RAIT, this work, because of its multi-aspect nature, excellent presentation and promising results, was awarded the Best Paper of the event. We would like to thank all those who were participating in, and contributing to, the Symposium program, as well as all the authors who have submitted their papers. We also wish to thank all our colleagues, the members of the Program Committee, both for their hard work during the review process and for their cordiality and outstanding local organization of the Conference. Editors: Piotr A. Kowalski Systems Research Institute, Polish Academy of Sciences and Faculty of Physics and Applied Computer Science, AGH University of Science and Technology

Szymon Łukasik Systems Research Institute, Polish Academy of Sciences and Faculty of Physics and Applied Computer Science, AGH University of Science and Technology



The Design of Digital Audio Filter System used in Tomatis Method Stimulation Submitted: 8th October 2018; accepted 7th February 2019

Krzysztof Jóźwiak, Michał Bujacz, Aleksandra Królak

DOI: 10.14313/JAMRIS_1-2019/9 Abstract: The Tomatis Method is a rehabilitation technique used in psychology, the main aim of which is stimulating the cochlea in the inner ear by filtered air-conducted and bone-conducted sounds. The system of electronic filters and amplifiers used for this therapy is called the Electronic Ear. Commonly, it is a commercial analog device that is expensive and after a few years its functionality declines. In this paper, we propose a digital Electronic Ear system using an STM32F4 family micro-controller and ADC/ DAC integrated circuits. The design of the digital sound filters allows to adjust more parameters and overcomes some of the constraints of analog systems. In this paper, we provide a short review of the Tomatis Method, the main functions of the Electronic Ear and we describe the designed system with comparison measurements to the analog original. Keywords: Sound Filtering, Digital Signal Processing, Shelving Filter, FIR Filter, Electronic Ear Stimulator, Tomatis Method

1. Introduction Music therapy has numerous applications in psychology and rehabilitation. It can be used to help patients manage stress, concentration or depression, even if in some cases these methods do not give significantly stronger effects, when they are compared with a control placebo group [1], [2], [3], [4]. The Tomatis Method (TM) has been shown to help people with various psychological disorders (autism, dyslexia, ADD and more) [10], [11], [12], [13], [14], [15], [16], [17], [18], [20] though in some controlled trials the results were also inconclusive [8], [19]. It uses music, but it is not purely music therapy, as it is based on brain and inner ear stimulation by selective sound filtering. The electronic device that implements the filter system for TM is called an Electronic Ear (EE). Most EEs that are commonly used are analog systems, that have their own constraints and high prices. We propose a new, digital Electronic Ear (DEE) system designed for cheaper implementation of TM and its further studies.

2. Tomatis Audio-Psycho-Phonology Alfred Tomatis (1919–2001) was a professor, laryngologist and otorhinolaryngology specialist. He formulated the Tomatis Laws, which cover the theory behind the relationship of hearing and speaking. He observed that problems with singing or speaking in some particular spectrum are connected with difficulties with listening in the same spectrum and proposed that the improvement of vocal and voice range can be provided by opening the hearing ability to a specific frequency spectrum. This principle is called the Tomatis effect. He also emphasized the difference between listening and hearing – hearing is the ability to combine listening to a voice and focusing the attention on it. The main aim of psychological therapies based on Tomatis method was to improve not only the listening, but also the hearing ability. Over the years his ideas were implemented in other forms of sound therapy and a generic Electronic Ear tool was developed. The Electronic Ear is based on the concept of stimulating the basilar membrane (BM) in the cochlea in two ways – by air and bone conduction. Air conduction is based on the typical ear pathway, with sound vibrations picked up by the eardrum, transferred by the ossicles to the oval window of the cochlea. Bone conduction is the perception of vibrations travelling through the bone and stimulating the BM either through pressure changes in the cochlear fluid or vibrations of the cochlear wall [5]. Bone conducted sounds reach the brain faster due to the higher density of the medium; however, large part of the acoustic signal is reflected due to the air-bone impedance mismatch. That’s why bone conduction mostly favors our own voice. The conclusion drawn by Tomatis from this observation was that air conduction is used to “communicate with the outer world” and bone conduction is used to “listen to oneself”. [6], [7]. The next feature that is important for TM is human earedness, i.e. which ear is dominant in the hearing process. Left ear dominance is problematic, because the left ear has the longer path to the left hemisphere. The delay between left and right ear is about 0,4–0,9 ms. People, who are left-eared have more frequent listening problems that can cause various psychological disorders, such as dyslexia [6]. One of the main aims of Tomatis Rehabilitation is to increase the role of the right ear for the people who are left-eared. For this reason there are separate delays and adjustable volume levels for left and right channels.


In the beginning of the Tomatis rehabilitation a patient is tested for his audio attention capabilities by testing the lowest volume level of sound frequency that the patient can detect. The process is similar to a typical audiogram; however, it is performed four times: for air and bone conduction for the left and the right ear. The ideal shape is smooth, without sharp changes and with a peak in the range of 1-3 kHz. The bone conduction line should be exactly 5 dB lower than the air conduction. Fig. 1 shows the flow chart of the EE. It is an audio system, the main function of which is amplifying, gating and filtering sound going to the three output channels – air left, air right and bone conduction. Gating is the process of changing the filter type depending on signal level. Below a certain amplitude level the C1 filter is on and above a certain level the filter is switched to C2 filter. The sudden change gives an impulse that theoretically stimulates the brain. The C1 and C2 filters are shelving filters with 1 kHz cross-band frequency. The boost/attenuation level is adjusted by the specialist administering the TM therapy and can be set from -5 to 5 dB. The EE also contains a second filter – a High Pass filter with an adjustable cut-off frequency. Its supposed function is to imitate the medium of a mother’s womb – the environment, where the hearing system of a fetus was developing. In some cases there is a need to increase or decrease a role of bone conduction in comparison to air conduction. For this reason there is an adjustable delay between the air and bone channel called precession.

Fig. 1. Flow chart of the Electronic Ear (EE). The Gate, depending on the signal level, diverts it to the shelving filters C1 or C2. The HP filter cuts off unpleasant low frequencies. The Delay allows the air and bone conducted sounds to arrive at different times and the Amplifiers control the level of the three outputs
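To make the gating behaviour concrete, here is a small Python sketch of the C1/C2 switching described above: the signal envelope of each block of samples is compared with a gate threshold, and the block is sent through the C1 filter below the threshold and through the C2 filter above it. The filter coefficients, threshold and block size are placeholders, not values from the actual device, and the per-block filtering is deliberately simplified (no filter state is carried across blocks).

```python
import numpy as np
from scipy.signal import lfilter

def gated_filter(x, c1_coeffs, c2_coeffs, gate_level, block=256):
    """Apply C1 below the gate level and C2 above it, block by block."""
    y = np.zeros_like(x, dtype=float)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        env = np.max(np.abs(seg))            # crude envelope estimate
        b = c2_coeffs if env > gate_level else c1_coeffs
        y[start:start + block] = lfilter(b, [1.0], seg)
    return y

# Placeholder FIR coefficients and a test tone that gets louder halfway through.
fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) * (0.2 + 0.8 * (t > 0.5))
y = gated_filter(x, c1_coeffs=np.ones(31) / 31,
                 c2_coeffs=np.r_[1.0, np.zeros(30)], gate_level=0.5)
print(np.max(np.abs(y)))
```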


TM is used for many purposes. One of the popular aims is in therapy of autistic children. According to Tomatis theory autism is connected with problems with listening to the external world – and it is true that in most autistic children bone conduction sensitivity is higher than air conduction. The therapy’s aim is to lower this sensitivity in order to open the child up to communication with others. There were a number of studies carried out to verify the effectiveness of this rehabilitation. One of the most controversial was Brief Report: The Effects of Tomatis Sound Therapy on Language in Children with Autism [8]. It revealed that there was no clear difference between the effects on a control group and an experimental group put under Articles


TM rehabilitation. For response to this article there were two follow-up studies – Sound Therapy: an Ex­ perimental Study with Autistic Children [10] and Re­ sponse to ‘‘Brief Report: The Effects of Tomatis Sound Therapy on Language in Children with Autism [9]. They showed the opposite view and pointed out missteps in the previous research. These two and another one [11] show a significant improvement after the TM rehabilitation measured in the Children’s Autism Rating Scale (CARS) or in Gilliam Autism Rating Scale (GARS). An overview of over 30 TM studies by Gerritsen showed that the majority of studies demonstrated positive effects of TM therapy. Some of the better documented TM treatments include: • reduction of the Attention Deficit Disorder (ADD) by, similarly to autism, increasing the focus on air conducted sounds versus bone conduction [6], [12], [13] • treating dyslexia [6],[13],[20] (that is theoretically connected with left-earedness) by increasing the sensitivity of the right ear • supporting language learning [14], by setting the C1/C2 filters to frequencies most commonly appearing in a given language. TM rehabilitation has also been successfully used for: • Epilepsy – visible effect in more than 50% of tested patients [15] • Cerebral palsy – significant improvement [16] • Stuttering and hoarseness [13] • Emotional problems including depression – high effect, compared with a control group [17] • Music skills improvement – results are divided [18], [19]. Most of tests have shown a positive effect of TM, but there are also those, that did not report any differences when compared to a placebo treatment. We need to consider that negative results can stem from inappropriateness in investigation methods or in the way the EE device was used [9]. This shows that the TM requires further rigorous studies.

3. Electronic Ear Stimulator

In terms of the electronic design main functions of Electronic Ear are: 1. Converting audio signal to digital data (ADC) 2. Pre-processing 3. Gating 4. Filtering 5. Digital-Analog converting (DAC) 6. Amplifying The device can be divided into four main modules: audio input, audio output, micro-controller and power supply. Three voltage levels are needed: • 5V –external power supply. It is used as an input for the step-down converter, LCD power supply and as an analog power supply for the input section. Ferrite beads are used between the analog and digital sections. A transformer power source is used in order to decrease the distortions in the low frequency audio signals.



• 3.3V – it is given by Low Dropout Positive Regulator TC2117 from MicroChip. It is used for digital power supply for µC and for the digital part (serial communication interfaces I2S and I2C) of the ADC/ DAC converters. • 2.5V – it is necessary for the output analog power supply and it is provided by LP2985-N – Low dropout Linear regulator from Texas Instruments. It can provide 150 mA current for the normal work conditions, that is enough for the purpose of this project. The maximum resolution for ADC and DAC converters built into the STM32F4 family controllers is 12-bits. Since this is insufficient for this project, external 24-bit stand-alone integrated circuits are used. For input a PCM4201 from Texas Instruments and for output CS432L22 from Cirrus Logic are used. In both cases communication between the audio converter and µC is provided by an I2S interface, the audio data word length is 24 bits and is Left−Justified in the frame. CS43L22 supports I2S and it is working in the slave mode by being connected to four pins: • SCLK – clock signal for the serial interface • LRCK – clock signal determining which channel is currently sent by SDO line. Its frequency is equal to the sampling frequency. • SDO – audio data digital signal • MCLK – MCLK line is necessary for proper work of CS43L21 since it serves as a clock source for the delta-sigma modulators. The input converter communication protocol is not completely compatible with I2S. PCM4201 has its own transmission interface that has 3 communication ports – BCK (equivalent to SCLK), DATA (equivalent to SDO), FSYNC (equivalent to LRCK) and one system clock port, that is necessary for proper work. The serial interface is very similar to I2S and after a few modifications it can work in high performance mode communicating in master mode. The main difference is that the system clock needs to be 512 times larger than the sampling frequency (256 times larger in I2S mode). To get that result an extra I2S interface is used with a 2 times larger sampling frequency and the MCLK pin connected to the system clock pin in the audio converter. That allows a clean transmission without errors. The STM32F4 microcontroller works in the I2S master mode for the communication with the output DAC converter. Since the input ADC converter has its own protocol, it has to work in the master mode and µC works in the slave mode. For the STM32F401 microcontrollers a frame synchronization error occurs when the I2S interface works in the slave mode. In order to solve this problem, after the serial interface initialization and before communication start the LRCK pin has to be set high internally from the device. CS43L22 has two serial interfaces and except of I2S for data transmission it also has an I2C interface for control. It is needed to initialize the device and set the work conditions. It allows also to regulate all functions of the chip during work, for example – volume, that is continuously sent to the control register in the main loop of program.


The CS43L22 is a DAC converter, but also a headphone amplifier. It allows to prevent from using two independent chips (one for converting, one for amplifying) and minimalizes space on the PCB board. In the project there are two output channels – air and bone channel. In order to support three channels (that are required for the EE) there is used an external adapter that splits the air channel to left and right channels with a build-in balance regulator. The headphone amplifier can provide 88 mW power for 16 Ω output. The air channel lines of the measured headphones have 320 Ω impedance, so the parallel connection impedance is equal to 160 Ω. The bone conduction vibrator has 32 Ω impedance. In those conditions the 150mA power supply used in this project is sufficient and all of the three output channels are supplied. For digital signal processing the STM32F407VGT6 microcontroller from STMicroelectronics is used. It has a CPU frequency of up to 168 MHz, 1 Mbyte of flash memory and 196 Kbytes of static RAM memory. It provides three I2C and two I2S serial communication interfaces. The 144 MHz clock is used to minimize the error of timer frequency to less than the sampling frequency and I2S clock signal frequency (the error is equal to 0.017%). Data is stored in a float buffer. Its size is set in order to provide maximum 500 milliseconds delay, which for 48kHz frequency sample gives (24000 + maximum filter order) size. The SRAM memory has enough space to store all data needed, so the flash memory is not used in this project. User interface gives possibility to adjust the main parameters of the EE – C1, C2 levels value, filters frequencies, precession and volume and the advanced options as gate upper level and four options for filter order. Changing filter order affects its selectivity and gate level includes the C1/C2 filters switching rate. Both parameters determine the intensity of stimulation. Volume control is provided by sending a proper value to the register of amplifier, the same level for air conduction and bone conduction channel. The gain level difference between both channels has to be added in the program function after uploading the ADC value before signal processing. It is necessary, because the gate has to switch these two channels separately depending on bone conduction gain level and precession value.
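The buffer-size claim above is easy to verify: a 500 ms delay at a 48 kHz sampling rate corresponds to 24 000 samples, plus room for the filter history. The snippet below only illustrates that arithmetic together with a simple circular delay line for the precession delay; it is an illustration in Python, not firmware from the device, and the names are assumptions.

```python
FS = 48_000                    # sampling frequency [Hz]
MAX_DELAY_S = 0.5              # maximum precession delay [s]
MAX_FILTER_ORDER = 101         # largest filter order used in the project

buffer_size = int(MAX_DELAY_S * FS) + MAX_FILTER_ORDER
print(buffer_size)             # 24000 + 101 = 24101 samples

class DelayLine:
    """Minimal circular buffer implementing an air/bone precession delay."""
    def __init__(self, delay_samples, size=buffer_size):
        self.buf = [0.0] * size
        self.size = size
        self.delay = delay_samples
        self.idx = 0

    def process(self, sample):
        out = self.buf[(self.idx - self.delay) % self.size]
        self.buf[self.idx] = sample
        self.idx = (self.idx + 1) % self.size
        return out
```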

Fig. 2. Picture of the designed device

Fig. 2 shows the picture of the designed equipment. The prototype of the device is made with a plastic case. It uses an LCD display with an HD44780


driver to communicate with the user. The four buttons and an incremental encoder are used for navigating a menu and adjusting all of the device parameters in an easy way. The device contains four LED diodes to inform which filter is currently used – C1 or C2 filter for air and bone conduction channel. The potentiometer on the left side of the case is used for adjusting the proper pre-amplifier gain. This is an effective way to increase the quality of the sound by making use of all the 24 bits of the ADC converter. In this project two types of filters are required – simple High-Pass & Low-Pass filters and a shelving filter for the C1 and C2 filtering. The analog filters were commonly implemented using the original Electronic Ear. The FIR (Finite Impulse Response) filters were chosen for this project. Digital FIR filters are not able to be derived from analog filters, because the analog ones always give an infinite impulse response. However, the main goal of the project was to design the digital system without analog filters disadvantages instead of simply copying the original. The FIR filters are inherently stable and have a linear phase being designed with a proper method. The Discrete Fourier transform (DFT) is used to perform linear filtering. The filter impulse response is approximated to function sin(x)/x. The filter orders in range of 51 to 101 are used for this project. The designing method with only the odd orders was chosen. Every filter is designed as FIR filter using the window method. The window that was chosen for this purpose is modified Tukey window (tapered cosine window). It is combination of a rectangular window and a Hamming window with an α coefficient that determines what part of the window should be flat. It makes cosine lobe of width α/2 * N (N is filter order) and rectangular window of width (1 − α/2) * N at the center of filter function. With α = 0.8 it gives the best results – sufficient filter selectivity providing expected filter band levels even for low cross-frequency (500 Hz is a minimum used value) and low ripple frequency response. To obtain a C1 & C2 two-level shelving filter it was necessary to use a parallel connection of filters – the multiplying filters coefficients presented in Z transform. Two simple filters are multiplied – one for frequencies below the cross-frequency (LP filter for a left part of characteristic) and one for frequency range above the cross-frequency (HP filter). The cross-frequency is set where the characteristic should reach the 0 dB line. For that reason there is a need to modify one frequency (adjusted by the user), by a coefficient dependent on filter order and C1 & C2 levels. The big challenge was to appropriately design the filter in case when both shelving filter levels (lower and upper than cross-frequency) have the same sign. Normally it should not cross the zero decibels line. But measurements of the original EE show, that with this case it should be artificially modified in order to give 0 dB gain at the cross-frequency point. A cascade connection of two filters with similar cut-off frequency was used, but modified regarding to the purpose. For the positive shelving filter levels there is a need to decrease the gain near the cross-frequency (frequenArticles


cies of the filters diverge) and for negative levels it should be increased (the frequencies converge). The obtained filter is then connected in series (adding the filter coefficients in the Z-transform domain) with the normal shelving filter, which is computed by the usual method.
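The windowed-FIR construction described above can be sketched with SciPy: design low-pass and high-pass prototypes at the cross-frequency with a Tukey (tapered-cosine, α = 0.8) window, scale them by the requested C1/C2 band gains, and combine the two branches by summing their impulse responses. This is a simplified reading of the procedure, not the authors' exact implementation; the 0 dB crossing and the equal-sign special case are handled only implicitly by the gain scaling.

```python
import numpy as np
from scipy.signal import firwin

def shelving_fir(fs, f_cross, low_db, high_db, order=101, alpha=0.8):
    """Two-level shelving FIR: low_db below f_cross, high_db above it.

    Built from complementary LP/HP prototypes designed with a Tukey
    (tapered cosine) window and combined by summing their responses.
    """
    g_low = 10.0 ** (low_db / 20.0)
    g_high = 10.0 ** (high_db / 20.0)
    h_lp = firwin(order, f_cross, window=("tukey", alpha), fs=fs)
    h_hp = firwin(order, f_cross, window=("tukey", alpha), pass_zero=False, fs=fs)
    return g_low * h_lp + g_high * h_hp

# Example: C1-like filter, 1 kHz cross-frequency, -5 dB below / +5 dB above.
h_c1 = shelving_fir(fs=48000, f_cross=1000.0, low_db=-5.0, high_db=+5.0)
print(len(h_c1))
```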

4. Results

In this section the frequency-responses of C1 & C2 shelving filters in the designed DEE are shown and compared with those measured for the original, analog EE device. All measurements were made for input signal frequency in range of 20 Hz to 16 kHz with variable steps depending on the cross-frequency value. The output signal voltage levels on the Y axis are presented as relative values to All-Pass filter characteristic (C1 and C2 equal to 0 dB). Fig. 3 and Fig. 4 show frequency-responses for 101-order digital filters for designed device. As it can be seen, passband levels have correct values, remain stable and C1 & C2 filter lines cross near the 0 dB level. In the first case the 500 Hz (the minimal needed value for this project) and in the second 1 kHz (frequency of analog EE) cross-band frequency is set. In Fig. 4 the filter with two positive levels is shown to provide appropriate gain modification. The characteristic fall into 0 dB value near cross-frequency and then return to a stable level.

Fig. 3. Frequency-response for n=101, f=500 Hz and passband levels set

Fig. 4. Frequency-response for n=101, f=1000 Hz and passband levels set

In Figs 5 and 6, the frequency responses measured for the DEE and the analog EE are shown for 1 kHz cross-band frequency and [±5, ±5] and [±5, ±1] levels, respectively. All filters of the analog EE are expected to



have a 1 kHz zero-cross frequency with a slope of 6 dB/octave. The unstable and exceeded passband levels across the whole frequency bandwidth of the analog device can be seen. In Fig. 6 one can notice ringing artifacts for the bandwidth above 1 kHz. The cross-point of the analog device also has an incorrect value, i.e. 850 Hz in the first case and 865 Hz in the second case.

Fig. 5. Frequency-response of C1/C2 filters measured for digital and analog device. Theoretically, the analog EE’s pass bands should be flat, but the measurements show they clearly have a linear drop-off


visible effect of this occurrence for the analog device– the filter levels are different than expected and are unstable. Another advantage is that the cost of production of the DEE (less than 200 €) is much lower than the analog TM device (approximately 10 000 €). The fact that filters in the designed system are digital also gives an opportunity to develop it in the future, change the parameters or to upgrade it depending on the requirements. The next advantage is that digitalization allows more parameters to be adjusted by the user. The main and very important difference is the possibility to set the cross-frequency of the shelving filter. In analog EE C1 and C2 filters have always the same cross-frequency and it is 1 kHz. It has also only one type of filter selectivity. In the designed device there are four cases for filter order. It allows to change the intensity of stimulation and pick the best one for the individual patient. The changeable filter frequency goes with opportunity to modify frequency band, that impacts the most to the listener. To sum up, a modernized digital version of the device used for the rehabilitation by the Tomatis method was designed and tested. Its performance was measured to be the same or better than the original equipment, while allowing much more control over the functionality.

AUTHOR

Fig. 6. Frequency-response of C1/C2 filters measured for the proposed digital and the original analog TM device

5. Conclusion

The DEE, which was the topic of this paper, was designed properly and provides all the required functions. It services two channels, which with an external splitting adapter allows three channels to be supported. If a right/left channel delay were needed, a second, identical DAC integrated circuit could be used for extra output channels. The DEE uses 24-bit audio data with a 48 kHz sampling frequency. The filters it includes give the desired effects – the shelving filter levels and cross-frequency values are as expected. It monitors changes in the air and bone channels and gates them to the C1 or C2 filters depending on the signal level and precession value. Compared to the analog EE, the designed device has all the same functions, but it also has extra features that remove some previous constraints. First, it is a digital system, so the filter parameters are stable and do not depend on the duration of operation or component aging. In Figs 5 and 6 there is

Krzysztof Jóźwiak* – Lodz University of Technology, Institute of Electronics, ul. Wólczańska 211/215, 90-924 Łódź, Poland, email: kjozwiak@dmcs.pl. Michał Bujacz – Lodz University of Technology, Institute of Electronics, ul. Wólczańska 211/215, 90-924 Łódź, Poland, email: michal.bujacz@p.lodz.pl. Aleksandra Królak – Lodz University of Technology, Institute of Electronics, ul. Wólczańska 211/215, 90-924 Łódź, Poland, email: aleksandra.krolak@p.lodz.pl. *Corresponding author

REFERENCES

[1] L. Hohmann, J. Bradt, T. Stegemann, and S. Koelsch, “Effects of music therapy and music-based interventions in the treatment of substance use disorders: A systematic review”, PLOS ONE, vol. 12, no. 11, 2017 DOI: 10.1371/journal.pone.0187363. [2] M. Bodner, R. P. Turner, J. Schwacke, C. Bowers, and C. Norment, “Reduction of Seizure Occurrence from Exposure to Auditory Stimulation in Individuals with Neurological Handicaps: A Randomized Controlled Trial”, PLOS ONE, vol. 7, no. 10, 2012 DOI: 10.1371/ journal.pone.0045303. [3] M. Bucur and A.-L. Marian, “The impact of the Mozart effect on creativity: myth or reality”, Creativity & Human Development, 2016, www.creativityjournal.net/newsevents/ item/298-the-mozart-effect-and-creativity. Articles


[4] L.-C. Lin, W.-T. Lee, H.-C. Wu, C.-L. Tsai, R.-C. Wei, H.-K. Mok, C.-F. Weng, M.-w. Lee, and R.-C. Yang, “The long-term effect of listening to Mozart K.448 decreases epileptiform discharges in children with epilepsy”, Epilepsy & Behavior, vol. 21, no. 4, 2011, 420–424 DOI: 10.1016/j.yebeh.2011.05.015. [5] P. Henry and T. Letowski, Bone conduction: Anat­ omy, physiology, and communication, Army Research Laboratory, 2007. [6] P. Sollier, Listening for wellness: An Introduction to the Tomatis Method, The Mozart Center Press, 2005. [7] N. Doidge, The brain’s way of healing: remarka­ ble discoveries and recoveries from the frontiers of neuroplasticity, Viking: New York, 2015. [8] B. A. Corbett, K. Shickman, and E. Ferrer, “Brief Report: The Effects of Tomatis Sound Therapy on Language in Children with Autism”, Journal of Autism and Developmental Disorders, vol. 38, no. 3, 2008, 562–566 DOI: 10.1007/s10803-007-0413-1. [9] J. Gerritsen, “Response to “Brief Report: The Effects of Tomatis Sound Therapy on Language in Children with Autism”, July 3, 2007, Journal of Autism and Developmental Disorders”, Journal of Autism and Developmental Disorders, vol. 38, no.3, 2008, 567–567 DOI: 10.1007/s10803-007-0471-4. [10] M. AbediKoupaei, K. Poushaneh, A. Z. Mohammadi and N. Siampour, “Sound Therapy: An Experimental Study with Autistic Children”, Procedia – Social and Behavioral Sciences, vol. 84, 2013, 626–630 DOI: 10.1016/j.sbspro.2013.06.615. [11] J. M. Neysmith-Roy, “The Tomatis Method with Severely Autistic Boys: Individual Case Studies of Behavioral Changes”, South African Journal of Psychology, vol. 31, no. 1, 2001, 19–28 DOI: 10.1177/ 008124630103100105. [12] L. Sacarin, “Early Effects of the Tomatis Listening Method in Children with Attention Deficit”, Dissertations & Theses, 2013, 44, https://aura. antioch.edu/etds/44. [13] J. Gerritsen , “A Review of research done on Tomatis Auditory Stimulation”, 2009. [14] I.-M. du Toit, W. F. du Plessis, and D. K. Kirsten, “Tomatis Method Stimulation: Effects on Student Educational Interpreters”, Journal of Psychology in Africa, vol. 21, no. 2, 2011, 257–265 DOI: 10.1080/ 14330237.2011.10820454. [15] G. Coppola, A. Toro, F. F. Operto, G. Ferrarioli, S. Pisano, A. Viggiano, and A. Verrotti, “Mozart’s music in children with drug-refractory epileptic encephalopathies”, Epilepsy & Behavior, vol. 50, 2015, 18–22 DOI: 10.1016/j.yebeh.2015.05.038. [16] I. Przybek-Czuchrowska, E. Mojs, and E. Urna­Bzdęga, “Opis przypadku dziecka z organicznym uszkodzeniem w obrębie ośrodkowego układu nerwowego leczonego metodą treningu słuchowego Tomatisa (Case study of a child with orArticles


ganic damage within the central nervous system treated with the Tomatis method)”, Neuropsy­ chiatria i Neuropsychologia / Neuropsychiatry and Neuropsychology, vol. 10, no. 1, 2015, 40–45 (in Polish). [17] J. O. Coetzee, The effect of the Tomatis Method on depressed young adults, 2001. [18] W. du Plessis, S. Burger, M. Munro, D. Wissing, and W. Nel, “Multimodal Enhancement of Culturally Diverse, Young Adult Musicians: A Pilot Study Involving the Tomatis Method”, South Af­ rican Journal of Psychology, vol. 31, no. 3, 2001, 35–42 DOI: 10.1177/008124630103100305. [19] I. Vercueil, H. Taljaard, and W. d. Plessis, “The effect of the Tomatis Method on the psychological well-being and piano performance of student pianists: An exploratory study”, South African Music Studies, vol. 31, no. 1, 2011, 129–158. [20] T. Gilmor, “The Efficacy of the Tomatis Method for Children with Learning and Communication Disorders: A Meta-Analysis”, International Jour­ nal of Listening, vol. 13, no. 1, 1999, 12–23 DOI: 10.1080/ 10904018.1999.10499024.



New Approach to Typified Microservice Composition and Discovery

Submitted: 8th October 2018; accepted: 7th February 2019

Nikita Gerasimov DOI: 10.14313/JAMRIS_1-2019/10 Abstract: Several problems related to work reliability appear while building service-oriented systems. The first problem is the lack of static typing and of inter-service data type checking. The second one is the high connectivity of services. The article shows an example of a strong and static polymorphic type system and the corresponding type check algorithm. The service-contract and contract-discovery concepts for universal service linking and type verification are described. After the theoretical results had been realized in the form of a service, they were applied in practice in a real system, which improved its reliability. Also, the technical realization decreased service connectivity, which promoted an increase in system quality. However, the increased complexity of the resulting system offset part of the reliability gains. Keywords: microservice architecture, static typing, SOA, services composition, service contract

1. Introduction

Development of modern, convenient multi-logic systems is often based on service-oriented or microservice-oriented architectures (SOA). SOA means that the application logic is divided into several self-sufficient components, each realizing separate tasks [11]. Every component has a single responsibility. The advantages of SOA become obvious when developing high-load web services: separate components encourage horizontal scaling. Every service, as a rule, does not require various dependencies or a specific configuration. Separate components with a limited range of tasks have a lower cost of maintenance and delivery to production. Also, logic separation motivates developers to design scalable services. However, SOA also has disadvantages: separation of service logic leads to the development of a communication layer between components. Other disadvantages are dependency management, service linking, type consistency of interfaces, and the inequality of provided and used interfaces [5]. During the development of a monolithic application, the programming language's features solve the mentioned issues. After any application function moves outside the main project, interoperation problems may appear. Various frameworks and approaches suggest ways of system decomposition but do not suggest any way of statically checking type consistency. Just as in a dynamically typed language, this leads to an increase in working system instability.

RPC frameworks like Google Protobuf or Apache Thrift partially solve static type checking by providing client and server code generation based on an API definition. The mentioned solutions make it possible to ensure at the development stage that the client and server use an identical protocol. However, code generation becomes less trivial when using JSON/XML-RPC, REST or event-driven architectures. Moreover, they all provide synchronous calls and responses. The next problem is less critical. Detection of outdated API usage can be nontrivial in complex systems with various components. Lack of automated control over API usage leads to the possibility of disabling an important component. Let us consider a real case: an outdated service A provides statistics collection and sends it to a mailing service once per month. Log analysis shows that there were no API calls during the last three weeks (for example); that is why the service can be disabled. Finally, we have two main problems: - the absence of a strong type system with static checking for SOA, which leads to a potential stability decrease

- the absence of dependency control for SOA, which may lead to breaking the system at runtime after outdated APIs are disabled

Therefore, the primary goal of this research is to increase the stability of SOA-based systems and decrease runtime errors. There are two stages to reach the goal:
- improve the existing approach to service API typification so that types can be checked statically

- develop a service that controls API dependencies in the SOA system

Much of the work presented here concerns the description of a new tool that provides the achievement of the formulated goals. The new service's purpose is close to that of service-discovery systems: to detect suitable components over the network automatically [13]. The main objective of the new service is checking the type consistency of provided and used API definitions. We call it "contract discovery". The next section reviews the state of the art. The third section defines the proposed type system and the type checking algorithm to be realized in the contract discovery service. Section 4 surveys the concept of a contract and how the contract-discovery service links a client to a service. Section 5 describes the realization details of the contract discovery service. Section 6 illustrates our experience in applying such a service to a real microservice-oriented event-driven system.


This paper is an extended version of the conference paper "Static typing and dependency management for SOA" [7].

2. State of the Art

2.1. Approaches to Service Composition

Since the microservice concept is a development of the service one, we can look over methods and ideas of composition for SOA. Several well-known standards for organizing service communication exist, e.g. WS-BPEL, WS-CDL, BPML, ebXML, OWL-S, WSMF, etc. Though the mentioned definitions suit business process specification, they cover various complex scenarios; for example, WS-BPEL or WS-CDL allows defining user roles [12]. According to several definitions, microservices do not cover whole processes, but implement limited logic operations. Therefore microservice composition cannot be expressed in terms of the mentioned standards. Besides well-known specifications like those mentioned above, several research projects exist: eFlow, WISE, SOA4All, etc. Most of them have the same disadvantages. Other projects (e.g., METEOR-S, BCDF, SCENE) require a custom runtime environment or a custom service executor [9]. We consider such an environment to be superfluous. The most fitting projects we found are SWORD and ASTRO. The first one defines a service interface with a lightweight domain-specific language that is simpler than the XML of the previous examples. The project's compiler automatically verifies that a new service scheme is compatible with currently running services. This guarantees that a new update of a service will not break the whole system and that a new service will work with the others. All described projects seem to be too complicated or too limited for our requirements. Firstly, none of them supports nonsynchronous communication, for example, event-driven architectures. Secondly, most of them suit the description of business processes, not the description of the behaviour of small services. Thirdly, we consider the underlying XML format too verbose and not compact enough for our purposes.

2.2. Linking of Microservices

One of the modern popular ways to link microservices is service discovery. The essence of this approach is finding a dependent service by name in a central service registry. The central registry provides registration of instances, checks the state of already registered ones, and provides access to information about service addresses. However, service discovery does not check the compatibility of the acquired and acquiring services.

2.3. Interface Consistency


A developer can define the interface of a synchronous microservice with OpenAPI for REST [4], WSDL for SOAP [3], Protobuf for gRPC [2], or Apache Thrift. Data validation is usually performed with XML Schema or JSON Schema. Besides Thrift and Protobuf, Apache Avro [1] is another project attempting not only to describe a way of data encoding and validation but also to control interface compatibility and versioning. All of the mentioned technologies, Thrift, Protobuf, Avro, WSDL, and OpenAPI, were created to support synchronous RPC or REST.

3. Type System

Data can be encoded in a custom binary or text format during interoperation: with XML or JSON. Encoded data satisfies the restrictions of the communication protocol: SOAP, XML-RPC (XML); REST, JSON-RPC (JSON); Protobuf, Thrift (binary), and so on. APIs based on these communication protocols can be described with formal specifications: WSDL for SOAP, OpenAPI for REST, etc. Event-driven SOA often uses a message broker system like Apache Kafka, RabbitMQ or NATS, which transfers text-encoded messages. Among the mentioned ways of data representation, JSON is the most popular format for service communication. Validating received data becomes a simple task with JSON Schema validation or Apache Avro validation. We consider JSON Schema to be more current than Avro. However, it is only a validation standard without any subtyping or polymorphism support, so we developed our own subtype checking algorithm. All protocols mentioned above are limited to using simple (integer, boolean, etc.) or complex (arrays and records) types [4], [3]. For example, a simplified JSON Schema type system [14] can be expressed as presented in Figure 1. The described grammar is simplified because it does not cover complex predicates containing boolean logic. Also, the grammar does not cover specific type formats. Owing to its standardization and popularity, we took JSON Schema as the base for our type system. To improve the compatibility of services, we assume the described type system to be a structural one [10]. This enables us to state that B is a subtype of A, written A <: B, if for every parameter of A there is an equal parameter of B (1). We assert that type predicates are equal if their names and parameters conform. An induction rule is used to specify the subtype relation with predicates (2).

$$\frac{\Gamma \vdash A \quad \Gamma \vdash B}{\Gamma \vdash A <: B} \qquad (1)$$

$$\frac{\Gamma \vdash A\,P_1 \quad \Gamma \vdash B\,P_2 \quad \Gamma \vdash P_1 = P_2}{\Gamma \vdash A\,P_1 <: B\,P_2} \qquad (2)$$

Finally, we did not change the JSON Schema syntax, for compatibility with existing software.



⟨t⟩ ::= ⟨arr⟩ | ⟨obj⟩ | ⟨num⟩ | ⟨str⟩ | boolean | ⟨p⟩ ⟨t⟩
⟨arr⟩ ::= {⟨t⟩} | ⟨ap⟩ ⟨arr⟩
⟨ap⟩ ::= additionalItems | maxItems | minItems | uniqueItems | contains
⟨obj⟩ ::= ⟨t⟩ ⟨t⟩ | ⟨op⟩ ⟨obj⟩
⟨op⟩ ::= maxProperties | minProperties | required | properties | patternProperties | additionalProperties | dependencies | propertyNames
⟨num⟩ ::= integer | real | ⟨np⟩ ⟨int⟩
⟨np⟩ ::= multipleOf | maximum | minimum
⟨str⟩ ::= string | ⟨sp⟩ ⟨str⟩
⟨sp⟩ ::= maxLength | minLength | pattern
⟨p⟩ ::= const | enum

Fig. 1. Simplified grammar of JSON Schema

Algorithm 1. Type checking

Require: type1, type2
  subtype ← true
  if type1 is scalar then
    subtype ← type1 != type2 || type1.p != type2.p
  else {type1 is object}
    for all type1.f do
      subtype ← subtype && self(type1.f, type2[type1.f])
    end for
  end if

3.1. Algorithm of Subtype Checking

Our type checking algorithm (Algorithm 1) verifies that every field of type A is equal to the corresponding one of type B. The notation type.p returns all predicates of the type type. The expression type2[field] takes from type2 the subfield with the name field, and type2.f takes all fields of type2. The algorithm does not try to analyse predicates; it just checks the identity of the name and the parameter. Types of a JSON Schema object are checked recursively. The list of required fields of the subtype must be equal to that of the parent type.

3.2. Example of Subtyping

Define two types: A in Listing 1 and B in Listing 2.

Listing 1. Type A
{
  "title": "Person",
  "type": "object",
  "properties": {
    "firstName": {
      "type": "string"
    }
  },
  "required": ["firstName"]
}

Listing 2. Type B
{
  "title": "Person",
  "type": "object",
  "properties": {
    "firstName": {
      "type": "string"
    },
    "secondName": {
      "type": "string"
    }
  },
  "required": ["firstName", "secondName"]
}

Here type A requires a document to contain the string field firstName, and therefore any document containing firstName is suitable for this schema. Type B also requires this string field to be present in the described document. Thus we can assert that B is a subtype of A: A <: B. However, B <: A cannot be true because B applies one more required restriction to a document: the string field secondName.
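For illustration, a minimal Python sketch of the recursive check in the spirit of Algorithm 1, applied to the two listings above, could look as follows; the helper name is_subtype and the way predicates and required lists are compared (the parent's required list must be contained in the subtype's, as in the worked example) are our own simplifications, not the authors' implementation.

# Minimal sketch of the subtype check in the spirit of Algorithm 1 (illustrative only).
def is_subtype(parent, child):
    """True if every field and restriction required by `parent` is matched by `child`."""
    if parent.get("type") != "object":
        # Scalar case: the type name and its predicates must coincide.
        return parent == child
    # Every property of the parent must exist in the child and itself be a subtype.
    for name, sub in parent.get("properties", {}).items():
        child_sub = child.get("properties", {}).get(name)
        if child_sub is None or not is_subtype(sub, child_sub):
            return False
    # The child must require at least everything the parent requires (as in the example).
    return set(parent.get("required", [])).issubset(set(child.get("required", [])))

A = {"title": "Person", "type": "object",
     "properties": {"firstName": {"type": "string"}},
     "required": ["firstName"]}
B = {"title": "Person", "type": "object",
     "properties": {"firstName": {"type": "string"}, "secondName": {"type": "string"}},
     "required": ["firstName", "secondName"]}

print(is_subtype(A, B))  # True:  A <: B
print(is_subtype(B, A))  # False: A does not contain the required field secondName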

4. Description of Contract Concept

We introduce the concept of a contract to describe communication between services. A service contract is an analog of a communication specification which describes one remote call or one session of information transfer. A list of contracts forms a regular communication protocol (like OpenAPI or WSDL) if every item of the list is provided by the same service or the same endpoint. Interoperation of services divides into two categories: synchronous and nonsynchronous. Synchronous communication (RPC, REST) requires a protocol to define the way of calling, the way of responding and, optionally, an error definition. Custom protocols can specify complex sequences of data units passed to an inter-service channel. The nonsynchronous one (event-driven design) requires a protocol to define only the type of transmitted data. In order to level the differences between the methods, we define a contract as a sequence of message types. Thus, an HTTP call would be a chain of two messages, while an event would be a chain consisting of one element. A contract also contains (an illustrative sketch follows the list below):
- an endpoint of the service which provides the contract realization (provider)

- an address used to check that the contract provider is still alive and up to date


- the direction of every chain unit, i.e., whether the message is incoming or outgoing for the provider
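For illustration only, a provider contract for the event-driven case could be a document like the sketch below, with a one-element chain whose schema is the type from Listing 1; the concrete field names (provider, endpoint, healthcheck, chain, direction, schema) are hypothetical, since the article does not fix a wire format.

{
  "provider": "statistics-service",
  "endpoint": "amqp://broker/statistics.collect",
  "healthcheck": "http://statistics-service/health",
  "chain": [
    {
      "direction": "in",
      "schema": {
        "title": "Person",
        "type": "object",
        "properties": { "firstName": { "type": "string" } },
        "required": ["firstName"]
      }
    }
  ]
}

Under the same hypothetical format, a synchronous HTTP call would simply carry a two-element chain: the incoming request schema and the outgoing response schema.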

In opposition to the provider contract, the user contract claims that a service needs some provider to work correctly. A user contract has the same format as the provider one but does not specify an endpoint. User contracts ensure that the system's services have all their dependencies and work correctly. A user contract is compatible with a provider contract if the chains of the first one are subtypes of those of the second one. It means that if A <: B is a valid judgment, then the provider takes type A as input while a client can call it with type B. The provider service must be ready for input data of type B and must process it like data of type A.

4.1. Description of the Conceptual Contract Discovery Service

Services must declare their requirements themselves because they contain all the related API information. There are two targets for pushing declarations:
- all other services (e.g. broadcast notification)

- a central service delegated to manage contracts

Notification of all other services requires broadcast messaging and storing information about the whole system in each one. Moreover, broadcast notification would require the implementation of type and contract checking in every service. Therefore, central control is preferable. Services which collect information about system components, provide their addresses and watch their state are called "service discovery" services. Since our tool manages contract providers, we call it a "contract discovery" service. A prospective realization should have the following features:
1) register contract providers
2) register contract users

3) watch for providers and users to be alive

4) deliver on demand information about contract providers for contract users

5) verify that all dependencies are resolved and show dependency problems
6) warn when all providers of a contract that is still used have been disabled

Providers send information to the service at their startup or deploy time. Users also obtain their dependencies at startup, by registering them or by a separate call.
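As a rough sketch of this flow (and not the actual API of the implementation described in Section 5), a provider could push its contract at startup and a user could then register its requirement and receive a matching provider; the /contracts endpoints and payload shapes below are assumptions.

# Hypothetical registration flow against a contract discovery service.
# The endpoint paths and payload fields are illustrative assumptions only.
import requests

DISCOVERY = "http://contract-discovery:8080"

provider_contract = {"endpoint": "amqp://broker/statistics.collect",
                     "chain": [{"direction": "in", "schema": {"type": "object"}}]}
user_contract = {"chain": [{"direction": "out", "schema": {"type": "object"}}]}

# Provider: register the contract it fulfils (done once at startup or deploy).
requests.post(f"{DISCOVERY}/contracts/providers", json=provider_contract)

# User: register the contract it depends on and receive a compatible provider.
resp = requests.post(f"{DISCOVERY}/contracts/users", json=user_contract)
provider = resp.json()  # e.g. endpoint or routing key of a matched provider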

5. Realization of Contract Discovery and Testing


We implemented the first version of the contract discovery service as a proof-of-concept PHP daemon built on top of ReactPHP [6]. The daemon was used within a test suite containing stub services. After having proved the idea, we made the second realization in Golang. The service implements all requirements and all described functions. The daemon registers contract providers and users, performs regular alive checks and type checking. We used the described service for managing dependencies and for type checking in an existing event-driven system. Services in this system register their contracts at startup. They also obtain their own requirements via contract discovery. Since the services use a message broker and do not expect any result of a call, all registered contracts consist of no more than one schema. Users obtain routing keys for dispatching messages from matched provider contracts. Since the proposed approach does not aim to improve any specific algorithm or data-passing technique, we cannot present numeric metrics. However, after registration had been automated, we noticed that the process of adding new services to the system became easier. The advantages we found are:
- inter-service integration became easier as a result of inter-service strong typing
- a service will not start while its dependencies are not resolved
- contract-first development has a positive influence on service building speed

- developers do not need to keep track of service dependencies in configuration
We also noticed several complications:

- maintenance of the consistency of all types is complicated
- there is no single place to store all actual contracts: contract discovery stores only the items registered at the present time
- lack of information about actual data routes

- the whole system depends on the central component

- since contract discovery checks only online services, it does not provide a real static type system
- contract discovery guarantees the consistency of contract types but not the identity of contracts and real interfaces

6. Conclusion

As a result, we have replaced direct static service linking with detection of the most suitable contract provider. This kind of interaction allows us to ensure that an enabled service will work correctly and have all the required dependencies. Also, the usage of strong polymorphic typing enables us to ensure that the APIs of interacting services are compatible. The contract discovery service ensures that the system does not have any dependency problems at the moment and helps to trace the usage of outdated interfaces. On the other hand, the presented approach complicates control over the current interaction of the system's components. It also does not provide real static type checking for the communication of elements: contract discovery does not guarantee the identity of contracts and real interfaces and does not guarantee service compatibility before a new service is enabled. Finally, we did not fully reach the main goal: the stability of the system has not increased significantly.



Therefore, we have new ideas on how to provide strong control over the interaction of services. We suggest specifying all data types in a single file or project together with a description of the whole communication design of the services. Moreover, such a definition is expected to resemble source code in a functional programming language and can also introduce instructions for deploying services. We expect that such source code will be assembled into container configuration files and that the translator will perform static type checking. The concept that we are developing now recalls behavioural and session types [8]. Replacing dynamic contract discovery with a service definition compiler would preserve the listed advantages and decrease the described disadvantages.

AUTHOR

Nikita Gerasimov – Mathematics and Mechanics Faculty, Saint Petersburg University, Universitetsky prospekt, 28, Peterhof, St. Petersburg, Russia, e-mail: n.gerasimov@2015.spbu.ru.

REFERENCES

[1] Apache Software Foundation, “Apache Avro™ 1.9.0 Documentation”, http://avro.apache.org/docs/current/, Accessed on: 2018-11-10.


[9] A. L. Lemos, F. Daniel, and B. Benatallah, “Web Service Composition: A Survey of Techniques and Tools”, ACM Comput. Surv., vol. 48, no. 3, 2015, 33:1–33:41, DOI: 10.1145/2831270.

[10] B. C. Pierce, Types and Programming Languages, The MIT Press: Cambridge, 2002. [11] R. Rodger, The Tao of Microservices, Manning Publications: Shelter Island, New York, 2017.

[12] Q. Z. Sheng, X. Qiao, A. V. Vasilakos, C. Szabo, S. Bourne, and X. Xu, “Web services composition: A decade’s overview”, Information Sciences, vol. 280, 2014, 218–238, DOI: 10.1016/j.ins.2014.04.054. [13] L. Sun, H. Dong, F. K. Hussain, O. K. Hussain, and E. Chang, “Cloud service selection: State-of-the-art and future research directions”, Journal of Network and Computer Applications, vol. 45, 2014, 134–150, DOI: 10.1016/j.jnca.2014.07.019. [14] A. Wright, H. Andrews, G. Luff, “JSON Schema Validation: A Vocabulary for Structural Validation of JSON”, http://json-schema.org/latest/json-schema-validation.html, Accessed on: 2018-04-26.

[2] Google, “Developer Guide | Protocol Buffers”, https://developers.google.com/protocol-buffers/docs/overview, Accessed on: 2018-11-10. [3] W3C, “Web Services Description Language (WSDL) Version 2.0 Part 1: Core Language”, https://www.w3.org/TR/wsdl, Accessed on: 2018-04-26. [4] OpenAPI Initiative, “The OpenAPI Specification”, https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.1.md, Accessed on: 2018-04-26.

[5] N. Dragoni, S. Giallorenzo, A. L. Lafuente, M. Mazzara, F. Montesi, R. Mustafin, and L. Safina. “Microservices: Yesterday, Today, and Tomorrow”. In: M. Mazzara and B. Meyer, eds., Present and Ulterior Software Engineering, 195–216. Springer International Publishing, Cham, 2017.

[6] N. Gerasimov. “Contract checker”, http://github.com/tariel-x/cc, Accessed on: 2018-05-07.

[7] N. Gerasimov, “Static typing and dependency management for SOA”. In: Annals of Computer Science and Information Systems, vol. 16, 2018, 105–107.

[8] K. Honda, V. T. Vasconcelos, and M. Kubo. “Language primitives and type discipline for structured communication-based programming”. In: G. Goos, J. Hartmanis, J. van Leeuwen, and C. Hankin, eds., Programming Languages and Systems, volume 1381, 122–138. Springer Berlin Heidelberg, Berlin, Heidelberg, 1998.




Terrain Classification Using Static and Dynamic Texture Features by UAV Downwash Effect
Submitted: 8th October 2018; accepted: 7th February 2019

João Pedro Carvalho, José Manuel Fonseca, André Damas Mora

DOI: 10.14313/JAMRIS_1-2019/11

Abstract: Knowing how to identify terrain types is especially important for autonomous navigation, mapping, decision making and emergency landings. For example, an unmanned aerial vehicle (UAV) can use it to find a suitable landing position or to cooperate with other robots to navigate across an unknown region. Previous works on terrain classification from RGB images taken onboard UAVs showed that only static pixel-based features were tested, with a considerable classification error. This paper presents a computer vision algorithm capable of identifying the terrain from RGB images with improved accuracy. The algorithm combines static image features with the dynamic texture patterns produced by the UAV rotors' downwash effect (visible at lower altitudes) and uses machine learning methods to classify the underlying terrain. The system is validated using videos acquired onboard a UAV with an RGB camera.

Keywords: image processing, texture, machine learning, terrain classification, neural networks, UAV

1. Introduction


Nowadays, due to UAVs' higher availability and capabilities, there is a research trend to explore innovative applications of UAVs useful to society. They are having a major impact on search and rescue missions, in logistics, in precision agriculture, among other applications. Key issues are to provide a safe and reliable operation and to perceive the surrounding area. The latter, within this paper, consists of identifying the underlying terrain. Terrain classification is a crucial functionality for a wide range of autonomous vehicles [12]: either for ground vehicles to avoid water bodies, aerial vehicles to determine suitable landing areas, or surface vehicles to detect safe passageways. As further explained in section 2, several approaches have been used for terrain classification. However, there is still margin for improving accuracy by extracting more complex image features. At lower altitudes, the UAV rotors' downwash effect creates singular image texture patterns depending on the type of terrain, which can be used to differentiate them. The main goal of this paper is to propose a computer vision algorithm that, using RGB images captured by a camera onboard a UAV, is capable of classifying the terrain by analysing static image features (colour and texture) and the rotors' downwash effect on the underlying surface. There are several issues that must be addressed in order to achieve this goal, namely:


Fig. 1. a) Cooperation between an Unmanned Aerial Vehicle (UAV) and an Unmanned Surface Vehicle (USV) to improve autonomous navigation. b) Shows a plan made by the USV only. c) and d) Show a cooperative UAV improving the plan made by the USV [12]

- Which terrains can be more accurately classified using the downwash effect?
- Which are the texture and motion patterns of each terrain (water movement, for example)?

- Which static and dynamic image features can be extracted to classify the terrain?

To address these challenges, new optimization procedures and techniques will be proposed in this paper, aiming at the best possible performance. This paper is structured in six sections, starting with an introductory section followed by a presentation of related works. In the experimental setup section the system background, namely the hardware and the terrain types, is described. In the Terrain Classification Method section, the system architecture, the static and dynamic texture features and the machine learning classifier are presented. The article finishes with the experimental results and the conclusions drawn.

2. Related Work

UAVs (Unmanned Aerial Vehicles) play an important role in the new generation of information technology and are predicted to have a major impact on human life in the near future [1]. One of the areas is computer vision, where it is possible to acquire, process, analyse and understand aerial images. Many researchers have proposed terrain classification systems based on features derived from colour



information [2], texture patterns [4], [8], [11], [9] and from additional sensors, as is the case of laser scan systems [15], [14], [6]. Although many of these algorithms were developed for unmanned ground vehicles, there is currently a shift towards UAVs, where the visual features have wider importance. One of the most recent works on terrain detection and classification is presented in [13]. The authors use the concept of optical flow to detect the water texture direction in images acquired by an RGB camera onboard a UAV. From the directions of the textural features, the algorithm determines whether the terrain the UAV is flying over is water or non-water. One of the problems identified is that the UAV must be stable over the target while identifying the type of terrain, which, in the best case, takes four seconds to execute. Another reason that requires the UAV to stand still during the calculations is that the computer vision algorithm does not compensate for the UAV movement. Thus, when the directions of the features are calculated, the results do not represent reality. A classification method using colour features was proposed in [2]. The proposed method converts an RGB image into an image entitled "normal RGB", where each pixel is divided by the square root of the three colour channels. Thus, each terrain will emphasize the colour that represents it (for example, green for vegetation). The proposed method was limited by the fact that it varies significantly with illumination. Laser scanners have proven to be important for distinguishing between land and water, as presented in [15], [14] and [6]. However, in low water depths the laser sensor produces incorrect results, because it captures reflections from the seabed and misclassifies them as non-water terrain. Therefore, this laser scan approach, by itself, proves to be insufficient and requires additional equipment.

3. Experimental Setup

3.1. UAV Platform Design

The aerial vehicle used in the experiments, named Vigil R6 (Figure 2), is a six-rotor solution that endows the system with graceful degradation, as it is able to land with one motor off, although without yaw control in that condition.


Fig. 2. Aerial robot Vigil R6

Some of the Vigil R6 specifications are:
- A VRBRAIN hardware board is used to control the low-level operation. This hardware contains the IMU and GPS used to determine the UAV's position and orientation. The VRBRAIN also connects to the Odroid-U2 embedded system via the MAVLink protocol and to the UAV's battery. Lastly, it is connected to a UHF receiver to control the UAV's motors;
- An Odroid-U2 is used to control the high-level operation. It receives data from the distance sensor (LIDAR) and the RGB camera and communicates with external devices via a WiFi link;
- An RGB camera with a gimbal stabilizer is installed, capturing onboard images at a resolution of 640x480 pixels;
- The camera and lens specifications are known, allowing the field of view (FOV) and the pixel size in meters to be computed.

For a better understanding of the communications between the various system layers, the system architecture is represented in Figure 3.

Fig. 3. Layer communication

3.2. Terrain Types

The dynamics of different terrains when exposed to wind provoke singular texture patterns that can be used in their identification. In this paper we study the importance of static image features, such as colour and texture, when compared with the dynamic features exhibited by the downwash effect, for terrain classification. In this work, three different terrain types (water, vegetation and sand), which can benefit from the downwash effect for their identification (Figure 4), were identified. It can be seen that the downwash effect produces: on water, a circular dynamic texture; on vegetation, a linear spread from inside outwards; and on sand, a texture that is almost stable or moves outwards.

4. Terrain Classification Method

If different types of terrain behave differently when exposed to the UAV rotors' downwash effect, then it should be possible to obtain unique information for their identification. Based on this research hypothesis, some conclusions can be drawn. When exposed to the downwash effect, the movement of water particles is always greater than in vegetation and sand terrains. Also, regarding static texture, vegetation usually has a rougher texture than sand or water terrains; water only presents roughness when exposed to wind and the downwash effect; and sand (fine grains) has a lower roughness. It can also be seen that sand depends on the patterns already present in the terrain, usually showing a more irregular texture (Figures 4.e and 4.f) when compared with water, which shows a unique signature and regular texture when exposed to wind (Figures 4.a and 4.b).


Fig. 5. Proposed system architecture (flowchart: Start, Raw Image, Rectified Image, Texture Filter, Threshold, Projections, with Motion Analysis in parallel, Classification, loop while Frames < n, Output (Terrain), Stop)

Fig. 4. Examples of terrain types: water (a)(b); vegetation (c)(d); and sand (e)(f)

4.1. System Architecture

The proposed system architecture for classifying the terrain using texture information is shown in Figure 5. As previously identified in sections 1 and 4, two texture features are proposed to classify the terrain, namely static and dynamic textures. At this stage, the features that can be computed in parallel were also assessed, in order to speed up the system execution time. The main processes identified in the architecture (Figure 5) are:
- Rectified Image: Performs lens geometrical corrections;
- Texture Filter: Extracts the terrain's static textural information using Gabor filters;

- Threshold: A thresholding is applied to the static texture image to highlight the terrain roughness;

- Projections: Vertical and horizontal projections are applied to the thresholded image, extracting unique features that help differentiate the different types of terrain;


- Motion Analysis: Extracts information from dynamic textures. Optical flow and thresholding techniques are used to identify the moving parts;

- Classification: The extracted features are used as inputs of an automatic classifier to identify the type of terrain. Machine learning techniques have already proved to be efficient for terrain classification [10], [7], [5].

4.2. Static Textures

This section presents the proposed method for extracting the terrain's static textures, based on the Gabor filter, which makes it possible to choose multiple texture directions. This filter is the impulse response formed by the multiplication of a sinusoidal signal with a Gaussian envelope function and can be computed using the following complex equation:

$$G(x, y, \lambda, \theta, \psi, \sigma, \gamma) = e^{-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}} \, e^{i\left(2\pi \frac{x'}{\lambda} + \psi\right)} \qquad (1)$$

Its real and imaginary components can be obtained by equations 2 and 3, respectively:

$$G(x, y, \lambda, \theta, \psi, \sigma, \gamma) = e^{-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}} \cos\left(2\pi \frac{x'}{\lambda} + \psi\right) \qquad (2)$$

$$G(x, y, \lambda, \theta, \psi, \sigma, \gamma) = e^{-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}} \sin\left(2\pi \frac{x'}{\lambda} + \psi\right) \qquad (3)$$

where:

$$x' = x \cos(\theta) + y \sin(\theta) \qquad (4)$$

$$y' = -x \sin(\theta) + y \cos(\theta) \qquad (5)$$


Journal of of Automation, Automation, Mobile Journal MobileRobotics Roboticsand andIntelligent IntelligentSystems Systems

VOLUME 2019 VOLUME 13,13, N°N° 1 1 2019

These equations (1, 2 and 3) require as input parameters:

- x and y: Filter coordinates, where x represents the columns and y the rows;
- Lambda (λ): Represents the sinusoid's wavelength;
- Theta (θ): Defines the Gaussian envelope orientation;
- Psi (ψ): Symbolizes the phase offset;

- Sigma (σ): Describes the Gaussian envelope size;

- Gamma (γ): Reflects the shape of the ellipse in the Gabor filter space.
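To make the construction above concrete, the following Python sketch builds the real-valued kernel with OpenCV's getGaborKernel (which corresponds to the real component in equation 2), applies it at two orientations (0 and 90 degrees) and thresholds the summed response, the combination used later for Figure 6; the parameter values, the threshold and the image path are illustrative assumptions, not the values used by the authors.

# Illustrative static-texture extraction with a real-valued Gabor kernel (OpenCV).
# Parameter values, threshold and file path are example assumptions.
import cv2
import numpy as np

def static_texture(gray, thresh=128):
    ksize, sigma, lambd, gamma, psi = (21, 21), 4.0, 10.0, 0.5, 0.0
    acc = np.zeros(gray.shape, dtype=np.float32)
    for theta in (0.0, np.pi / 2):  # 0 and 90 degrees
        kernel = cv2.getGaborKernel(ksize, sigma, theta, lambd, gamma, psi, ktype=cv2.CV_32F)
        acc += cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
    acc = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(acc, thresh, 255, cv2.THRESH_BINARY)
    return binary  # binarized texture image (cf. Figure 6.d)

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
texture = static_texture(frame)
width_projection = texture.sum(axis=0)                   # width projection used below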

In this work, we used only the real component of the Gabor function (equation 2). After obtaining the multiplication of a Gaussian with a sinusoidal function, i.e. the kernel of the filter, it is convolved with the original image (equation 6). The result of the Gabor filter applied over a water surface is presented in Figure 6.

$$f[x, y] * g[x, y] = \sum_{-n_1}^{n_1} \sum_{-n_2}^{n_2} f[n_1, n_2] \cdot g[x - n_1, y - n_2] \qquad (6)$$

Fig. 6. Example of a static texture extraction: a) Raw image; b) c) Convolution with the Gabor filter with θ=0 degrees (b) and θ=90 degrees (c); d) Sum of images b) and c) after thresholding

As can be seen in Figure 6, it is possible to obtain the texture of a water-type terrain when it is affected by the downwash effect of the UAV. From the binarized image, a width projection was made to observe the singular features of this terrain type (Figure 7). From the observed vertical projection of the water-type terrain (Figure 7), it can be seen that it produces an undulatory effect with a local minimum in the centre of the downwash. This effect in water-type terrains is due to the lower roughness in the centre of the downwash. However, due to the water movement, a higher roughness is observed around the centre (white pixels in the binarized image in Figure 6.d).

Fig. 7. Width projection of the example in Figure 6.d

The next step was to translate this observed feature into a computational model. By calculating the local maxima and minima of the vertical projection in Figure 6.d, it is possible to calculate a line (red line in Figure 7) that most closely approximates these points. A polynomial regression was used. Lowess theory calculates the line closest to a given set of points through weights assigned to each point in a neighbouring window. In this work, to reduce the number of points, only the projection's local maxima and minima were used. The number of points that this window can contain is defined by the user. Once defined, the window traverses all points of the vector. For each set of n points, the best values of the slope (m) and the intercept (b) that minimize the sum of squared residuals are calculated using equation 7:

$$S(m, b) = \sum_{i=1}^{n} w_i \left((m \cdot x_i + b) - y_i\right)^2 \qquad (7)$$

where the weight function w is:

$$w(d) = \begin{cases} (1 - |d|^3)^3 & \text{for } |d| < 1 \\ 0 & \text{for } |d| \geq 1 \end{cases} \qquad (8)$$

In equation 8, d is the distance between each window point and its neighbours. The final step is to obtain, for each window, the smoothed projection values from the obtained line equation (m and b), i.e., the red line in Figure 7. After obtaining this smoothed projection, new local minima and maxima are calculated and used to obtain two features: 1) the area measured between the local minimum and its respective two local maxima; 2) the integral between the local minimum and its two respective local maxima. The first has the advantage of being relative to the minima and maxima values, while the integral gives an absolute value and will vary for lower and higher roughness.
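The following numpy sketch illustrates one possible reading of this smoothing and of the two features: the projection extrema are fitted with a tricube-weighted line (equations 7 and 8), and the area and integral are then measured around the central minimum; the window width, the extrema detection and the exact definition of the area are our illustrative assumptions, not the authors' implementation.

# Sketch of eqs. 7-8 and of the area/integral features; an interpretation for illustration.
import numpy as np

def tricube_weights(xs, centre, half_width):
    d = (xs - centre) / float(half_width)                 # normalized distance, eq. 8
    return np.where(np.abs(d) < 1, (1 - np.abs(d) ** 3) ** 3, 0.0)

def smooth_extrema(xs, ys, half_width=50.0):
    """Lowess-style weighted line fit (eq. 7) evaluated at each extremum position."""
    out = []
    for c in xs:
        w = tricube_weights(xs, c, half_width)
        m, b = np.polyfit(xs, ys, 1, w=np.sqrt(w))        # weighted least squares
        out.append(m * c + b)
    return np.array(out)

def area_and_integral(proj):
    # Extrema of the width projection found by a sign change of the discrete slope.
    idx = np.where(np.diff(np.sign(np.diff(proj))) != 0)[0] + 1
    xs, ys = idx.astype(float), proj[idx].astype(float)
    s = smooth_extrema(xs, ys)
    k = int(np.argmin(s))                                  # central local minimum
    lo, hi = max(k - 1, 0), min(k + 1, len(s) - 1)         # its two flanking maxima
    integral = np.trapz(s[lo:hi + 1], xs[lo:hi + 1])       # absolute feature
    area = integral - s[k] * (xs[hi] - xs[lo])             # feature relative to the minimum
    return area, integral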

4.3. Dynamic Textures

This section presents the proposed method for extracting dynamic terrain textures.


Fig. 8. Width projection showing the area (gray color) in relation with the integral (yellow)

As mentioned in section 4, water-type terrain only exhibits a dynamic texture when exposed to the downwash effect. However, in spite of having a dynamic texture, when analysing the optical flow it is never stronger than the dynamics observed for sand and vegetation. As referred to in section 3, the optical flow method can calculate the distance travelled by block-matching features in a given frame sequence. In this paper, the Farneback algorithm [3] was used to detect the movement of these features. One of the advantages of using the Farneback algorithm is the direct return of the flow, Fd, of features between two frames. The flow displacement Fd between features in frames n and n − 1 can be obtained from equation 9:

$$F_d = S_n - S_{n-1} \qquad (9)$$

where $S_n$ and $S_{n-1}$ are the sample pixels between two frames.

Fig. 9. Example of the Optical Flow concept using the Farneback algorithm: a) Current frame; b) Next frame; c) Optical Flow result

The obtained flow, as shown in Figure 9, is then used to calculate the distance travelled (trajectory) by each feature in a sequence of frames:

$$Travel_{distance} = \sum_{i=2}^{n} \sqrt{A_x(i) + B_y(i)} \qquad (10)$$

where:

$$A_x(i) = \left[ x_1 - x_{i-1} + F_{dx} \right]^2 \qquad (11)$$

$$B_y(i) = \left[ y_1 - y_{i-1} + F_{dy} \right]^2 \qquad (12)$$

and $x_i$ and $y_i$ are the positions in x and y in the most recent frame (n), $x_1$ and $y_1$ are the initial positions (n = 1) and $F_{dx}$ and $F_{dy}$ are the flow displacements between frames n and n − 1. We used normalized x and y coordinates for the calculations. To eliminate features that did not move or were almost static in a sequence of frames, the following rule was imposed:

$$\begin{cases} \text{Red feature} & \text{for } Trajectory \geq Threshold \\ \text{Nothing} & \text{for } Trajectory < Threshold \end{cases} \qquad (13)$$

From equation 13, and knowing the maximum number of features (the number of features in rows and columns of the image is pre-defined), we calculate the percentage of dynamic features that appear in the image (equation 14):

$$Dynamic_{feature} = \frac{filtered\ features}{Total\ features} \cdot 100\% \qquad (14)$$

An example illustrating equation 14 is shown in Figure 10.
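As a rough sketch of how this dynamic-texture measure could be computed with OpenCV's Farneback implementation, the code below accumulates per-frame flow magnitudes on a fixed grid and reports the percentage of features whose accumulated trajectory exceeds a threshold (equations 13 and 14); it is a simplified variant of equations 10-12, and the grid density, Farneback parameters and threshold value are assumptions made for the example.

# Illustrative dynamic-texture percentage using Farneback dense optical flow.
# Grid density, flow parameters and threshold are example assumptions.
import cv2
import numpy as np

def dynamic_feature_percentage(frames, grid=20, threshold=0.05):
    """frames: list of grayscale (uint8) images of identical size."""
    h, w = frames[0].shape[:2]
    ys, xs = np.mgrid[0:h:h // grid, 0:w:w // grid]        # fixed grid of tracked features
    travel = np.zeros(xs.shape, dtype=np.float64)
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        fx = flow[ys, xs, 0] / w                            # normalized displacements
        fy = flow[ys, xs, 1] / h
        travel += np.sqrt(fx ** 2 + fy ** 2)                # accumulated trajectory
    moving = travel >= threshold                            # keep only "moving" features
    return 100.0 * moving.sum() / moving.size               # percentage, cf. eq. 14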

4.4. Classification

The data from the dynamic and static textures are quite alike, and it is not a trivial task for a human being to identify a terrain from the outputs of the two analyses. In order to merge the data while increasing certainty and automating the classification of the type of terrain, a machine learning technique was used, namely a feed-forward neural network (NN). The architecture of the designed neural network was composed of two layers: a hidden layer with 10 neurons and an output layer with 3 neurons (water, vegetation and sand). The NN received the integral and the area between the two local maxima, as mentioned in sub-section 4.2, and the number of dynamic features detected, as described in sub-section 4.3. A sigmoidal function (equation 15) was used as the activation function, and the final classification was derived from the output neuron with the highest activation value.

$$S_\theta(x) = \frac{1}{1 + e^{-\theta^T x}} \qquad (15)$$

The training dataset was composed of 251 samples, of which 70% were used for training, 15% for testing and 15% for validation. Even though we were using a quite simple NN, after training it we obtained 92.9% accuracy on the training set and 93.8% on the test dataset.
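For illustration, a classifier with the same shape (three input features, one hidden layer of 10 neurons with a sigmoid activation, three terrain classes) can be sketched with scikit-learn's MLPClassifier as below; the feature and label arrays are placeholders built from the values in Table 1, and scikit-learn's training details (solver, output layer) differ from the authors' network.

# Sketch of the feed-forward classifier: inputs are [area, integral, dynamic-feature %].
# Data below are placeholders taken from Table 1, repeated only to make the sketch runnable.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X = np.array([[1.44, 8.32, 32.88], [0.01, 27.42, 98.32], [0.24, 14.67, 63.74]] * 20)
y = np.array(["water", "vegetation", "sand"] * 20)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), activation="logistic", max_iter=2000)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))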



Fig. 10. Dynamic texture detection by the Farneback algorithm and distance travelled calculation: a) water; c) vegetation; and d) sand

Fig. 11. Static texture - relation of the area with respect to the integral of the local minimum and maxima

5. Experimental Results

To validate the proposed static and dynamic texture features for terrain classification, a total of 251 frames from several types of terrain were used to validate the proposed system. Of these, 90 frames were of water, 88 of vegetation and 73 of sand. Regarding the static texture features, the area and the integral were calculated and displayed in Figure 11. It is possible to observe a clear separation between water, vegetation and sand, even with some outliers. In the water-type terrain, the three clusters that can be noticed for the integral feature were mainly due to different water environments (lake and pool). To validate the dynamic texture feature, it was calculated over a three-frame period (n = 3) and plotted against the area feature from the static texture. This feature shows the same discriminant level for separating the different terrains. From Figure 12, it can be seen that the water-type terrain obtained a lower dynamic texture value (< 45%), which can be due to a higher concentration of these dynamic features in the downwash centre, while outside it the threshold was not exceeded. Sand and vegetation showed a more uniform pattern, obtaining a higher number of features. On average, sand presents a percentage between 55% and 90%. Finally, vegetation, with a percentage of features between 90% and 100%, is the terrain with the highest dynamic texture, i.e., moving features.

Fig. 12. Dynamic texture - relation of the number of features with respect to the integral of the local minimum and maxima

Finally, these features were extracted from Figures 4.a-f and presented to the neural network classifier, which output the automated terrain classification. The extracted features and the classification results are shown in Table 1. As expected, the proposed features and classification method showed good results, classifying all six examples correctly and reinforcing the idea that a combination of static and dynamic texture can be used to automatically extract the terrain type from RGB images.

Tab. 1. Experimental Results

Figure | Static Texture Area (%) | Static Texture Integral (%) | Dynamic Texture Number of Features (%) | Classification
1 | 1.44 | 8.32 | 32.88 | water
2 | 1.55 | 7.59 | 36.70 | water
3 | 0.01 | 27.42 | 98.32 | vegetation
4 | 0.05 | 23.71 | 98.69 | vegetation
5 | 0.24 | 14.67 | 63.74 | sand
6 | 0.07 | 4.07 | 59.30 | sand

6. Conclusion

The main objective of this paper was to design a computer vision system capable of extracting static and dynamic image features, such as optical flow and texture features, to identify the type of terrain with improved accuracy by taking advantage of the UAV rotors' downwash pattern effect. For this, it was necessary to conduct research into detection methods already implemented and of interest to this work. Texture features, such as Gabor filtering (static textures) and optical flow (dynamic textures), were studied to improve terrain classification, aiming at the best possible performance. We emphasize that, by implementing the static texture filter, vegetation-like terrains were found to have a higher texture than sand- and water-type terrains. On the other hand, water-type terrain also presents a singular characteristic due to the downwash effect provoked by the UAV, which can be decisive to differentiate it from other terrain types. Although the system takes only 90 ms to classify the type of terrain the UAV is flying over, the algorithm does not take into account the drone movement, which introduces some noise into the dynamic texture computation for terrain classification. The next goal will be to merge the data from the IMU, GPS, GLONASS, barometer and accelerometer in a Kalman filter to give a more reliable position of the UAV; thus it will be possible to remove the movement of the drone and consequently increase the accuracy of the system for terrain classification. One of the major problems encountered throughout this work was the irregularity that sand presents. As this terrain presents irregular textures, the static part of the system may take on forms of all kinds (but never presents a high texture - almost white image), including the wave form of the water-type terrain. It is for this reason that it is necessary to add more parallel logic to this algorithm in order to avoid this type of problem. Classification is an interesting and engaging topic in today's research community. In the future, we expect to explore different techniques in order to achieve higher accuracy while keeping the model as generic as possible. As a start, we need to acquire a larger dataset with unbiased and varied frames. From that, we can obtain different static features using state-of-the-art Deep Neural Networks. As input to the deep net, we can plug in each frame and the information from the dynamic texture analysis. From that, we expect the network to find its own static features that represent a terrain, and to increase the classification precision from the motion data. Another approach we intend to explore is the inclusion of more drone flight data into the Machine Learning algorithm in use. For example, altitude data gives a good clue as to which information can be extracted from each pixel in the image, and the drone drift can influence the image stabilization. All this valuable information can be merged to increase precision and the resilience to unexpected behaviours and scenarios.

ACKNOWLEDGEMENTS

This work was partially funded by the FCT Strategic Program UID/EEA/00066/203 of the Center of Technologies and Systems (CTS) of UNINOVA – Institute for the Development of New Technologies. Lastly, this work would not have been possible without the support and commitment of several fellow colleagues and friends, namely: Ricardo Mendonça, Francisco Marques, André Lourenço, Eduardo Pinto and José Barata.

AUTHORS

João Pedro Carvalho – Computational Intelligence Group of CTS/UNINOVA, FCT, University NOVA of Lisbon, e-mail: jp.carvalho@uninova.pt. José Manuel Fonseca – Computational Intelligence Group of CTS/UNINOVA, FCT, University NOVA of Lisbon, e-mail: jmf@uninova.pt. André Damas Mora – Computational Intelligence Group of CTS/UNINOVA, FCT, University NOVA of Lisbon, e-mail: atm@uninova.pt.

REFERENCES

[1] Y. Bestaoui Sebbane, Intelligent Autonomy of UAVs: Advanced Missions and Future Use, number no. 3 in Chapman & Hall/CRC Artificial Intelligence and Robotics, Chapman and Hall/CRC: Boca Raton, FL, 2018.

[2] F. Ebadi and M. Norouzi, “Road Terrain detection and Classification algorithm based on the Color Feature extraction”. In: 2017 Artificial Intelligence and Robotics (IRANOPEN), 2017, 139–146, 10.1109/RIOS.2017.7956457. [3] G. Farnebäck, “Two-Frame Motion Estimation Based on Polynomial Expansion”. In: J. Bigun and T. Gustavsson, eds., Image Analysis, 2003, 363–370.

[4] Q. Feng, J. Liu, and J. Gong, “UAV Remote Sensing for Urban Vegetation Mapping Using Random Forest and Texture Analysis”, Remote Sensing, vol. 7, no. 1, 2015, 1074–1094, 10.3390/rs70101074.

[5] A. Giusti, J. Guzzi, D. C. Cireşan, F. He, J. P. Rodríguez, F. Fontana, M. Faessler, C. Forster, J. Schmidhuber, G. D. Caro, D. Scaramuzza, and L. M. Gambardella, “A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots”, IEEE Robotics and Automation Letters, vol. 1, no. 2, 2016, 661–667, 10.1109/LRA.2015.2509024.

[6] W. Gruszczyński, W. Matwij, and P. Ćwiąkała, “Comparison of low-altitude UAV photogrammetry with terrestrial laser scanning as data-source methods for terrain covered in low vegetation”, ISPRS Journal of Photogrammetry and Remote Sensing, vol. 126, 2017, 168–179, 10.1016/j.isprsjprs.2017.02.015.


[14] L. Wallace, A. Lucieer, Z. Malenovský, D. Turner, and P. Vopěnka, “Assessment of Forest Structure Using Two UAV Techniques: A Comparison of Airborne Laser Scanning and Structure from Motion (SfM) Point Clouds”, Forests, vol. 7, no. 3, 2016, 62, 10.3390/f7030062.

[15] W. Y. Yan, A. Shaker, and N. El-Ashmawy, “Urban land cover classi�ication using airborne LiDAR data: A review”, Remote Sensing of Environment, vol. 158, 2015, 295–310, 10.1016/j.rse.2014.11.001.

[7] B. Heung, H. C. Ho, J. Zhang, A. Knudby, C. E. Bulmer, and M. G. Schmidt, “An overview and comparison of machine-learning techniques for classification purposes in digital soil mapping”, Geoderma, vol. 265, 2016, 62–77, 10.1016/j.geoderma.2015.11.014. [8] Y. N. Khan, P. Komma, K. Bohlmann, and A. Zell, “Grid-based visual terrain classification for outdoor robots using local features”. In: 2011 IEEE Symposium on Computational Intelligence in Vehicles and Transportation Systems (CIVTS) Proceedings, 2011, 16–22, 10.1109/CIVTS.2011.5949534.

[9] J. P. Matos-Carvalho, J. M. Fonseca, and A. D. Mora, “UAV downwash dynamic texture features for terrain classification on autonomous navigation”. In: Annals of Computer Science and Information Systems, vol. 15, 2018, 1079–1083.

[10] A. Mora, T. M. A. Santos, S. Łukasik, J. M. N. Silva, A. J. Falcão, J. M. Fonseca, and R. A. Ribeiro, “Land Cover Classification from Multispectral Data Using Computational Intelligence Tools: A Comparative Study”, Information, vol. 8, no. 4, 2017, 147, 10.3390/info8040147. [11] M. Pietikäinen, A. Hadid, G. Zhao, and T. Ahonen, Computer Vision Using Local Binary Patterns, number Volume 40 in Computational Imaging and Vision, Springer: London, 2011.

[12] E. Pinto, F. Marques, R. Mendonça, A. Lourenço, P. Santana, and J. Barata, “An autonomous surface-aerial marsupial robotic team for riverine environmental monitoring: Benefiting from coordinated aerial, underwater, and surface level perception”. In: 2014 IEEE International Conference on Robotics and Biomimetics (ROBIO 2014), 2014, 443–450, 10.1109/ROBIO.2014.7090371. [13] R. Pombeiro, R. Mendonça, P. Rodrigues, F. Marques, A. Lourenço, E. Pinto, P. Santana, and J. Barata, “Water detection from downwash-induced optical flow for a multirotor UAV”. In: OCEANS 2015 - MTS/IEEE Washington, 2015, 1–6, 10.23919/OCEANS.2015.7404458.





Fig. 13. Examples of terrain tested for their classification using the static texture algorithm: with the input image (first column) it is possible to use the Gabor filter concept (second column) and therefore the width projection calculation (third column)





Fig. 14. Examples of terrain tested for their classification using the dynamic texture algorithm: with the input image (first column) it is possible to use the optical flow concept (second column) and therefore the travel distance calculation (third column)





