Mathematical Computation, March 2015, Volume 4, Issue 1, pp. 13-18

Novel Exponential Convergence of Networks on Time Scales

Yanxia Tan, Zhenkun Huang#
School of Science, Jimei University, Xiamen 361021, China
#Email: hzk94226@jmu.edu.cn
Abstract: In this paper, we analyze the exponential convergence of neural networks based on basic properties of the general exponential function on time scales. By employing the time scales calculus theory and the Lyapunov functional method, we establish sufficient conditions ensuring exponential convergence of the networks. Our stability results on time scales include, as special cases, the corresponding results for continuous-time and discrete-time networks.

Keywords: Neural Networks; Time Scales; Exponential Convergence
1 INTRODUCTION

It is well known that a variety of neural network models have been successfully applied to signal processing, image processing, associative memories, pattern classification and optimization [1-2]. In applications of neural networks, there is a shared requirement of raising the convergence speed of the network in order to cut down the time of neural computing, so it is meaningful to study the exponential convergence and exponential stability of neural networks. Some related results can be found in [4-5]. Stability results for discrete-time networks are important, and it is troublesome to study exponential convergence and exponential stability for continuous and discrete models separately. It is therefore significant to study networks on time scales, which unify the continuous and discrete situations. For the theory of time scales, we refer to Stefan Hilger's original work in [6]. The books by Bohner and Peterson [7-8] summarize and organize much of time scales calculus, and several authors have expounded on various aspects of this theory [9-11]. Consider the neural network described by the following dynamical equation on a time scale $\mathbb{T}$:
$$x_i^{\Delta}(t) = -d_i\,x_i(t) + \sum_{j=1}^{n} T_{ij}\,g_j(x_j(t)) + I_i, \quad i \in \mathcal{N}, \tag{1.1}$$

where $x_i(t): \mathbb{T} \to \mathbb{R}$, $(x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$, $\mathcal{N} := \{1, 2, \ldots, n\}$ and $d_i > 0$. $(T_{ij})_{n \times n}$ is a constant matrix and $g_i(s): \mathbb{R} \to \mathbb{R}$ are the neural input-output activations for all $s \in \mathbb{R}$. Throughout this paper, we always assume that $0 \in \mathbb{T}$ and $\sup \mathbb{T} = +\infty$, so that the interval $[0, \infty)_{\mathbb{T}} := [0, \infty) \cap \mathbb{T}$ is well defined.
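A dynamic equation like (1.1) can be simulated directly once the time scale is fixed: on right-dense intervals the state follows the ordinary differential equation, while at a right-scattered point $t$ the solution takes the exact delta step $x(\sigma(t)) = x(t) + \mu(t)\,x^{\Delta}(t)$. The following minimal sketch (ours, not part of the original; the parameters are illustrative and mirror the example of Section 4) integrates (1.1) on the time scale $P_{1,1} = \bigcup_{k \ge 0}[2k, 2k+1]$:

```python
import numpy as np

# Minimal sketch: simulate x^Delta(t) = -d x(t) + T g(x(t)) + I on the time
# scale P_{1,1}.  On each interval [2k, 2k+1] we Euler-step the ODE; at the
# right-scattered point t = 2k+1 (graininess mu = 1) we take the exact
# delta step x(sigma(t)) = x(t) + mu * x^Delta(t).

d = np.array([1.5, 0.75, 1.25])               # decay rates d_i > 0
T = -np.array([[1/3, 1/3, 3/2],
               [1/3, 1/2, 1/3],
               [1/3, 1/3, 3/2]])              # connection matrix (T_ij)
I = np.array([0.1, -0.2, 0.3])                # external inputs I_i

def g(x):
    """Piecewise-linear activation g_i(s) = (|s+1| - |s-1|)/2."""
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def delta_rhs(x):
    """Right-hand side of (1.1): x^Delta = -d x + T g(x) + I."""
    return -d * x + T @ g(x) + I

def simulate(x0, num_periods=10, h=1e-3):
    x = np.array(x0, dtype=float)
    traj = [(0.0, x.copy())]
    for k in range(num_periods):
        for _ in range(int(1.0 / h)):        # dense part [2k, 2k+1]: ODE
            x = x + h * delta_rhs(x)
        x = x + 1.0 * delta_rhs(x)           # scattered point: mu = 1 jump
        traj.append((2.0 * (k + 1), x.copy()))
    return traj

for t, x in simulate([0.3, 1.2, 3.2]):
    print(f"t = {t:5.1f}, x = {np.round(x, 4)}")
```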
2 SOME PRELIMINARIES ON TIME SCALES

Definition 1 A time scale $\mathbb{T}$ is an arbitrary nonempty closed subset of $\mathbb{R}$. The forward jump operator on $\mathbb{T}$ is defined by $\sigma(t) := \inf\{s \in \mathbb{T} : s > t\}$ for all $t \in \mathbb{T}$. The backward jump operator on $\mathbb{T}$ is defined by $\rho(t) := \sup\{s \in \mathbb{T} : s < t\}$ for all $t \in \mathbb{T}$, and the graininess function $\mu: \mathbb{T} \to [0, \infty)$ is defined by $\mu(t) := \sigma(t) - t$. The set $\mathbb{T}^k$ is derived from $\mathbb{T}$ as follows: if $\mathbb{T}$ has a left-scattered maximum $m$, then $\mathbb{T}^k = \mathbb{T} \setminus \{m\}$; otherwise $\mathbb{T}^k = \mathbb{T}$.

Definition 2 For a function $f: \mathbb{T} \to \mathbb{R}$ and $t \in \mathbb{T}^k$, the delta derivative is defined by

$$f^{\Delta}(t) = \frac{f(\sigma(t)) - f(t)}{\sigma(t) - t},$$

if $f$ is continuous at $t$ and $t$ is right-scattered. If $t$ is not right-scattered, then the derivative is defined by

$$f^{\Delta}(t) = \lim_{s \to t} \frac{f(\sigma(t)) - f(s)}{\sigma(t) - s} = \lim_{s \to t} \frac{f(t) - f(s)}{t - s},$$

provided this limit exists.
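To fix ideas, we record the two classical special cases (a standard worked example added here for orientation): for $\mathbb{T} = \mathbb{R}$ every point is right-dense and the delta derivative is the ordinary derivative, while for $\mathbb{T} = h\mathbb{Z}$ every point is right-scattered and it is the forward difference quotient:

$$\mathbb{T} = \mathbb{R}:\ f^{\Delta}(t) = f'(t); \qquad \mathbb{T} = h\mathbb{Z}\ (h > 0):\ \sigma(t) = t + h,\ \mu(t) = h,\ f^{\Delta}(t) = \frac{f(t + h) - f(t)}{h}.$$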
Definition 3 [7] Let $f: \mathbb{T} \to \mathbb{R}$ and $t \in \mathbb{T}^k$, and let $f^{\Delta}(t)$ be the delta derivative of $f$. $D^{+}f^{\Delta}(t)$ is said to be the Dini derivative of $f(t)$ if, for any $\varepsilon > 0$, there exists a right neighbourhood $U_{\varepsilon} \subset \mathbb{T}$ of $t$ such that

$$\frac{f(\sigma(t)) - f(s)}{\sigma(t) - s} < D^{+}f^{\Delta}(t) + \varepsilon, \quad s \in U_{\varepsilon},\ s > t.$$

A function $F: \mathbb{T} \to \mathbb{R}$ is called an antiderivative of $f: \mathbb{T} \to \mathbb{R}$ provided $F^{\Delta}(t) = f(t)$ holds for all $t \in \mathbb{T}^k$. Define the Cauchy integral of $f$ by $\int_a^b f(t)\,\Delta t = F(b) - F(a)$ for $a, b \in \mathbb{T}$.

Definition 4 $p: \mathbb{T} \to \mathbb{R}$ is said to be regressive provided $1 + \mu(t)p(t) \ne 0$ for all $t \in \mathbb{T}^k$. Denote by $\mathcal{R}$ the set of all regressive and rd-continuous functions, and define the set of all positively regressive functions by $\mathcal{R}^{+} := \{p \in \mathcal{R} : 1 + \mu(t)p(t) > 0 \text{ for all } t \in \mathbb{T}^k\}$.
For $h > 0$, we define the cylinder transformation $\xi_h: \mathbb{C}_h \to \mathbb{Z}_h$ by $\xi_h(z) = \frac{1}{h}\operatorname{Log}(1 + zh)$, where $\operatorname{Log}$ is the principal logarithm function. If $p \in \mathcal{R}$, then we define the exponential function by

$$e_p(t, s) = \exp\left(\int_s^t \xi_{\mu(\tau)}(p(\tau))\,\Delta\tau\right) \quad \text{for } s, t \in \mathbb{T}.$$

Definition 5 If $p, q \in \mathcal{R}$, we define the circle plus addition by $(p \oplus q)(t) := p(t) + q(t) + \mu(t)p(t)q(t)$ for all $t \in \mathbb{T}^k$, and the function $\ominus p$ is defined by $(\ominus p)(t) := -\dfrac{p(t)}{1 + \mu(t)p(t)}$ for all $t \in \mathbb{T}^k$.

Lemma 1 If $p \in \mathcal{R}$ and we fix $t_0 \in \mathbb{T}^k$, then the exponential function $e_p(\cdot, t_0)$ is the unique solution of the initial value problem $x^{\Delta} = p(t)x$, $x(t_0) = 1$, and

(i) $e_p(\sigma(t), s) = (1 + \mu(t)p(t))\,e_p(t, s)$;
(ii) $e_p(t, s)\,e_q(t, s) = e_{p \oplus q}(t, s)$.
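For orientation (a standard worked example added here, not part of the original text): on $\mathbb{T} = \mathbb{R}$ one has $\mu \equiv 0$, $\xi_0(z) = z$ and $e_p(t, s) = \exp\left(\int_s^t p(\tau)\,d\tau\right)$, while on $\mathbb{T} = \mathbb{Z}$ one has $\mu \equiv 1$ and, for a constant $p > -1$, $e_p(t, s) = (1 + p)^{t - s}$. In particular, the decay rate $e_{\ominus\lambda}(t, 0)$ used in Section 3 reduces to

$$e_{\ominus\lambda}(t, 0) = e^{-\lambda t} \ \ (\mathbb{T} = \mathbb{R}), \qquad e_{\ominus\lambda}(t, 0) = (1 + \lambda)^{-t} \ \ (\mathbb{T} = \mathbb{Z}).$$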
The following is the comparison theorem in [7].

Lemma 2 Let $y, f \in C_{rd}$ and $p \in \mathcal{R}^{+}$. Then $y^{\Delta}(t) \le p(t)y(t) + f(t)$ implies

$$y(t) \le y(t_0)\,e_p(t, t_0) + \int_{t_0}^{t} e_p(t, \sigma(\tau))\,f(\tau)\,\Delta\tau \quad \text{for all } t \in \mathbb{T}^k,\ t \ge t_0.$$
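On $\mathbb{T} = \mathbb{R}$ (a standard reduction added for orientation), $\sigma(\tau) = \tau$ and, for constant $p$, $e_p(t, s) = e^{p(t - s)}$, so Lemma 2 is the classical Gronwall-type variation-of-constants estimate:

$$y'(t) \le p\,y(t) + f(t) \ \Longrightarrow\ y(t) \le y(t_0)\,e^{p(t - t_0)} + \int_{t_0}^{t} e^{p(t - \tau)}\,f(\tau)\,d\tau.$$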
3 EXPONENTIAL CONVERGENCE

Throughout this paper we assume that $g_i(\cdot)$ is strictly monotone and that there exist constants $k_i > 0$ ($i \in \mathcal{N}$) such that, for $u, v \in \mathbb{T}^k$ with $u \ne v$,

$$(A_1):\quad \frac{g_i^{\Delta}(u)}{2k_i}\left[\frac{g_i(\sigma(u)) - g_i(v)}{g_i(u) - g_i(v)} + 1\right] \le 1.$$

Lemma 3 Assume that $(A_1)$ holds. For all $u, v \in \mathbb{T}^k$, we have

$$\int_v^u [g_i(s) - g_i(v)]\,\Delta s \ge \frac{1}{2k_i}\,[g_i(u) - g_i(v)]^2, \quad i \in \mathcal{N}.$$
Proof: Define auxiliary functions as follows:

$$E_i(u) = \int_v^u [g_i(s) - g_i(v)]\,\Delta s - \frac{1}{2k_i}\,[g_i(u) - g_i(v)]^2, \quad i \in \mathcal{N}.$$

Then we get

$$E_i^{\Delta}(u) = [g_i(u) - g_i(v)]\left\{1 - \frac{g_i^{\Delta}(u)}{2k_i}\left[\frac{g_i(\sigma(u)) - g_i(v)}{g_i(u) - g_i(v)} + 1\right]\right\}, \quad u \ne v.$$

It follows from $(A_1)$ that $E_i^{\Delta}(u) \ge 0$ if $u > v$, $E_i^{\Delta}(u) \le 0$ if $u < v$, and $E_i(u) = 0$ if $u = v$. Hence $u = v$ is the minimum point of $E_i(u)$, so $E_i(u) \ge E_i(v) = 0$ for any $u$. Therefore, we have

$$\int_v^u [g_i(s) - g_i(v)]\,\Delta s \ge \frac{1}{2k_i}\,[g_i(u) - g_i(v)]^2, \quad i \in \mathcal{N}.$$

This completes the proof.
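In the continuous case $\mathbb{T} = \mathbb{R}$ (a worked reduction added for orientation), $\sigma(u) = u$, the bracket in $(A_1)$ equals $2$, and $(A_1)$ reduces to the Lipschitz-type bound $g_i'(u) \le k_i$; Lemma 3 then becomes the classical inequality for increasing activations,

$$\int_v^u [g_i(s) - g_i(v)]\,ds \ge \frac{1}{2k_i}\,[g_i(u) - g_i(v)]^2.$$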
Lemma 4 Assume that $\lambda > 0$ and $t_0 \in \mathbb{T}^k$ is fixed. Then there exists a constant $\gamma \in (0, \lambda)$ such that $[e_{\ominus\lambda}(t, t_0)]^{\frac{1}{2}} \le e_{\ominus\gamma}(t, t_0)$ for all $t \in \mathbb{T}^k$ with $t \ge t_0$.
Proof: Let $[e_{\ominus\lambda}(t, t_0)]^{\frac{1}{2}} = e_{\ominus\gamma}(t, t_0)$, so we have $e_{\ominus\lambda}(t, t_0) = e_{\ominus\gamma}(t, t_0)\,e_{\ominus\gamma}(t, t_0) = e_{(\ominus\gamma) \oplus (\ominus\gamma)}(t, t_0)$, i.e., $\ominus\lambda = (\ominus\gamma) \oplus (\ominus\gamma)$. It follows from Definition 5 that $[1 + \mu(t)(\ominus\gamma)]^2 = 1 + \mu(t)(\ominus\lambda)$, which leads to $\mu(t)\gamma^2 + 2\gamma = \lambda$, and we have two cases to discuss.

Case 1: $\mu(t) = 0$. We have $\gamma = \frac{\lambda}{2}$; obviously, $\gamma \in (0, \lambda)$.

Case 2: $\mu(t) > 0$. We have $0 < \gamma = \frac{\sqrt{1 + \mu(t)\lambda} - 1}{\mu(t)} < \lambda$, so $\gamma \in (0, \lambda)$ holds.

It follows from Case 1 and Case 2 that such a $\gamma \in (0, \lambda)$ exists. This completes the proof.
Lemma 5 [3] If there exist constants $\xi_i > 0$ ($i \in \mathcal{N}$) such that the matrix

$$\left(\frac{\xi_i T_{ij} + \xi_j T_{ji}}{2} - \frac{\xi_i d_i}{k_i}\,\delta_{ij}\right)_{n \times n}, \qquad \delta_{ij} = \begin{cases} 1, & i = j,\\ 0, & i \ne j, \end{cases}$$

is negative definite, then (1.1) has a unique equilibrium point.
Theorem 1 If there exists a $\lambda \in \left(0, \min_{1 \le i \le n}\{d_i\}\right)$ such that the matrix

$$(A_2):\quad T_{\lambda} := \left(\frac{T_{ij} + T_{ji}}{2} - \frac{d_i - \lambda}{k_i}\,\delta_{ij}\right)_{n \times n}, \qquad \delta_{ij} = \begin{cases} 1, & i = j,\\ 0, & i \ne j, \end{cases}$$

is negative semi-definite, then (1.1) has a unique equilibrium $x^* = (x_1^*, \ldots, x_n^*)^T$ which is globally exponentially stable.
Proof: Since $T_{\lambda}$ is negative semi-definite and $\lambda < d_i$, the matrix

$$\left(\frac{T_{ij} + T_{ji}}{2} - \frac{d_i}{k_i}\,\delta_{ij}\right)_{n \times n} = T_{\lambda} - \left(\frac{\lambda}{k_i}\,\delta_{ij}\right)_{n \times n}$$

is negative definite (a negative semi-definite matrix minus a positive definite diagonal matrix), so by Lemma 5 (with $\xi_i = 1$), (1.1) has a unique equilibrium point $x^* = (x_1^*, \ldots, x_n^*)^T$. Next we define a function $V(t) = \sum_{i=1}^n v_i(t)$, where

$$v_i(t) = \int_{x_i^*}^{x_i(t)} [g_i(\tau) - g_i(x_i^*)]\,\Delta\tau, \quad i \in \mathcal{N}.$$

It follows from (1.1) that

$$[x_i(t) - x_i^*]^{\Delta} = -d_i\,[x_i(t) - x_i^*] + \sum_{j=1}^n T_{ij}\,[g_j(x_j(t)) - g_j(x_j^*)], \quad t \ge 0. \tag{3.1}$$

For the delta derivative of $v_i(t)$, we have two cases to discuss.

Case 1: $t$ is right-scattered, i.e., $\mu(t) > 0$.

(i) If $x_i(t) = x_i(\sigma(t))$, then $x_i^{\Delta}(t) = 0$ and $v_i^{\Delta}(t) = 0 = [g_i(x_i(t)) - g_i(x_i^*)]\,x_i^{\Delta}(t)$, $i \in \mathcal{N}$.

(ii) If $x_i(t) \ne x_i(\sigma(t))$, then, due to $\int_t^{\sigma(t)} f(s)\,\Delta s = f(t)\,\mu(t)$, we have

$$v_i^{\Delta}(t) = \frac{\int_{x_i^*}^{x_i(\sigma(t))} [g_i(\tau) - g_i(x_i^*)]\,\Delta\tau - \int_{x_i^*}^{x_i(t)} [g_i(\tau) - g_i(x_i^*)]\,\Delta\tau}{\sigma(t) - t} = \frac{\int_{x_i(t)}^{x_i(\sigma(t))} [g_i(\tau) - g_i(x_i^*)]\,\Delta\tau}{\sigma(t) - t} \le [g_i(x_i(t)) - g_i(x_i^*)]\,x_i^{\Delta}(t).$$

Case 2: $t$ is right-dense, i.e., $\mu(t) = 0$. It is obvious that $v_i^{\Delta}(t) = v_i'(t)$, and we get

$$v_i^{\Delta}(t) = v_i'(t) = [g_i(x_i(t)) - g_i(x_i^*)]\,x_i'(t) = [g_i(x_i(t)) - g_i(x_i^*)]\,x_i^{\Delta}(t).$$
Together with Case 1 and Case 2, we have

$$V^{\Delta}(t) \le \sum_{i=1}^n [g_i(x_i(t)) - g_i(x_i^*)]\,[x_i(t) - x_i^*]^{\Delta} \le \sum_{i=1}^n \sum_{j=1}^n \left(\frac{T_{ij} + T_{ji}}{2} - \frac{d_i}{k_i}\,\delta_{ij}\right)[g_i(x_i(t)) - g_i(x_i^*)]\,[g_j(x_j(t)) - g_j(x_j^*)].$$

It follows from Lemma 2 that $V(t) \le V(0)\,e_{\ominus\lambda}(t, 0)$, where $t \ge 0$. From the definition of $V(t)$, it follows that

$$V(0) \le \sum_{i=1}^n k_i\,[x_i(0) - x_i^*]^2 \le \|x(0) - x^*\|^2 \sum_{i=1}^n k_i$$

and, by Lemma 3,

$$v_i(t) = \int_{x_i^*}^{x_i(t)} [g_i(\tau) - g_i(x_i^*)]\,\Delta\tau \ge \frac{1}{2k_i}\,[g_i(x_i(t)) - g_i(x_i^*)]^2, \quad i \in \mathcal{N}.$$

Hence

$$|g_i(x_i(t)) - g_i(x_i^*)| \le \|x(0) - x^*\|\left(2k_i \sum_{r=1}^n k_r\right)^{\frac{1}{2}} [e_{\ominus\lambda}(t, 0)]^{\frac{1}{2}} \quad \text{for all } t \ge 0.$$
Then by (3.1), we have

$$D^{+}|x_i(t) - x_i^*|^{\Delta} \le -d_i\,|x_i(t) - x_i^*| + \|x(0) - x^*\| \sum_{j=1}^n |T_{ij}| \left(2k_j \sum_{r=1}^n k_r\right)^{\frac{1}{2}} [e_{\ominus\lambda}(t, 0)]^{\frac{1}{2}}.$$

Again by Lemma 2 (noting that $-d_i \le \ominus d_i$ and $\ominus d_i \in \mathcal{R}^{+}$), we have

$$|x_i(t) - x_i^*| \le |x_i(0) - x_i^*|\,e_{\ominus d_i}(t, 0) + \|x(0) - x^*\| \sum_{j=1}^n |T_{ij}| \left(2k_j \sum_{r=1}^n k_r\right)^{\frac{1}{2}} \int_0^t [e_{\ominus\lambda}(s, 0)]^{\frac{1}{2}}\,e_{\ominus d_i}(t, \sigma(s))\,\Delta s$$

for all $t \ge 0$. From Lemma 4, we know that $[e_{\ominus\lambda}(t, 0)]^{\frac{1}{2}} \le e_{\ominus\gamma}(t, 0)$, where $0 < \gamma < \lambda < d_i$ ($i \in \mathcal{N}$). Then we have the following:

$$\int_0^t [e_{\ominus\lambda}(s, 0)]^{\frac{1}{2}}\,e_{\ominus d_i}(t, \sigma(s))\,\Delta s \le \frac{e_{\ominus d_i}(t, 0)\,[e_{(\ominus\gamma) \oplus d_i}(t, 0) - 1]}{(\ominus\gamma) \oplus d_i} \le \frac{e_{\ominus d_i}(t, 0)\,e_{(\ominus\gamma) \oplus d_i}(t, 0)}{d_i - \gamma} = \frac{e_{\ominus\gamma}(t, 0)}{d_i - \gamma}.$$

Together with $e_{\ominus d_i}(t, 0) \le e_{\ominus\gamma}(t, 0)$, we have

$$|x_i(t) - x_i^*| \le \|x(0) - x^*\| \left[1 + \frac{1}{d_i - \gamma} \sum_{j=1}^n |T_{ij}| \left(2k_j \sum_{r=1}^n k_r\right)^{\frac{1}{2}}\right] e_{\ominus\gamma}(t, 0) = M_i\,\|x(0) - x^*\|\,e_{\ominus\gamma}(t, 0).$$

Let $M = \sum_{i=1}^n M_i$, where $M_i = 1 + \frac{1}{d_i - \gamma}\sum_{j=1}^n |T_{ij}| \left(2k_j \sum_{r=1}^n k_r\right)^{\frac{1}{2}}$. So we have

$$\sum_{i=1}^n |x_i(t) - x_i^*| \le M\,\|x(0) - x^*\|\,e_{\ominus\gamma}(t, 0).$$

This completes the proof.
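Before turning to the example, note that the rate function $e_{\ominus\gamma}(t, 0)$ is easy to evaluate on concrete time scales. The sketch below (ours, for illustration) computes it on $P_{1,1} = \bigcup_{k \ge 0}[2k, 2k+1]$, the time scale used in the next section: over each dense interval of length $1$ the factor shrinks by $e^{-\gamma}$, and by Lemma 1(i) crossing a right-scattered point of graininess $\mu = 1$ multiplies it by $1 + \mu\,(\ominus\gamma) = \frac{1}{1 + \gamma}$.

```python
import numpy as np

# Decay factor e_{ominus gamma}(t, 0) on P_{1,1}, evaluated at t = 2k:
# each period contributes exp(-gamma) (dense part of length 1) times
# 1/(1 + gamma) (the jump across the gap, by Lemma 1(i)).

def e_ominus_gamma(num_periods, gamma):
    return (np.exp(-gamma) / (1.0 + gamma)) ** num_periods

gamma = 0.25   # any gamma in (0, lambda) provided by Lemma 4
for k in (0, 2, 4, 6, 8, 10):
    print(f"t = {2 * k:3d}: e_ominus_gamma = {e_ominus_gamma(k, gamma):.3e}")
```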
4 EXAMPLE

We present a three-dimensional neural network on the time scale $\mathbb{T} = P_{1,1} := \bigcup_{k=0}^{\infty} [2k, 2k+1]$ of the following form:

$$\begin{cases} x_1^{\Delta}(t) = -\frac{3}{2}\,x_1(t) + T_{11} f(x_1(t)) + T_{12} f(x_2(t)) + T_{13} f(x_3(t)) + I_1,\\ x_2^{\Delta}(t) = -\frac{3}{4}\,x_2(t) + T_{21} f(x_1(t)) + T_{22} f(x_2(t)) + T_{23} f(x_3(t)) + I_2,\\ x_3^{\Delta}(t) = -\frac{5}{4}\,x_3(t) + T_{31} f(x_1(t)) + T_{32} f(x_2(t)) + T_{33} f(x_3(t)) + I_3, \end{cases} \tag{4.1}$$

where $t \in P_{1,1}$, $T_{ij}$ ($i, j = 1, 2, 3$) and $I_i$ ($i = 1, 2, 3$) are constants, and $f(s) = \frac{1}{2}\left(|s + 1| - |s - 1|\right)$. Choose $\lambda = \frac{1}{2}$, and assume that

$$T_{11} = -\frac{1}{3},\quad T_{12} = -\frac{1}{3},\quad T_{13} = -\frac{3}{2},\quad T_{21} = -\frac{1}{3},\quad T_{22} = -\frac{1}{2},\quad T_{23} = -\frac{1}{3},\quad T_{31} = -\frac{1}{3},\quad T_{32} = -\frac{1}{3},\quad T_{33} = -\frac{3}{2}.$$

Then the matrix $T_{1/2}$ is negative semi-definite. The initial condition associated with (4.1) is $(x_1(0), x_2(0), x_3(0))^T = (0.3, 1.2, 3.2)^T$. By Theorem 1, the neural network (4.1) has a unique equilibrium which is globally exponentially stable; see Figure 1.
FIG. 1 EXPONENTIAL CONVERGENCE OF (4.1)
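The semi-definiteness condition $(A_2)$ for this example can also be checked numerically. A short script (ours, not from the paper; we take $k_i = 1$, the Lipschitz constant of $f$) tests the eigenvalues of $T_{1/2}$:

```python
import numpy as np

# Check (A_2) for example (4.1): T_lambda = (T_ij + T_ji)/2 - (d_i - lambda)/k_i
# on the diagonal must be negative semi-definite, with lambda < min_i d_i.

d = np.array([3/2, 3/4, 5/4])                 # decay rates of (4.1)
k = np.ones(3)                                # Lipschitz constants of f
lam = 0.5                                     # lambda = 1/2 < min(d) = 3/4
T = -np.array([[1/3, 1/3, 3/2],
               [1/3, 1/2, 1/3],
               [1/3, 1/3, 3/2]])              # weights assumed above

assert lam < d.min()
T_lam = (T + T.T) / 2.0 - np.diag((d - lam) / k)
eigs = np.linalg.eigvalsh(T_lam)
print("eigenvalues of T_lambda:", np.round(eigs, 4))
print("negative semi-definite:", bool(np.all(eigs <= 1e-12)))
```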
5 CONCLUSIONS

By applying the time scale calculus theory and the Lyapunov functional method, we obtain results ensuring exponential convergence of a class of neural networks on time scales. The stability criteria can be easily checked in practice by simple algebraic methods and are applicable to both continuous-time and discrete-time network systems.
ACKNOWLEDGEMENTS

This research was supported by the National Natural Science Foundation of China under Grant 11101187, the Foundation for Young Professors of Jimei University, the Excellent Youth Foundation of Fujian Province under Grant 2012J06001, NCETFJ under Grant JA11144, and the Foundation of Fujian Higher Education under Grants JA10184 and JA11154.
REFERENCES
[1] L. Chua, L. Yang, Cellular neural networks: Theory, IEEE Trans. Circuits Syst. 35 (1988) 1257-1272
[2] L. Chua, L. Yang, Cellular neural networks: Applications, IEEE Trans. Circuits Syst. 35 (1988) 1273-1290
[3] D. Tank, J. Hopfield, Simple neural optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit, IEEE Trans. Circuits Syst. 33 (1986) 533-541
[4] S. Guo, L. Huang, Exponential stability and periodic solutions of neural networks with continuously distributed delays, Phys. Rev. E 67 (2003) 001902
[5] Q. Zhang, X. Wei, J. Xu, Global exponential stability for nonautonomous cellular networks with delays, Phys. Lett. A 315 (2006) 153-160
[6] S. Hilger, Ein Maßkettenkalkül mit Anwendung auf Zentrumsmannigfaltigkeiten, PhD thesis, Universität Würzburg, 1988
[7] M. Bohner, A. Peterson, Dynamic Equations on Time Scales: An Introduction with Applications, Birkhäuser, Boston, 2001
[8] M. Bohner, A. Peterson, Advances in Dynamic Equations on Time Scales, Birkhäuser, Boston, 2003
[9] Z. Huang, Y. Raffoul, C. Cheng, Scale-limited activating sets and multiperiodicity for threshold-linear networks on time scales, IEEE Trans. Cybern. 44 (2014) 488-499
[10] A. Chen, D. Du, Global exponential stability of delayed BAM network on time scales, Neurocomputing 71 (2008) 3582-3588
[11] Y. Li, X. Chen, L. Zhao, Stability and existence of periodic solutions to delayed Cohen-Grossberg BAM neural networks with impulses on time scales, Neurocomputing 72 (2009) 1621-1630
AUTHORS

Yanxia Tan is a master student of Jimei University, Xiamen, China. Her research interests are neural networks, applied mathematics, etc.

Zhenkun Huang is currently a Professor in the School of Science, Jimei University, Xiamen, China. His current research interests include nonlinear systems, neural networks, and stability analysis in dynamic systems including continuous, discrete, and impulsive systems.