International Journal of Mathematics and Statistics Invention (IJMSI) | E-ISSN: 2321-4767, P-ISSN: 2321-4759 | www.ijmsi.org | Volume 2, Issue 8, August 2014, PP 36-46
On Optimal Computational Algorithm and Transition Cardinalities for Solution Matrices of Single-Delay Linear Neutral Scalar Differential Equations

Ukwu Chukwunenye
Department of Mathematics, University of Jos, P.M.B. 2084, Jos, Nigeria
ABSTRACT: This paper establishes an optimal computational algorithm for the solution matrices of single-delay autonomous linear neutral equations, based on the established expressions for such matrices on a horizon of length equal to five times the delay. The development of the solution matrices exploits the continuity of these matrices for positive time, the method of steps, a change of variables, and the theory of linear difference equations to obtain these matrices on successive intervals of length equal to the delay h.
KEYWORDS: Algorithm, Equations, Matrices, Neutral, Solution.

I. INTRODUCTION
Solution matrices are integral components of variation-of-constants formulas in the computation of solutions of linear and perturbed linear functional differential equations, Ukwu [1]. Quite curiously, however, no other author appears to have made a serious attempt to investigate the existence or otherwise of general expressions for such matrices for various classes of these equations. Effort has usually focused on the single-delay model, and the approach has been to start from the interval [0, h], compute the solution matrices and solutions for given problem instances, and then use the method of steps to extend these to the intervals [kh, (k+1)h], for positive integral k, not exceeding 2 for the most part. Such an approach is rather restrictive and doomed to failure in terms of structure for arbitrary k. In other words, it fails to address the issue of the structure of solution matrices and solutions, which is quite vital for real-world applications. With a view to addressing such shortcomings, Ukwu and Garba [2] blazed the trail by considering the class of double-delay scalar differential equations:
x'(t) = a x(t) + b x(t-h) + c x(t-2h), t \in \mathbb{R},

where a, b and c are arbitrary real constants. By deploying ingenious combinations of summation notations, the multinomial distribution, greatest integer functions, change-of-variables techniques, multiple integrals, as well as the method of steps, that paper derived the following optimal expressions for the solution matrices:
Y(t) = e^{at}, t \in J_0;

Y(t) = e^{at} + \sum_{i=1}^{k} \frac{b^i (t-ih)^i}{i!} e^{a(t-ih)} + \sum_{j=1}^{\lfloor k/2 \rfloor} \sum_{i=0}^{k-2j} \frac{b^i c^j}{i!\, j!} (t-[i+2j]h)^{i+j} e^{a(t-[i+2j]h)}; \quad t \in J_k, k \ge 1,
where J_k = [kh, (k+1)h], k \in \{0, 1, \ldots\}, \lfloor \cdot \rfloor denotes the greatest integer function, and Y(t) denotes a generic solution matrix of the above class of equations for t \in \mathbb{R}. See also [3]. This article makes a positive contribution to knowledge by devising a computational algorithm for the solution matrices of linear neutral equations.
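The double-delay expression above lends itself to direct numerical checking. The sketch below is illustrative only (the parameter values a, b, c, h and the test point are arbitrary choices, not from the paper); it evaluates the closed form and confirms, via a central-difference estimate of Y'(t) at a point interior to J_3, that the expression satisfies x'(t) = a x(t) + b x(t-2h) ... more precisely, the full equation x'(t) = a x(t) + b x(t-h) + c x(t-2h), away from the knots:

```python
import math

def Y_double(t, a, b, c, h):
    """Closed-form solution matrix of x'(t) = a x(t) + b x(t-h) + c x(t-2h),
    following the double-delay expression quoted above (a sketch)."""
    if t < 0:
        return 0.0
    k = int(t // h)                  # t lies in J_k = [kh, (k+1)h]
    total = math.exp(a * t)
    for i in range(1, k + 1):        # single-delay-type terms
        total += b**i * (t - i*h)**i / math.factorial(i) * math.exp(a*(t - i*h))
    for j in range(1, k // 2 + 1):   # mixed b-c terms
        for i in range(0, k - 2*j + 1):
            total += (b**i * c**j / (math.factorial(i) * math.factorial(j))
                      * (t - (i + 2*j)*h)**(i + j)
                      * math.exp(a*(t - (i + 2*j)*h)))
    return total

# Residual of Y'(t) = a Y(t) + b Y(t-h) + c Y(t-2h), with Y' estimated by a
# central difference at a point interior to J_3 (the formula is smooth there).
a, b, c, h = -0.4, 0.7, 0.3, 1.0
t, eps = 3.5, 1e-6
lhs = (Y_double(t + eps, a, b, c, h) - Y_double(t - eps, a, b, c, h)) / (2*eps)
rhs = (a*Y_double(t, a, b, c, h) + b*Y_double(t - h, a, b, c, h)
       + c*Y_double(t - 2*h, a, b, c, h))
print(abs(lhs - rhs) < 1e-6)
```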
II. RESULTS AND DISCUSSIONS

Observe that the above piece-wise expressions for Y(t) may be restated more compactly in the form
Y(t) = \sum_{j=0}^{\lfloor k/2 \rfloor} \sum_{i=0}^{k-2j} d_{ij}\, (t-[i+2j]h)^{i+j}\, e^{a(t-[i+2j]h)}\, \operatorname{sgn}(\max\{k+1-i-2j, 0\}); \quad t \in J_k, k \ge 0,

where d_{ij} = \frac{b^i c^j}{i!\, j!}, j \in \{0, \ldots, \lfloor k/2 \rfloor\}, i \in \{0, \ldots, k-2j\}.

Let Y_{k+i}(t + ih) be a solution matrix of
x'(t) = a_{-1} x'(t-h) + a_0 x(t) + a_1 x(t-h),   (1)

on the interval J_{k+i} = [(k+i)h, (k+1+i)h], k \in \{0, 1, \ldots\}, i \in \{0, 1, 2\}, where

Y(t) = 1 for t = 0; Y(t) = 0 for t < 0.   (2)

Note that Y(t) is a generic solution matrix for any t \in \mathbb{R}. The coefficients a_{-1}, a_0, a_1 and the associated functions are all from the real domain. The following theorem is preliminary to the construction of an optimal computational algorithm for Y(t).
2.1 Theorem on Y(t)

For t \in J_k, k \in \{0, 1, 2, 3, 4, 5\},

Y(t) = e^{a_0 t}
+ \sum_{i=1}^{k} \frac{(a_1+a_{-1}a_0)^i (t-ih)^i}{i!} e^{a_0(t-ih)} \operatorname{sgn}(\max\{0, k\})
+ \sum_{i=1}^{k-1} a_{-1}^i (a_1+a_{-1}a_0)(t-[i+1]h) e^{a_0(t-[i+1]h)} \operatorname{sgn}(\max\{0, k-1\})
+ a_{-1}(a_1+a_{-1}a_0)^2 (t-3h)^2 e^{a_0(t-3h)} \operatorname{sgn}(\max\{0, k-2\})
+ \left[ \frac{a_{-1}(a_1+a_{-1}a_0)^3 (t-4h)^3}{2} + \frac{3 a_{-1}^2 (a_1+a_{-1}a_0)^2 (t-4h)^2}{2} \right] e^{a_0(t-4h)} \operatorname{sgn}(\max\{0, k-3\})
+ \left[ \frac{a_{-1}(a_1+a_{-1}a_0)^4 (t-5h)^4}{6} + a_{-1}^2 (a_1+a_{-1}a_0)^3 (t-5h)^3 + 2 a_{-1}^3 (a_1+a_{-1}a_0)^2 (t-5h)^2 \right] e^{a_0(t-5h)} \operatorname{sgn}(\max\{0, k-4\}).
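Before proving the theorem, it can be checked numerically. The sketch below (ours, for illustration; the parameter values are arbitrary) transcribes the theorem's expression into code, estimates Y'(t) by central differences at points interior to the intervals, and verifies that the residual of x'(t) = a_{-1} x'(t-h) + a_0 x(t) + a_1 x(t-h) vanishes to rounding accuracy on J_1 through J_5:

```python
import math

def Y(t, am1, a0, a1, h):
    """Theorem 2.1: closed-form solution matrix on J_k, 0 <= k <= 5."""
    if t < 0:
        return 0.0
    k = min(int(t // h), 5)
    b = a1 + am1 * a0                 # the recurring factor (a1 + a_{-1} a0)
    y = math.exp(a0 * t)
    for i in range(1, k + 1):         # first summation
        y += b**i * (t - i*h)**i / math.factorial(i) * math.exp(a0 * (t - i*h))
    for i in range(1, k):             # second summation
        y += am1**i * b * (t - (i+1)*h) * math.exp(a0 * (t - (i+1)*h))
    if k >= 3:                        # sgn(max{0, k-2}) switch
        y += am1 * b**2 * (t - 3*h)**2 * math.exp(a0 * (t - 3*h))
    if k >= 4:                        # sgn(max{0, k-3}) switch
        y += (am1 * b**3 * (t - 4*h)**3 / 2
              + 1.5 * am1**2 * b**2 * (t - 4*h)**2) * math.exp(a0 * (t - 4*h))
    if k >= 5:                        # sgn(max{0, k-4}) switch
        y += (am1 * b**4 * (t - 5*h)**4 / 6
              + am1**2 * b**3 * (t - 5*h)**3
              + 2 * am1**3 * b**2 * (t - 5*h)**2) * math.exp(a0 * (t - 5*h))
    return y

am1, a0, a1, h = 0.5, -0.3, 0.8, 1.0

def dY(t, eps=1e-6):                  # central-difference estimate of Y'
    return (Y(t + eps, am1, a0, a1, h) - Y(t - eps, am1, a0, a1, h)) / (2 * eps)

for t in (1.5, 2.5, 3.6, 4.6, 5.5):   # one interior point per interval J_1..J_5
    res = dY(t) - (am1 * dY(t - h) + a0 * Y(t, am1, a0, a1, h)
                   + a1 * Y(t - h, am1, a0, a1, h))
    assert abs(res) < 1e-5
print("neutral equation satisfied on J_1..J_5")
```

The same function also exhibits the continuity of Y across the knots h, 2h, ..., 5h, which the proof below exploits.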
Proof
On [0, h] = J_0, Y(t-h) = 0, so Y'(t) = a_0 Y(t) a.e. on [0, h]. Therefore Y(t) = C e^{a_0 t}; Y(0) = 1 forces C = 1, so Y(t) = e^{a_0 t} on J_0 = [0, h], as in the single- and double-delay systems.

Consider the interval (h, 2h). Then t-h \in [0, h], so Y(t-h) = e^{a_0(t-h)} and Y'(t-h) = a_0 e^{a_0(t-h)}. Hence

Y'(t) = a_0 Y(t) + (a_1 + a_{-1}a_0) e^{a_0(t-h)} \Rightarrow \frac{d}{dt}\left[ e^{-a_0 t} Y(t) \right] = e^{-a_0 t}\left[ Y'(t) - a_0 Y(t) \right] = (a_1 + a_{-1}a_0) e^{-a_0 h}.

Integrating from h to t,

e^{-a_0 t} Y(t) - e^{-a_0 h} Y(h) = \int_h^t (a_1 + a_{-1}a_0) e^{-a_0 h}\, ds = (a_1 + a_{-1}a_0)(t-h) e^{-a_0 h}.

Since Y(h) = e^{a_0 h},

Y(t) = e^{a_0 t} + (a_1 + a_{-1}a_0)(t-h) e^{a_0(t-h)}, on J_1.
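The method of steps used here reduces the neutral equation on J_1 to an ordinary linear ODE with known forcing, so any standard integrator should reproduce the closed form just obtained. A sketch (illustrative parameter values, not from the paper) integrates Y'(t) = a_0 Y(t) + (a_1 + a_{-1}a_0) e^{a_0(t-h)} with classical RK4 and compares Y(2h) with the formula:

```python
import math

# Method of steps on J_1 = [h, 2h]: with Y known on J_0 (Y = e^{a0 t}),
# the neutral equation reduces to the ordinary ODE
#   Y'(t) = a0 Y(t) + (a1 + a_{-1} a0) e^{a0 (t-h)}.
am1, a0, a1, h = 0.5, -0.3, 0.8, 1.0
beta = a1 + am1 * a0

def f(t, y):
    return a0 * y + beta * math.exp(a0 * (t - h))

t, y, n = h, math.exp(a0 * h), 1000       # initial condition Y(h) = e^{a0 h}
dt = h / n
for _ in range(n):                        # classical 4th-order Runge-Kutta
    k1 = f(t, y)
    k2 = f(t + dt/2, y + dt*k1/2)
    k3 = f(t + dt/2, y + dt*k2/2)
    k4 = f(t + dt, y + dt*k3)
    y += dt * (k1 + 2*k2 + 2*k3 + k4) / 6
    t += dt

closed = math.exp(a0 * 2*h) + beta * h * math.exp(a0 * h)  # Y(2h) from the J_1 formula
print(abs(y - closed) < 1e-9)
```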
t \in J_2 \Rightarrow t-h \in J_1 \Rightarrow Y(t-h) = e^{a_0(t-h)} + (a_1+a_{-1}a_0)(t-2h) e^{a_0(t-2h)} on J_2, and

\frac{d}{dt}\left[ e^{-a_0 t} Y(t) \right] = e^{-a_0 t}\left[ a_{-1} Y'(t-h) + a_1 Y(t-h) \right] a.e.

On J_k, k \ge 2, the relation

Y(t) = e^{a_0(t-kh)} Y(kh) + \int_{kh}^{t} e^{a_0(t-s)} \left[ a_{-1} Y'(s-h) + a_1 Y(s-h) \right] ds   (3)

holds. Moreover 2h \in J_1, so Y(2h) = e^{2a_0 h} + (a_1+a_{-1}a_0) h e^{a_0 h}.
Therefore, for t \in J_2,

Y(t) = e^{a_0(t-2h)}\left[ e^{2a_0 h} + (a_1+a_{-1}a_0) h e^{a_0 h} \right]
+ \int_{2h}^{t} e^{a_0(t-s)} a_1 \left[ e^{a_0(s-h)} + (a_1+a_{-1}a_0)(s-2h) e^{a_0(s-2h)} \right] ds
+ \int_{2h}^{t} e^{a_0(t-s)} a_{-1} \frac{d}{ds}\left[ e^{a_0(s-h)} + (a_1+a_{-1}a_0)(s-2h) e^{a_0(s-2h)} \right] ds.

Evaluating the integrals and collecting like terms,

Y(t) = e^{a_0 t} + (a_1+a_{-1}a_0)(t-h) e^{a_0(t-h)} + a_{-1}(a_1+a_{-1}a_0)(t-2h) e^{a_0(t-2h)} + \frac{(a_1+a_{-1}a_0)^2 (t-2h)^2}{2} e^{a_0(t-2h)}; \quad t \in J_2.
Thus the theorem is established for t \in J_k, k = 0, 1, 2; needless to say, \sum_{i=i_1}^{i_2}(\cdot) = 0 if i_2 < i_1. Note that the upper limit k-1 in the second summation could be replaced explicitly by \max\{1, k-1\}.
Now consider the interval J_3: s, t \in J_3 \Rightarrow 3h \in J_2 \cap J_3 = \{3h\} and s-h \in J_2; hence

Y(3h) = e^{3a_0 h} + 2(a_1+a_{-1}a_0) h e^{2a_0 h} + a_{-1}(a_1+a_{-1}a_0) h e^{a_0 h} + \frac{(a_1+a_{-1}a_0)^2 h^2}{2} e^{a_0 h},

Y(s-h) = e^{a_0(s-h)} + (a_1+a_{-1}a_0)(s-2h) e^{a_0(s-2h)} + a_{-1}(a_1+a_{-1}a_0)(s-3h) e^{a_0(s-3h)} + \frac{(a_1+a_{-1}a_0)^2 (s-3h)^2}{2} e^{a_0(s-3h)}; \quad s \in J_3.

From the relation (3), we obtain
Y(t) = e^{a_0(t-3h)} Y(3h) + \int_{3h}^{t} e^{a_0(t-s)} a_1 Y(s-h)\, ds + \int_{3h}^{t} e^{a_0(t-s)} a_{-1} \frac{d}{ds} Y(s-h)\, ds,

with Y(3h) and Y(s-h) as displayed above.
The evaluation of the integrals and skillful collection of like terms result in the following expression for Y (t ) :
Y(t) = e^{a_0 t} + (a_1+a_{-1}a_0)(t-h) e^{a_0(t-h)} + \frac{(a_1+a_{-1}a_0)^2 (t-2h)^2}{2!} e^{a_0(t-2h)} + \frac{(a_1+a_{-1}a_0)^3 (t-3h)^3}{3!} e^{a_0(t-3h)}
+ a_{-1}(a_1+a_{-1}a_0)(t-2h) e^{a_0(t-2h)} + a_{-1}^2 (a_1+a_{-1}a_0)(t-3h) e^{a_0(t-3h)} + a_{-1}(a_1+a_{-1}a_0)^2 (t-3h)^2 e^{a_0(t-3h)}; \quad t \in J_3.
Observe that for k \in \{0, 1, 2, 3\} and t \in J_k,

Y(t) = e^{a_0 t} + \sum_{i=1}^{k} \frac{(a_1+a_{-1}a_0)^i (t-ih)^i}{i!} e^{a_0(t-ih)} \operatorname{sgn}(\max\{0, k\})
+ \sum_{i=1}^{k-1} a_{-1}^i (a_1+a_{-1}a_0)(t-[i+1]h) e^{a_0(t-[i+1]h)} \operatorname{sgn}(\max\{0, k-1\})
+ a_{-1}(a_1+a_{-1}a_0)^2 (t-3h)^2 e^{a_0(t-3h)} \operatorname{sgn}(\max\{0, k-2\}).   (4)
A definite pattern is yet to emerge, so the process continues. Now consider the interval J_4: s, t \in J_4 \Rightarrow 4h \in J_3 \cap J_4 = \{4h\} and s-h \in J_3; hence
the relation (3) implies that

Y(t) = e^{a_0(t-4h)} \left[ e^{4a_0 h} + \sum_{i=1}^{3} \frac{(a_1+a_{-1}a_0)^i ([4-i]h)^i}{i!} e^{a_0[4-i]h} + \sum_{i=1}^{2} a_{-1}^i (a_1+a_{-1}a_0)([3-i]h) e^{a_0[3-i]h} + a_{-1}(a_1+a_{-1}a_0)^2 h^2 e^{a_0 h} \right]
+ \int_{4h}^{t} e^{a_0(t-s_4)} a_1 Y(s_4-h)\, ds_4 + \int_{4h}^{t} e^{a_0(t-s_4)} a_{-1} \frac{d}{ds_4} Y(s_4-h)\, ds_4,

where, for s_4 \in J_4,

Y(s_4-h) = e^{a_0(s_4-h)} + \sum_{i=1}^{3} \frac{(a_1+a_{-1}a_0)^i (s_4-[i+1]h)^i}{i!} e^{a_0(s_4-[i+1]h)} + \sum_{i=1}^{2} a_{-1}^i (a_1+a_{-1}a_0)(s_4-[i+2]h) e^{a_0(s_4-[i+2]h)} + a_{-1}(a_1+a_{-1}a_0)^2 (s_4-4h)^2 e^{a_0(s_4-4h)}.
It is evident from change of variables and grouping techniques that
the terms generated by the integration aggregate into the following four groups:

e^{a_0 t} + \sum_{i=1}^{4} \frac{(a_1+a_{-1}a_0)^i (t-ih)^i}{i!} e^{a_0(t-ih)};   (5)

\sum_{i=1}^{3} a_{-1}^i (a_1+a_{-1}a_0)(t-[i+1]h) e^{a_0(t-[i+1]h)};   (6)

a_{-1}(a_1+a_{-1}a_0)^2 (t-3h)^2 e^{a_0(t-3h)};   (7)

\left[ \frac{a_{-1}(a_1+a_{-1}a_0)^3 (t-4h)^3}{2} + \frac{3 a_{-1}^2 (a_1+a_{-1}a_0)^2 (t-4h)^2}{2} \right] e^{a_0(t-4h)}.   (8)
Adding up expressions (5), (6), (7) and (8) yields

Y(t) = e^{a_0 t} + \sum_{i=1}^{4} \frac{(a_1+a_{-1}a_0)^i (t-ih)^i}{i!} e^{a_0(t-ih)}
+ \sum_{i=1}^{3} a_{-1}^i (a_1+a_{-1}a_0)(t-[i+1]h) e^{a_0(t-[i+1]h)} + a_{-1}(a_1+a_{-1}a_0)^2 (t-3h)^2 e^{a_0(t-3h)}
+ \left[ \frac{a_{-1}(a_1+a_{-1}a_0)^3 (t-4h)^3}{2} + \frac{3 a_{-1}^2 (a_1+a_{-1}a_0)^2 (t-4h)^2}{2} \right] e^{a_0(t-4h)}.   (9)

Hence for t \in J_k, k \in \{0, 1, 2, 3, 4\},

Y(t) = e^{a_0 t} + \sum_{i=1}^{k} \frac{(a_1+a_{-1}a_0)^i (t-ih)^i}{i!} e^{a_0(t-ih)} \operatorname{sgn}(\max\{0, k\})
+ \sum_{i=1}^{k-1} a_{-1}^i (a_1+a_{-1}a_0)(t-[i+1]h) e^{a_0(t-[i+1]h)} \operatorname{sgn}(\max\{0, k-1\})
+ a_{-1}(a_1+a_{-1}a_0)^2 (t-3h)^2 e^{a_0(t-3h)} \operatorname{sgn}(\max\{0, k-2\})
+ \left[ \frac{a_{-1}(a_1+a_{-1}a_0)^3 (t-4h)^3}{2} + \frac{3 a_{-1}^2 (a_1+a_{-1}a_0)^2 (t-4h)^2}{2} \right] e^{a_0(t-4h)} \operatorname{sgn}(\max\{0, k-3\}).   (10)
Writing the terms of Y_k(t) - e^{a_0 t} as c_{ij}\, a_{-1}^i (a_1+a_{-1}a_0)^j (t-[i+j]h)^j e^{a_0(t-[i+j]h)}, observe that for t \in J_k, c_{0j} = \frac{1}{j!}, j = 1, 2, \ldots, k, and c_{i1} = 1, i = 1, 2, \ldots, k-1. The transformation from Y_k(t) to Y_{k+1}(t) requires only the computation of the c_{ij} with i \in \{1, 2, \ldots, k-1\}, j \in \{2, 3, \ldots, k+1-i\}, k \ge 3, such that i+j = k+1, since c_{i1} is already known for each i. Therefore one need only determine k-1 new c_{ij} values, namely c_{1,k}, c_{2,k-1}, c_{3,k-2}, \ldots, c_{k-1,2}.
On a positive note, the author has successfully devised an optimal computational algorithm for the solution matrices without recourse to the class of differential equations (1) and the expression (3), using Y_1(t) as a starting point. This is the focus of the next result.

3. Main Result: A Computational Algorithm for Transiting from Y_k(t) to Y_{k+1}(t), k \ge 1
Let t \in J_{k+1} and let \delta_1, \delta_2 \in \{0, 1\} with \delta_1 + \delta_2 = 1. Suppose that a_{-1}(a_1+a_{-1}a_0) \ne 0. Then

Y(t) = Y_{k+1}(t) = Y_1(t) + a_{-1} \sum_{j=1}^{k} \frac{(a_1+a_{-1}a_0)^j}{j!} (t-[j+1]h)^j e^{a_0(t-[j+1]h)}
+ \sum_{j=1}^{k} \frac{(a_1+a_{-1}a_0)^{j+1}}{(j+1)!} (t-[j+1]h)^{j+1} e^{a_0(t-[j+1]h)}
+ \sum_{\delta_1+\delta_2=1} \sum_{i=1}^{k-1} \sum_{j=1}^{k-i} c_{ij} (j+1)^{-\delta_2} a_{-1}^{i+\delta_1} (a_1+a_{-1}a_0)^{j+\delta_2} (t-[i+j+1]h)^{j+\delta_2} e^{a_0(t-[i+j+1]h)} \operatorname{sgn}(\max\{0, k-1\})   (11)

for some real positive constants c_{ij} secured from Y_k(t), with the process initiated at k = 1.

3.1 Remarks on the optimal computational algorithm
Observe that for t \in J_{k+1}, Y_{k+1}(t) can be expressed in the equivalent form

Y_{k+1}(t) = Y_1(t) + \sum_{\delta_1+\delta_2=1} \sum_{i=0}^{k-1} \sum_{j=1}^{k-i} c_{ij} (j+1)^{-\delta_2} a_{-1}^{i+\delta_1} (a_1+a_{-1}a_0)^{j+\delta_2} (t-[i+j+1]h)^{j+\delta_2} e^{a_0(t-[i+j+1]h)} \operatorname{sgn}(\max\{0, k\})   (12)

for some real positive constants c_{ij} secured from Y_k(t), with the process initiated at k = 1. Moreover c_{0j} = \frac{1}{j!}, j = 1, 2, \ldots, k+1; c_{i1} = 1, i = 1, 2, \ldots, k; and the transformation from Y_k(t) to Y_{k+1}(t) requires only the computation of the c_{ij} for i \in \{1, 2, \ldots, k-1\}, j \in \{2, 3, \ldots, k+1-i\}, k \ge 3, such that i+j = k+1. Therefore one need only determine k-1 new c_{ij} values, namely c_{1,k}, c_{2,k-1}, c_{3,k-2}, \ldots, c_{k-1,2}.

3.2 Interpretation of the computational algorithm for Y(t)

Stage 1: Transiting from Y_k(t) to Y_{k+1}(t) - Y_1(t), k = 1, 2, 3, \ldots
Perform the following operations on each term of Y_k(t) - e^{a_0 t}:

(i) Increment each power of a_{-1} by 1; preserve the power of (a_1+a_{-1}a_0).

(ii) Let (t-[\cdot]h)^{power of (a_1+a_{-1}a_0)} \to (t-h-[\cdot]h)^{power of (a_1+a_{-1}a_0)}, and e^{a_0(t-[\cdot]h)} \to e^{a_0(t-h-[\cdot]h)}.

The operations (i) and (ii) yield exactly the same number of terms as in Y_k(t) - e^{a_0 t}.

(iii) Increment each exponent of (a_1+a_{-1}a_0) by 1; preserve the exponent of a_{-1}.

(iv) Let (t-[\cdot]h)^{exponent of (a_1+a_{-1}a_0)} \to (t-h-[\cdot]h)^{1 + old exponent of (a_1+a_{-1}a_0)}, and e^{a_0(t-[\cdot]h)} \to e^{a_0(t-h-[\cdot]h)}.

(v) Divide each term by the new exponent of (a_1+a_{-1}a_0) resulting from operation (iii), where the new exponent equals 1 + the old (preceding) exponent; that is, the exponent of (a_1+a_{-1}a_0) in the resulting term of Y_{k+1}(t) equals 1 + the exponent of (a_1+a_{-1}a_0) in the term operated on in Y_k(t).

The operations (iii), (iv) and (v) yield exactly the same number of terms as in Y_k(t) - e^{a_0 t}.

(vi) Aggregate all terms resulting from operations (i) to (v) through appropriate groupings of common factors of powers of a_{-1} and (a_1+a_{-1}a_0).

Stage 2: Transiting from Y_{k+1}(t) - Y_1(t) to Y_{k+1}(t), k = 1, 2, 3, \ldots

Secure Y_{k+1}(t) by adding Y_1(t) to the aggregated terms in (vi); in other words,

Y_{k+1}(t) = Y_1(t) + [the resultant expressions from the application of the algorithm to Y_k(t) - e^{a_0 t}], k = 1, 2, 3, \ldots
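Since every term of Y_k(t) - e^{a_0 t} has the shape c_{ij} a_{-1}^i (a_1+a_{-1}a_0)^j (t-[i+j]h)^j e^{a_0(t-[i+j]h)}, the six operations above act only on the table of coefficients c_{ij}. The sketch below (the dictionary encoding is ours, not the paper's) iterates the two-stage transition in exact rational arithmetic and recovers the J_5 coefficient values c_{14} = 1/6, c_{23} = 1 and c_{32} = 2 computed later in the paper:

```python
from fractions import Fraction

def transit(c):
    """One Stage-1 + Stage-2 pass on the coefficient table of Y_k(t) - e^{a0 t},
    where c[(i, j)] multiplies a_{-1}^i (a1 + a_{-1}a0)^j (t-(i+j)h)^j e^{a0(t-(i+j)h)}.
    Operations (i)-(ii):  (i, j) -> (i+1, j), coefficient preserved.
    Operations (iii)-(v): (i, j) -> (i, j+1), coefficient divided by the new
    exponent j+1.  Operation (vi): like (i, j) pairs are aggregated."""
    out = {}
    for (i, j), cij in c.items():
        out[(i + 1, j)] = out.get((i + 1, j), 0) + cij              # ops (i)-(ii)
        out[(i, j + 1)] = out.get((i, j + 1), 0) + cij / (j + 1)    # ops (iii)-(v)
    out[(0, 1)] = out.get((0, 1), 0) + 1      # Stage 2: add back Y_1(t) - e^{a0 t}
    return out

c = {(0, 1): Fraction(1)}     # Y_1(t) - e^{a0 t} has the single term (i, j) = (0, 1)
for _ in range(4):            # transit Y_1 -> Y_2 -> Y_3 -> Y_4 -> Y_5
    c = transit(c)

print(c[(1, 4)], c[(2, 3)], c[(3, 2)])   # -> 1/6 1 2
```

With this encoding the aggregation step is simply c_{ij} <- c_{i-1,j} + c_{i,j-1}/j, which makes concrete the remark that each new coefficient is obtained by adding two terms.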
3.3 Cardinality and Cardinality Transition Analyses on Y_k(t)

Denote the cardinality of (the number of terms in) an expression by |\cdot|. Then |Y_{k+1}(t)| = |Y_k(t)| + k + 1 is the resulting autonomous nonhomogeneous linear difference equation, for k = 1, 2, 3, \ldots. Hence |Y_{k+1}(t)| = 1 + \frac{(k+1)(k+2)}{2}, k = 0, 1, 2, 3, \ldots, noting that |Y_1(t)| = 2.
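The difference equation above is simple enough to check by direct enumeration. In the sketch below (ours, for illustration), the recurrence |Y_{k+1}| = |Y_k| + k + 1 with |Y_1| = 2 is iterated and compared with the closed form 1 + k(k+1)/2; the quantity 2(|Y_k| - 1) = k^2 + k counts the terms produced by the two Stage-1 copies before aggregation:

```python
def card_Y(k):
    """Number of terms |Y_k(t)|, iterated from |Y_1| = 2 via |Y_{m+1}| = |Y_m| + m + 1."""
    n = 2
    for m in range(1, k):
        n += m + 1
    return n

for k in range(1, 12):
    assert card_Y(k) == 1 + k * (k + 1) // 2      # closed form
    assert 2 * (card_Y(k) - 1) == k * k + k       # pre-aggregation term count
print(card_Y(5))   # -> 16
```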
Furthermore, for k = 1, 2, \ldots, the number of terms that need to be aggregated from Y_k(t) - e^{a_0 t} to secure Y_{k+1}(t) - Y_1(t) is k(k+1), derived from the fact that there are 2|Y_k(t) - e^{a_0 t}| = 2|Y_k(t)| - 2 such terms. Therefore k^2 + k terms must be aggregated in the transition from Y_k(t) to Y_{k+1}(t).

3.4 Verification and illustrations of the algorithm
t \in J_2 \Rightarrow Y_2(t) = Y_1(t) + a_{-1}(a_1+a_{-1}a_0)([t-h]-h) e^{a_0([t-h]-h)} + \frac{(a_1+a_{-1}a_0)^{1+1} ([t-h]-h)^{1+1}}{1+1} e^{a_0([t-h]-h)}

\Rightarrow Y_2(t) = e^{a_0 t} + \sum_{i=1}^{2} \frac{(a_1+a_{-1}a_0)^i (t-ih)^i}{i!} e^{a_0(t-ih)} + a_{-1}(a_1+a_{-1}a_0)(t-2h) e^{a_0(t-2h)}; \quad t \in J_2.

It is straightforward to check that the algorithm verifies the rest of the computations for Y(t), for t \in J_3 \cup J_4 = [3h, 5h].
Finally, we apply the algorithm to extend the solution matrices to the interval J_5 = [5h, 6h]. To achieve this, set k = 4, so that k+1 = 5; we need only obtain the k-1 = 4-1 = 3 new coefficients c_{ij}, for i \in \{1, 2, 3\}, j \in \{2, \ldots, 5-i\}, with i+j = 5, namely c_{1,4}, c_{2,3} and c_{3,2}. The application of the algorithm from J_4 to J_5 yields the following expression for Y(t), t \in J_5:
Y(t) = Y_1(t) + a_{-1} \sum_{j=1}^{4} \frac{(a_1+a_{-1}a_0)^j}{j!} (t-[j+1]h)^j e^{a_0(t-[j+1]h)}
+ \sum_{i=1}^{3} a_{-1}^{i+1} (a_1+a_{-1}a_0)(t-[i+2]h) e^{a_0(t-[i+2]h)} + a_{-1}^2 (a_1+a_{-1}a_0)^2 (t-4h)^2 e^{a_0(t-4h)}
+ \left[ \frac{a_{-1}^2 (a_1+a_{-1}a_0)^3 (t-5h)^3}{2} + \frac{3 a_{-1}^3 (a_1+a_{-1}a_0)^2 (t-5h)^2}{2} \right] e^{a_0(t-5h)}
+ \sum_{j=1}^{4} \frac{(a_1+a_{-1}a_0)^{j+1}}{(j+1)!} (t-[j+1]h)^{j+1} e^{a_0(t-[j+1]h)}
+ \sum_{i=1}^{3} \frac{a_{-1}^i (a_1+a_{-1}a_0)^2}{2} (t-[i+2]h)^2 e^{a_0(t-[i+2]h)} + \frac{a_{-1}(a_1+a_{-1}a_0)^3 (t-4h)^3}{3} e^{a_0(t-4h)}
+ \left[ \frac{a_{-1}(a_1+a_{-1}a_0)^4 (t-5h)^4}{2(4)} + \frac{3 a_{-1}^2 (a_1+a_{-1}a_0)^3 (t-5h)^3}{2(3)} \right] e^{a_0(t-5h)}.
Aggregation of like terms: Y_1(t) and the terms with 1 \le i+j \le 4, together with c_{0,5} = \frac{1}{5!} and c_{4,1} = 1, evaluate to
e^{a_0 t} + \sum_{j=1}^{5} \frac{(a_1+a_{-1}a_0)^j (t-jh)^j}{j!} e^{a_0(t-jh)} + \sum_{i=1}^{5-1} a_{-1}^i (a_1+a_{-1}a_0)(t-[i+1]h) e^{a_0(t-[i+1]h)}
+ a_{-1}(a_1+a_{-1}a_0)^2 (t-3h)^2 e^{a_0(t-3h)} + \left[ \frac{a_{-1}(a_1+a_{-1}a_0)^3 (t-4h)^3}{2} + \frac{3 a_{-1}^2 (a_1+a_{-1}a_0)^2 (t-4h)^2}{2} \right] e^{a_0(t-4h)}.

It follows from the aggregated terms that the values of the remaining three coefficients c_{1,4}, c_{2,3} and c_{3,2} are \frac{1}{4!} + \frac{1}{8}, \frac{1}{2} + \frac{1}{2} and \frac{3}{2} + \frac{1}{2} respectively. Therefore c_{1,4} = \frac{1}{6}, c_{2,3} = 1 and c_{3,2} = 2. Hence
Y(t) = e^{a_0 t} + \sum_{j=1}^{5} \frac{(a_1+a_{-1}a_0)^j (t-jh)^j}{j!} e^{a_0(t-jh)} + \sum_{i=1}^{5-1} a_{-1}^i (a_1+a_{-1}a_0)(t-[i+1]h) e^{a_0(t-[i+1]h)}
+ a_{-1}(a_1+a_{-1}a_0)^2 (t-3h)^2 e^{a_0(t-3h)} + \left[ \frac{a_{-1}(a_1+a_{-1}a_0)^3 (t-4h)^3}{2} + \frac{3 a_{-1}^2 (a_1+a_{-1}a_0)^2 (t-4h)^2}{2} \right] e^{a_0(t-4h)}
+ \left[ \frac{a_{-1}(a_1+a_{-1}a_0)^4 (t-5h)^4}{3!} + a_{-1}^2 (a_1+a_{-1}a_0)^3 (t-5h)^3 + 2 a_{-1}^3 (a_1+a_{-1}a_0)^2 (t-5h)^2 \right] e^{a_0(t-5h)}; \quad t \in J_5.
Moreover the general expression for Y(t), t \in J_k, 0 \le k \le 5, can be stated as follows:

Y(t) = e^{a_0 t} + \sum_{j=1}^{5} \frac{(a_1+a_{-1}a_0)^j (t-jh)^j}{j!} e^{a_0(t-jh)} \operatorname{sgn}(\max\{k+1-j, 0\})
+ \sum_{i=1}^{5-1} a_{-1}^i (a_1+a_{-1}a_0)(t-[i+1]h) e^{a_0(t-[i+1]h)} \operatorname{sgn}(\max\{k-i, 0\})
+ a_{-1}(a_1+a_{-1}a_0)^2 (t-3h)^2 e^{a_0(t-3h)} \operatorname{sgn}(\max\{k-2, 0\})
+ \left[ \frac{a_{-1}(a_1+a_{-1}a_0)^3 (t-4h)^3}{2} + \frac{3 a_{-1}^2 (a_1+a_{-1}a_0)^2 (t-4h)^2}{2} \right] e^{a_0(t-4h)} \operatorname{sgn}(\max\{k-3, 0\})
+ \left[ \frac{a_{-1}(a_1+a_{-1}a_0)^4 (t-5h)^4}{3!} + a_{-1}^2 (a_1+a_{-1}a_0)^3 (t-5h)^3 + 2 a_{-1}^3 (a_1+a_{-1}a_0)^2 (t-5h)^2 \right] e^{a_0(t-5h)} \operatorname{sgn}(\max\{k-4, 0\}).
IV. CONCLUSION

This article obtained an optimal computational scheme for the structure of the solution matrices of single-delay linear neutral differential equations by leveraging the established expressions for such matrices on the time interval of length equal to five times the delay, starting from time zero. The scheme is iteratively based on transitions from one time interval of length equal to the delay to the next contiguous interval of length h, with the coefficients from the preceding interval preserved, two new coefficients updated, and the rest obtained from the aggregation of the components resulting from the afore-mentioned transitions. This algorithm alleviates the computational burden associated with relying on the equation (1) and the expression (3), which is fraught with proneness to computational errors, and resolves to a great extent the lack of a general expression for the solution matrices. The structure of the algorithm is so simple that the solution matrix transitions from one interval to the next contiguous interval can be obtained by inspection, with the addition of two terms for each new coefficient.
REFERENCES
[1] Ukwu, C. (1987). Compactness of cores of targets for linear delay systems. J. Math. Anal. Appl., Vol. 125, No. 2, August 1, pp. 323-330.
[2] Ukwu, C. and Garba, E. J. D. (2014). Construction of optimal expressions for transition matrices of a class of double-delay scalar differential equations. African Journal of Natural Sciences (AJNS), Vol. 16, 2014.
[3] Ukwu, C. and Garba, E. J. D. (2014). Derivation of an optimal expression for solution matrices of a class of single-delay scalar differential equations. Journal of the Nigerian Association of Mathematical Physics, Vol. 26, March 2014.