
International Journal of Modern Engineering Sciences, 2014, 3(1): 29-38
ISSN: 2167-1133, Florida, USA
Journal homepage: www.ModernScientificPress.com/Journals/IJMES.aspx

Article

Design Collocation Neural Network to Solve Singular Perturbed Problems with Initial Conditions

Luma. N. M. Tawfiq* & K. M. M. Al-Abrahemee

Department of Mathematics, College of Education for Pure Science / Ibn Al-Haitham, University of Baghdad

* Author to whom correspondence should be addressed; Email: drluma_m@yahoo.com

Article history: Received 2 April 2014, Received in revised form 15 May 2014, Accepted 18 May 2014, Published 20 May 2014.

Abstract: The aim of this paper is to design artificial neural networks to solve singular perturbation problems with initial conditions. We design a multi-layer collocation neural network having one hidden layer with 5 hidden units (neurons) and one linear output unit; the activation function of each hidden unit is a sigmoid ridge basis function, and the network is trained by back-propagation with different training algorithms such as quasi-Newton, Levenberg-Marquardt, and Bayesian regularization. Finally, the results of numerical experiments are compared with the exact solution in illustrative examples to confirm the accuracy and efficiency of the presented scheme.

Keywords: Artificial neural network, back propagation training algorithm, singular perturbed problems.

1. Introduction

Singularly perturbed problems (SPP) with initial conditions in ordinary differential equations (ODE) are characterized by the presence of a small parameter that multiplies the highest derivative. These problems are stiff and are characterized by an initial layer, or inner solution, which satisfies the initial condition, and an outer solution which matches the inner one. These problems have been treated numerically by means of exponential fitting, adaptive meshes, and ideas based on the method of matched



asymptotic expansions [1, 2]. Because of their stiff nature, the use of explicit numerical methods for SPP with initial conditions requires very small time steps, because the time step is controlled by the eigenvalue of largest magnitude. On the other hand, the use of stable, implicit methods for nonlinear SPP with initial conditions may require a large number of iterations for convergence within each time step. In addition, most standard finite difference methods are not uniformly convergent in the small perturbation parameter [3]. Perturbation problems are problems which rely on there being a dimensionless parameter in the problem that is relatively small: ϵ << 1. The most common example is that of high-Reynolds-number fluid mechanics, in which a viscous boundary layer is found close to a solid surface. Note that in this case the standard physical parameter Re is large: our small parameter is ϵ = Re^(-1) [4]. This paper is organized as follows: the next section introduces the definition of singularly perturbed problems with initial conditions; in section three we define the neural network and its architecture; section four describes the structure of the neural network; in section five the illustration of the designed network is introduced; section six reports our numerical findings on the accuracy of the suggested network; finally, conclusions are given in the last part of the paper.

2. Singularly Perturbed Problems

In this section we consider a system of ordinary differential equations (together with appropriate initial conditions) in which the highest derivative is multiplied by a small, positive parameter (usually denoted by ϵ). The general form of the 2nd order singularly perturbed problem (SPP) is:

ϵ y'' = f(x, y, y'),  a ≤ x ≤ b,    (1)

with IC: y(a) = A, y'(a) = A'.

The above problem is called a singular perturbation problem (SPP) with initial conditions, and is characterized by the fact that its order reduces when the problem parameter ϵ equals zero. In such a situation the problem becomes singular since, in general, not all of the original initial conditions can be satisfied by the reduced problem [5]. For instance, setting ϵ = 0 in (1) leaves the reduced equation f(x, y, y') = 0, which is of lower order and in general cannot meet both y(a) = A and y'(a) = A'.

3. Artificial Neural Networks

An artificial neural network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this



paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons; this is true of ANNs as well. That is, ANNs are relatively crude electronic models based on the neural structure of the brain. The brain basically learns from experience. It is natural proof that some problems that are beyond the scope of current computers are indeed solvable by small, energy-efficient packages. This brain modeling also promises a less technical way to develop machine solutions, and this approach to computing also provides a more graceful degradation during system overload than its more traditional counterparts. These biologically inspired methods of computing are thought to be the next major advancement in the computing industry. Even simple animal brains are capable of functions that are currently impossible for computers. Computers do rote things well, like keeping ledgers or performing complex math, but they have trouble recognizing even simple patterns, much less generalizing patterns of the past into actions of the future [6]. An artificial neural network is characterized by [7]:

I: Architecture: its pattern of connections between the neurons.

II: Training algorithm: its method of determining the weights on the connections.

III: Activation function.

In an ANN the terms structure, architecture or topology express the way in which the computational neurons are organized in the network. In particular, these terms focus on the description of how the nodes are connected and how the information is transmitted through the network. As has been mentioned, the distribution of computational neurons is described by the following [8]:

Number of layers: the neurons in the neural network are organized into levels or layers, each with a determined number of nodes. As there are input, output and hidden neurons, we can talk about an input layer, an output layer, and a single hidden layer or multiple hidden layers. Because of the peculiar behavior of the input nodes, some authors consider just two kinds of layers in the ANN, the hidden and the output.

Connection patterns: depending on the links between the elements of the different layers, the ANN can be classified as totally connected, when all the outputs from a level reach all and each one of the nodes in the following level; if some of the links in the network are missing, we say that the network is partially connected.

Information flow: another classification of the ANN is obtained by considering the direction of the flow of information through the layers. When the outputs of neurons are inputs only to neurons of following levels, the network is described as feedforward. In contrast, if there is at least one




output fed back as input to neurons of previous levels or of the same level, including itself, the network is called a feedback (recurrent) network.

4. Description of the Structure Network

In this section we explain how this approach can be used to find the approximate solution of the singular perturbation problem with initial conditions (SPIVP)

G(x, y(x), y'(x), y''(x)) = 0,  x ∈ D,    (2)

subject to certain ICs, where D ⊂ R denotes the domain and y(x) is the solution to be computed. If yt(x, p) denotes a trial solution with adjustable parameters p, the problem is transformed to the discrete form

min_p Σ_{x_i ∈ D} G(x_i, yt(x_i, p), yt'(x_i, p), yt''(x_i, p))^2,    (3)

subject to the constraints imposed by the ICs. In the proposed approach, the trial solution yt employs an ANN and the parameters p correspond to the weights and biases of the neural architecture. We choose a form for the trial function yt(x) such that it satisfies the ICs. This is achieved by writing it as a sum of two terms:

yt(x, p) = A(x) + G(x, N(x, p)),    (4)

where N(x, p) is a single-output ANN with parameters p and n input units fed with the input vector x. The term A(x) contains no adjustable parameters and satisfies the ICs; the second term G is constructed so as not to contribute to the ICs, since A(x) already satisfies them. This term can be formed by using an ANN whose weights and biases are to be adjusted in order to deal with the minimization problem.
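To make the construction concrete, the following is a minimal sketch (ours, in Python with NumPy, rather than the MATLAB toolbox used for the experiments) of a 1-5-1 tansig network N(x, p) together with its first and second derivatives with respect to the input, which are exactly the quantities the discrete problem (3) requires; the parameter layout and the helper names unpack and mlp are our own choices, not the paper's.

    import numpy as np

    H = 5  # hidden units, matching the paper's 1-5-1 architecture

    def unpack(p):
        # p stacks the input weights (cf. Net.IW), output weights (cf. Net.LW)
        # and hidden biases (cf. Net.B), five values each, as in Tables 4 and 8
        w, v, b = p[:H], p[H:2*H], p[2*H:3*H]
        return w, v, b

    def mlp(x, p):
        """Return N(x,p), dN/dx and d2N/dx2 for a 1-H-1 tansig network;
        x is a 1-D array of collocation points."""
        w, v, b = unpack(p)
        z = np.outer(x, w) + b       # hidden pre-activations, shape (len(x), H)
        s = np.tanh(z)               # tansig activation
        ds = 1.0 - s**2              # tanh'(z)
        d2s = -2.0 * s * ds          # tanh''(z)
        N = s @ v                    # linear output unit
        dN = (ds * w) @ v            # chain rule: d/dx tanh(w x + b) = w tanh'(z)
        d2N = (d2s * w**2) @ v
        return N, dN, d2N

The fifteen parameters of this network correspond to the three five-entry columns Net.IW{1,1}, Net.LW{2,1} and Net.B{1} reported in Tables 4 and 8 below.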

5. Illustration of the Design Network

In this section we describe the solution of SPIVPs using the ANN. To illustrate the method, we consider the 2nd order SPIVP:

ϵ y'' = f(x, y, y'),  x ∈ [0, 1],    (5)

with the IC: y(0) = A, y'(0) = A'. A trial solution can be written as:

yt(x, p) = A + A' x + x^2 N(x, p),    (6)




where N(x, p) is the output of an ANN with one input unit for x and weights p. Note that yt(x) satisfies the IC by construction. The error quantity to be minimized is given by

E(p) = Σ_i [ϵ yt''(x_i, p) - f(x_i, yt(x_i, p), yt'(x_i, p))]^2,    (7)

where the x_i ∈ [0, 1]. Since

dyt(x, p)/dx = A' + x^2 dN(x, p)/dx + 2x N(x, p),

d^2 yt(x, p)/dx^2 = x^2 d^2 N(x, p)/dx^2 + 2 N(x, p) + 4x dN(x, p)/dx,    (8)

it is straightforward to compute the gradient of the error with respect to the parameters p.
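Continuing the sketch from the previous section (it assumes the mlp helper defined there), the trial solution (6), its derivatives (8) and the error (7) translate directly into code; trial and error_E are our names, and f stands for the right-hand side of (5):

    def trial(x, p, A, Aprime):
        # eq. (6) and its derivatives, eq. (8)
        N, dN, d2N = mlp(x, p)
        y = A + Aprime * x + x**2 * N
        dy = Aprime + x**2 * dN + 2.0 * x * N
        d2y = x**2 * d2N + 2.0 * N + 4.0 * x * dN
        return y, dy, d2y

    def error_E(p, x, A, Aprime, eps, f):
        # discrete error (7): squared residuals of eps*y'' = f(x, y, y')
        y, dy, d2y = trial(x, p, A, Aprime)
        r = eps * d2y - f(x, y, dy)
        return np.sum(r**2)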

6. Numerical Results

In this section we report some numerical results and the solution of a number of model problems. In all cases we used a multi-layer ANN having one hidden layer with 5 hidden units (neurons) and one linear output unit. The activation function of each hidden unit is tansig (a sigmoid ridge basis function). For each test problem the exact analytic solution ya(x) was known in advance; therefore we test the accuracy of the obtained solutions by computing the mean square error (MSE).

Example 1. Consider the following 2nd order singular perturbed problem:

ϵ^2 y'' + 3ϵ y' + 2y = f(x),  x ∈ [0, 1],

with IC: y(0) = 1, y'(0) = 1/ϵ, where

f(x) = (2 + 3ϵ + ϵ^2) x^2 e^x/(2 - x) + (3ϵ + 2ϵ^2) x(4 - x) e^x/(2 - x)^2 + 8ϵ^2 e^x/(2 - x)^3,

and the analytic solution is:

y(x) = x^2 e^x/(2 - x) + 3e^(-x/ϵ) - 2e^(-2x/ϵ).

According to equation (6), the trial neural form of the solution is:

yt(x) = 1 + x/ϵ + x^2 N(x, p).

The ANN was trained using a grid of ten equidistant points in [0, 1], with ϵ = 10^-4. Figure 1 displays the analytic and neural solutions for the different training algorithms. The neural results for the different training algorithms, Levenberg-Marquardt (trainlm), quasi-Newton (trainbfg) and Bayesian regularization (trainbr), are introduced in Table 1 and the corresponding errors are given in Table 2; Table 3 gives the performance of the training (epochs and time), and Table 4 gives the initial weights and biases of the designed network.
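For any trained parameter vector, the entries of Tables 1-3 follow by evaluating the trial solution and the analytic solution on the grid. A brief sketch under the same assumptions as the code above; p_trained is a hypothetical placeholder for the weights produced by whichever training algorithm is used:

    def analytic_ex1(x, eps=1e-4):
        # y(x) = x^2 e^x/(2-x) + 3 e^(-x/eps) - 2 e^(-2x/eps)
        return (x**2 * np.exp(x) / (2.0 - x)
                + 3.0 * np.exp(-x / eps) - 2.0 * np.exp(-2.0 * x / eps))

    x = np.linspace(0.0, 1.0, 11)   # equidistant grid x = 0.0, 0.1, ..., 1.0 as in Table 1
    # yt, _, _ = trial(x, p_trained, A=1.0, Aprime=1.0e4)  # A' = 1/eps
    # E = np.abs(yt - analytic_ex1(x))                     # pointwise error, Table 2
    # mse = np.mean(E**2)                                  # MSE column of Table 3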

Figure 1: Analytic and neural solutions of example 1 using different training algorithms

Table 1: Analytic and neural solution of example 1; yt(x) is the output of the suggested ANN for each training algorithm

x     ya(x)                  Trainlm                Trainbfg               Trainbr
0.0   1                      0.999640976175370      1.00000060781947       0.000490306650163041
0.1   0.00581668904250341    0.00584858735571990    0.00581748398780408    0.00581682218438316
0.2   0.0271422835146705     0.0262235919926366     0.0260890754780599     0.0270766508893611
0.3   0.0714631133422590     0.0720379285133504     0.0714627704412003     0.0714583209579391
0.4   0.149182469764127      0.148860377994119      0.149182398207817      0.149199034176640
0.5   0.274786878450021      0.273944920310229      0.284179642578833      0.274774551828534
0.6   0.468544834386131      0.469737313059561      0.478528518159394      0.468532552107991
0.7   0.759029866661949      0.762296279887764      0.759030222208309      0.759056497736270
0.8   1.18695516186265       1.18678423417625       1.17927557251113       1.18693667442537
0.9   1.81116229094284       1.81124271936535       1.81116317468505       1.81116773829719
1.0   2.71828182845905       2.72006218179475       2.71828275379187       2.71828042684631




Table 2: Accuracy of the solutions for example 1; the error E(x) = |yt(x) - ya(x)|, where yt(x) is computed by the indicated training algorithm

x     Trainlm                 Trainbfg                Trainbr
0.0   0.000359023824629823    6.07819470532789e-07    0.999509693349837
0.1   3.18983132164890e-05    7.94945300671215e-07    1.33141879750387e-07
0.2   0.000918691522033870    0.00105320803661058     6.56326253093281e-05
0.3   0.000574815171091411    3.42901058700273e-07    4.79238431989881e-06
0.4   0.000322091770007971    7.15563100717187e-08    1.65644125127151e-05
0.5   0.000841958139792765    0.00939276412881179     1.23266214873685e-05
0.6   0.00119247867343003     0.00998368377326325     1.22822781396525e-05
0.7   0.00326641322581489     3.55546359687153e-07    2.66310743208820e-05
0.8   0.000170927686401257    0.00767958935151758     1.84874372832766e-05
0.9   8.04284225008889e-05    8.83742201640558e-07    5.44735434981902e-06
1.0   0.00178035333569948     9.25332822809821e-07    1.40161273609607e-06

Table 3: The performance of the training with epoch and time

Train function   Performance of train   Epoch   Time      MSE
Trainlm          4.21e-31               121     0:00:01   1.583069324864220e-06
Trainbfg         4.38e-19               140     0:00:02   2.254393657527956e-05
Trainbr          1.87e-10               404     0:00:04   0.090819966644612

Table 4: Initial weights and bias of the network for different training algorithms

Weights and bias for trainlm:
Net.IW{1,1}   Net.LW{2,1}   Net.B{1}
0.6393        0.4332        0.6240
0.4429        0.0323        0.8110
0.9173        0.8842        0.3279
0.0530        0.5570        0.1386
0.1615        0.3930        0.802

Weights and bias for trainbfg:
Net.IW{1,1}   Net.LW{2,1}   Net.B{1}
0.0878        0.7198        0.8818
0.7156        0.1789        0.9994
0.7979        0.1104        0.9235
0.5777        0.6333        0.9809
0.6555        0.2166        0.0127

Weights and bias for trainbr:
Net.IW{1,1}   Net.LW{2,1}   Net.B{1}
0.5502        0.2865        0.1692
0.1805        0.0773        0.4304
0.6784        0.9005        0.4162
0.0556        0.8466        0.7287
0.0340        0.3956        0.4064




Example 2. Consider the following singular perturbed problem:

ϵ y' = -y (y - 1) cos x,  x ∈ [0, 1],

with IC: y(0) = 0.5, and the analytic solution:

y(x) = 1 / (1 + e^(-sin x / ϵ)).

According to equation (6), the trial neural form of the solution is taken to be:

yt(x) = 0.5 + x N(x, p).

The ANN was trained using a grid of ten equidistant points in [0, 1], with ϵ = 10^-3. Figure 2 displays the analytic and neural solutions for the different training algorithms. The neural results for the different training algorithms, Levenberg-Marquardt (trainlm), quasi-Newton (trainbfg) and Bayesian regularization (trainbr), are introduced in Table 5 and the corresponding errors are given in Table 6; Table 7 gives the performance of the training (epochs and time), and Table 8 gives the initial weights and biases of the designed network.
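As a self-contained illustration of how such a network can be trained, the sketch below (ours; the paper itself uses MATLAB's trainlm, trainbfg and trainbr) fits the trial solution of example 2 by nonlinear least squares with SciPy's Levenberg-Marquardt driver, as an open-source stand-in for trainlm. Note that MINPACK's LM requires at least as many residuals as parameters, so the sketch uses a finer collocation grid than the ten points used in the paper:

    import numpy as np
    from scipy.optimize import least_squares

    H, eps = 5, 1e-3

    def mlp(x, p):
        # 1-5-1 tansig network: value and x-derivative
        w, v, b = p[:H], p[H:2*H], p[2*H:]
        s = np.tanh(np.outer(x, w) + b)
        return s @ v, ((1.0 - s**2) * w) @ v

    x = np.linspace(0.0, 1.0, 30)  # 30 collocation points (>= 15 parameters)

    def residuals(p):
        N, dN = mlp(x, p)
        y = 0.5 + x * N                 # trial solution: satisfies y(0) = 0.5
        dy = N + x * dN
        return eps * dy + y * (y - 1.0) * np.cos(x)   # eps*y' = -y(y-1)cos x

    rng = np.random.default_rng(0)
    p0 = rng.uniform(0.0, 1.0, 3 * H)  # initial weights/biases in [0, 1], cf. Table 8
    sol = least_squares(residuals, p0, method='lm')

    ya = 1.0 / (1.0 + np.exp(-np.sin(x) / eps))  # analytic solution
    yt = 0.5 + x * mlp(x, sol.x)[0]
    print('MSE:', np.mean((yt - ya)**2))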

Figure 2: Analytic and neural solutions of example 2 using different training algorithms.

Table 5: Analytic and neural solution of example 2; yt(x) is the output of the suggested ANN for each training algorithm

x     ya(x)               Trainlm             Trainbfg            Trainbr
0.0   0.500000000000000   0.499999391546145   0.500000016587229   0.500000073700336
0.1   1                   0.999999562513364   0.999995208249550   0.860924156041039
0.2   1                   0.999999867097354   0.999999999408296   0.999999492367816
0.3   1                   0.999999410570524   0.999999999452262   1.00000137667018
0.4   1                   1.00000130846392    0.999999999452262   0.999997822698515
0.5   1                   0.999998770607436   0.999999999452274   1.00071494191095
0.6   1                   0.999999257209658   0.999999999476350   1.00001114205217
0.7   1                   1.00000028720025    1.00000001845903    0.999971407600468
0.8   1                   1.00000075775730    1.00000024150463    1.00003481605803
0.9   1                   1.00000067346563    0.999999760627069   0.999978315495493
1.0   1                   1.00000026649971    0.999966382793339   1.00000556423777




Table 6: Accuracy of the solutions for example 2; the error E(x) = |yt(x) - ya(x)|, where yt(x) is computed by the indicated training algorithm

x     Trainlm                 Trainbfg                Trainbr
0.0   6.08453854966662e-07    1.65872284796365e-08    7.37003359452260e-08
0.1   4.37486635584961e-07    4.79175044976188e-06    0.139075843958961
0.2   1.32902646221567e-07    5.91704463204223e-10    5.07632184287843e-07
0.3   5.89429475672532e-07    5.47737855072228e-10    1.37667018362819e-06
0.4   1.30846391876993e-06    5.47737855072228e-10    2.17730148532880e-06
0.5   1.22939256419485e-06    5.47726308752772e-10    0.000714941910947564
0.6   7.42790341901056e-07    5.23649568151541e-10    1.11420521733407e-05
0.7   2.87200250914665e-07    1.84590307483745e-08    2.85923995323456e-05
0.8   7.57757301572681e-07    2.41504630515976e-07    3.48160580319057e-05
0.9   6.73465634593029e-07    2.39372931254422e-07    2.16845045073466e-05
1.0   2.66499710077639e-07    3.36172066610629e-05    5.56423776654214e-06

Table 7: The performance of the training with epoch and time

Train function   Performance of train   Epoch   Time      MSE
Trainlm          2.52e-33               478     0:00:05   5.348345849460722e-13
Trainbfg         1.65e-14               1074    0:00:17   1.048357906680465e-10
Trainbr          2.96e-10               1214    0:00:04   0.001758418561522

Table 8: Initial weights and bias of the network for different training algorithms

Weights and bias for trainlm:
Net.IW{1,1}   Net.LW{2,1}   Net.B{1}
0.2904        0.7302        0.8797
0.6171        0.3439        0.8178
0.2653        0.5841        0.2607
0.8244        0.1078        0.5944
0.9827        0.9063        0.0225

Weights and bias for trainbfg:
Net.IW{1,1}   Net.LW{2,1}   Net.B{1}
0.1017        0.2982        0.0899
0.9954        0.0464        0.0809
0.3321        0.5054        0.7772
0.2973        0.7614        0.9051
0.0620        0.6311        0.5338

Weights and bias for trainbr:
Net.IW{1,1}   Net.LW{2,1}   Net.B{1}
0.0474        0.6862        0.1955
0.3424        0.8936        0.7202
0.7360        0.0548        0.7218
0.7947        0.3037        0.8778
0.5449        0.0462        0.5824




7. Conclusion

This paper presents a new design of ANN to solve 1st and 2nd order nonlinear singular perturbed problems with initial conditions. The suggested architecture of the network is efficient and more accurate than other numerical methods, and the practical results show that for networks containing up to a few hundred weights the Levenberg-Marquardt algorithm (trainlm) has the fastest convergence, followed by trainbfg and then trainbr. However, trainbr does not perform well on function approximation problems. The performance of the various algorithms can be affected by the accuracy required of the approximation.

References

[1] O'Reilly, M. J., A uniform scheme for the singularly perturbed Riccati equation, Numer. Math., 50 (1987): 483-501.

[2] Selvakumar, K., A computational method for solving singular perturbation problems using exponentially fitted finite difference schemes, Appl. Math. Comput., 66 (1994): 277-292.

[3] Selvakumar, K., Optimal uniform finite difference schemes of order one for singularly perturbed Riccati equation, Commun. Numer. Methods Eng., 13 (1997): 1-12.

[4] Kadalbajoo, M. K., Arora, P., Gupta, V., Collocation method using artificial viscosity for solving stiff singularly perturbed turning point problem having twin boundary layers, Computers and Mathematics with Applications, 61 (2011): 1595-1607.

[5] Abdelhameed, N. A., Numerical Solution of Stiff and Singularly Perturbed Problems for Ordinary Differential and Volterra-type Equations, Springer, 2011.

[6] Stergiou, C., and Siganos, D., Neural Networks, IEEE Neural Networks Council, 2008.

[7] Hristev, R. M., The ANN Book, Edition 1, GNU Public License, 1998.

[8] Haykin, S., Neural Networks: A Comprehensive Foundation, Edition 1, Macmillan, 1994.


