OPTIMAL DISTRIBUTED KALMAN FILTER

N. D. ASSIMAKIS, G. TZIALLAS, A. KOUTSONIKOLAS
Technological Educational Institute of Lamia, Department of Electronics, 35100 Lamia, Greece

ABSTRACT

In this paper, a new method to define the optimal distributed Kalman Filter is proposed. The method is based on the a-priori determination of the optimum distribution of the measurements among parallel processors, using the criterion of minimizing the computation time. The resulting optimal filter presents a high parallelism speedup. This is very important because, in most real-time applications, it is essential to obtain the estimate in the shortest possible time.

Keywords: Estimation, Distributed Kalman Filter

1. INTRODUCTION

Estimation plays an important role in many fields of science and engineering, such as filtering [1]-[13], optimal control [13]-[15], radiative transfer [16], mathematics [17]-[24] and many others. The discrete time Kalman Filter ([2], the famous paper [4], and the historical survey [11]) is the most well known algorithm that solves the estimation/filtering problem. Many real world problems have been successfully solved using Kalman Filter ideas; filter applications to the aerospace industry, chemical processes, communication systems design, control, civil engineering, filtering noise from 2-dimensional images, pollution prediction and power systems are mentioned in [1] (see references [6]-[21], pp. 60-61). The applicability of the algorithm was until recently restricted by the computing power needed for its implementation. Real-time problems in areas like control require fast and accurate computation of large amounts of data in order to deal with larger and more realistic models.

The advances in the technology of integrated multisensor network systems allow the use of decentralized signal processing algorithms. A typical multisensor environment consists of several sensors observing a dynamic system, where each sensor is attached to a local processor. In these decentralized structures some amount of processing is done at the local processors of the network and the results are then communicated to a central processor, also referred to as a fusion center. In this paper we use the hierarchical approach for signal processing, in the case where the sensors are both collocated and dispersed [12]. We propose a method to define the optimal distributed Kalman Filter by determining the optimum distribution of the measurements among parallel local processors, using the criterion of minimizing the algorithm's computation time.
The paper is organized as follows: In section 2 the mathematical description of the discrete time Kalman Filtering problem is stated. In sections 3 and 4 the centralized and distributed Kalman Filters are briefly presented. The computational requirements are discussed in section 5. The proposed method to define the optimal distributed Kalman Filter is developed in section 6. Simulation results are presented in section 7. Finally, section 8 summarizes the conclusions.

2. PROBLEM FORMULATION

The estimation/filtering problem is associated with time varying systems described by the following state space equations:

x(k+1) = F(k+1,k) x(k) + w(k)
z(k+1) = H(k+1) x(k+1) + v(k+1)

where
- x(k) is the n-dimensional state vector at time k
- z(k) is the m-dimensional measurement vector
- F(k+1,k) is the system transition matrix
- H(k) is the output matrix
- {w(k)} and {v(k)} are Gaussian zero-mean white and uncorrelated random processes
- Q(k) and R(k) are the plant and measurement noise covariance matrices respectively
- x(0) is a Gaussian random variable with mean x0 and covariance P0
- x(0), {w(k)} and {v(k)} are independent

For a multisensor environment we assume that the measurement vector z(k) is partitioned into R parts [12], i.e.

z(k) = [z1^T(k) z2^T(k) ... zR^T(k)]^T, with Σ_{i=1}^{R} mi = m

where mi is the dimension of zi(k). Partitioning the measurement noise process into R statistically uncorrelated parts as

v(k) = [v1^T(k) v2^T(k) ... vR^T(k)]^T, E{v(k) v^T(k)} = block diag {R1(k) R2(k) ... RR(k)}

and the output matrix as

H(k) = [H1^T(k) H2^T(k) ... HR^T(k)]^T

we obtain for the measurement equation:

zi(k+1) = Hi(k+1) x(k+1) + vi(k+1), i = 1,2,...,R

In this framework, our objective is to solve the estimation/filtering problem: to produce an estimate x(L/L) at time L of the state vector x(L) using measurements up to time L.
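For concreteness, the following minimal sketch simulates the partitioned measurement model above for a time invariant system. The matrices F, H and the noise covariances Q, R, as well as the partition sizes, are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, R = 4, 6, 3                       # state dim, measurement dim, number of sensors
F = 0.9 * np.eye(n)                     # assumed stable transition matrix (|eig| < 1)
H = rng.standard_normal((m, n))         # assumed output matrix
Q = 0.01 * np.eye(n)                    # plant noise covariance
Rcov = 0.1 * np.eye(m)                  # measurement noise covariance (block diagonal)

# Partition the m measurements into R parts of sizes m_i (here 2 + 2 + 2 = m).
sizes = [2, 2, 2]
bounds = np.cumsum([0] + sizes)
H_i = [H[bounds[i]:bounds[i+1]] for i in range(R)]                        # H = [H1; ...; HR]
R_i = [Rcov[bounds[i]:bounds[i+1], bounds[i]:bounds[i+1]] for i in range(R)]

def step(x):
    """One step of x(k+1) = F x(k) + w(k), z_i(k+1) = H_i x(k+1) + v_i(k+1)."""
    x_next = F @ x + rng.multivariate_normal(np.zeros(n), Q)
    z_parts = [Hi @ x_next + rng.multivariate_normal(np.zeros(len(Hi)), Ri)
               for Hi, Ri in zip(H_i, R_i)]
    return x_next, z_parts

x = rng.standard_normal(n)              # x(0) ~ N(x0, P0), illustrative initial state
x, z_parts = step(x)
print([z.shape for z in z_parts])       # [(2,), (2,), (2,)]
```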
3. CENTRALIZED KALMAN FILTER

In this section we present the discrete time Kalman Filter in a centralized form. This means that all the required computations are carried out in one central processor. The discrete time Kalman Filter ([2], the famous paper [4], and the historical survey [11]), which is the most well known algorithm that solves the filtering problem for time varying systems, is summarized in the following:

Time Varying Centralized Kalman Filter

x(k+1/k) = F(k+1,k) x(k/k)
P(k+1/k) = F(k+1,k) P(k/k) F^T(k+1,k) + Q(k)
K(k+1) = P(k+1/k) H^T(k+1) [H(k+1) P(k+1/k) H^T(k+1) + R(k+1)]^-1
x(k+1/k+1) = x(k+1/k) + K(k+1) [z(k+1) - H(k+1) x(k+1/k)]
P(k+1/k+1) = P(k+1/k) - K(k+1) H(k+1) P(k+1/k)

for k = 0,1,... with initial conditions x(0/0) = x0 and P(0/0) = P0

For time invariant systems, where

F(k+1,k) = F, H(k+1) = H, Q(k) = Q, R(k+1) = R for all k

the discrete time Kalman Filter is summarized in the following:

Time Invariant Centralized Kalman Filter

x(k+1/k) = F x(k/k)
P(k+1/k) = F P(k/k) F^T + Q
K(k+1) = P(k+1/k) H^T [H P(k+1/k) H^T + R]^-1
x(k+1/k+1) = x(k+1/k) + K(k+1) [z(k+1) - H x(k+1/k)]
P(k+1/k+1) = P(k+1/k) - K(k+1) H P(k+1/k)

for k = 0,1,... with initial conditions x(0/0) = x0 and P(0/0) = P0
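A minimal sketch of one recursion of the time invariant centralized filter above; np.linalg.solve is used in place of the explicit matrix inverse, a standard implementation choice and not something prescribed by the paper.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One recursion of the time invariant centralized Kalman Filter."""
    # Prediction: x(k+1/k) = F x(k/k),  P(k+1/k) = F P(k/k) F^T + Q
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Gain: K(k+1) = P(k+1/k) H^T [H P(k+1/k) H^T + R]^-1
    S = H @ P_pred @ H.T + R
    K = np.linalg.solve(S.T, (P_pred @ H.T).T).T   # avoids forming S^-1 explicitly
    # Update: x(k+1/k+1) and P(k+1/k+1)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = P_pred - K @ H @ P_pred
    return x_new, P_new
```

Starting from x(0/0) = x0 and P(0/0) = P0, calling kalman_step once per measurement vector z(k+1) reproduces the recursion above.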
It is well known [1] for time invariant systems that if the signal process model is asymptotically stable (i.e. all eigenvalues of F lie inside the unit circle), then there exists a steady state value Pe of the estimation error variance P(k/k), which is reached at a time k = T where

||P(T+1/T+1) - P(T/T)|| ≤ ε

and ε is a small positive real number, whose selection depends on the physical problem the system describes. In the steady state case, the resulting discrete time steady state Kalman Filter is described by the following equations:

Steady State Centralized Kalman Filter

x(k+1/k+1) = A(KF) x(k/k) + B(KF) z(k+1)
A(KF) = F - KHF
B(KF) = K

for k = T, T+1, ... where K is the steady state gain

Remark. The steady state gain K, as well as the time k = T where the steady state solution is reached, are calculated by off-line solving the corresponding discrete time Riccati equation [1], [21]-[24].

4. DISTRIBUTED KALMAN FILTER

In this section we present the discrete time Kalman Filter in a distributed form. As was pointed out earlier, the main drawback of the above centralized approach is that it requires a large amount of computations to be carried out in the central processor, demanding therefore large computational power. Moreover, in the case of very large m, there is a tremendous computational burden on the processor. In the following, the results of the previous section are extended and the corresponding distributed algorithms are introduced. The distributed algorithms are decomposed into two parts: the Local Level and the Central Level (fusion center) [12]. At the Local Level each processor computes its local estimate using its own measurement. The data of each local processor is communicated to the fusion center, where the global estimate is computed. The local processors can operate concurrently, since there is no need for communication among local processors and no communication is needed from the central processor downwards in the hierarchy of the local processors. The generation of the global estimate can be thought of as overhead, due to the fact that the central processor needs information from the local processors, but not vice-versa.
The discrete time Distributed Kalman Filter for time varying systems [12] is summarized in the following:

Time Varying Distributed Kalman Filter

Central Level

x(k+1/k) = F(k+1,k) x(k/k)
P(k+1/k) = F(k+1,k) P(k/k) F^T(k+1,k) + Q(k)
P(k+1/k+1) = [P^-1(k+1/k) + Σ_{i=1}^{R} Bi(k+1)]^-1
x(k+1/k+1) = P(k+1/k+1) [P^-1(k+1/k) x(k+1/k) + Σ_{i=1}^{R} bi(k+1)]

for k = 0,1,... with initial conditions x(0/0) = x0 and P(0/0) = P0

Local Level

Bi(k+1) = Hi^T(k+1) Ri^-1(k+1) Hi(k+1)
bi(k+1) = Hi^T(k+1) Ri^-1(k+1) zi(k+1)

for k = 0,1,... and i = 1,2,...,R

The discrete time Distributed Kalman Filter for time invariant systems is summarized in the following:

Time Invariant Distributed Kalman Filter

Central Level

x(k+1/k) = F x(k/k)
P(k+1/k) = F P(k/k) F^T + Q
P(k+1/k+1) = [P^-1(k+1/k) + Σ_{i=1}^{R} Bi]^-1
x(k+1/k+1) = P(k+1/k+1) [P^-1(k+1/k) x(k+1/k) + Σ_{i=1}^{R} bi(k+1)]

for k = 0,1,... with initial conditions x(0/0) = x0 and P(0/0) = P0, where Bi = Hi^T Ri^-1 Hi for i = 1,2,...,R are off-line calculated
Local Level

bi(k+1) = Hi^T Ri^-1 zi(k+1)

for k = 0,1,... and i = 1,2,...,R

In the steady state case, the resulting discrete time steady state Distributed Kalman Filter is described by the following equations:

Steady State Distributed Kalman Filter

Central Level

x(k+1/k+1) = A x(k/k) + Pe Σ_{i=1}^{R} bi(k+1)

for k = T, T+1, ... where A = F - KHF is off-line calculated, K is the steady state gain and Pe is the steady state estimation variance

Local Level

bi(k+1) = Hi^T Ri^-1 zi(k+1)

for k = T, T+1, ... and i = 1,2,...,R

Remarks.
1. We note that the estimates obtained by the central processor are identical to the global centralized ones. Thus, the distributed algorithms are mathematically equivalent to the centralized algorithms.
2. In the steady state case, the steady state gain, the steady state estimation variance as well as the time k = T where the steady state solution is reached, are calculated by off-line solving the corresponding discrete time Riccati equation [1], [21]-[24].
3. The local processor equations are simple memoryless non-recursive multiplications and can be implemented using a very simple processor.
4. There is no two way communication between the local and the central processor. Therefore the local processors can operate in parallel.
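The following sketch illustrates the steady state distributed filter and its off-line initialization. Per Remark 2, K and Pe come from the discrete time Riccati equation; here it is solved by plain fixed-point iteration, one simple option among the methods cited in [1], [21]-[24]. The matrices are assumed to come from the simulation sketch of section 2; the function names and the tolerance eps are illustrative.

```python
import numpy as np

def steady_state_gain(F, H, Q, R, eps=1e-12):
    """Off-line: iterate the Kalman covariance recursion until the estimation
    error covariance converges, per Remark 2 (fixed-point Riccati iteration)."""
    n = F.shape[0]
    P = np.eye(n)                                  # P(0/0) = P0, illustrative
    while True:
        P_pred = F @ P @ F.T + Q                   # P(k+1/k)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)        # gain candidate
        P_new = P_pred - K @ H @ P_pred            # P(k+1/k+1)
        if np.abs(P_new - P).max() <= eps:         # ||P(T+1/T+1) - P(T/T)|| <= eps
            return K, P_new                        # steady state gain K and Pe
        P = P_new

def local_level(H_i, R_i, z_parts):
    """Local Level: b_i = H_i^T R_i^-1 z_i, computable in parallel."""
    return [Hi.T @ np.linalg.solve(Ri, zi) for Hi, Ri, zi in zip(H_i, R_i, z_parts)]

def central_level(x, b_list, A, Pe):
    """Central Level: x(k+1/k+1) = A x(k/k) + Pe * sum_i b_i(k+1)."""
    return A @ x + Pe @ sum(b_list)

# Off-line initialization, then one fused recursion (A = F - K H F):
# K, Pe = steady_state_gain(F, H, Q, Rcov)
# A = F - K @ H @ F
# x = central_level(x, local_level(H_i, R_i, z_parts), A, Pe)
```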
5. COMPUTATIONAL REQUIREMENTS

In the Centralized Kalman Filter case, all the required computations for the generation of the global estimate are carried out in one central processor. In the Distributed Kalman Filter case, each local processor computes its local estimate using its own measurement (the local processors operate in parallel, since there is no need for communication among local processors). The data of each local processor is communicated to the central processor (there is no two way communication between the local and the central processor), where the global estimate is computed.

The computational analysis is based on the analysis in [24], where the following result was used: scalar addition, multiplication, division and square root operations are involved in the matrix manipulation operations needed for the implementation of the Kalman Filter. The following computation times are defined (for both Centralized and Distributed Kalman Filters):

- t(+) is the time needed to calculate the sum of two scalar operands
- t(*) is the time needed to calculate the product of two scalar operands
- t(/) is the time needed to calculate the quotient of two scalar operands
- t(√) is the time needed to calculate the square root of a scalar operand

and the following communication time is defined (for the Distributed Kalman Filter):

- t(com) is the time needed to transfer a memory word from a local processor to the central processor

All these operation times are normalized with respect to the scalar addition operation time by defining the parameters (ratios):

- a = t(*)/t(+) = 1
- b = t(/)/t(+) = 2.7
- c = t(√)/t(+) = 5
- d = t(com)/t(+) = 0.1

where t(+) = 0.2 nsec, as resulting from an analysis performed on a Pentium 100 MHz processor. Due to the fact that the Kalman Filter is a recursive algorithm, the total computation time of each centralized algorithm equals

t_c = CB_c · s · t_op

while the total computation time of each distributed algorithm equals

t_d = CB_d · s · t_op = {CBL_d + CBC_d} · s · t_op

where
- CB_c is the per recursion calculation burden required for the on-line calculations of each centralized algorithm
- CB_d is the per recursion calculation burden required for the on-line calculations of each distributed algorithm
- CBL_d is the per recursion calculation burden required for the on-line calculations needed for the local estimates computation in the local processors, taking into account the communication burden required for transferring all needed information from the local processors to the central processor
- CBC_d is the per recursion calculation burden required for the on-line calculations needed for the global estimate computation in the central processor
- s is the number of recursions (steps) that each centralized or distributed algorithm executes (which equals the time k = L where estimates are required)
- t_op is the time required to perform a scalar addition operation, i.e. t_op = t(+)

Thus, the total computation time of the filter is proportional to its per recursion calculation burden. The calculation burden of the off-line calculations (initialization process for the steady state centralized filter and for the time invariant and steady state distributed filters) is not taken into account. Table 1 summarizes the computational requirements for the implementation of the Centralized Kalman Filter, i.e. the per recursion calculation burden required for the global estimate computation CB_c. Table 2 summarizes the computational requirements for the implementation of the Distributed Kalman Filter, i.e. the per recursion calculation burden required for the local estimates computation in each local processor CBl_d(i) for i = 1,2,...,R, the per recursion communication burden needed to transfer data from each local processor to the central processor CBc_d, and the per recursion calculation burden required for the global estimate computation in the central processor CBC_d.

Table 1. Centralized Kalman Filter Computational Requirements (per recursion calculation burden CB_c)

Time Varying:
  2.5(1+a)n^3 + (6.5a+2.5)n^2 + (2a+4b+2c+4)n + 3b
  + 0.5(1+a)n^2 m + (1.5a+0.5)nm + (1+a)nm^2
  + 0.5(1+a)m^3 + 1.5am^2 + (a+2b+c+0.5)m

Time Invariant:
  2.5(1+a)n^3 + (6.5a+3)n^2 + (2a+4b+2c+3.5)n + 2b + (1+a)nm

Steady State:
  (1+a)n^2 + (1+a)nm + n
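As a sketch of how Table 1 is used, the per recursion burden CB_c and the total time t_c = CB_c · s · t(+) can be evaluated as below. Note that the '+' signs in the polynomial terms are reconstructed from context here, so treat these formulas as my reading of the table rather than an authoritative transcription.

```python
# Normalized operation-time ratios from section 5.
a, b, c, d = 1.0, 2.7, 5.0, 0.1
t_plus = 0.2e-9                       # t(+) = 0.2 nsec

def cb_c(system, n, m):
    """Per recursion calculation burden CB_c of the Centralized KF (Table 1)."""
    if system == "time varying":
        return (2.5*(1+a)*n**3 + (6.5*a+2.5)*n**2 + (2*a+4*b+2*c+4)*n + 3*b
                + 0.5*(1+a)*n**2*m + (1.5*a+0.5)*n*m + (1+a)*n*m**2
                + 0.5*(1+a)*m**3 + 1.5*a*m**2 + (a+2*b+c+0.5)*m)
    if system == "time invariant":
        return 2.5*(1+a)*n**3 + (6.5*a+3)*n**2 + (2*a+4*b+2*c+3.5)*n + 2*b + (1+a)*n*m
    if system == "steady state":
        return (1+a)*n**2 + (1+a)*n*m + n
    raise ValueError(system)

def t_total(cb, s):
    """Total computation time: t = CB * s * t_op with t_op = t(+)."""
    return cb * s * t_plus
```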
Table 2. Distributed Kalman Filter Computational Requirements (per recursion)

Time Varying:
  local processors (for i = 1,2,...,R):
    0.5(1+a)mi^3 + 1.5a mi^2 + (a+2b+c+0.5)mi + b
    + 0.5(1+a)n^2 mi + (1.5a+0.5)n mi + (1+a)n mi^2 + 0.5n^2 + 1.5n
  communication burden (for each local processor):
    d(0.5n^2 + 1.5n)
  central processor:
    2.5(1+a)n^3 + (6.5a+2.5)n^2 + (2a+4b+2c+4)n + 2b + (0.5n^2 + 1.5n)R

Time Invariant:
  local processors (for i = 1,2,...,R):
    (1+a)n mi + n
  communication burden (for each local processor):
    dn
  central processor:
    2.5(1+a)n^3 + (6.5a+2.5)n^2 + (2a+4b+2c+4)n + 2b + (0.5n^2 + 1.5n)R

Steady State:
  local processors (for i = 1,2,...,R):
    (1+a)n mi + n
  communication burden (for each local processor):
    dn
  central processor:
    2(1+a)n^2 + 2n + nR
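Continuing the previous sketch, here are the Table 2 entries for the steady state case (again with the '+' signs reconstructed); these are the quantities CBl_d(i), CBc_d and CBC_d used in the next section.

```python
def cbl_d(n, m_i):
    """Steady state local burden CBl_d(i) = (1+a)*n*m_i + n (Table 2)."""
    return (1+a)*n*m_i + n

def cbc_d(n):
    """Steady state communication burden CBc_d = d*n per local processor."""
    return d*n

def cbC_d(n, R):
    """Steady state central processor burden CBC_d = 2(1+a)n^2 + 2n + nR."""
    return 2*(1+a)*n**2 + 2*n + n*R
```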
6. OPTIMAL DISTRIBUTED KALMAN FILTER DEFINITION

In this section, a method to define the Optimal Distributed Kalman Filter is developed. Our aim is to minimize the computation time of the distributed filter. From the previous section, it becomes obvious that in order to minimize the computation time t_d, we have to minimize the sum CBL_d + CBC_d, where CBC_d is taken from Table 2. In order to calculate CBL_d we make the following assumptions:

Assumptions.
1. We have a multisensor problem where m >> n (n and m are given).
2. Each local processor can transfer data to the central processor after its local estimate is computed in the local processor; this communication process can be performed while the other local processors compute their local estimates.
3. The local processors communicate to the central processor in increasing sorted order with respect to the number of measurements attached to each local processor; this results from the fact that the per recursion calculation burden required for the local estimate computation in each local processor is an increasing function of the number of measurements attached to it. Thus, the algorithm for selecting which local processor will communicate to the central processor at every moment complies with the following rule: "first ends first sends", i.e. the local processor which is selected to send data to the central processor is the one that first calculates its local estimate.
4. The global estimate is computed in the central processor after all needed information is transferred from the local processors to the central processor.

Then, we have CBL_d = cbl_d(R), which is computed by the following recursive formula:

cbl_d(i) = CBc_d + max {cbl_d(i-1), CBl_d(i)} for i = 1,2,...,R

with initial condition cbl_d(0) = 0, where cbl_d(i) is the calculation burden required for the local estimates computation in the first i local processors, taking into account the communication burden required for transferring all needed information from the i local processors to the central processor (the local processors are sorted in increasing order with respect to the number of measurements attached to each local processor), and CBc_d and CBl_d(i) for i = 1,2,...,R are taken from Table 2.
From the above recursive formula and the results of the previous section, we can easily see that the computation time of the Distributed Kalman Filter depends on the distribution of the m measurements among the R parallel local processors. Thus, the proposed method to a-priori (before the filter's implementation) define the Optimal Distributed Kalman Filter uses the criterion of minimizing the computation time of the Distributed Kalman Filter, in order to determine the optimum distribution of the m measurements among R parallel processors:

Method. Optimal Distributed Kalman Filter Definition
Input: System (Time Varying / Time Invariant / Steady State), Model Order (n, m)
Output: Optimum distribution of m measurements into R processors

Algorithm.
for R = 2 to m do
begin
  find the distributions of m measurements into R processors
  for each distribution do
    find the minimum computation time of the Distributed Kalman Filter
    (Time Varying / Time Invariant / Steady State)
end

Comments.
1. For R = 1 the Distributed Kalman Filter is equivalent to the Centralized Kalman Filter.
2. The local processors are sorted in increasing order with respect to the number of measurements attached to each local processor.

Finally, we define the parallelism speedup of the Distributed Kalman Filter as

speedup = t_c / t_d = CB_c / CB_d

where the computation time t_c of the Centralized Kalman Filter is constant (it depends on the state vector dimension n and the measurement vector dimension m) and the computation time t_d of the Distributed Kalman Filter depends on the distribution of the m measurements among the R parallel local processors. Thus, the maximum parallelism speedup is achieved for the optimum distribution of the m measurements among the R parallel processors (in this case, the computation time of the Distributed Kalman Filter is minimum).
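Putting the sketches together: since a distribution of m measurements is just a multiset of part sizes, the search below enumerates the integer partitions of m (their total count, e.g. 42 for m=10 and 204226 for m=50, matches the "distributions" counts reported in Table 3), evaluates CB_d = CBL_d + CBC_d for every partition with R ≥ 2, and keeps the minimum. It is shown for the steady state burden model; because the Table 1/2 formulas are reconstructed, the speedups it returns should be read as indicative rather than exact reproductions of Table 3.

```python
def partitions(m, max_part=None):
    """Generate the integer partitions of m as non-increasing part lists."""
    if max_part is None:
        max_part = m
    if m == 0:
        yield []
        return
    for first in range(min(m, max_part), 0, -1):
        for rest in partitions(m - first, first):
            yield [first] + rest

def optimal_distribution(n, m):
    """Optimum distribution of m measurements and the maximum speedup."""
    best = None
    for parts in partitions(m):
        if len(parts) < 2:                      # R = 1 is the centralized filter
            continue
        cb_dist = cbl_total(n, parts) + cbC_d(n, len(parts))
        if best is None or cb_dist < best[0]:
            best = (cb_dist, parts)
    cb_cent = cb_c("steady state", n, m)
    return best[1], cb_cent / best[0]           # distribution, speedup = CB_c / CB_d

# e.g. optimal_distribution(4, 10) — with the reconstructed formulas this should be
# close to Table 3 (steady state, n=4, m=10: speedup 1.2, distribution 5 x 2).
```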
7. SIMULATION RESULTS

In this section it is pointed out that the proposed Optimal Distributed Kalman Filter presents a high parallelism speedup. This conclusion is verified through the following examples:

Example 1. Time varying, time invariant and steady state systems for various values of the model order (n=4, m=10, 20, 30, 40, 50) are assumed in this example. The maximum parallelism speedup and the optimum distribution of measurements into parallel local processors (processors x measurements) required for the implementation of the Optimal Distributed Kalman Filter are presented in Table 3. From Table 3 we are able to derive the following conclusions:

1. The maximum parallelism speedup increases as the measurement vector dimension m increases (while the state vector dimension n remains constant), for time varying, time invariant and steady state systems. This is very important for multisensor problems.

2. The maximum parallelism speedup for time varying systems is higher than the maximum parallelism speedup for time invariant and steady state systems (for the same state vector dimension n and the same measurement vector dimension m). This is very important for time varying systems, which require a large computation time for the implementation of the Centralized Kalman Filter, as results from Table 1.

3. The maximum parallelism speedup can be achieved for more than one distribution of measurements into parallel processors. In this case, we propose to determine the optimum distribution using the additional criterion of minimizing the idle time of the local processors (besides the criterion of minimizing the computation time). For example, as we can see from Table 3, in the case of the steady state system with n=4 and m=30, the maximum speedup is achieved for the following two distributions:
- 6 processors x 5 measurements and
- 2 processors x 3 measurements + 6 processors x 4 measurements
For the first distribution, there is no idle time for the local processors. For the second distribution, the 2 processors (with 3 measurements attached to them) remain idle for 28.5% of the time the 6 processors (with 4 measurements attached to them) require to calculate their local estimates. Thus, we decide that the optimum distribution is the first one.
Table 3. Optimal Distributed Kalman Filter: Maximum Parallelism Speedup and Optimum Distribution (Example 1)

                          Time Varying     Time Invariant   Steady State
n=4, m=10 (42 distributions)
  maximum speedup         4.363            1.043            1.2
  optimum distribution    5 x 2            2 x 5            5 x 2
n=4, m=20 (627 distributions)
  maximum speedup         17.667           1.125            1.774
  optimum distribution    10 x 2           4 x 5            5 x 4
n=4, m=30 (5604 distributions)
  maximum speedup         45.015           1.211            2.264
  optimum distribution    15 x 2           5 x 6            6 x 5; 2 x 3 + 6 x 4
n=4, m=40 (37338 distributions)
  maximum speedup         89.399           1.305            2.736
  optimum distribution    20 x 2           5 x 8            8 x 5
n=4, m=50 (204226 distributions)
  maximum speedup         156.848          1.394            3.147
  optimum distribution    1 x 2 + 16 x 3   5 x 10           10 x 5
4. The maximum parallelism speedup is achieved, in general, for uniform distributions: we use R parallel local processors to which measurement sets mi, for i=1,2,...,R, of the same size are attached, i.e. the following relation holds:

R · mi = m, for i = 1,...,R

This uniform distribution has the following advantages:
- All local processors do the same calculations concerning quantities of the same type and dimensionality. Thus, all processors have the same structure and therefore a low hardware cost is required for the implementation of the Distributed Kalman Filter.
- There is no idle time for the local processors.

The maximum parallelism speedup can, however, be achieved for non-uniform distributions. For example, as we can see from Table 3, in the case of the time varying system with n=4 and m=50, the maximum speedup is achieved for the following distribution: 1 processor x 2 measurements + 16 processors x 3 measurements. For this optimum distribution, the 1 processor (with 2 measurements attached to it) remains idle for 51.1% of the time the 16 processors (with 3 measurements attached to them) require to calculate their local estimates.

In Figures 1-3, plots are given for the parallelism speedup with respect to the number of processors required for the implementation of the Distributed Kalman Filter, for time varying, time invariant and steady state systems, respectively, where n=4 and m=50.
[Figure 1: Distributed Kalman Filter Parallelism Speedup, Time Varying system, n=4, m=50. Speedup versus number of processors.]
[Figure 2: Distributed Kalman Filter Parallelism Speedup, Time Invariant system, n=4, m=50. Speedup versus number of processors.]
[Figure 3: Distributed Kalman Filter Parallelism Speedup, Steady State system, n=4, m=50. Speedup versus number of processors.]
Example 2. In a typical multisensor problem, seismic signal processing for oil exploration, a wavelet is used to describe the signal received by the seismic sensors [25]. Usually, a convolution summation of order n is used to describe the wavelet, and the number of sensors (geophones) m used to capture the seismic reflection varies from a few hundred to several thousand. The model order is assumed to be n=4 and m=1000 in this example. A uniform distribution of the m measurements into R local subsystems (geophone clusters) is also considered. The maximum parallelism speedup and the optimum uniform distribution of measurements into parallel local processors (processors x measurements) required for the implementation of the Optimal Distributed Kalman Filter are presented in Table 4.

Table 4. Optimal Distributed Kalman Filter: Maximum Parallelism Speedup and Optimum Uniform Distribution (Example 2)

                                 Time Varying   Time Invariant   Steady State
n=4, m=1000 (16 uniform distributions)
  maximum speedup                265027.504     7.143            18.757
  optimum uniform distribution   125 x 8        25 x 40          40 x 25
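A uniform distribution R x mi must satisfy R · mi = m, so the candidate uniform distributions are exactly the divisor pairs of m. The sketch below enumerates them; for m=1000 there are 16, matching the count in Table 4.

```python
def uniform_distributions(m):
    """All uniform distributions R x m_i with R * m_i = m (divisor pairs of m)."""
    return [(R, m // R) for R in range(1, m + 1) if m % R == 0]

print(len(uniform_distributions(1000)))   # 16, as in Table 4
```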
From Table 4 we are able to see that the Optimal Distributed Kalman Filter for the time varying system presents very high maximum parallelism speedup: the Optimal Distributed Kalman Filter can be implemented 265 thousand times faster than the Centralized Kalman Filter. In fact, for 1 million recursions, the total computation time of the Time Varying Centralized Kalman Filter is equal to 56.08 hours, while the total computation time of the Time Varying Optimal Distributed Kalman Filter is equal to 0.76 seconds.
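These two times can be reproduced from the earlier sketches: t_c = CB_c · s · t(+) with the Table 1 time varying burden, and t_d = t_c / speedup with the speedup reported in Table 4.

```python
s = 1_000_000                                  # recursions
tc = t_total(cb_c("time varying", n=4, m=1000), s)
print(tc / 3600)                               # ~56.08 hours
print(tc / 265027.504)                         # ~0.76 seconds
```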
The parallelism speedup with respect to the number of processors required for the implementation of the Distributed Kalman Filter, for the time varying system with n=4 and m=1000, is plotted in Figure 4.
[Figure 4: Distributed Kalman Filter Parallelism Speedup, Time Varying system, n=4, m=1000. Speedup versus number of processors.]

8. CONCLUSIONS

Centralized and distributed approaches to the solution of the discrete time estimation/filtering problem for a multisensor environment were presented in this paper. The discrete time Centralized and Distributed Kalman Filters were analyzed. It was pointed out that both the discrete time Centralized and Distributed Kalman Filters calculate the same estimates; thus the filters are equivalent with respect to their behavior. The computational requirements of both filters were discussed. A method was also proposed to a-priori (before the filter's implementation) define the Optimal Distributed Kalman Filter. The method is based on the a-priori determination of the optimum distribution of the measurements among parallel processors, using the criterion of minimizing the computation time. It was verified through simulation results that the proposed Optimal Distributed Kalman Filter presents a high parallelism speedup. This result is very important due to the fact that, in most real-time applications, it is essential to obtain the estimate in the shortest possible time.
REFERENCES

1. ANDERSON B.D.O. & MOORE J.B., Optimal Filtering, Prentice Hall Inc. (1979).
2. KALMAN R.E. & BUCY R.S., New results in linear filtering and prediction theory, Trans. ASME (J. Basic Eng.), ser. D, vol. 83, pp. 95-107 (1961).
3. KAILATH T., Some new algorithms for recursive estimation in constant linear systems, IEEE Trans. Inform. Theory, vol. IT-19, pp. 750-760 (1973).
4. KALMAN R.E., A new approach to linear filtering and prediction problems, J. Bas. Eng., Trans. ASME, ser. D, vol. 82, no. 1, pp. 34-45 (1960).
5. LAINIOTIS D.G., Estimation: A brief survey, J. Inf. Sci., vol. 7, pp. 191-202 (1974).
6. LAINIOTIS D.G., Partitioned estimation algorithms, II: Linear estimation, J. Inf. Sci., vol. 7, no. 3, pp. 317-340 (1974).
7. LAINIOTIS D.G., Optimal nonlinear estimation, Int. J. Contr., vol. 14, pp. 1137-1148 (1971).
8. LAINIOTIS D.G., A unifying framework for linear estimation: Generalized partitioned algorithms, J. Inform. Sci., vol. 10, no. 3, pp. 243-278 (1976).
9. LAINIOTIS D.G., Partitioned linear estimation algorithms: Discrete case, IEEE Trans. on AC, vol. AC-20, pp. 255-257 (1975).
10. MORF M., SIDHU G.S. & KAILATH T., Some new algorithms for recursive estimation in constant, linear, discrete-time systems, IEEE Trans. on AC, vol. AC-19, no. 4, pp. 315-323 (1974).
11. SORENSON H.W., Least-squares estimation: from Gauss to Kalman, IEEE Spectrum, vol. 7, no. 7, pp. 63-68 (1970).
12. HASHEMIPOUR H.R., ROY S. & LAUB A.J., Decentralized structures for parallel Kalman filtering, IEEE Trans. on Automatic Control, vol. 33, no. 1 (1988).
13. MEDITCH J.S., Stochastic Optimal Linear Estimation and Control, New York, McGraw-Hill (1969).
14. LAINIOTIS D.G., Adaptive Control: Partitioned Algorithms, Proc. Johns Hopkins Conf. Information Sciences and Systems, Baltimore, Md. (1975).
15. KUCERA V., The discrete Riccati equation of optimal control, Kybernetika, vol. 8, no. 5, pp. 430-447 (1972).
16. CHANDRASEKHAR S., Radiative Transfer, New York: Dover (1960).
17. KENNEY C.S. & LEIPNIK R.B., Numerical integration of the differential matrix Riccati equation, IEEE Trans. on AC, vol. AC-30, no. 10, pp. 962-970 (1985).
18. HITZ K.L. & ANDERSON B.D.O., Iterative methods of computing the limiting solution of the matrix Riccati differential equation, Proc. IEE, vol. 119, pp. 1402-1406 (1972).
19. LAINIOTIS D.G., Generalized Chandrasekhar algorithms: time varying models, IEEE Trans. on AC, vol. AC-21, pp. 728-732 (Oct. 1976).
20. REID W.T., Riccati Differential Equations, New York, Academic (1972).
21. LAINIOTIS D.G., Discrete Riccati equation solutions: Partitioned algorithms, IEEE Trans. on AC, vol. AC-20, pp. 555-556 (1975).
22. LAINIOTIS D.G., ASSIMAKIS N.D. & KATSIKAS S.K., A new computationally effective algorithm for solving the discrete Riccati equation, J. of Mathematical Analysis and Applications, vol. 186, no. 3, pp. 868-895 (1994).
23. LAINIOTIS D.G., ASSIMAKIS N.D. & KATSIKAS S.K., Fast and numerically robust recursive algorithms for solving the discrete time Riccati equation: The case of nonsingular plant noise covariance matrix, Neural Parallel and Scientific Computations, vol. 3, no. 4, pp. 565-583 (1995).
24. ASSIMAKIS N.D., LAINIOTIS D.G., KATSIKAS S.K. & SANIDA F.L., A survey of recursive algorithms for the solution of the discrete time Riccati equation, 2nd WFNA Conf. (1996), and J. of Nonlinear Analysis, accepted for publication (1997).
25. LAINIOTIS D.G., KATSIKAS S.K. & LIKOTHANASSIS S.D., Optimal seismic deconvolution, Signal Processing, vol. 15, pp. 375-404 (1988).