SOLUTIONS MANUAL FOR
Optimal and Robust Estimation: With an Introduction to Stochastic Control Theory, Second Edition
by Frank L. Lewis, Lihua Xie, and Dan Popa
Boca Raton London New York
CRC Press is an imprint of the Taylor & Francis Group, an informa business
CRC Press Taylor & Francis Group 6000 Broken Sound Parkway NW, Suite 300 Boca Raton, FL 33487-2742 © 2009 by Taylor & Francis Group, LLC CRC Press is an imprint of Taylor & Francis Group, an Informa business No claim to original U.S. Government works Printed in the United States of America on acid-free paper 10 9 8 7 6 5 4 3 2 1 International Standard Book Number-13: 978-1-4200-6942-6 (Softcover) This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint. Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers. For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged. 
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com
Answers to Exercises of Chapters 6-8

The authors would like to express their appreciation to Jun Xu, Shuai Liu, Nan Xiao and Keyou You for their help in preparing the answers to the exercises in Chapters 6-8.
ANSWERS TO EXERCISES OF CHAPTER 6

1. Since $\Delta_k^T\Delta_k\le I$ (and hence $\Delta_k\Delta_k^T\le I$), we have
$$\varepsilon_k^{-1}I-\Delta_k\bar F\tilde\Sigma_k\bar F^T\Delta_k^T-\Delta_k(\varepsilon_k^{-1}I-\bar F\tilde\Sigma_k\bar F^T)\Delta_k^T=\varepsilon_k^{-1}(I-\Delta_k\Delta_k^T)\ge0.\tag{1}$$
Applying Schur complements, the following inequality holds:
$$(\bar A_k+\bar D_k\Delta_k\bar F)\tilde\Sigma_k(\bar A_k+\bar D_k\Delta_k\bar F)^T\le\bar A_k\tilde\Sigma_k\bar A_k^T+\bar A_k\tilde\Sigma_k\bar F^T(\varepsilon_k^{-1}I-\bar F\tilde\Sigma_k\bar F^T)^{-1}\bar F\tilde\Sigma_k\bar A_k^T+\varepsilon_k^{-1}\bar D_k\bar D_k^T.\tag{2}$$
Since $\tilde\Sigma_k$ is the solution to the difference Riccati equation
$$\tilde\Sigma_{k+1}-\bar A_k\tilde\Sigma_k\bar A_k^T-\bar A_k\tilde\Sigma_k\bar F^T(\varepsilon_k^{-1}I-\bar F\tilde\Sigma_k\bar F^T)^{-1}\bar F\tilde\Sigma_k\bar A_k^T-\varepsilon_k^{-1}\bar D_k\bar D_k^T-\bar GQ\bar G^T=0,\tag{3}$$
combining equation (3) with inequality (2) shows that $\tilde\Sigma_k$ satisfies the Lyapunov inequality (6.15), which means $\tilde\Sigma_k>\Sigma_k=E(\xi_k\xi_k^T)$.

2. Applying the matrix inversion lemma, we have
$$Y_k=P_k+P_kF^T(\varepsilon_k^{-1}I-FP_kF^T)^{-1}FP_k=(P_k^{-1}-\varepsilon_kF^TF)^{-1}.$$
Then we can write
$$\begin{bmatrix}AY_kA^T&AY_kH^T+\varepsilon_k^{-1}D_1D_2^T\\ HY_kA^T+\varepsilon_k^{-1}D_2D_1^T&R_{\varepsilon_k}+HY_kH^T\end{bmatrix}$$
with $AY_kA^T=AP_kA^T+AP_kF^T(\varepsilon_k^{-1}I-FP_kF^T)^{-1}FP_kA^T$. A sequence of congruence transformations built from $\Omega_1$, $\Omega_2$ and $\Omega_3$ below reduces this matrix to
$$AP_kA^T-\begin{bmatrix}FP_kA^T\\ HP_kA^T+\varepsilon_k^{-1}D_2D_1^T\end{bmatrix}^T\begin{bmatrix}-\varepsilon_k^{-1}I+FP_kF^T&FP_kH^T\\ HP_kF^T&R_{\varepsilon_k}+HP_kH^T\end{bmatrix}^{-1}\begin{bmatrix}FP_kA^T\\ HP_kA^T+\varepsilon_k^{-1}D_2D_1^T\end{bmatrix}$$
where
$$\Omega_1=-\big(R_{\varepsilon_k}+HY_kH^T\big)^{-1}\big(AY_kH^T+\varepsilon_k^{-1}D_1D_2^T\big)^T,\quad \Omega_2=\big(R_{\varepsilon_k}+HY_kH^T\big)^{-1}FP_kA^T,\quad \Omega_3=-(\varepsilon_k^{-1}I-FP_kF^T)^{-1}FP_kH^T.$$
If we define $\tilde H_k$, $\tilde D_{1,k}$, $\tilde D_{2,k}$, $\tilde R_k$ and $\bar D_{2,k}$ as in (6.22)-(6.23), then (6.21) follows after simple manipulation.
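The matrix-inversion-lemma identity behind $Y_k$ can be spot-checked numerically; in the scalar case it reads $Y=P+PF(\varepsilon^{-1}-FPF)^{-1}FP=(P^{-1}-\varepsilon F^2)^{-1}$. A minimal sketch, where the values of $P$, $F$ and $\varepsilon$ are arbitrary sample choices (assumptions for illustration) satisfying $\varepsilon^{-1}-FPF>0$:

```python
# Scalar check of the identity used for Y_k:
#   Y = P + P*F*(1/eps - F*P*F)**-1 * F*P  ==  (1/P - eps*F*F)**-1
# P, F, eps are arbitrary illustrative values with 1/eps - F*P*F > 0.
P, F, eps = 2.0, 0.5, 0.3

lhs = P + P * F * (1.0 / eps - F * P * F) ** -1 * F * P
rhs = (1.0 / P - eps * F * F) ** -1

assert abs(lhs - rhs) < 1e-12
```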
3. (i) Let the transformation matrix be $T=\begin{bmatrix}0&I\\ I&I\end{bmatrix}$, where $I$ is the $n\times n$ identity matrix. Consider the solution $\tilde\Sigma_k$ of (6.17) in the partitioned form $\tilde\Sigma_k=\begin{bmatrix}\tilde\Sigma_{11,k}&\tilde\Sigma_{12,k}\\ \tilde\Sigma_{21,k}&\tilde\Sigma_{22,k}\end{bmatrix}$, where all blocks are $n\times n$ matrices, and define $\bar X_k=\tilde\Sigma_{11,k}+\tilde\Sigma_{12,k}+\tilde\Sigma_{21,k}+\tilde\Sigma_{22,k}$. Multiplying both sides of the RDE (6.17) by $T$ from the left and by $T^T$ from the right, we have
$$\begin{bmatrix}*&*\\ *&\bar X_{k+1}\end{bmatrix}=\begin{bmatrix}*&*\\ *&A\bar X_kA^T+A\bar X_kF^T(\varepsilon_k^{-1}I-F\bar X_kF^T)^{-1}F\bar X_kA^T+\varepsilon_k^{-1}D_1D_1^T+GQG^T\end{bmatrix}\tag{4}$$
where $*$ denotes entries that are irrelevant here, and $\bar X_0=P_{x_0}$. It is obvious that the (2,2) block of (4) coincides with the RDE (6.19), and $I-\varepsilon_kF\bar X_kF^T>0$ follows from $I-\varepsilon_k\bar F\tilde\Sigma_k\bar F^T>0$. Hence $X_k=\bar X_k$.
Furthermore, note that (6.19) can be rewritten as
$$X_{k+1}=\begin{bmatrix}A^T\\ D_1^T\\ Q^{1/2}G^T\end{bmatrix}^T\begin{bmatrix}X_k&0&0\\ 0&\varepsilon_k^{-1}I&0\\ 0&0&I\end{bmatrix}\begin{bmatrix}A^T\\ D_1^T\\ Q^{1/2}G^T\end{bmatrix}+AX_kF^T(\varepsilon_k^{-1}I-FX_kF^T)^{-1}FX_kA^T.\tag{5}$$
Under Assumption 6.1.2, $X_{k+1}>0$ whenever $X_k>0$. Since $X_0=P_{x_0}>0$, it follows that $X_k>0$ for all $k>0$.
(ii) Consider the finite horizon bounded real lemma: if there exists a bounded solution $X_k>0$ to (6.19) over $[0,N]$, then for the system
$$\tilde x_{k+1}=A\tilde x_k+\begin{bmatrix}GQ^{\frac12}&\varepsilon_k^{-\frac12}D_1\end{bmatrix}\tilde w_k,\qquad \tilde z_k=\varepsilon_k^{\frac12}F\tilde x_k,$$
the following worst-case performance measure is satisfied:
$$\sup_{0\neq(\tilde x_0,\tilde w)\in\mathbb{R}^n\times l_2[0,N]}\frac{\|\tilde z_k\|_2^2}{\tilde x_0^TP_{x_0}^{-1}\tilde x_0+\|\tilde w_k\|_2^2}<1.$$
If for the linear system (7)-(9) there exists an $H_\infty$ filter such that the worst-case performance measure
$$\sup_{0\neq(\tilde x_0,\tilde w,\tilde v)\in\mathbb{R}^n\times l_2[0,N]\times l_2[0,N]}\frac{\|\tilde z_k-\tilde z_{e,k}\|_2^2}{\tilde x_0^TP_{x_0}^{-1}\tilde x_0+\|\tilde w_k\|_2^2+\|\tilde v_k\|_2^2}<1$$
is satisfied,
then there exists a solution $P_k=P_k^T>0$ to the RDE (6.20) over $[0,N]$. In order to show that $P_k>0$, note that by Schur complements it follows from (6.20) that $P_{k+1}>0$ if and only if
$$\Xi=\begin{bmatrix}AY_kA^T+\varepsilon_k^{-1}D_1D_1^T+GQG^T&AY_kH^T+\varepsilon_k^{-1}D_1D_2^T\\ *&R_{\varepsilon_k}+HY_kH^T\end{bmatrix}>0,$$
which is equivalent to
$$\Xi=\begin{bmatrix}A^T&H^T\\ D_1^T&D_2^T\\ Q^{\frac12}G^T&0\\ 0&R^{\frac12}\end{bmatrix}^T\begin{bmatrix}Y_k&0&0&0\\ 0&\varepsilon_k^{-1}I&0&0\\ 0&0&I&0\\ 0&0&0&I\end{bmatrix}\begin{bmatrix}A^T&H^T\\ D_1^T&D_2^T\\ Q^{\frac12}G^T&0\\ 0&R^{\frac12}\end{bmatrix}.$$
Under Assumption 6.1.2 and noting that $R>0$, it is obvious that $\Xi>0$. Hence $P_{k+1}>0$ for $k=0,1,\dots,N-1$.
We shall prove that $X_k\ge P_k>0$ over $[0,N]$. The proof is by induction. First, assume that at a given $k$, $X_k\ge P_k>0$. Then it follows from (6.19) and (6.20) that
$$X_{k+1}-P_{k+1}=A\big[(X_k^{-1}-\varepsilon_kF^TF)^{-1}-(P_k^{-1}-\varepsilon_kF^TF)^{-1}\big]A^T+(AY_kH^T+\varepsilon_k^{-1}D_1D_2^T)(HY_kH^T+R_{\varepsilon_k})^{-1}(AY_kH^T+\varepsilon_k^{-1}D_1D_2^T)^T.$$
Obviously, $(X_k^{-1}-\varepsilon_kF^TF)^{-1}-(P_k^{-1}-\varepsilon_kF^TF)^{-1}\ge0$ when $X_k\ge P_k$, which implies $X_{k+1}\ge P_{k+1}>0$. Besides, $X_0=P_0=P_{x_0}>0$; thus by induction $X_k\ge P_k>0$ over $[0,N]$.

4. For a scalar system, minimizing $\mathrm{trace}(P_k)$ degenerates to minimizing $P_k$. By the monotonicity of the RDE (6.21), we propose the following optimization procedure:
$$P_k^*\leftarrow\min_{\varepsilon_{k-1}}\Big\{P_k,\ P_{k-1}^*\leftarrow\min_{\varepsilon_{k-2}}\big\{P_{k-1},\ \cdots,\ P_1^*\leftarrow\min_{\varepsilon_0}\{P_1,\ \text{given }P_0\}\cdots\big\}\Big\}$$
subject to $I-\varepsilon_iFP_iF^T>0$ for all $i\in[0,k-1]$. So at each step we need only design $\varepsilon_{k-1}$ to minimize $P_k$; that is, the optimal estimate can be computed iteratively. What remains is to show the convexity of the optimization over $\varepsilon_{k-1}$. Let us abbreviate $\varepsilon_{k-1}$ by $\varepsilon$ and define
$$Y(\varepsilon)=(P_{k-1}^{-1}-\varepsilon F^TF)^{-1},\qquad D(\varepsilon)=AY(\varepsilon)H^T+\varepsilon^{-1}D_1D_2^T,\qquad S(\varepsilon)=R_\varepsilon+HY(\varepsilon)H^T.$$
Then (6.20) can be rewritten as
$$P_k=AY(\varepsilon)A^T-D(\varepsilon)S^{-1}(\varepsilon)D^T(\varepsilon)+\varepsilon^{-1}D_1D_1^T+GQG^T.$$
Recall that $P_k$ is a continuous and twice differentiable matrix function of $\varepsilon$ for a given $P_{k-1}>0$. Hence,
$$\ddot P_k=A\ddot YA^T-\ddot DS^{-1}D^T-2\dot D\frac{dS^{-1}}{d\varepsilon}D^T-2\dot DS^{-1}\dot D^T-D\frac{d^2S^{-1}}{d\varepsilon^2}D^T-2D\frac{dS^{-1}}{d\varepsilon}\dot D^T-DS^{-1}\ddot D^T+2\varepsilon^{-3}D_1D_1^T.$$
Then some straightforward but tedious algebraic manipulations lead to
$$\frac12\ddot P_k=\begin{bmatrix}W_1\\ W_2\end{bmatrix}^T\begin{bmatrix}\varepsilon I-D_2^TS^{-1}D_2&D_2^TS^{-1}\tilde H^T\\ \tilde HS^{-1}D_2&FYF^T-\tilde HS^{-1}\tilde H^T\end{bmatrix}\begin{bmatrix}W_1\\ W_2\end{bmatrix}$$
where $W_1=\varepsilon^{-2}(D_1^T-D_2^TS^{-1}D^T)$, $W_2=\tilde A-\tilde HS^{-1}D^T$, $\tilde A=FYA^T$, $\tilde H=FYH^T$. Note that $\varepsilon I-D_2^TS^{-1}D_2>0$, and
$$FYF^T-\tilde HS^{-1}\tilde H^T-\tilde HS^{-1}D_2(\varepsilon I-D_2^TS^{-1}D_2)^{-1}D_2^TS^{-1}\tilde H^T=FYF^T-\tilde H(S-\varepsilon^{-1}D_2D_2^T)^{-1}\tilde H^T$$
$$=FYF^T-FYH^T(R+HYH^T)^{-1}HYF^T=F(Y^{-1}+H^TR^{-1}H)^{-1}F^T\ge0.$$
Therefore $\ddot P_k\ge0$, i.e., $P_k$ is a convex function of $\varepsilon$.

5. The robust a priori filter can be obtained as (6.24)-(6.29). The results are compared in Fig. 1.
Fig. 1. Comparison of the true state x with the estimates xr and xk versus k (k = 0, ..., 100).
6. The manipulation in the proof is similar to that in Theorem 6.4, so we only show the equivalence between the first constraint and (6.65). In order to simplify the notation, we use $\Delta$ instead of $\Delta_k$. By applying Lemma 6.2, the inequality is equivalent to
$$\begin{bmatrix}-\Sigma&(\bar A+\bar D_1\Delta\bar F)^T\Sigma&(\bar H+\bar D_2\Delta\bar F)^T\\ *&-\Sigma&0\\ *&*&-I\end{bmatrix}=\begin{bmatrix}-\Sigma&\bar A^T\Sigma&\bar H^T\\ *&-\Sigma&0\\ *&*&-I\end{bmatrix}+\begin{bmatrix}\bar F^T\\ 0\\ 0\end{bmatrix}\Delta^T\begin{bmatrix}0\\ \Sigma\bar D_1\\ \bar D_2\end{bmatrix}^T+\begin{bmatrix}0\\ \Sigma\bar D_1\\ \bar D_2\end{bmatrix}\Delta\begin{bmatrix}\bar F^T\\ 0\\ 0\end{bmatrix}^T\le0.$$
By applying Lemma 6.3, the above holds if
$$\begin{bmatrix}-\Sigma&\bar A^T\Sigma&\bar H^T\\ *&-\Sigma&0\\ *&*&-I\end{bmatrix}+\begin{bmatrix}0\\ \Sigma\bar D_1\\ \bar D_2\end{bmatrix}\Gamma_1\begin{bmatrix}0\\ \Sigma\bar D_1\\ \bar D_2\end{bmatrix}^T+\begin{bmatrix}\bar F^T\\ 0\\ 0\end{bmatrix}\Gamma_2^{-1}\begin{bmatrix}\bar F^T\\ 0\\ 0\end{bmatrix}^T\le0,$$
which can be written as
$$\begin{bmatrix}-\Sigma+\bar F^T\Gamma_2^{-1}\bar F&\bar A^T\Sigma&\bar H^T&0\\ *&-\Sigma&0&\Sigma\bar D_1\\ *&*&-I&\bar D_2\\ *&*&*&-\Gamma_1^{-1}\end{bmatrix}\le0.\tag{11}$$
Decompose $\Sigma$ and $\Sigma^{-1}$ as
$$\Sigma=\begin{bmatrix}X&M\\ M^T&U\end{bmatrix},\qquad \Sigma^{-1}=\begin{bmatrix}Y_1&W\\ W^T&V\end{bmatrix}$$
and denote
$$\Phi=\begin{bmatrix}I&0\\ Y_1&W\end{bmatrix},\qquad T=\mathrm{diag}\{\Phi,\Phi,I,I\}.$$
Then, multiplying (11) by $T$ from the left and by $T^T$ from the right and carrying out the change of filter variables $K$, $L$, $J$ as in Theorem 6.5 yields the LMI of (6.65). Moreover, $Y>0$ and $X-Y>0$, which implies that $I-XY^{-1}$ is invertible.

8. A stationary robust a priori filter with $\varepsilon=0.01$ can be obtained as (6.34) with the following parameters:
$$A_e=\begin{bmatrix}0&0.00353835\\ 0&0.0275018\end{bmatrix},\quad K=\begin{bmatrix}0.0286107\\ 0.218837\end{bmatrix},\quad H_e=\begin{bmatrix}0&0.115262\end{bmatrix},\quad L=I,\quad |\delta|\ge0.3.$$
We characterize the uncertainty by a polytope with two vertices:
$$A_1=\begin{bmatrix}-0.5&0\\ 1&0\end{bmatrix},\qquad A_2=\begin{bmatrix}-0.5&0\\ 1&0.6\end{bmatrix}.$$
The optimal robust a priori filter can be computed by Theorem 6.5 as
$$\hat A=\begin{bmatrix}0.120582&-0.0601375\\ -0.0769389&0.0766934\end{bmatrix},\qquad \hat K=\begin{bmatrix}1.55154\\ -0.29013\end{bmatrix},\qquad \hat L=\begin{bmatrix}0.184592&-0.0301021\\ 1.16502&-0.23048\end{bmatrix}$$
with the worst-case variance 2.50. We design the standard stationary Kalman a priori filter using the Matlab command `kalman` and obtain the filter error variance as shown in the figure below.
Fig. 2. Kalman filter error variance versus the uncertain parameter δ (plotted for δ from -0.4 to 0.3).
Since the standard Kalman filter is less general than the filter form (6.7)-(6.8) or (6.48)-(6.49) with $J=0$, the Kalman filter error variance is larger than that of the optimal robust a priori filter.

9. We assume the state-space form of the filter is (6.89)-(6.90). Then the augmented system, comprising the estimated state and the filter error, is
$$\dot\xi(t)=(\bar A+\bar D\Delta(t)\bar F)\xi(t)+\bar G\eta(t)$$
where
$$\xi(t)=\begin{bmatrix}e(t)\\ \hat x(t)\end{bmatrix},\quad \bar A=\begin{bmatrix}A-KH&A-\hat A-KH\\ KH&\hat A+KH\end{bmatrix},\quad \bar D=\begin{bmatrix}D_1-KD_2\\ KD_2\end{bmatrix},\quad \bar F=\begin{bmatrix}F&F\end{bmatrix},\quad \bar G=\begin{bmatrix}G&-K\\ 0&K\end{bmatrix}.$$
The covariance matrix $\Sigma$ of $\xi$ satisfies
$$(\bar A+\bar D\Delta(t)\bar F)\Sigma+\Sigma(\bar A+\bar D\Delta(t)\bar F)^T+\bar G\bar Q\bar G^T=0$$
where $\bar Q=\begin{bmatrix}Q&0\\ 0&R\end{bmatrix}$. We introduce the following ARE:
$$\bar A\tilde\Sigma+\tilde\Sigma\bar A^T+\varepsilon(t)\tilde\Sigma\bar F^T\bar F\tilde\Sigma+\varepsilon^{-1}(t)\bar D\bar D^T+\bar G\bar Q\bar G^T=0.$$
It is easy to verify that $\tilde\Sigma\ge\Sigma$. Note that for $\hat x(t)$ to be optimal, it should be orthogonal to the estimation error $e(t)$. Thus,
$$\tilde\Sigma=\begin{bmatrix}\tilde\Sigma_{11}&\tilde\Sigma_{12}\\ \tilde\Sigma_{21}&\tilde\Sigma_{22}\end{bmatrix}=\begin{bmatrix}\tilde\Sigma_{11}&0\\ 0&\tilde\Sigma_{22}\end{bmatrix}.$$
Multiplying both sides of the above equation from the left by $[I\ 0]$ and from the right by $[I\ 0]^T$, it follows that $\tilde\Sigma_{11}$ satisfies
$$(A-KH)\tilde\Sigma_{11}+\tilde\Sigma_{11}(A-KH)^T+\varepsilon(t)\tilde\Sigma_{11}F^TF\tilde\Sigma_{11}+\varepsilon^{-1}(t)(D_1-KD_2)(D_1-KD_2)^T+GQG^T+KRK^T=0.$$
By completion of squares, we have
$$A\tilde\Sigma_{11}+\tilde\Sigma_{11}A^T+\varepsilon(t)\tilde\Sigma_{11}F^TF\tilde\Sigma_{11}+\varepsilon^{-1}(t)D_1D_1^T+GQG^T-\big(\varepsilon^{-1}(t)D_1D_2^T+\tilde\Sigma_{11}H^T\big)\tilde R^{-1}\big(\varepsilon^{-1}(t)D_1D_2^T+\tilde\Sigma_{11}H^T\big)^T$$
$$+\big(K\tilde R-\varepsilon^{-1}(t)D_1D_2^T-\tilde\Sigma_{11}H^T\big)\tilde R^{-1}\big(K\tilde R-\varepsilon^{-1}(t)D_1D_2^T-\tilde\Sigma_{11}H^T\big)^T=0,$$
where $\tilde R$ is as defined in (6.94). Because we want to minimize $\tilde\Sigma_{11}$, we choose $K$ to eliminate the last term. Then we have
$$A\tilde\Sigma_{11}+\tilde\Sigma_{11}A^T+\varepsilon(t)\tilde\Sigma_{11}F^TF\tilde\Sigma_{11}+\varepsilon^{-1}(t)D_1D_1^T+GQG^T-\big(\varepsilon^{-1}(t)D_1D_2^T+\tilde\Sigma_{11}H^T\big)\tilde R^{-1}\big(\varepsilon^{-1}(t)D_1D_2^T+\tilde\Sigma_{11}H^T\big)^T=0.$$
By simple manipulations we see that the above equation coincides with the ARE (6.92), which means $\tilde\Sigma_{11}=P>\Sigma_{11}$, where $\Sigma_{11}$ is the (1,1) block of the matrix $\Sigma$. According to the definition of $\Sigma$, $\Sigma_{11}$ is the covariance of the filter error. So we have proved that $P$ is an upper bound of it.
10. The optimal stationary robust Kalman filter with $\gamma=1.21$ has the following parameters:
$$A_e=-0.351,\qquad H_e=0.351,\qquad K=-0.973.$$
The actual filter error variance is shown below.

Fig. 3. The optimal stationary filter error variance versus the uncertain parameter δ (plotted for δ from 0.05 to 0.5).
For the standard Kalman filter, we have $\hat x(t+1)=-\hat x(t)$.

ANSWERS TO EXERCISES OF CHAPTER 7

1. First, we have
$$\dot P(t)=(\gamma^{-2}-1)P(t)^2,\qquad P(0)=P_{x_0},$$
thus
$$\int_0^t\frac{1}{P(t)^2}\,dP=\int_0^t(\gamma^{-2}-1)\,dt,$$
which implies
$$P(t)=\frac{P_{x_0}}{1+(1-\gamma^{-2})P_{x_0}t},\qquad K(t)=P(t).$$
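The closed-form solution above can be checked against a direct numerical integration of the DRE. A minimal sketch; the values of $\gamma$ and $P_{x_0}$ are sample choices (assumptions for illustration):

```python
# Check P(t) = Px0 / (1 + (1 - gamma**-2) * Px0 * t) against forward-Euler
# integration of the DRE  dP/dt = (gamma**-2 - 1) * P**2.
gamma, Px0 = 2.0, 1.0
a = gamma ** -2 - 1.0            # = -0.75 for gamma = 2

P, dt, T = Px0, 1e-4, 5.0
for _ in range(int(T / dt)):
    P += dt * a * P * P          # Euler step of the DRE

closed = Px0 / (1.0 + (1.0 - gamma ** -2) * Px0 * T)
assert abs(P - closed) < 1e-3
```

For $\gamma>1$ the same iteration also shows $P(t)\to0$, consistent with the gain $K(t)=P(t)$ vanishing.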
A finite horizon $H_\infty$ filter is given by
$$\dot{\hat x}(t)=K(t)(z(t)-\hat x(t)),\quad \hat x(0)=0,\qquad \hat s(t)=\hat x(t).$$
To ensure $P(t)\ge0$ for any $t\in[0,T]$, we need $\gamma\ge1$. If $\gamma=1$, then $P(t)=P_{x_0}$ and $K(t)=P_{x_0}$; if $\gamma>1$, then $K(t)=P(t)\to0$ as $t\to\infty$ for any $P_{x_0}$.

2. Consider
$$\dot x(t)=Ax(t)+B[w\ v]^T,\qquad z(t)=Hx(t)+D[w\ v]^T,\qquad s(t)=Lx(t)$$
with $A=-1$, $B=[1\ 0]$, $H=1$, $D=[0\ 1]$, $L=1$.
(a) The DRE is
$$\dot P(t)=-2P(t)+P(t)^2+1=(P(t)-1)^2,$$
which gives
$$P(t)=\frac{P_{x_0}-t(P_{x_0}-1)}{1-t(P_{x_0}-1)}.$$
For $0<P_{x_0}<1$, $P(t)$ is bounded and positive. An $H_\infty$ filter is
$$\dot{\hat x}(t)=-\hat x(t)+K(t)(z(t)-\hat x(t)),\quad \hat x(0)=0,\qquad \hat s(t)=\hat x(t),$$
where $K(t)=P(t)$. For $P_{x_0}>1$, there exists a finite escape time $t=\frac{1}{P_{x_0}-1}$.
(b) The ARE is $-2P+(\gamma^{-2}-1)P^2+1=0$, which gives
$$P=\frac{1}{\gamma^{-2}-1}\pm\frac{\sqrt{2-\gamma^{-2}}}{|\gamma^{-2}-1|}.$$
To ensure a nonnegative stabilizing solution, we need $2-\gamma^{-2}>0$, i.e., $\gamma>\frac{1}{\sqrt2}$.
For $\frac{1}{\sqrt2}<\gamma<1$, $P_s=\frac{1-\sqrt{2-\gamma^{-2}}}{\gamma^{-2}-1}$, and $A_s=-1+P_s(\gamma^{-2}-1)=-\sqrt{2-\gamma^{-2}}$, which is stable. For $\gamma=1$, $P_s=\frac12$, which obviously gives a stable solution. For $\gamma>1$, $P_s=\frac{1}{\gamma^{-2}-1}+\frac{\sqrt{2-\gamma^{-2}}}{1-\gamma^{-2}}$, and $A_s=-1+P_s(\gamma^{-2}-1)=-1+1-\sqrt{2-\gamma^{-2}}=-\sqrt{2-\gamma^{-2}}$, which is stable as well. Thus, we obtain $\gamma_{\inf}=\frac{1}{\sqrt2}$.
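The stabilizing root and the closed-loop pole formula above can be verified numerically. A minimal sketch; $\gamma=0.9$ is a sample value in the range $\frac{1}{\sqrt2}<\gamma<1$ (an assumption for illustration):

```python
import math
# Verify that Ps = (1 - sqrt(2 - g2)) / (g2 - 1), with g2 = gamma**-2,
# solves the ARE  -2P + (g2 - 1)P**2 + 1 = 0, and that the closed-loop
# pole is A_s = -1 + Ps*(g2 - 1) = -sqrt(2 - g2).
gamma = 0.9
g2 = gamma ** -2
Ps = (1.0 - math.sqrt(2.0 - g2)) / (g2 - 1.0)

residual = -2.0 * Ps + (g2 - 1.0) * Ps ** 2 + 1.0
As = -1.0 + Ps * (g2 - 1.0)

assert abs(residual) < 1e-12
assert abs(As + math.sqrt(2.0 - g2)) < 1e-12
assert Ps >= 0.0 and As < 0.0     # nonnegative solution, stable pole
```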
3. (a) The optimal stationary $H_\infty$ filter is
$$K=\begin{bmatrix}5.4325\\ -0.5677\end{bmatrix}$$
with the optimal $\gamma=0.9826$.
(b) The stationary Kalman filter is given as follows:
$$\dot{\hat x}=A\hat x+K(z-H\hat x),\qquad K=\begin{bmatrix}6.9691\\ -3.9640\end{bmatrix}.$$
Hence the transfer function $G_e(s)$ from $v$ to $e$ satisfies $\|G_e(s)\|_\infty\le2$. Here we assume that $E[w(t)w^T(\tau)]=\delta(t-\tau)$ and $E[v(t)v^T(\tau)]=\delta(t-\tau)$.

4. A finite horizon $H_\infty$ filter: choose $P=\begin{bmatrix}p_1&p_2\\ p_2&p_3\end{bmatrix}$. According to (7.9), we have
$$\dot p_1=2p_2+\gamma^{-2}p_2^2,\qquad \dot p_2=p_3+\gamma^{-2}p_2p_3,\qquad \dot p_3=1+\gamma^{-2}p_3^2$$
with the initial condition $p_1=p_2=p_3=0$. Solving the differential equations above, together with (7.16), we obtain the finite horizon $H_\infty$ filter.
A stationary $H_\infty$ filter: since $R=DD^T=\begin{bmatrix}0&0\\ 0&1\end{bmatrix}$ is singular, we choose the LMI approach stated in Section 7.8 and obtain the following parameters:
$$X=\begin{bmatrix}0.5396&-0.3323\\ -0.3323&0.3598\end{bmatrix},\qquad Y=\begin{bmatrix}0.3598\\ 0.2698\end{bmatrix},\qquad K=\begin{bmatrix}2.6174\\ 3.1678\end{bmatrix}.$$

5. The DRE is
$$\dot P(t)=-2P(t)+1,$$
and the solution is given by
$$P(t)=\frac{1-(1-2P_{x_0})e^{-2t}}{2},$$
which is bounded and positive for any $P_{x_0}$. Thus, $P_{x_0}$ can be any nonnegative value. A stationary $H_\infty$ filter is
$$\dot{\hat x}(t)=-\hat x(t)+K(z(t)-\hat x(t)),\quad \hat x(0)=0,\qquad \hat s(t)=\hat x(t),$$
where $K=P=\frac12$.
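The solution of the linear DRE in Exercise 5 can likewise be checked against direct integration, and it converges to the stationary value $K=P=\frac12$. A minimal sketch; $P_{x_0}$ is a sample initial value (an assumption for illustration):

```python
import math
# Check P(t) = (1 - (1 - 2*Px0) * exp(-2t)) / 2 against Euler integration
# of dP/dt = -2P + 1, and that P(t) approaches the stationary value 1/2.
Px0 = 3.0
P, dt, T = Px0, 1e-4, 6.0
for _ in range(int(T / dt)):
    P += dt * (-2.0 * P + 1.0)

closed = (1.0 - (1.0 - 2.0 * Px0) * math.exp(-2.0 * T)) / 2.0
assert abs(P - closed) < 1e-3
assert abs(closed - 0.5) < 1e-4   # near the stationary gain K = P = 1/2
```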
6. Set $Z(t)=P_2(t)-P_1(t)$; then
$$\dot Z(t)=A[P_2(t)-P_1(t)]+[P_2(t)-P_1(t)]A^T+P_2(t)(\gamma^{-2}L^TL-H^TH)P_2(t)-P_1(t)(\gamma^{-2}L^TL-H^TH)P_1(t)$$
$$=\Big[A+\tfrac12(P_2(t)+P_1(t))(\gamma^{-2}L^TL-H^TH)\Big][P_2(t)-P_1(t)]+[P_2(t)-P_1(t)]\Big[A+\tfrac12(P_2(t)+P_1(t))(\gamma^{-2}L^TL-H^TH)\Big]^T$$
$$=X(t)Z(t)+Z(t)X(t)^T,\tag{13}$$
where $X(t)=A+\tfrac12(P_2(t)+P_1(t))(\gamma^{-2}L^TL-H^TH)$, and $Z(0)=P_2(0)-P_1(0)=M_2-M_1\ge0$. Denote by $\Phi(t,\tau)$ the transition matrix associated with $X(t)$; then the solution of (13) is given by $Z(t)=\Phi(t,0)Z(0)\Phi^T(t,0)$, which is positive semidefinite since $Z(0)\ge0$, and this implies $P_2(t)\ge P_1(t)$.

7. We have
$$\mathrm{Rank}\begin{bmatrix}A-j\omega I&B\\ H&I\end{bmatrix}=n+p\ \text{for all }\omega\in(-\infty,\infty)$$
$$\Leftrightarrow\ \mathrm{Rank}\left(\begin{bmatrix}I&-B\\ 0&I\end{bmatrix}\begin{bmatrix}A-j\omega I&B\\ H&I\end{bmatrix}\right)=\mathrm{Rank}\begin{bmatrix}A-BH-j\omega I&0\\ H&I\end{bmatrix}=n+p$$
$$\Leftrightarrow\ \mathrm{Rank}(A-BH-j\omega I)=n$$
$$\Leftrightarrow\ \text{the matrix }A-BH\text{ has no purely imaginary eigenvalues.}$$
8. Suppose for a given $\gamma$ there exists a linear causal filter such that the filtering error dynamics are asymptotically stable and $\|G_{\tilde sw}(s)\|_\infty<\gamma$. It follows from Theorem 7.7 that there exists a stabilizing solution $P=P^T\ge0$ to the ARE
$$(A-BH)P+P(A-BH)^T-PH^THP+\gamma^{-2}PL^TLP=0.\tag{14}$$
By applying a state-space transformation we may assume
$$A=\begin{bmatrix}A_1&0\\ A_{21}&A_2\end{bmatrix},\qquad P=\begin{bmatrix}0&0\\ 0&P_2\end{bmatrix},\qquad H=[H_1\ H_2],\qquad L=[L_1\ L_2]$$
with $P_2>0$; then the ARE (14) gives
$$A_2P_2+P_2A_2^T-P_2H_2^TH_2P_2+\gamma^{-2}P_2L_2^TL_2P_2=0.$$
Let $X=P_2^{-1}>0$; hence
$$XA_2+A_2^TX-H_2^TH_2+\gamma^{-2}L_2^TL_2=0,$$
which implies $X=S-\gamma^{-2}T>0$ based on (7.76) and (7.77).

9. If $H=[1\ 0]$, the LMI problem has no solution. If we change $H$ to $[1\ 1]$, the optimal $\gamma=3.3507\times10^{-5}$. If we further change $L$ to $[1\ 1]$, the optimal $\gamma=1$.

10. For any given $\gamma>0$ there exists an $H_\infty$ stationary filter that achieves the $H_\infty$ performance $\gamma$ if and only if there exist $P=P^T>0$ and $K$ such that
$$(A-KH)P+P(A-KH)^T+\gamma^{-2}PL^TLP+BB^T<0,$$
i.e.,
$$AP+PA^T+BB^T-KHP-PH^TK^T+\gamma^{-2}PL^TLP<0.\tag{15}$$
Choose $Q=Q^T\ge BB^T$, $Q>0$; then, since $A$ is stable, there always exists $P=P^T>0$ such that
$$AP+PA^T+Q<0.$$
Further set $K=\tfrac12\gamma^{-2}PH^T$; the left-hand side of inequality (15) then becomes
$$AP+PA^T+BB^T-KHP-PH^TK^T+\gamma^{-2}PL^TLP=AP+PA^T+BB^T+\gamma^{-2}P(L^TL-H^TH)P$$
$$\le AP+PA^T+Q+\gamma^{-2}P(L^TL-H^TH)P<0,$$
where the last inequality is due to $L^TL\le H^TH$ and $AP+PA^T+Q<0$. This completes the proof.

ANSWERS TO EXERCISES OF CHAPTER 8

1. (a) By Theorem 8.3, the existence of an $H_\infty$ a priori filter is equivalent to the existence of a solution $P_k=P_k^T>0$, $k=0,1,\cdots,N$, to the following DRE:
$$P_{k+1}=\Big(\frac{1+P_k}{P_k}-\gamma^{-2}\Big)^{-1}=\frac{P_k}{1+(1-\gamma^{-2})P_k},\tag{16}$$
where $P_0=P_{x_0}$, which holds if and only if $\gamma>1$.
(b) For a given $\gamma>1$, an $H_\infty$ a priori filter is
$$\hat x_{k+1}=\hat x_k+\frac{P_k}{1+P_k}(z_k-\hat x_k),$$
with $P_k$ given by Eq. (16). The steady-state $H_\infty$ a priori filter is $\hat x_{k+1}=\hat x_k$, since $\lim_{k\to\infty}P_k=0$.
Modeling $v_k$ as white Gaussian noise with covariance $r$, the associated Kalman a priori filter is
$$\hat x_{k+1}^-=\hat x_k^-+\frac{P_k^-}{r+P_k^-}(z_k-\hat x_k^-),$$
and the prediction error variance is computed by the DRE
$$P_{k+1}^-=\frac{P_k^-}{1+r^{-1}P_k^-},\qquad P_0^-=P_{x_0}.$$
The steady-state Kalman filter is $\hat x_{k+1}^-=\hat x_k^-$, since $\lim_{k\to\infty}P_k^-=0$.
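The two scalar recursions above are easy to compare numerically; both variances decay to zero, so both steady-state filters reduce to $\hat x_{k+1}=\hat x_k$, while for $\gamma<1$ the $H_\infty$ DRE loses positivity, consistent with part (a). The values of $\gamma$, $r$ and $P_{x_0}$ below are sample choices (assumptions for illustration):

```python
# Iterate the H-infinity DRE (16): P_{k+1} = P_k / (1 + (1 - gamma**-2) P_k),
# and the Kalman prediction DRE:   P_{k+1} = P_k / (1 + P_k / r).
gamma, r, Px0, N = 2.0, 1.0, 1.0, 500

P_hinf, P_kal = Px0, Px0
for _ in range(N):
    P_hinf = P_hinf / (1.0 + (1.0 - gamma ** -2) * P_hinf)
    P_kal = P_kal / (1.0 + P_kal / r)

assert 0.0 < P_hinf < 1e-2      # stays positive and tends to 0
assert 0.0 < P_kal < 1e-2

P_bad = Px0 / (1.0 + (1.0 - 0.5 ** -2) * Px0)   # one step with gamma = 0.5 < 1
assert P_bad < 0.0              # positivity is lost, consistent with part (a)
```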
(c) For $\gamma=\sqrt2$, the observer-based $H_\infty$ a priori filter takes the form
$$\hat x_{k+1}=\hat x_k+\frac{P_k}{1+P_k}(z_k-\hat x_k),$$
where $P_k$ is computed by
$$P_{k+1}=\frac{P_k}{1+0.5P_k},\qquad P_0=P_{x_0}.$$

2. Proof:
Denote $w_k=w$; then the problem can be formulated as an estimation problem for the system
$$w_{k+1}=w_k\tag{17}$$
$$z_k=\phi_k^Tw_k+v_k\tag{18}$$
$$s_k=w_k\tag{19}$$
The DRE in Eq. (8.18) in Theorem 8.3 is given by
$$P_{k+1}+\gamma^{-2}I=P_k-P_k\phi_k(I+\phi_k^TP_k\phi_k)^{-1}\phi_k^TP_k\tag{20}$$
$$=(P_k^{-1}+\phi_k\phi_k^T)^{-1}.\tag{21}$$
For $\gamma=1$, we have
$$(P_{k+1}+I)^{-1}=P_k^{-1}+\phi_k\phi_k^T\ge P_k^{-1},$$
and hence $P_{k+1}\le P_k-I$. Now, we can choose $P_0=0.5I$; in this case $P_1\le-0.5I<0$. By Theorem 8.3, the $H_\infty$ filter is not solvable.
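The failure for $\gamma=1$ can be illustrated numerically using the relation $(P_1+I)^{-1}=P_0^{-1}+\phi_0\phi_0^T$ derived above; since $(P_0^{-1}+\phi_0\phi_0^T)^{-1}\le P_0$, every eigenvalue of $P_1$ is at most $-0.5$. The regressor $\phi_0$ below is an arbitrary sample value (an assumption for illustration):

```python
import numpy as np
# For gamma = 1 and P0 = 0.5 I, compute P1 = (P0^{-1} + phi0 phi0^T)^{-1} - I
# and confirm that P1 <= -0.5 I, so no positive definite solution exists.
n = 2
P0 = 0.5 * np.eye(n)
phi0 = np.array([[1.0], [2.0]])          # sample regressor

P1 = np.linalg.inv(np.linalg.inv(P0) + phi0 @ phi0.T) - np.eye(n)

eigs = np.linalg.eigvalsh(P1)
assert float(np.max(eigs)) <= -0.5 + 1e-9   # P1 <= -0.5 I < 0
```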
3.

Fig. 4. The H∞ a priori filter gains K_{1,k} and K_{2,k} versus k (k = 0, ..., 20).
7.6856e − 002 . The filter gain K k The gain of the H ∞ a priori filter with form (8.16-8.17) at N = 20 is gain K = −7.7049e − 001 versus k is shown in Figure 4. 0.0708372 . The optimal stationary H ∞ a priori filtering performance is γ = 1.53 with the gain K = −1.01048 The singular value of the optimal H ∞ filter error transfer function is shown in Figure 5.
•
• •
Fig. 5. The singular value plot of the optimal H∞ filter error transfer function (frequency 0-10 rad/sec).
4. Proof: By Theorem 8.6, the existence of an $H_\infty$ a posteriori filter is equivalent to the existence of a solution $P_k=P_k^T>0$ to the DRE
$$M_{k+1}=P_k,\qquad P_0=P_{x_0},\tag{22}$$
$$P_{k+1}^{-1}=M_{k+1}^{-1}+1-\gamma^{-2}.\tag{23}$$
Hence, we have
$$P_{k+1}=\frac{P_k}{1+(1-\gamma^{-2})P_k}.$$
Now, it is clear that there exists an $H_\infty$ a posteriori filter if and only if $\gamma>1$, and the filtering performance is the same as that of the $H_\infty$ a priori filter.
5.
• The gain of the $H_\infty$ a priori filter of the form (8.16)-(8.17) at $N=20$ is $K=\begin{bmatrix}0.78\\ -0.0012\end{bmatrix}$. The filter gain $K_k$ versus $k$ is shown in Figure 6.
Fig. 6. The H∞ a priori filter gains K_{1,k} and K_{2,k} versus k (k = 0, ..., 20).
• The optimal stationary $H_\infty$ a priori filtering performance is $\gamma=1.00$ with the gain $K=\begin{bmatrix}0.96406\\ -0.086011\end{bmatrix}$.
• The singular value plot of the optimal $H_\infty$ filter error transfer function is shown in Figure 7.
Fig. 7. The singular value plot of the optimal H∞ filter error transfer function.
• Since $A$ is a (not strictly) stable matrix, $\gamma_{\inf}\to1$.
6. Proof: By Theorem 8.8, the stationary $H_\infty$ a posteriori filtering problem is solvable if and only if there exists a stabilizing solution $M=M^T>0$ to the ARE
$$AMA^T-M-AM\bar H^T(\bar R+\bar HM\bar H^T)^{-1}\bar HMA^T+GG^T=0.$$
Since $H=L$, the ARE is rewritten as
$$M=AMA^T-AMH^T\big[(1-\gamma^{-2})^{-1}I+HMH^T\big]^{-1}HMA^T+GG^T.\tag{24}$$
For $\gamma>1$, Eq. (24) always admits a positive-definite stabilizing solution, because we may rewrite Eq. (24) as
$$M=AMA^T-AM\tilde H^T\big[I+\tilde HM\tilde H^T\big]^{-1}\tilde HMA^T+GG^T,\tag{25}$$
where $\tilde H=\sqrt{1-\gamma^{-2}}\,H$. As such, the minimum achievable attenuation level is $\gamma_{\inf}\le1$.
(1) Unstable $A$: If $\gamma=1$, Eq. (24) reduces to $M=AMA^T+GG^T$, which does not admit a stabilizing solution for unstable $A$, suggesting that $\gamma_{\inf}=1$.
(2) Stable $A$: Define $\eta^{-2}=\gamma^{-2}-1$; then the ARE (24) can be rewritten as
$$M=AMA^T+\eta^{-2}AMH^T\big[I-\eta^{-2}HMH^T\big]^{-1}HMA^T+GG^T.\tag{26}$$
By the equivalence of (a) and (c) in Theorem 8.2, we have $\eta>\mu$. That is,
$$\gamma_{\inf}=\frac{\mu}{\sqrt{1+\mu^2}}.$$
7. The optimal $H_\infty$ a priori and a posteriori filtering performances are 1.97 and 1.00, respectively. The Kalman filter can be obtained by setting $\gamma\to\infty$; the resulting Kalman a posteriori and a priori filters provide $H_\infty$ filtering performances of 1.42 and 3.44, respectively.

8. Proof: By the initial condition $M_0'\ge M_0$, the result holds for $k=0$. Assume that it holds up to a given $k$, $k\ge0$, i.e., $M_k'\ge M_k$, and consider $k+1$. Using the matrix inversion lemma, Eq. (8.53) is rewritten as
$$M_{k+1}=A\big[M_k^{-1}+\bar H^T\bar R^{-1}\bar H\big]^{-1}A^T+GG^T.$$
By the induction hypothesis, $(M_k')^{-1}\le M_k^{-1}$, and it can be easily verified that
$$\big[(M_k')^{-1}+\bar H^T\bar R^{-1}\bar H\big]^{-1}\ge\big[M_k^{-1}+\bar H^T\bar R^{-1}\bar H\big]^{-1},$$
from which it follows that $M_{k+1}'\ge M_{k+1}$.
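The induction step in Exercise 8 can be spot-checked numerically: the map $X\mapsto(X^{-1}+C)^{-1}$ with $C\ge0$ preserves the positive semidefinite ordering. The matrices below are arbitrary sample choices (assumptions for illustration) with $M'-M$ positive semidefinite:

```python
import numpy as np
# If M' >= M > 0, then [M'^{-1} + H^T R^{-1} H]^{-1} >= [M^{-1} + H^T R^{-1} H]^{-1},
# so the Riccati iterate preserves the ordering.
M = np.array([[2.0, 0.3], [0.3, 1.0]])
Mp = M + np.array([[0.5, 0.1], [0.1, 0.4]])     # M' >= M (difference is PSD)
H = np.array([[1.0, -1.0]])
R = np.array([[0.5]])

def update(X):
    # one measurement-update step of the ordering argument
    return np.linalg.inv(np.linalg.inv(X) + H.T @ np.linalg.inv(R) @ H)

diff = update(Mp) - update(M)
assert float(np.min(np.linalg.eigvalsh(diff))) >= -1e-10   # ordering preserved
```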
9. Follows a similar line of computation as Example 8.6.

10. Define $x_k=[s_{k-3}\ s_{k-2}\ s_{k-1}]^T$ and $w_k=[s_k\ v_k]^T$. A state-space representation of the model and the measurement is given by
$$x_{k+1}=\begin{bmatrix}0&1&0\\ 0&0&1\\ 0&0&0\end{bmatrix}x_k+\begin{bmatrix}0&0\\ 0&0\\ 1&0\end{bmatrix}w_k\tag{27}$$
$$z_k=[0.5\ \ {-0.5}\ \ 0.5]\,x_k+[1\ 1]\,w_k\tag{28}$$
$$s_{k-2}=[0\ 1\ 0]\,x_k\tag{29}$$
The rest follows Example 8.8.
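The shift-register structure of (27)-(28) can be verified by simulation: the state stores the three most recent past samples of $s$, and the output reproduces $0.5s_{k-3}-0.5s_{k-2}+0.5s_{k-1}+s_k+v_k$. The sequences $s_k$, $v_k$ below are random sample inputs (assumptions for illustration):

```python
import numpy as np
# Simulate (27)-(28) with x_k = [s_{k-3}, s_{k-2}, s_{k-1}]^T, w_k = [s_k, v_k]^T.
A = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
G = np.array([[0, 0], [0, 0], [1, 0]], dtype=float)
Hm = np.array([0.5, -0.5, 0.5])
Dm = np.array([1.0, 1.0])

rng = np.random.default_rng(0)
s = rng.standard_normal(20)
v = rng.standard_normal(20)

x = np.zeros(3)                    # corresponds to s_{-3} = s_{-2} = s_{-1} = 0
for k in range(20):
    w = np.array([s[k], v[k]])
    z = Hm @ x + Dm @ w
    s3 = s[k - 3] if k >= 3 else 0.0
    s2 = s[k - 2] if k >= 2 else 0.0
    s1 = s[k - 1] if k >= 1 else 0.0
    assert abs(z - (0.5 * s3 - 0.5 * s2 + 0.5 * s1 + s[k] + v[k])) < 1e-12
    x = A @ x + G @ w              # shifts the register forward one step
```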