A MIXED μ LOWER BOUND
The exact value of the mixed structured singular value (s.s.v.) can be obtained as the solution of a non-convex optimization problem, whose global maximum coincides with the exact value of μ. The idea of the power algorithms of (Packard et al., 1988; Young and Doyle, 1990) is to write the necessary conditions of optimality as f(x) = x. Power iterations x(k+1) = f(x(k)) are then used to asymptotically solve f(x) = x. Note that such an algorithm only provides (at least a priori) a local maximum of the non-convex optimization problem, i.e. a μ lower bound.
The optimization problem is first detailed in the following two propositions. As a preliminary, the real spectral radius of a complex matrix A is defined as the magnitude of the largest real eigenvalue of A:
p_R(A) = sup { |λ| / det(A − λI) = 0 and λ ∈ R }   (5.18)
p_R(A) is zero if matrix A has no real eigenvalue.
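As an illustrative sketch (not part of the original text), the real spectral radius of equation (5.18) can be computed numerically by filtering the eigenvalues of A; the function name and the tolerance `tol` used to decide whether an eigenvalue is real are assumptions of this sketch:

```python
import numpy as np

def real_spectral_radius(A, tol=1e-9):
    """p_R(A): magnitude of the largest real eigenvalue of A,
    or zero if A has no real eigenvalue (equation (5.18))."""
    eigs = np.linalg.eigvals(A)
    real_eigs = eigs[np.abs(eigs.imag) < tol].real
    return float(np.max(np.abs(real_eigs))) if real_eigs.size else 0.0

# a plane rotation has eigenvalues cos(t) +/- i sin(t):
# no real eigenvalue when t is not a multiple of pi
t = 1.0
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
print(real_spectral_radius(R))                      # 0.0
print(real_spectral_radius(np.diag([2.0, -3.0])))   # 3.0
```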
PROPOSITION 2.2
μ(M) = max_{Δ ∈ BΔ} p_R(ΔM)
Proof: a structured model perturbation Δ is searched, for which there exists a non-zero vector x satisfying:
(I − ΔM)x = 0
Δ can be rewritten as:

Δ = Δ⁰/β with Δ⁰ ∈ BΔ

so that (I − (1/β)Δ⁰M)x = 0, i.e. β is a real eigenvalue of matrix Δ⁰M. Since the smallest size destabilizing model perturbation is searched, the maximal value of β is thus to be obtained, i.e. the maximal value of the real spectral radius of matrix Δ⁰M over Δ⁰ ∈ BΔ.
PROPOSITION 2.3 Let (see equation (5.10)):
Q = { Δ / δ_i ∈ [−1, 1], (δ_j^c)* δ_j^c = 1 and (Δ_q^C)^H Δ_q^C = I }   (5.23)
Then:
μ(M) = max_{Q ∈ Q} p_R(QM)
It is worth emphasizing the coherence between the definition of Q and the counterexamples of (Ackermann, 1992; Holohan and Safonov, 1993). When considering complex uncertainties, it suffices to search the destabilizing model perturbation over the unit sphere. However, as proved in (Ackermann, 1992; Holohan and Safonov, 1993), no such assumption can be made about the real perturbations δ_i, which are thus simply assumed to lie inside their unit ball (i.e. δ_i ∈ [−1, 1]).
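To make Proposition 2.3 concrete, the following sketch (a naive Monte Carlo search, not the power algorithm, and restricted to a purely scalar uncertainty structure) samples perturbations from Q — real scalars δ_i in [−1,1], complex scalars δ_j^c on the unit circle — and keeps the best real spectral radius. Every sample lies in the unit ball, so each evaluation yields a valid μ lower bound; the function names and sample count are assumptions of this sketch.

```python
import numpy as np

def real_spectral_radius(A, tol=1e-9):
    """p_R(A): magnitude of the largest real eigenvalue of A, else 0."""
    eigs = np.linalg.eigvals(A)
    real_eigs = eigs[np.abs(eigs.imag) < tol].real
    return float(np.max(np.abs(real_eigs))) if real_eigs.size else 0.0

def mu_lower_bound_random(M, n_real, n_complex, n_samples=2000, seed=0):
    """Monte Carlo mu lower bound for Delta = diag(d_1..d_nr, c_1..c_nc):
    real scalars d_i sampled in [-1,1], complex scalars c_j on the unit
    circle, as in the set Q of Proposition 2.3 (scalar blocks only)."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_samples):
        d = rng.uniform(-1.0, 1.0, n_real).astype(complex)
        c = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_complex))
        Q = np.diag(np.concatenate([d, c]))
        best = max(best, real_spectral_radius(Q @ M))
    return best

# one real scalar uncertainty on M = [[2]]: the exact mu is 2,
# reached for delta on the boundary of [-1,1]
M = np.array([[2.0]], dtype=complex)
print(mu_lower_bound_random(M, n_real=1, n_complex=0))  # close to 2.0
```

As expected from the discussion above, the random search approaches the exact value from below, illustrating that any perturbation in the unit ball only provides a lower bound.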
The issue is now to solve the optimization problem, noting as a preliminary that only a local maximum will necessarily be obtained in the general case (since the problem is non-convex) and that the computational burden should remain minimal. As said above, a classical solution (Packard et al., 1988; Young and Doyle, 1990) is to rewrite the necessary conditions of optimality under the form f(x) = x, where f is a vectorial function of the vector x. A fixed point method finds a limit x* of the series x_{k+1} = f(x_k).

When the series converges, x* indeed satisfies the necessary conditions of optimality and a local maximum has been obtained, i.e. a μ lower bound. Nevertheless, the value of x* depends on the initial condition x_0 of the series, because of the non-convexity of the problem. On the other hand, the series does not necessarily converge: a limit-cycle may especially appear inside the power algorithm.
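The fixed-point scheme described above can be sketched generically as follows (an illustrative toy, not the actual algorithm of (Young and Doyle, 1990); the names and the stopping test are assumptions). The second call shows the limit-cycle failure mode, where the iteration oscillates instead of settling:

```python
import numpy as np

def fixed_point(f, x0, max_iter=500, tol=1e-10):
    """Power-type iteration x_{k+1} = f(x_k).
    Returns (x, converged); converged is False when the sequence does
    not settle within max_iter steps (e.g. when a limit-cycle appears)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = f(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, True
        x = x_new
    return x, False

# converges: f(x) = cos(x) is a contraction near its fixed point ~0.739
x_star, ok = fixed_point(np.cos, 0.0)

# limit-cycle: f(x) = -x oscillates between x0 and -x0 forever
x_cyc, ok_cyc = fixed_point(lambda x: -x, 1.0)
```

The dependence on the initial condition x0 and the possible non-convergence mirror the two caveats stated in the text.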
In practice, the algorithm of (Young and Doyle, 1990) generally presents good convergence properties. The computational burden is low and the quality of the μ lower bound is usually good. However, it is worth emphasizing that this power algorithm exhibits poor convergence properties in the case of a purely real model perturbation (see below). Note finally that the power algorithm of (Young and Doyle, 1990) is not detailed here, since a generalization of this power algorithm to the skewed μ problem will be presented in section 3 of chapter 8.