Chance Constraint Formulations

Chance constraints leave some flexibility with respect to the inequality restrictions (cf. Ref. [37]). The inequality restrictions are only required to hold with a certain probability $P_0$:

\[
\min_{y,\,p} \int_{\Omega} f(y, p, s(Z)) \, dP(Z) \tag{29}
\]
\[
\text{s.t.} \quad c(y, p, s(Z)) = 0 \quad \forall\, Z \in \Omega \tag{30}
\]
\[
P(\{ Z : h(y, p, s(Z)) \geq 0 \}) \geq P_0 \tag{31}
\]
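Before introducing approximations, the chance constraint (31) can be made concrete with a small numerical experiment. The following Python sketch estimates $P(\{h \geq 0\})$ by Monte Carlo sampling for a hypothetical scalar constraint function and illustrative parameter values; in the actual problem, $h$ depends on the flow solution $y(p, s(Z))$ via (30), so every sample would require a flow solve.

```python
import numpy as np

rng = np.random.default_rng(0)

def h(p, s):
    # Hypothetical stand-in for the flow-dependent constraint h(y(p, s), s)
    return 1.0 - (p - s) ** 2

p = 0.3                  # design variable (illustrative value)
P0 = 0.95                # required probability level
s_samples = rng.normal(loc=0.0, scale=0.5, size=100_000)  # samples of s(Z)

# Empirical estimate of P({Z : h >= 0})
prob = np.mean(h(p, s_samples) >= 0.0)
print(f"P(h >= 0) ~ {prob:.4f}, constraint (31) satisfied: {prob >= P0}")
```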


So far, chance constraints have been used mainly for weakly nonlinear optimization problems, cf. Refs. [22, 20]. In the context of structural optimization (which is typically a bilinear problem), this formulation is also called reliability-based design optimization. For more complex problems, we again need some simplification. In Ref. [38] this is performed by applying a Taylor series expansion about a nominal set-point $s^0 := E(s)$, i.e., the expected value of the random vector $s$. Suppressing the further arguments $(y, p)$ for the moment, the second-order Taylor approximation of $f$ in (29) reads

\[
f(s) \approx f(s^0) + \nabla_s f(s^0)^\top (s - s^0) + \tfrac{1}{2}\, (s - s^0)^\top \nabla_s^2 f(s^0)\, (s - s^0).
\]

Integrating this expansion over $\Omega$, we obtain

\[
E(f) = \int_{\Omega} f(s(Z)) \, dP(Z) \approx f(s^0) + \frac{1}{2} \sum_{i=1}^{n} \frac{\partial^2 f}{\partial s_i^2}(s^0) \, \mathrm{Var}(s_i),
\]

where $\mathrm{Var}(s_i)$ is the variance of the $i$-th component of $s$ (the mixed terms vanish for uncorrelated components).
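As a numerical check of this mean estimate, the following sketch implements the formula with the pure second derivatives approximated by central finite differences; the test function, set-point, and variances are hypothetical and chosen such that the exact expectation is known.

```python
import numpy as np

def sosm_mean(f, s0, var_s, eps=1e-4):
    """E[f(s)] ~ f(s0) + 1/2 * sum_i d^2 f / d s_i^2 (s0) * Var(s_i),
    with the second derivatives from central finite differences.
    Assumes uncorrelated components of s (diagonal covariance)."""
    s0 = np.asarray(s0, dtype=float)
    mean = f(s0)
    for i, v in enumerate(var_s):
        e = np.zeros_like(s0)
        e[i] = eps
        d2 = (f(s0 + e) - 2.0 * f(s0) + f(s0 - e)) / eps**2
        mean += 0.5 * d2 * v
    return mean

# Example: f(s) = s_1^2 + s_2, where E[f] = (s_1^0)^2 + Var(s_1) + s_2^0 exactly
f = lambda s: s[0] ** 2 + s[1]
print(sosm_mean(f, s0=[1.0, 2.0], var_s=[0.04, 0.09]))  # ~ 3.04
```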

Note that a first-order Taylor approximation of the objective would not capture any influence of the stochastic information, which is why we use a second-order approximation. In order to deal with the probabilistic chance constraint (31), we also have to approximate its probability distribution. Since the uncertainties are all assumed to be Gaussian or truncated Gaussian, respectively, we use a first-order Taylor approximation of the inequality constraint, since this is known to be again Gaussian or truncated Gaussian distributed, unlike the second-order approximation (cf. Ref. [17]):

\[
\tilde h(s) := h(s^0) + \nabla_s h(s^0)^\top (s - s^0) \sim \mathcal{N}\!\left( h(s^0),\; \nabla_s h(s^0)^\top \mathrm{Cov}(s)\, \nabla_s h(s^0) \right),
\]

where we assume for simplicity that $h$ is scalar-valued.
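In the purely Gaussian case, the linearized constraint $\tilde h$ admits a closed-form probability through the standard normal CDF, $P(\tilde h \geq 0) = \Phi(h(s^0)/\sigma_h)$ with $\sigma_h^2 = \nabla_s h(s^0)^\top \mathrm{Cov}(s)\, \nabla_s h(s^0)$. A minimal sketch, with a hypothetical constraint, gradient, and covariance:

```python
import numpy as np
from scipy.stats import norm

def linearized_chance(h, grad_h, s0, cov_s, P0=0.95):
    """P(h_tilde >= 0) for h_tilde ~ N(h(s0), g^T C g) with g = grad_h(s0)."""
    mu = h(s0)
    g = grad_h(s0)
    sigma = np.sqrt(g @ cov_s @ g)
    prob = norm.cdf(mu / sigma)
    return prob, prob >= P0

# Hypothetical linear constraint h(s) = 0.5 - s_1 - 2 s_2 near s0 = 0
h = lambda s: 0.5 - s[0] - 2.0 * s[1]
grad_h = lambda s: np.array([-1.0, -2.0])
cov = np.diag([0.01, 0.0025])
print(linearized_chance(h, grad_h, np.zeros(2), cov))  # prob ~ 0.9998, True
```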

Now we can put the Taylor approximations together and obtain a deterministic optimization problem. Since the flow model (30) also depends on the uncertainties $s$, the derivatives with respect to $s$ have to be understood as total derivatives. We express this by reducing the problem, writing $y = y(p, s)$ via (30):

\[
P(\{ Z : h(y(p, s(Z)), s(Z)) \geq 0 \}) \geq P_0
\;\Longleftrightarrow\;
P(\{ Z : h(y(p, s(Z)), s(Z)) < 0 \}) \leq 1 - P_0 .
\]

The propagation of the input data uncertainties is estimated by a combination of a Second Order Second Moment (SOSM) method and the first-order Taylor series approximation presented, for example, in Ref. [38]. Since there is no closed-form solution for the integral, the chance constraint is evaluated by a numerical quadrature formula.
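To sketch such a quadrature evaluation, consider a single truncated Gaussian uncertainty and a linearized constraint: the violation probability is then a one-dimensional integral of the smooth truncated density over the violation region, which a Gauss-Legendre rule resolves accurately. The constraint, truncation bounds, and standard deviation below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def prob_violation_truncated(c, a, b, mu=0.0, sd=1.0, n_nodes=40):
    """P(s > c) for s ~ Gaussian truncated to [a, b], via Gauss-Legendre
    quadrature of the truncated density over the violation region."""
    lo, hi = max(c, a), b
    if lo >= hi:
        return 0.0
    x, w = np.polynomial.legendre.leggauss(n_nodes)   # nodes/weights on [-1, 1]
    s = 0.5 * (hi - lo) * x + 0.5 * (hi + lo)         # map nodes to [lo, hi]
    z = norm.cdf(b, mu, sd) - norm.cdf(a, mu, sd)     # truncation constant
    dens = norm.pdf(s, mu, sd) / z                    # truncated density
    return 0.5 * (hi - lo) * np.sum(w * dens)

# Linearized constraint h_tilde(s) = 0.5 - s: violation means s > 0.5
print(prob_violation_truncated(c=0.5, a=-3.0, b=3.0, sd=0.25))  # ~ 0.0228
```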

When geometry uncertainties are considered, a large amount of computational effort arises due to the high dimensionality of the resulting robust optimization problem. In the following, we introduce two techniques to reduce the complexity of the problem: a goal-oriented choice of the Karhunen-Loève basis, which reduces the dimension of the probability space, and (adaptive) sparse grid methods, which efficiently evaluate the high-dimensional integrals.
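As a preview of the first technique, the following sketch truncates a Karhunen-Loève expansion of a discretized covariance matrix by retaining the leading eigenpairs; the goal-oriented criterion introduced below would rank the modes by their influence on the objective instead of by eigenvalue alone. The covariance kernel and grid are hypothetical.

```python
import numpy as np

def kl_truncation(cov, energy=0.99):
    """Keep the leading KL eigenpairs capturing `energy` of the total variance."""
    lam, phi = np.linalg.eigh(cov)          # eigenvalues in ascending order
    lam, phi = lam[::-1], phi[:, ::-1]      # sort descending
    k = np.searchsorted(np.cumsum(lam) / lam.sum(), energy) + 1
    return lam[:k], phi[:, :k]

# Squared-exponential covariance on a 1D grid (toy geometry uncertainty)
x = np.linspace(0.0, 1.0, 200)
C = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.1 ** 2))
lam, phi = kl_truncation(C)
print(f"retained {lam.size} of {x.size} KL modes")
```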