# Reduction of the Dimension of the Probability Space Using a Goal-Oriented Karhunen-Loeve Basis

The evaluation of the objective function in the robust optimization problem (23) requires the computation of the mean, i.e. the integral of the random field with respect to its probability measure. Applying the introduced Karhunen-Loeve approximation, the objective function can be written as the following d-dimensional integral

$$
\mathbb{E}\big(f(y, p, \psi(x, \zeta))\big) = \int_{\Theta} \cdots \int_{\Theta} f\big(y, p, \psi(x, Y_1(\zeta), \ldots, Y_d(\zeta))\big)\, d\mu_1(\zeta) \cdots d\mu_d(\zeta) \qquad (36)
$$
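To make the cost of the d-dimensional integral concrete, the following minimal Python sketch (not part of the original text) approximates the mean with a full tensor-product Gauss-Hermite rule over i.i.d. standard Gaussian variables. The function name and the choice of a 5-point one-dimensional rule are illustrative; the point is that the node count grows as `n_1d**d`, so every additional Karhunen-Loeve term multiplies the work by `n_1d`.

```python
import numpy as np
from itertools import product
from numpy.polynomial.hermite_e import hermegauss  # probabilists' Hermite rule

def mean_by_tensor_quadrature(f, d, n_1d=5):
    """Approximate E[f(Y_1, ..., Y_d)] for i.i.d. standard Gaussian Y_i
    with a full tensor-product Gauss-Hermite rule: n_1d**d nodes in total."""
    nodes, weights = hermegauss(n_1d)
    weights = weights / np.sqrt(2.0 * np.pi)  # normalize to the Gaussian measure
    total = 0.0
    for idx in product(range(n_1d), repeat=d):
        y = nodes[list(idx)]                  # one d-dimensional quadrature node
        w = np.prod(weights[list(idx)])       # product of 1D Gaussian weights
        total += w * f(y)
    return total
```

For example, with `d = 2` the rule reproduces Gaussian moments such as `E[Y_1**2] = 1` exactly, while at `d = 10` the same accuracy already requires about ten million evaluations of `f`.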

where $d\mu_i(\zeta)$ is the one-dimensional Gaussian measure. Hence, each additional term in the truncated Karhunen-Loeve expansion, added to increase the approximation accuracy, raises the dimension of the integral by one. In order to reduce the computational effort, the orthogonal basis functions $\{z_i\}$ will be chosen goal-oriented, i.e. the individual impact of the eigenvectors on the target functional will be taken into account. This method is well established in model reduction of dynamic systems and in adaptive mesh refinement (cf. ). The idea is to develop an error indicator for the individual eigenvectors reflecting their influence on the drag. The error analysis of the Karhunen-Loeve expansion introduced in section 2.2 only gives the approximation error of the random field $\psi$, but not of the function of interest $f(y, p, \psi)$. We propose to use sensitivity information to capture the local sensitivities of the drag with respect to the eigenvectors
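As background for the basis $\{z_i\}$, a discrete Karhunen-Loeve expansion can be sketched as an eigendecomposition of a covariance matrix. The sketch below is illustrative and not from the original text; the squared-exponential covariance and the grid size are assumed purely for demonstration.

```python
import numpy as np

def discrete_kl(cov, d):
    """Truncated discrete Karhunen-Loeve basis: keep the d eigenvectors
    of the covariance matrix with the largest eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(cov)       # symmetric eigensolve, ascending
    order = np.argsort(eigvals)[::-1][:d]        # indices of the d largest
    return eigvals[order], eigvecs[:, order]

# illustrative example: squared-exponential covariance on a 1D grid
x = np.linspace(0.0, 1.0, 50)
cov = np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * 0.1**2))
lam, Z = discrete_kl(cov, d=5)

# one realization of the truncated random field psi
field_sample = Z @ (np.sqrt(lam) * np.random.randn(5))
```

Note that this spectral truncation controls only the mean-square error of the field itself, which is exactly why a goal-oriented ranking of the eigenvectors is proposed in the text.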

where $\lambda$ solves the adjoint equation. The adjoint equation is independent of $i$; hence it has to be solved only once, and the indicator $\eta_i$ is numerically cheap to evaluate. Now, the reduced basis $\{z_i\}$ can be selected automatically: eigenvectors $z_i$ with a large value $\eta_i$ are kept in the reduced basis, whereas a small value indicates that the basis vector can be rejected.
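The selection step can be sketched as follows. Since the exact form of the indicator is not reproduced in this excerpt, the sketch assumes a plausible adjoint-based form $\eta_i = \sqrt{\lambda_i}\,|\langle \lambda, (\partial c/\partial\psi)\, z_i\rangle|$; the function name, the eigenvalue weighting, and the relative threshold `tol` are all assumptions for illustration.

```python
import numpy as np

def goal_oriented_select(Z, lam, adjoint, dcdpsi, tol=1e-3):
    """Rank KL eigenvectors z_i by an adjoint-based sensitivity indicator
    eta_i = sqrt(lam_i) * |<adjoint, dcdpsi @ z_i>|   (assumed form)
    and keep those with eta_i above a relative threshold."""
    # one adjoint solve suffices: the indicator is a cheap inner product per i
    eta = np.sqrt(lam) * np.abs(adjoint @ (dcdpsi @ Z))
    keep = eta > tol * eta.max()
    return Z[:, keep], eta

# illustrative data: 4 candidate eigenvectors, strongly decaying eigenvalues
Z = np.eye(4)
lam = np.array([4.0, 1.0, 0.25, 1e-8])
dcdpsi = np.eye(4)        # stand-in for the state-equation Jacobian w.r.t. psi
adjoint = np.ones(4)      # stand-in for the adjoint solution
Z_red, eta = goal_oriented_select(Z, lam, adjoint, dcdpsi)
```

Here the fourth eigenvector contributes a negligible indicator value and is dropped, so the probability space for the integral (36) shrinks from four dimensions to three.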