Discretisation Techniques

In the following, $(\Omega, \mathcal{B}, P)$ denotes a probability space, where $\Omega$ is the set of elementary events, $\mathcal{B}$ is the $\sigma$-algebra of events and $P$ is the probability measure. The symbol $\omega$ always specifies an elementary event $\omega \in \Omega$.

The random field $\kappa(x, \omega)$ needs to be discretised both in the stochastic and in the spatial dimensions. One of the main tools here is the Karhunen-Loeve expansion (KLE) [16]. By definition, the KLE of a random field $\kappa(x, \omega)$ is the following series [16]

$$\kappa(x, \omega) = \bar{\kappa}(x) + \sum_{\ell=1}^{\infty} \sqrt{\lambda_\ell}\, \phi_\ell(x)\, \xi_\ell(\omega), \qquad (6)$$

where $\xi_\ell(\omega)$ are uncorrelated random variables and $\bar{\kappa}(x)$ is the mean value of $\kappa(x, \omega)$, and $\lambda_\ell$ and $\phi_\ell$ are the eigenvalues and the eigenfunctions of the problem

$$T \phi_\ell = \lambda_\ell \phi_\ell, \qquad \phi_\ell \in L^2(G), \quad \ell \in \mathbb{N}, \qquad (7)$$

and the operator $T$ is defined as follows:

$$T : L^2(G) \to L^2(G), \qquad (T\phi)(x) := \int_G \mathrm{cov}_\kappa(x, y)\, \phi(y)\, dy,$$

where $\mathrm{cov}_\kappa$ is a given covariance function. Discarding all unimportant terms of the KLE, one obtains the truncated KLE, which is a sparse representation of the random field $\kappa(x, \omega)$. Each random variable $\xi_\ell$ can be approximated in a set of new independent Gaussian random variables (the polynomial chaos expansion (PCE) of Wiener [7, 25]), e.g.

$$\xi_\ell(\omega) = \sum_{\beta \in \mathcal{J}} \xi_\ell^{(\beta)} H_\beta(\theta(\omega)),$$

where $\theta(\omega) = (\theta_1(\omega), \theta_2(\omega), \ldots)$, $\xi_\ell^{(\beta)}$ are coefficients, $H_\beta$ are multivariate Hermite polynomials, $\beta \in \mathcal{J}$ is a multi-index, and $\mathcal{J} := \{\beta \mid \beta = (\beta_1, \ldots, \beta_j, \ldots),\ \beta_j \in \mathbb{N}_0\}$ is a multi-index set [18].
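For illustration (not part of the original text), a multivariate Hermite polynomial $H_\beta(\theta) = \prod_j \mathrm{He}_{\beta_j}(\theta_j)$ can be evaluated as a product of one-dimensional probabilists' Hermite polynomials, e.g. with NumPy's `hermite_e` module; the function name below is my own:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def multivariate_hermite(beta, theta):
    """Evaluate H_beta(theta) = prod_j He_{beta_j}(theta_j),
    where He_k are the probabilists' Hermite polynomials."""
    val = 1.0
    for b_j, t_j in zip(beta, theta):
        coeffs = np.zeros(b_j + 1)
        coeffs[b_j] = 1.0            # selects He_{b_j} in the hermite_e basis
        val *= hermeval(t_j, coeffs)
    return val

# He_2(t) = t^2 - 1 and He_1(t) = t, so H_(2,1)((1.5, 0.5)) = 1.25 * 0.5
print(multivariate_hermite((2, 1), (1.5, 0.5)))  # -> 0.625
```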

For the purpose of actual computation, the polynomial chaos expansion is truncated after finitely many terms, e.g.

$$\beta \in \mathcal{J}_{M,p}, \qquad \mathcal{J}_{M,p} := \{\beta \in \mathcal{J} \mid \gamma(\beta) \le M,\ |\beta| \le p\}, \qquad \gamma(\beta) := \max\{j \in \mathbb{N} \mid \beta_j > 0\}.$$
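Such a finite multi-index set (at most $M$ active Gaussian variables, total degree at most $p$) can be enumerated directly; a small sketch of my own, where the set size $\binom{M+p}{p}$ follows from standard counting:

```python
from itertools import product
from math import comb

def multiindex_set(M, p):
    """All multi-indices beta = (beta_1, ..., beta_M) with
    beta_j >= 0 and total degree |beta| = sum(beta) <= p."""
    return [beta for beta in product(range(p + 1), repeat=M)
            if sum(beta) <= p]

J = multiindex_set(2, 3)
print(len(J), comb(2 + 3, 3))   # both equal binomial(M + p, p) = 10
```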

Since Hermite polynomials are orthogonal, the coefficients $\xi_\ell^{(\beta)}$ can be computed by projection:

$$\xi_\ell^{(\beta)} = \frac{1}{\beta!} \int_\Theta H_\beta(\theta)\, \xi_\ell(\theta)\, \mathbb{P}(d\theta).$$

This multidimensional integral over $\Theta$ can be computed approximately, for example, on a sparse Gauss-Hermite grid with $n_q$ grid points:

$$\xi_\ell^{(\beta)} \approx \frac{1}{\beta!} \sum_{i=1}^{n_q} H_\beta(\theta_i)\, \xi_\ell(\theta_i)\, w_i, \qquad (8)$$
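As a sketch of rule (8) (my own illustration, using a full one-dimensional Gauss-Hermite rule rather than a sparse grid), the coefficient of a function of a single standard Gaussian variable can be approximated as follows; the division by $\sqrt{2\pi}$ converts NumPy's $e^{-\theta^2/2}$ quadrature weight into the standard Gaussian measure:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def pce_coefficient(xi, beta, nq=20):
    """Approximate xi^(beta) = E[xi(theta) He_beta(theta)] / beta!
    for a single standard Gaussian theta (full, not sparse, rule)."""
    t, w = hermegauss(nq)        # nodes/weights for the weight exp(-t^2/2)
    w = w / sqrt(2.0 * pi)       # normalise to the standard Gaussian measure
    c = np.zeros(beta + 1)
    c[beta] = 1.0                # coefficients of He_beta
    return float(np.sum(w * xi(t) * hermeval(t, c))) / factorial(beta)

# For xi(theta) = theta^2 = He_2(theta) + He_0(theta):
print(pce_coefficient(lambda t: t**2, 0))  # approx 1.0
print(pce_coefficient(lambda t: t**2, 2))  # approx 1.0
```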

In (8), the weights $w_i$ and points $\theta_i$ are given by the sparse Gauss-Hermite integration rule. After a finite element discretisation (see [10] for more details), the discrete eigenvalue problem (7) takes the form

$$M C M \phi_\ell = \lambda_\ell M \phi_\ell, \qquad C_{ij} = \mathrm{cov}_\kappa(x_i, y_j). \qquad (9)$$

Here the mass matrix $M$ is stored in a usual data-sparse format, and the dense matrix $C \in \mathbb{R}^{n \times n}$ (which requires $\mathcal{O}(n^2)$ units of memory) is approximated in the data-sparse $\mathcal{H}$-matrix format [10] (which requires only $\mathcal{O}(n \log n)$ units of memory) or in the Kronecker low-rank tensor format [9]. To compute $m$ eigenvalues ($m \ll n$) and the corresponding eigenvectors we apply the Lanczos eigenvalue solver [11, 22].
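A minimal Lanczos sketch (my own illustration, with full reorthogonalisation for numerical stability; a production code would use a library solver and a data-sparse matrix-vector product) applied to a discretised exponential covariance matrix, an assumed example kernel:

```python
import numpy as np

def lanczos_eigenvalues(A, k, seed=0):
    """Approximate extreme eigenvalues of a symmetric matrix A with
    k Lanczos steps (full reorthogonalisation, dense for clarity)."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # reorthogonalise
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    # Ritz values: eigenvalues of the small tridiagonal matrix T
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

# discretised covariance cov(x, y) = exp(-|x - y| / 0.3) on [0, 1]
n = 200
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)
ritz = lanczos_eigenvalues(C, k=40)
print(ritz[-1], np.linalg.eigvalsh(C)[-1])  # largest Ritz value matches
```

Only a short Krylov recurrence and a small tridiagonal eigenproblem are needed, which is why $m \ll n$ eigenpairs come cheaply when $C$ admits a fast matrix-vector product.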