Quasi-Monte Carlo Quadrature
Quasi-Monte Carlo (QMC) quadrature [7] samples at a low-discrepancy set of points generated by deterministic number-theoretic formulas. The “discrepancy” here measures how much the distribution of this point set deviates from the underlying pdf. A low-discrepancy point set achieves a higher degree of uniformity with respect to a given pdf than a pseudo-random point set does, so QMC is usually much more efficient than Monte Carlo quadrature. The error bound of QMC is of order $O(N^{-1}(\log N)^d)$, where $d$ is the number of variables. In many cases this is quite a loose upper bound on the error, i.e. QMC often performs better than that.
A variety of low-discrepancy point sets exist, e.g. the Van der Corput, Halton, Sobol, Hammersley and Niederreiter point sets. The last one is used in this work as it is considered the most efficient when $d$ is large [19]. The point set is generated by the program from [4]. The statistics of the SRQ are computed directly from the samples.
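To make the procedure concrete, the sketch below estimates the mean and variance of an SRQ from a low-discrepancy sample. It is a minimal illustration, not the setup of this work: it uses SciPy's Sobol' generator in place of the Niederreiter program of [4], assumes the variables are uniform on $[0,1)^d$, and `srq` is a placeholder for the CFD model.

```python
import numpy as np
from scipy.stats import qmc

def srq(x):
    """Placeholder system response quantity; stands in for the CFD model f(xi)."""
    return np.sum(np.sin(np.pi * x), axis=1)

d = 4                                     # number of uncertain variables
sampler = qmc.Sobol(d=d, scramble=False)  # Sobol' low-discrepancy sequence
xi = sampler.random(2**10)                # 1024 points in [0, 1)^d

f = srq(xi)
print("mean:", f.mean())                  # QMC estimate of E[f]
print("variance:", f.var(ddof=1))         # QMC estimate of Var[f]
```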
3.1 Gradient-Enhanced Radial Basis Functions
The radial basis function (RBF) method [6] approximates an unknown function by a weighted linear combination of radial basis functions each being radially symmetric about a center. An RBF approximation takes the form
\[
\tilde{f}(\xi) = \sum_{i=1}^{N} w_i\,\phi_i\!\left(\left\|\xi - \xi^{(i)}\right\|\right)
\]
where $\phi_i$ are radial basis functions, $\|\cdot\|$ denotes the Euclidean norm, and $\xi^{(i)}$ are the $N$ sample points, each taken as the center of a radial basis function. Making $\tilde{f}(\xi)$ interpolate the $N$ samples leads to $N$ linear equations. The coefficients $w_i$ are determined by solving this linear system.
Denoting the Euclidean distance from the center as $r$, popular types of $\phi(r)$ include $\sqrt{r^2 + a^2}$ (multiquadric), $1/\sqrt{r^2 + a^2}$ (inverse multiquadric), $\exp(-a^2 r^2)$ (Gaussian) and $r^2 \ln(ar)$ (thin plate spline), in which $a$ is a parameter to be fine-tuned for a particular set of samples. Gradient-employing versions of RBF were proposed in [11, 20], where first-order derivatives of the SRQ are exploited and second-order derivatives of the RBF are involved in the system.
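As a reference point for the gradient-enhanced variant introduced next, a minimal sketch of plain RBF interpolation with the inverse multiquadric kernel is given below; the function names and the fixed shape parameter `a` are illustrative assumptions.

```python
import numpy as np

def inv_multiquadric(r, a):
    """Inverse multiquadric RBF: phi(r) = 1 / sqrt(r^2 + a^2)."""
    return 1.0 / np.sqrt(r**2 + a**2)

def rbf_fit(xi, f, a):
    """Solve the N x N interpolation system Phi w = f for the weights w."""
    r = np.linalg.norm(xi[:, None, :] - xi[None, :, :], axis=-1)  # pairwise distances
    return np.linalg.solve(inv_multiquadric(r, a), f)

def rbf_eval(x, xi, w, a):
    """Evaluate the RBF surrogate at query points x of shape (M, d)."""
    r = np.linalg.norm(x[:, None, :] - xi[None, :, :], axis=-1)
    return inv_multiquadric(r, a) @ w
```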
In this work we propose a different gradient-employing RBF method that involves only the first-order derivatives of the RBF, termed gradient-enhanced RBF (GERBF). To accommodate the gradient information of the SRQ, this method introduces additional RBFs that are centered at non-sampled points, i.e. a GERBF approximation is
\[
\tilde{f}(\xi) = \sum_{i=1}^{K} w_i\,\phi_i\!\left(\left\|\xi - \xi^{(i)}\right\|\right), \quad \text{with } N < K < N(1+d)
\]
The $\xi^{(i)}$ with $i \le N$ are the sampled points; those with $i > N$ are non-sampled points, which can be chosen randomly as long as none of them duplicates a sampled point. The coefficients $\mathbf{w} = \{w_1, w_2, \cdots, w_K\}^T$ are determined by solving the following system,
\[
\Phi \mathbf{w} = \mathbf{f}
\]
in which
\[
\Phi = \begin{bmatrix}
\phi_1(\xi^{(1)}) & \phi_2(\xi^{(1)}) & \cdots & \phi_K(\xi^{(1)}) \\
\vdots & \vdots & & \vdots \\
\phi_1(\xi^{(N)}) & \phi_2(\xi^{(N)}) & \cdots & \phi_K(\xi^{(N)}) \\
\phi_1^{(1)}(\xi^{(1)}) & \phi_2^{(1)}(\xi^{(1)}) & \cdots & \phi_K^{(1)}(\xi^{(1)}) \\
\vdots & \vdots & & \vdots \\
\phi_1^{(1)}(\xi^{(N)}) & \phi_2^{(1)}(\xi^{(N)}) & \cdots & \phi_K^{(1)}(\xi^{(N)}) \\
\vdots & \vdots & & \vdots \\
\phi_1^{(d)}(\xi^{(N)}) & \phi_2^{(d)}(\xi^{(N)}) & \cdots & \phi_K^{(d)}(\xi^{(N)})
\end{bmatrix}, \quad
\mathbf{f} = \begin{bmatrix}
f(\xi^{(1)}) \\
\vdots \\
f(\xi^{(N)}) \\
f^{(1)}(\xi^{(1)}) \\
\vdots \\
f^{(1)}(\xi^{(N)}) \\
\vdots \\
f^{(d)}(\xi^{(N)})
\end{bmatrix}
\]
with $\phi_i^{(j)} = \partial\phi_i/\partial\xi_j$ and $f^{(j)} = \partial f/\partial\xi_j$. The value of $K$ chosen in this work results in an over-determined system, which is solved by a least-squares method.
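A minimal sketch of how such a gradient-enhanced least-squares fit can be assembled is given below, again using the inverse multiquadric kernel; the function names, the way the centers `xi_c` (sampled plus extra non-sampled points) are supplied, and the data layout are assumptions for illustration, not the author's implementation.

```python
import numpy as np

def phi(r, a):
    """Inverse multiquadric RBF: phi(r) = 1 / sqrt(r^2 + a^2)."""
    return 1.0 / np.sqrt(r**2 + a**2)

def phi_deriv(dx, a):
    """Derivatives d(phi)/d(xi_j) of the inverse multiquadric; dx = xi - center."""
    r2 = np.sum(dx**2, axis=-1, keepdims=True)
    return -dx / (r2 + a**2)**1.5

def gerbf_fit(xi_s, f_s, g_s, xi_c, a):
    """Least-squares GERBF fit.

    xi_s: (N, d) sampled points, f_s: (N,) SRQ values, g_s: (N, d) SRQ gradients,
    xi_c: (K, d) RBF centers (sampled plus non-sampled points), a: shape parameter.
    """
    dx = xi_s[:, None, :] - xi_c[None, :, :]        # (N, K, d)
    r = np.linalg.norm(dx, axis=-1)                 # (N, K)
    d = xi_s.shape[1]
    # N value rows followed by d blocks of N derivative rows -> N(1+d) equations.
    Phi = np.vstack([phi(r, a)] + [phi_deriv(dx, a)[:, :, j] for j in range(d)])
    rhs = np.concatenate([f_s] + [g_s[:, j] for j in range(d)])
    w, *_ = np.linalg.lstsq(Phi, rhs, rcond=None)
    return w

def gerbf_eval(x, xi_c, w, a):
    """Evaluate the GERBF surrogate at query points x of shape (M, d)."""
    return phi(np.linalg.norm(x[:, None, :] - xi_c[None, :, :], axis=-1), a) @ w
```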
A numerical comparison of the accuracy of the aforementioned four types of RBF in approximating this CFD model $f(\xi)$ was made by the author. The result favors the inverse multiquadric RBF, which is therefore used in this work for the comparison with other UQ methods. The internal parameter $a$ is fine-tuned by a leave-one-out error-minimizing procedure as in [3].
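For illustration, a leave-one-out tuning loop of this kind could be set up as below; the candidate grid for `a`, the absolute-error criterion, and the generic `fit`/`evaluate` hooks are assumptions rather than the exact procedure of [3].

```python
import numpy as np

def loo_error(xi, f, a, fit, evaluate):
    """Mean leave-one-out prediction error of an RBF fit for shape parameter a."""
    n, err = len(f), 0.0
    for k in range(n):
        keep = np.arange(n) != k                  # leave sample k out
        w = fit(xi[keep], f[keep], a)
        err += abs(evaluate(xi[k:k + 1], xi[keep], w, a)[0] - f[k])
    return err / n

def tune_a(xi, f, fit, evaluate, candidates=np.logspace(-2, 1, 20)):
    """Return the candidate shape parameter with the smallest leave-one-out error."""
    errors = [loo_error(xi, f, a, fit, evaluate) for a in candidates]
    return candidates[int(np.argmin(errors))]
```

With the plain RBF routines sketched earlier, this would be invoked as `tune_a(xi, f, rbf_fit, rbf_eval)`.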
For this UQ job we first establish a GERBF surrogate model of $f(\xi)$ based on QMC samples of the CFD model, and then integrate for the target statistics and pdf using a large number ($10^5$) of QMC samples evaluated on the surrogate model.
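The end-to-end flow might then look like the sketch below. It is only a schematic stand-in: SciPy's `RBFInterpolator` is used in place of the GERBF surrogate (it does not exploit gradient information), `cfd_model` is a placeholder, and the sample sizes are illustrative.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

def cfd_model(x):
    """Placeholder for the expensive CFD model f(xi)."""
    return np.prod(np.cos(2.0 * x), axis=1)

d = 4
train = qmc.Sobol(d=d, scramble=False).random(2**5)          # small QMC design of CFD runs
surrogate = RBFInterpolator(train, cfd_model(train),
                            kernel="inverse_multiquadric", epsilon=1.0)

big = qmc.Sobol(d=d, scramble=False).random(2**17)           # ~1e5 QMC points on the surrogate
vals = surrogate(big)
print("mean:", vals.mean(), "std:", vals.std(ddof=1))
pdf, edges = np.histogram(vals, bins=50, density=True)       # histogram estimate of the pdf
```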