Numerical Boundary Conditions

These types of inlet and exit boundary conditions are typical for turbomachinery cases. There was some uncertainty about the specification of the wall boundary conditions. As the best available assumption, the thermal wall boundary conditions were set to a constant wall temperature on the entire NGV as well as on the rotor blade surface and hub. All other walls within the domain were treated as adiabatic. Considering the very short measurement times (approx. 500 ms), this simplification seems justified.
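The two thermal wall treatments can be illustrated with a ghost-cell sketch. This is a minimal, generic finite-volume formulation and not the solver's actual implementation; the function name and the 300 K wall temperature are illustrative assumptions.

```python
# Sketch: ghost-cell treatment of the two thermal wall boundary conditions
# described above (isothermal on NGV/rotor/hub, adiabatic elsewhere).
# Names and the 300 K value are illustrative, not from the paper.

def ghost_temperature(t_interior: float, wall_type: str,
                      t_wall: float = 300.0) -> float:
    """Return the ghost-cell temperature next to a wall.

    Isothermal wall: linear extrapolation so the face-averaged value
    equals the prescribed wall temperature t_wall.
    Adiabatic wall: zero normal temperature gradient, so the ghost cell
    mirrors the interior value.
    """
    if wall_type == "isothermal":
        return 2.0 * t_wall - t_interior   # (ghost + interior)/2 == t_wall
    if wall_type == "adiabatic":
        return t_interior                  # dT/dn == 0 across the face
    raise ValueError(f"unknown wall type: {wall_type}")

print(ghost_temperature(320.0, "isothermal"))  # → 280.0 (face average 300 K)
print(ghost_temperature(320.0, "adiabatic"))   # → 320.0
```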

2. Computational Grid

The numerical domain was discretized using a structured multi-block grid. Compared to an unstructured tetrahedral approach, structured grids usually provide higher numerical accuracy. Consequently, emphasis was placed on high grid quality in order to minimize numerical errors, particularly inside the cooling holes and their immediate vicinity. The grid in these regions is locally highly refined. Applying this level of refinement everywhere would have led to an overall number of grid points far beyond any reasonable limit. In order to reduce the problem size, coarser grid blocks are located around the highly resolved grid regions. The coarse and fine grid areas are connected by means of a non-congruent block-to-block connection using a fully conservative interpolation technique. The application of this technique to film cooling configurations was described by Hildebrandt (2001).
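The key property of such a conservative interface treatment is that the total flux crossing the non-matching block boundary is identical on both sides. The following is a 1-D sketch of that idea under simple assumptions (uniform fine faces, a given overlap map); the actual scheme follows Hildebrandt (2001).

```python
# Minimal 1-D sketch of conservative interpolation across a non-congruent
# block interface: fluxes from the fine block are summed area-weighted
# onto each overlapping coarse face, so the total flux through the
# interface is preserved exactly. Illustrative only.

def coarse_face_fluxes(fine_fluxes, fine_areas, overlap):
    """overlap[i] lists the fine-face indices covered by coarse face i."""
    fluxes = []
    for fine_ids in overlap:
        total = sum(fine_fluxes[j] * fine_areas[j] for j in fine_ids)
        area = sum(fine_areas[j] for j in fine_ids)
        fluxes.append(total / area)        # area-averaged flux density
    return fluxes

fine_fluxes = [1.0, 2.0, 3.0, 4.0]         # flux densities on fine faces
fine_areas  = [0.25, 0.25, 0.25, 0.25]
overlap     = [[0, 1], [2, 3]]             # two coarse faces, two fine each

coarse = coarse_face_fluxes(fine_fluxes, fine_areas, overlap)
fine_total   = sum(f * a for f, a in zip(fine_fluxes, fine_areas))
coarse_total = sum(f * 0.5 for f in coarse)  # each coarse face has area 0.5
print(coarse, fine_total, coarse_total)    # → [1.5, 3.5] 2.5 2.5
```

The area weighting is what makes the interpolation conservative: averaging nodal values instead would generally not preserve the integrated flux.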

Around the blades, in the front and rear plenum, and inside the cooling holes, HOH topologies were applied (Fig. 1, Fig. 2). The grid is composed of 651 grid blocks with a total of 2.1 million grid points.

About 75% of the grid points are located in the immediate vicinity of the cooling holes. The refined areas around the rows of cooling holes are visible in Fig. 2. These areas are resolved about four times finer in each spatial direction than the surrounding regions of the main flow.
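A quick back-of-the-envelope check shows why the refined regions dominate the point count: a factor-of-four refinement in each of the three spatial directions multiplies the point density by 4³ = 64.

```python
# Cost of local refinement: 4x finer in each of 3 directions means
# 4**3 = 64 times as many points per unit volume, consistent with
# ~75% of the 2.1 million points sitting around the cooling holes.

refinement = 4
density_factor = refinement ** 3
print(density_factor)                      # → 64

total_points = 2_100_000
near_holes = int(0.75 * total_points)
print(near_holes)                          # → 1575000 points in refined regions
```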

The non-dimensional wall distance y+ typically varies between 1 and 2, depending on the local flow conditions. The laminar sublayer, which is important for any prediction of wall shear stress or heat transfer, is therefore well resolved.
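For reference, y+ follows from the standard definition y+ = y·u_τ/ν with the friction velocity u_τ = √(τ_w/ρ). A minimal sketch of that estimate, with illustrative air-like numbers not taken from the simulation:

```python
import math

# Estimate the non-dimensional wall distance of the first grid point:
#   y+ = y * u_tau / nu,   u_tau = sqrt(tau_wall / rho)
# The input values below are illustrative assumptions.

def y_plus(y: float, tau_wall: float, rho: float, nu: float) -> float:
    """Non-dimensional wall distance of a point y metres off the wall."""
    u_tau = math.sqrt(tau_wall / rho)      # friction velocity [m/s]
    return y * u_tau / nu

# first cell centre 2 micrometres off the wall, air-like properties
print(y_plus(y=2e-6, tau_wall=50.0, rho=1.2, nu=1.5e-5))  # on the order of 1
```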

Figure 3. Mass Flow Convergence History

Table 3. Resource requirements

                                     Source Term   Full Discretization
    Iterations for full convergence        6,200                10,000
    Grid points                        1,500,000             2,100,000
    Blocks                                    16                   651
    Relative CPU time                        1.0                   2.4
    Relative RAM                             1.0                  1.55