To assess the performance of the adaptive model in improving computational efficiency, widely used benchmark test cases are carried out in this section. The performance of the MCV dynamical core on a fixed grid has been validated in Li et al. (2013), where the numerical results are comparable to those of existing advanced models. In this study, we run the MCV dynamical core on adaptive grids with different configurations. An adaptive grid is denoted by $N \times M \times l \times \lambda$, where $N$ and $M$ represent the numbers of cells along the horizontal and vertical directions, respectively, so that the base block (the coarsest level) consists of $N \times M$ computational cells; $l$ is the finest level; and $\lambda$ is the refinement ratio between two adjacent levels. The minimum grid spacing is therefore determined by the finest level $l$. The normalized $l_1$, $l_2$ and $l_\infty$ errors (Williamson et al., 1992) and the computational costs are examined. Definitions of the normalized errors are given as follows:
$$ l_1 = \frac{\sum\nolimits_\varOmega \left| \overline q - \overline q_{\text{e}} \right|}{\sum\nolimits_\varOmega \left| \overline q_{\text{e}} \right|}, \quad l_2 = \left[ \frac{\sum\nolimits_\varOmega \left( \overline q - \overline q_{\text{e}} \right)^2}{\sum\nolimits_\varOmega \overline q_{\text{e}}^2} \right]^{1/2}, \quad l_\infty = \frac{\mathop{\max}\nolimits_\varOmega \left| \overline q - \overline q_{\text{e}} \right|}{\mathop{\max}\nolimits_\varOmega \left| \overline q_{\text{e}} \right|}, $$
where
$\varOmega$ is the computational domain, and $\overline q$ and $\overline q_{\text{e}}$ are the numerical and reference solutions in terms of cell-integrated averages. All numerical tests in this study are carried out on an AMD EPYC 7702 CPU (single processor).

The initial hydrostatic state is specified as
$$ \frac{{\rm{d}}\overline \Pi }{{\rm{d}}z} = - \frac{g}{c_p \overline \theta }, $$
where the Exner pressure is given by
$\Pi = \left( \dfrac{p}{p_0} \right)^{R_d/c_p}$. The constants are $c_p = 1004.5\text{ J kg}^{-1}\text{ K}^{-1}$ and $R_d = 287\text{ J kg}^{-1}\text{ K}^{-1}$. The basic state of potential temperature, $\theta$, is chosen to be a constant, or is specified with a constant Brunt-Väisälä frequency as
$$ \overline \theta \left( z \right) = \theta_0 \exp \left( \frac{N_0^2}{g} z \right), $$
where
$N_0^2 = g \, {\rm{d}} \ln \overline \theta / {\rm{d}}z$. The hydrostatic density $\overline \rho$ is then computed by
$$ \overline \rho = \frac{\overline p}{R_d \overline T} = \frac{p_0 \overline \Pi^{\,c_p/R_d}}{R_d \overline T}, $$
where $\overline T = \overline \theta \, \overline \Pi$.

The refinement criterion is chosen as the variation of the potential temperature perturbation for the rising thermal bubble, density current flow, and internal gravity waves tests, as well as the variation of the horizontal velocity for the Schär mountain case. Considering a two-dimensional cell $C_{i,j,k}$ on level $k$, it is flagged to be refined if
$$ \mathop{\max}\limits_{C_{i,j,k}} q - \mathop{\min}\limits_{C_{i,j,k}} q > \delta , $$
where $\delta$ is a prescribed threshold, and $q$ can be the potential temperature perturbation, $\theta '$, or the horizontal velocity, $u$.
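As an illustration, the flagging step can be sketched as follows. This is a minimal sketch, assuming the cell-wise variation is measured as the max−min spread of $q$ over the solution points within each cell; the actual MCV/AMR implementation may organize the data and the variation measure differently.

```python
import numpy as np

def flag_cells(q, delta, points_per_cell=3):
    """Flag 2D cells for refinement where the local variation of q exceeds delta.

    q is a 2D array of point values arranged cell by cell, with
    `points_per_cell` solution points per cell in each direction
    (3 for a third-order MCV scheme). Returns a boolean array with
    one flag per cell.
    """
    p = points_per_cell
    nzc, nxc = q.shape[0] // p, q.shape[1] // p
    flags = np.zeros((nzc, nxc), dtype=bool)
    for j in range(nzc):
        for i in range(nxc):
            cell = q[j * p:(j + 1) * p, i * p:(i + 1) * p]
            # Variation within the cell: max - min over its solution points.
            flags[j, i] = (cell.max() - cell.min()) > delta
    return flags
```

A block is then refined when any of its cells is flagged; for the thermal bubble test, for example, the threshold would be `delta = 0.04` applied to $\theta '$.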
Atmospheric motions caused by thermodynamic effects (Carpenter et al., 1990; Wicker and Skamarock, 1998; Ahmad and Lindeman, 2007; Norman et al., 2011) are common phenomena in nature, and they are extensively used to verify the performance of dynamical cores. In this test, a thermal bubble warmer than the ambient air is initialized by specifying a positive potential temperature perturbation as
$$ \theta ' = \begin{cases} \dfrac{\Delta \theta }{2}\left[ 1 + \cos \left( \dfrac{\pi r}{R} \right) \right], & r \leqslant R, \\ 0, & r > R, \end{cases} $$
where $r = \sqrt{ \left( x - x_0 \right)^2 + \left( z - z_0 \right)^2 }$ and $R = 2000$ m is the radius of the bubble. The initial center of the thermal bubble is located at $\left( x_0, z_0 \right) = \left( 10000\text{ m}, 2000\text{ m} \right)$.

In this case, the uniform background potential temperature is specified as
$\overline \theta = 300\text{ K}$, the maximal perturbation is $\Delta \theta = 2\text{ K}$, and the computational domain is [0, 20] km $\times$ [0, 10] km. The simulation runs for 1000 s and all boundaries are slippery walls. The explicit dissipation filter (Li et al., 2013) with a viscosity coefficient of $\mu = 10\text{ m}^2\text{ s}^{-1}$ is used to eliminate numerical noise. The threshold for refinement is set as $\delta = 0.04$. During the simulation, the thermal bubble rises and finally attains a mushroom-like shape. The numerical results on a uniform grid with a resolution of 25 m $\times$ 25 m are used as a reference solution to calculate the normalized errors.

Normalized errors and elapsed CPU time for different grids are given in Table 1, where the results are grouped according to the finest resolution of each grid. The normalized CPU time is computed by dividing by the CPU time on the coarsest uniform grid. From Table 1, it can be found that the normalized errors on the uniform and AMR grids within each group are quite similar, while much less CPU time is consumed by the AMR model. Contour plots of the potential temperature perturbation
$\left( \theta ' \right)$ and the absolute errors on a $100 \times 50 \times 3 \times 2$ grid at different times are shown in Fig. 7. The solid thick lines represent the interfaces between different levels. The symmetric distribution of $\theta '$ is perfectly reproduced on the adaptive grid, as in our previous study (Li et al., 2013). It is observed that the fine levels are dynamically adjusted to follow the evolving flow field: the AMR grids properly locate the disturbed region and place fine blocks there to ensure numerical accuracy, while coarse blocks are applied in undisturbed regions to save computational costs. The relative total mass error over the simulation is also recorded, which confirms that numerical conservation is well maintained through the flux correction procedure.

| Resolution (m) | Grid configuration | $l_1$ | $l_2$ | $l_\infty$ | CPU time (s) |
| 200 × 200 | 100 × 50 × 1 × 1 | 0.23 | 0.21 | 0.28 | 1 (187.34) |
| 100 × 100 | 200 × 100 × 1 × 1 | 6.40 × 10−2 | 6.86 × 10−2 | 0.11 | 8.45 (1584.21) |
| | 100 × 50 × 2 × 2 | 6.44 × 10−2 | 6.87 × 10−2 | 0.11 | 2.84 (532.55) |
| 50 × 50 | 400 × 200 × 1 × 1 | 8.82 × 10−3 | 1.09 × 10−2 | 2.43 × 10−2 | 75.00 (14051.62) |
| | 200 × 100 × 2 × 2 | 9.34 × 10−3 | 1.15 × 10−2 | 2.43 × 10−2 | 10.17 (1905.17) |
| | 100 × 50 × 2 × 4 | 8.90 × 10−3 | 1.07 × 10−2 | 2.53 × 10−2 | 19.54 (3660.69) |
| | 100 × 50 × 3 × 2 | 9.28 × 10−3 | 1.13 × 10−2 | 2.35 × 10−2 | 13.63 (2544.14) |
| 25 × 25 | 800 × 400 × 1 × 1 | − | − | − | 631.39 (118285.12) |

Table 1. Normalized errors of the potential temperature perturbation ($\theta '$) and elapsed CPU time for the thermal bubble test running on different grids. CPU times are normalized by that of the coarsest uniform grid, with the elapsed time in seconds given in parentheses.
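The normalized errors reported in Table 1 and the following tables use the definitions of Williamson et al. (1992). A minimal sketch, assuming equal-area cells so that plain sums replace the area-weighted integrals:

```python
import numpy as np

def normalized_errors(q, q_ref):
    """Normalized l1, l2 and l_inf errors of q against a reference solution
    (Williamson et al., 1992), assuming equal-area cells."""
    diff = np.abs(q - q_ref)
    l1 = diff.sum() / np.abs(q_ref).sum()
    l2 = np.sqrt((diff ** 2).sum() / (q_ref ** 2).sum())
    linf = diff.max() / np.abs(q_ref).max()
    return l1, l2, linf
```

On an adaptive grid, each cell's contribution would additionally be weighted by its area before summing.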
In the density current test case (Straka et al., 1993; Giraldo and Restelli, 2008), a cold bubble is placed in a neutrally stratified atmosphere by specifying a negative potential temperature perturbation as
$$ \theta ' = \begin{cases} \dfrac{\Delta \theta }{2}\left[ 1 + \cos \left( \pi r \right) \right], & r \leqslant 1, \\ 0, & r > 1, \end{cases} \quad r = \sqrt{ \left( \frac{x - x_0}{x_r} \right)^2 + \left( \frac{z - z_0}{z_r} \right)^2 }, $$
where $\overline \theta = 300\text{ K}$, $\Delta \theta = - 15\text{ K}$, $\left( x_0, z_0 \right) = \left( 0, 3000 \right)\;{\rm{m}}$ and $\left( x_r, z_r \right) = \left( 4000, 2000 \right)\;{\rm{m}}$.

During the simulation, the cold bubble drops to the ground and spreads out in the horizontal direction, forming three Kelvin–Helmholtz shear instability rotors along the cold frontal surface. The computational domain of this case is [–26.5, 26.5] km
$\times$ [0, 6.4] km and the simulation time is 900 s. All of the boundaries are slippery walls. A dissipation filter with a coefficient of $\mu = 10\text{ m}^2\text{ s}^{-1}$ is used to account for the physical dissipation required by this test case. The threshold of the refinement criterion for this case is $\delta = 0.2$. Again, the numerical results on a uniform grid with a resolution of 25 m × 25 m are adopted as reference solutions.

Normalized errors of $\theta '$ and elapsed CPU time on different grids are given in Table 2. It can be observed that the AMR model consumes much less CPU time at the price of slightly larger normalized errors. Contour plots of $\theta '$ and the absolute errors on a $132 \times 32 \times 2 \times 4$ grid, which has a finest resolution of 50 m $\times$ 50 m, are shown in Fig. 8. Only half of the computational domain is depicted due to the symmetry of the flow field. The numerical result agrees well with that on the uniform grid with the same resolution given in Li et al. (2013).

| Resolution (m) | Grid configuration | $l_1$ | $l_2$ | $l_\infty$ | CPU time (s) |
| 200 × 200 | 132 × 32 × 1 × 1 | 0.28 | 0.33 | 0.94 | 1 (235.36) |
| 100 × 100 | 265 × 64 × 1 × 1 | 0.13 | 0.17 | 0.60 | 8.69 (2045.44) |
| | 132 × 32 × 2 × 2 | 0.13 | 0.17 | 0.64 | 3.44 (809.19) |
| 50 × 50 | 530 × 128 × 1 × 1 | 2.63 × 10−2 | 3.59 × 10−2 | 0.15 | 69.93 (16458.58) |
| | 265 × 64 × 2 × 2 | 2.72 × 10−2 | 3.67 × 10−2 | 0.17 | 11.10 (2613.85) |
| | 132 × 32 × 2 × 4 | 2.64 × 10−2 | 3.59 × 10−2 | 0.15 | 25.99 (6117.59) |
| | 132 × 32 × 3 × 2 | 2.68 × 10−2 | 3.66 × 10−2 | 0.18 | 20.00 (4047.29) |
| 25 × 25 | 1060 × 256 × 1 × 1 | − | − | − | 640.80 (150820.42) |

Table 2. Normalized errors of the potential temperature perturbation ($\theta '$) and elapsed CPU time for the density current test running on different grids. CPU times are normalized by that of the coarsest uniform grid, with the elapsed time in seconds given in parentheses.
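The cold-bubble initialization can be sketched as follows; a minimal sketch assuming the standard Straka et al. (1993) cosine profile over a scaled elliptical radius, with the parameter values used in this test:

```python
import numpy as np

def cold_bubble_perturbation(x, z, dtheta=-15.0,
                             x0=0.0, z0=3000.0, xr=4000.0, zr=2000.0):
    """Potential temperature perturbation (K) for the density current test,
    assuming the standard Straka et al. (1993) cosine profile.

    x, z in meters; the perturbation is nonzero inside the scaled
    elliptical radius r <= 1 and peaks at dtheta at the bubble center.
    """
    r = np.sqrt(((x - x0) / xr) ** 2 + ((z - z0) / zr) ** 2)
    return np.where(r <= 1.0, 0.5 * dtheta * (1.0 + np.cos(np.pi * r)), 0.0)
```

The perturbation is added to the neutrally stratified background $\overline \theta = 300$ K before hydrostatic initialization.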
The internal gravity waves test involves the evolution of a potential temperature perturbation in a channel with periodic boundary conditions in the horizontal direction. The initial conditions of Skamarock and Klemp (1994) are adopted. The potential temperature field is initialized with the perturbation
$$ \theta ' = \Delta \theta \frac{\sin \left( \pi z / H \right)}{1 + \left( x - x_0 \right)^2 / a^2}, $$
where $\theta_0 = 300\text{ K}$, $H = 10\text{ km}$, $\Delta \theta = 0.01\text{ K}$, $x_0 = 100\text{ km}$, and $a = 5\text{ km}$.

The background state of potential temperature
$\overline \theta \left( z \right)$ is obtained by using a constant Brunt-Väisälä frequency. A constant mean flow of $\overline u = 20\text{ m s}^{-1}$ is adopted for the advection of the internal gravity waves. The bottom and top boundaries are slippery walls. The computational domain of this case is [0, 300] km $\times$ [0, 10] km, and the simulation time is 3000 s. A threshold of $\delta = 1.8 \times 10^{-4}$ is chosen for grid refinement. Numerical results on a 250 m $\times$ 25 m grid are taken as the reference solutions. It is noted that the computational cells are no longer square; an aspect ratio of grid spacing $\Delta x = 10\Delta z$ is adopted in this case.

Normalized errors of
$\theta '$ and elapsed CPU times on various grids are given in Table 3. Compared with the first two test cases, the effect of the AMR grids in saving computational costs is less obvious. The reason is that the internal gravity waves spread in the horizontal direction during the simulation, so the disturbed regions grow continuously, as shown in Fig. 9. Maximum and minimum values of the vertical velocity and potential temperature perturbation on the AMR grids with the finest resolution of $\Delta z = 100\text{ m}$ after 3000 s are given in Table 4; they are the same as those obtained by Li et al. (2013). The distributions of the absolute errors of $\theta '$ on a uniform grid and on a 75 × 25 × 3 × 2 grid at $t = 3000\text{ s}$ are depicted in Figs. 10a, b. No obvious increase of errors is found around the boundaries between different levels, which reveals that the computational mode due to the change of grid resolution is effectively suppressed in the proposed AMR model. The overall errors of the AMR model are also affected by the prescribed refinement threshold: more accurate solutions are obtained when a larger area of the computational domain is refined with a smaller refinement threshold.

| Resolution (m) | Grid configuration | $l_1$ | $l_2$ | $l_\infty$ | CPU time (s) |
| 4000 × 400 | 75 × 25 × 1 × 1 | 0.25 | 0.29 | 0.32 | 1 (48.60) |
| 2000 × 200 | 150 × 50 × 1 × 1 | 8.20 × 10−2 | 0.11 | 0.15 | 9.77 (474.99) |
| | 75 × 25 × 2 × 2 | 7.95 × 10−2 | 0.11 | 0.16 | 4.13 (200.48) |
| 1000 × 100 | 300 × 100 × 1 × 1 | 1.48 × 10−2 | 2.12 × 10−2 | 3.58 × 10−2 | 78.87 (3833.06) |
| | 150 × 50 × 2 × 2 | 2.77 × 10−2 | 3.60 × 10−2 | 6.09 × 10−2 | 31.37 (1524.60) |
| | 75 × 25 × 2 × 4 | 1.62 × 10−2 | 2.14 × 10−2 | 3.57 × 10−2 | 27.58 (1340.39) |
| | 75 × 25 × 3 × 2 | 2.84 × 10−2 | 3.61 × 10−2 | 6.08 × 10−2 | 26.33 (1279.57) |
| 250 × 25 | 1200 × 400 × 1 × 1 | − | − | − | 6375.50 (309849.29) |

Table 3. Normalized errors of the potential temperature perturbation ($\theta '$) and elapsed CPU time for the internal gravity waves test running on different grids. CPU times are normalized by that of the coarsest uniform grid, with the elapsed time in seconds given in parentheses.

Figure 9. Contour plots of the potential temperature perturbation ($\theta '$) for the internal gravity waves on a $75 \times 25 \times 3 \times 2$ grid at (a) t = 0 s, (b) t = 750 s, (c) t = 1500 s, (d) t = 2250 s, and (e) t = 3000 s.

| Grid | $w_{\max}$ (m s−1) | $w_{\min}$ (m s−1) | $\theta '_{\max}$ (K) | $\theta '_{\min}$ (K) |
| 75 × 25 × 2 × 4 | 2.47 × 10−3 | −2.42 × 10−3 | 2.80 × 10−3 | −1.52 × 10−3 |
| 75 × 25 × 3 × 2 | 2.47 × 10−3 | −2.42 × 10−3 | 2.80 × 10−3 | −1.52 × 10−3 |
| 150 × 50 × 2 × 2 | 2.47 × 10−3 | −2.43 × 10−3 | 2.80 × 10−3 | −1.52 × 10−3 |

Table 4. Maximum and minimum of the vertical velocity, $w$, and potential temperature perturbation, $\theta '$, for the internal gravity waves test on the AMR grids with the finest resolution of $\Delta z = 100\text{ m}$ after 3000 s.
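The initial perturbation of this test can be sketched as follows; a minimal sketch assuming the profile of Skamarock and Klemp (1994), with the parameter values listed above:

```python
import numpy as np

def gravity_wave_perturbation(x, z, dtheta=0.01,
                              H=10.0e3, x0=100.0e3, a=5.0e3):
    """Initial potential temperature perturbation (K) for the internal
    gravity waves test, assuming the Skamarock and Klemp (1994) profile.

    x, z in meters: a half-sine structure in the vertical modulated by a
    bell-shaped (Witch-of-Agnesi) envelope centered at x0 in the horizontal.
    """
    return dtheta * np.sin(np.pi * z / H) / (1.0 + ((x - x0) / a) ** 2)
```

The perturbation is superposed on the stratified background $\overline \theta (z)$ before the hydrostatic state is computed.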
The Schär mountain case (Schär et al., 2002) is used to test the ability of a model to deal with the effects of complex terrain. A mountain with five peaks is used as the topography, which is defined as
$$ h\left( x \right) = h_0 \exp \left[ - \left( \frac{x}{a_0} \right)^2 \right] \cos^2 \left( \frac{\pi x}{\lambda_0} \right), $$
where $h_0 = 250\text{ m}$, $a_0 = 5000\text{ m}$, and $\lambda_0 = 4000\text{ m}$. An initial background state of potential temperature is obtained by using a constant Brunt-Väisälä frequency of $N_0 = 10^{-2}\text{ s}^{-1}$ and $\theta_0 = 280\text{ K}$. A constant mean flow of $\overline u = 10\text{ m s}^{-1}$ along the horizontal direction is also initialized. This case runs on a domain of [–25, 25] km $\times$ [0, 21] km for 10 hours. Non-reflecting boundary conditions are implemented by placing sponge layers in the regions $z \geqslant 9$ km near the top boundary and $\left| x \right| \geqslant 15$ km near the lateral inflow and outflow boundaries. The bottom boundary is a slippery wall. A refinement threshold of $\delta = 0.3$ is applied to this case.

The profile of the Schär mountain is depicted in Fig. 11e, noting that only part of the computational domain is displayed. Contour plots of the horizontal velocity,
$u$, at various stages of the simulation are displayed in Figs. 11a–d. Since the disturbed regions quickly extend to cover the entire computational domain, the computational costs cannot be expected to be significantly reduced by the AMR model in this case. Numerical results on a uniform fine grid with a resolution of 125 m × 105 m are used as reference solutions, and the corresponding normalized errors are given in Table 5. The elapsed CPU time of the AMR model is about 44% of that on the uniform grid with the same finest resolution. The AMR grid hardly affects accuracy, since the numerical errors after 10 hours are almost the same as those on the uniform grid with the same resolution.

Figure 11. Contour plots of horizontal velocity ($u$) of the Schär mountain case at (a) t = 7200 s, (b) t = 16800 s, (c) t = 26400 s, (d) t = 36000 s, and (e) the shape and size of the Schär mountain.

| Resolution (m) | Grid configuration | $l_1$ | $l_2$ | $l_\infty$ | CPU time (s) |
| 500 × 420 | 100 × 50 × 1 × 1 | 2.97 × 10−3 | 5.66 × 10−3 | 2.19 × 10−2 | 1 (4028.53) |
| 250 × 210 | 200 × 100 × 1 × 1 | 4.31 × 10−4 | 8.66 × 10−4 | 9.41 × 10−3 | 8.66 (34899.12) |
| | 100 × 50 × 2 × 2 | 8.61 × 10−4 | 1.28 × 10−3 | 1.03 × 10−2 | 3.83 (15429.89) |
| 125 × 105 | 400 × 200 × 1 × 1 | − | − | − | 83.10 (334779.90) |

Table 5. Normalized errors of the horizontal velocity ($u$) and elapsed CPU time for the Schär mountain test running on different grids. CPU times are normalized by that of the coarsest uniform grid, with the elapsed time in seconds given in parentheses.
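The five-peak terrain used in this test can be evaluated as follows; a minimal sketch assuming the standard Schär et al. (2002) profile (a Gaussian envelope modulated by cosine-squared oscillations), with the parameter values given above:

```python
import numpy as np

def schar_mountain(x, h0=250.0, a0=5000.0, lam0=4000.0):
    """Terrain height (m) of the Schär mountain at horizontal position x (m):
    a Gaussian envelope of half-width a0 modulated by cos^2 oscillations of
    wavelength lam0, producing five peaks around x = 0."""
    return h0 * np.exp(-(x / a0) ** 2) * np.cos(np.pi * x / lam0) ** 2
```

The terrain enters the dynamical core through the bottom boundary, e.g., via a terrain-following vertical coordinate.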