Impact of Perturbation Schemes on the Ensemble Prediction in a Coupled Lorenz Model


doi: 10.1007/s00376-022-1376-z

References

  • Bender, M. A., I. Ginis, R. Tuleya, B. Thomas, and T. Marchok, 2007: The operational GFDL coupled hurricane-ocean prediction system and a summary of its performance. Mon. Wea. Rev., 135, 3965−3989, https://doi.org/10.1175/2007MWR2032.1.
    Bishop, C. H., B. J. Etherton, and S. J. Majumdar, 2001: Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Mon. Wea. Rev., 129, 420−436, https://doi.org/10.1175/1520-0493(2001)129<0420:ASWTET>2.0.CO;2.
    Boffetta, G., P. Giuliani, G. Paladin, and A. Vulpiani, 1998: An extension of the Lyapunov analysis for the predictability problem. J. Atmos. Sci., 55, 3409−3416, https://doi.org/10.1175/1520-0469(1998)055<3409:AEOTLA>2.0.CO;2.
    Brier, G. W., 1950: Verification of forecasts expressed in terms of probability. Mon. Wea. Rev., 78, 1−3, https://doi.org/10.1175/1520-0493(1950)078<0001:VOFEIT>2.0.CO;2.
    Buckingham, C., T. Marchok, I. Ginis, L. Rothstein, and D. Rowe, 2010: Short- and medium-range prediction of tropical and transitioning cyclone tracks within the NCEP global ensemble forecasting system. Wea. Forecasting, 25, 1736−1754, https://doi.org/10.1175/2010WAF2222398.1.
    Candille, G., and O. Talagrand, 2005: Evaluation of probabilistic prediction systems for a scalar variable. Quart. J. Roy. Meteor. Soc., 131, 2131−2150, https://doi.org/10.1256/qj.04.71.
    Demeritt, D., H. Cloke, F. Pappenberger, J. Thielen, J. Bartholmes, and M.-H. Ramos, 2007: Ensemble predictions and perceptions of risk, uncertainty, and error in flood forecasting. Environmental Hazards, 7, 115−127, https://doi.org/10.1016/j.envhaz.2007.05.001.
    Ding, R. Q., and J. P. Li, 2007: Nonlinear finite-time Lyapunov exponent and predictability. Physics Letters A, 364, 396−400, https://doi.org/10.1016/j.physleta.2006.11.094.
    Ding, R. Q., and J. P. Li, 2012: Relationships between the limit of predictability and initial error in the uncoupled and coupled Lorenz models. Adv. Atmos. Sci., 29, 1078−1088, https://doi.org/10.1007/s00376-012-1207-8.
    Ding, R. Q., J. P. Li, and B. S. Li, 2017: Determining the spectrum of the nonlinear local Lyapunov exponents in a multidimensional chaotic system. Adv. Atmos. Sci., 34, 1027−1034, https://doi.org/10.1007/s00376-017-7011-8.
    Dong, B. W., R. T. Sutton, L. Shaffrey, and N. P. Klingaman, 2017: Attribution of forced decadal climate change in coupled and uncoupled ocean-atmosphere model experiments. J. Climate, 30, 6203−6223, https://doi.org/10.1175/JCLI-D-16-0578.1.
    Ehrendorfer, M., 1997: Predicting the uncertainty of numerical weather forecasts: A review. Meteor. Z., 6, 147−183, https://doi.org/10.1127/metz/6/1997/147.
    Evensen, G., 2003: The Ensemble Kalman Filter: Theoretical formulation and practical implementation. Ocean Dynamics, 53, 343−367, https://doi.org/10.1007/s10236-003-0036-9.
    Evensen, G., 2004: Sampling strategies and square root analysis schemes for the EnKF. Ocean Dynamics, 54, 539−560, https://doi.org/10.1007/s10236-004-0099-2.
    Feng, J., R. Q. Ding, D. Q. Liu, and J. P. Li, 2014: The application of nonlinear local Lyapunov vectors to ensemble predictions in Lorenz systems. J. Atmos. Sci., 71, 3554−3567, https://doi.org/10.1175/JAS-D-13-0270.1.
    Feng, J., R. Q. Ding, J. P. Li, and D. Q. Liu, 2016: Comparison of nonlinear local Lyapunov vectors with bred vectors, random perturbations and ensemble transform Kalman filter strategies in a barotropic model. Adv. Atmos. Sci., 33, 1036−1046, https://doi.org/10.1007/s00376-016-6003-4.
    Feng, J., J. P. Li, R. Q. Ding, and Z. Toth, 2018: Comparison of nonlinear local Lyapunov vectors and bred vectors in estimating the spatial distribution of error growth. J. Atmos. Sci., 75, 1073−1087, https://doi.org/10.1175/JAS-D-17-0266.1.
    Froude, L. S. R., L. Bengtsson, and K. I. Hodges, 2007: The prediction of extratropical storm tracks by the ECMWF and NCEP ensemble prediction systems. Mon. Wea. Rev., 135, 2545−2567, https://doi.org/10.1175/MWR3422.1.
    Fu, X. H., and B. Wang, 2004: Differences of boreal summer intraseasonal oscillations simulated in an atmosphere-ocean coupled model and an atmosphere-only model. J. Climate, 17, 1263−1271, https://doi.org/10.1175/1520-0442(2004)017<1263:DOBSIO>2.0.CO;2.
    Hou, Z. L., J. P. Li, R. Q. Ding, J. Feng, and W. S. Duan, 2018: The application of nonlinear local Lyapunov vectors to the Zebiak-Cane model and their performance in ensemble prediction. Climate Dyn., 51, 283−304, https://doi.org/10.1007/s00382-017-3920-6.
    Hunt, B. R., E. J. Kostelich, and I. Szunyogh, 2007: Efficient data assimilation for spatiotemporal chaos: A local ensemble transform Kalman filter. Physica D: Nonlinear Phenomena, 230, 112−126, https://doi.org/10.1016/j.physd.2006.11.008.
    Larson, S. M., and B. P. Kirtman, 2017: Drivers of coupled model ENSO error dynamics and the spring predictability barrier. Climate Dyn., 48, 3631−3644, https://doi.org/10.1007/s00382-016-3290-5.
    Leith, C. E., 1974: Theoretical skill of Monte Carlo forecasts. Mon. Wea. Rev., 102, 409−418, https://doi.org/10.1175/1520-0493(1974)102<0409:TSOMCF>2.0.CO;2.
    Leutbecher, M., and T. N. Palmer, 2008: Ensemble forecasting. J. Comput. Phys., 227, 3515−3539, https://doi.org/10.1016/j.jcp.2007.02.014.
    Liu, Z. Y., S. Wu, S. Q. Zhang, Y. Liu, and X. Y. Rong, 2013: Ensemble data assimilation in a simple coupled climate model: The role of ocean-atmosphere interaction. Adv. Atmos. Sci., 30, 1235−1248, https://doi.org/10.1007/s00376-013-2268-z.
    Lorenz, E. N., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130−141, https://doi.org/10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2.
    Lorenz, E. N., 1969: The predictability of a flow which possesses many scales of motion. Tellus, 21, 289−307, https://doi.org/10.1111/j.2153-3490.1969.tb00444.x.
    Lorenz, E. N., 1982: Atmospheric predictability experiments with a large numerical model. Tellus, 34, 505−513, https://doi.org/10.3402/tellusa.v34i6.10836.
    Magnusson, L., M. Leutbecher, and E. Källén, 2008: Comparison between singular vectors and breeding vectors as initial perturbations for the ECMWF ensemble prediction system. Mon. Wea. Rev., 136, 4092−4104, https://doi.org/10.1175/2008MWR2498.1.
    Mogensen, K. S., L. Magnusson, and J. R. Bidlot, 2017: Tropical cyclone sensitivity to ocean coupling in the ECMWF coupled model. J. Geophys. Res., 122, 4392−4412, https://doi.org/10.1002/2017JC012753.
    Mu, M., and Z. N. Jiang, 2008: A new approach to the generation of initial perturbations for ensemble prediction: Conditional nonlinear optimal perturbation. Chinese Science Bulletin, 53(13), 2062−2068, https://doi.org/10.1007/s11434-008-0272-y.
    Ndione, D. M., S. Sambou, S. Kane, S. Diatta, M. L. Sane, and I. Leye, 2020: Ensemble forecasting system for the management of the Senegal River discharge: Application upstream the Manantali dam. Applied Water Science, 10, 126, https://doi.org/10.1007/s13201-020-01199-y.
    Palmer, T., R. Buizza, R. Hagedorn, A. Lawrence, M. Leutbecher, and L. Smith, 2006: Ensemble prediction: A pedagogical perspective. ECMWF Newsletter, 106, 10−17, https://doi.org/10.21957/ab129056ew.
    Palmer, T. N., F. Molteni, R. Mureau, R. Buizza, P. Chapelet, and J. Tribbia, 1992: Ensemble prediction. ECMWF Technical Memorandum, No. 188, 85 pp.
    Perlin, N., I. Kamenkovich, Y. Gao, and B. P. Kirtman, 2020: A study of mesoscale air-sea interaction in the Southern Ocean with a regional coupled model. Ocean Modelling, 153, 101660, https://doi.org/10.1016/j.ocemod.2020.101660.
    Ratnam, J. V., F. Giorgi, A. Kaginalkar, and S. Cozzini, 2009: Simulation of the Indian monsoon using the RegCM3-ROMS regional coupled model. Climate Dyn., 33, 119−139, https://doi.org/10.1007/s00382-008-0433-3.
    Soloviev, A. V., R. Lukas, M. A. Donelan, B. K. Haus, and I. Ginis, 2014: The air-sea interface and surface stress under tropical cyclones. Scientific Reports, 4, 5306, https://doi.org/10.1038/srep05306.
    Stephenson, D. B., C. A. S. Coelho, and I. T. Jolliffe, 2008: Two extra components in the Brier score decomposition. Wea. Forecasting, 23, 752−757, https://doi.org/10.1175/2007WAF2006116.1.
    Talagrand, O., R. Vautard, and B. Strauss, 1997: Evaluation of probabilistic prediction systems. Proc. ECMWF Workshop on Predictability, Shinfield Park, Reading, ECMWF, 1−25.
    Thompson, B., C. Sanchez, X. M. Sun, G. T. Song, J. Y. Liu, X.-Y. Huang, and P. Tkalich, 2019: A high-resolution atmosphere-ocean coupled model for the western Maritime Continent: Development and preliminary assessment. Climate Dyn., 52, 3951−3981, https://doi.org/10.1007/s00382-018-4367-0.
    Toth, Z., and E. Kalnay, 1993: Ensemble forecasting at NMC: The generation of perturbations. Bull. Amer. Meteor. Soc., 74, 2317−2330, https://doi.org/10.1175/1520-0477(1993)074<2317:EFANTG>2.0.CO;2.
    Toth, Z., and E. Kalnay, 1997: Ensemble forecasting at NCEP and the breeding method. Mon. Wea. Rev., 125, 3297−3319, https://doi.org/10.1175/1520-0493(1997)125<3297:EFANAT>2.0.CO;2.
    Vannitsem, S., 2017: Predictability of large-scale atmospheric motions: Lyapunov exponents and error dynamics. Chaos, 27, 032101, https://doi.org/10.1063/1.4979042.
    Vannitsem, S., and W. S. Duan, 2020: On the use of near-neutral Backward Lyapunov Vectors to get reliable ensemble forecasts in coupled ocean-atmosphere systems. Climate Dyn., 55, 1125−1139, https://doi.org/10.1007/s00382-020-05313-3.
    Wang, B., Q. H. Ding, X. H. Fu, I.-S. Kang, K. Jin, J. Shukla, and F. Doblas-Reyes, 2005: Fundamental challenge in simulation and prediction of summer monsoon rainfall. Geophys. Res. Lett., 32, L15711, https://doi.org/10.1029/2005GL022734.
    Wang, X., and C. H. Bishop, 2003: A comparison of breeding and ensemble transform Kalman filter ensemble forecast schemes. J. Atmos. Sci., 60, 1140−1158, https://doi.org/10.1175/1520-0469(2003)060<1140:ACOBAE>2.0.CO;2.
    Wang, Z. R., D. X. Wu, D. K. Chen, H. D. Wu, X. J. Song, and Z. H. Zhang, 2002: Critical time span and nonlinear action structure of climatic atmosphere and ocean. Adv. Atmos. Sci., 19, 741−756, https://doi.org/10.1007/s00376-002-0013-0.
    Wei, M. Z., Z. Toth, R. Wobus, Y. J. Zhu, C. H. Bishop, and X. G. Wang, 2006: Ensemble Transform Kalman Filter-based ensemble perturbations in an operational global prediction system at NCEP. Tellus A, 58, 28−44, https://doi.org/10.1111/j.1600-0870.2006.00159.x.
    Wei, M. Z., Z. Toth, R. Wobus, and Y. J. Zhu, 2008: Initial perturbations based on the ensemble transform (ET) technique in the NCEP global operational forecast system. Tellus A, 60, 62−79, https://doi.org/10.1111/j.1600-0870.2007.00273.x.
    Wolf, A., J. B. Swift, H. L. Swinney, and J. A. Vastano, 1985: Determining Lyapunov exponents from a time series. Physica D: Nonlinear Phenomena, 16, 285−317, https://doi.org/10.1016/0167-2789(85)90011-9.
    Wu, C.-C., and Coauthors, 2009: Intercomparison of targeted observation guidance for tropical cyclones in the northwestern Pacific. Mon. Wea. Rev., 137, 2471−2492, https://doi.org/10.1175/2009MWR2762.1.
    Zhang, S., M. J. Harrison, A. T. Wittenberg, A. Rosati, J. L. Anderson, and V. Balaji, 2005: Initialization of an ENSO forecast system using a parallelized ensemble filter. Mon. Wea. Rev., 133, 3176−3201, https://doi.org/10.1175/MWR3024.1.
    Zhang, Z., and T. N. Krishnamurti, 1999: A perturbation method for hurricane ensemble predictions. Mon. Wea. Rev., 127, 447−469, https://doi.org/10.1175/1520-0493(1999)127<0447:APMFHE>2.0.CO;2.
    Zhou, T. J., L. Ding, J. Ji, L. Li, and W. W. Huang, 2019: Ensemble transform Kalman filter (ETKF) for large-scale wildland fire spread simulation using FARSITE tool and state estimation method. Fire Safety Journal, 105, 95−106, https://doi.org/10.1016/j.firesaf.2019.02.009.
    Zou, L. W., T. J. Zhou, and D. D. Peng, 2016: Dynamical downscaling of historical climate over CORDEX East Asia domain: A comparison of regional ocean-atmosphere coupled model to stand-alone RCM simulations. J. Geophys. Res., 121, 1442−1458, https://doi.org/10.1002/2015JD023912.

Manuscript History

Manuscript received: 23 September 2021
Manuscript revised: 19 May 2022
Manuscript accepted: 01 June 2022

    Corresponding author: Ruiqiang DING, drq@bnu.edu.cn
  • 1. State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics (LASG), Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing 100029, China
  • 2. College of Earth Science, University of Chinese Academy of Sciences, Beijing 100049, China
  • 3. State Key Laboratory of Earth Surface Processes and Resource Ecology, Beijing Normal University, Beijing 100875, China
  • 4. Frontiers Science Center for Deep Ocean Multispheres and Earth System (FDOMES)/Key Laboratory of Physical Oceanography/Institute for Advanced Ocean Studies, Ocean University of China, Qingdao 266100, China
  • 5. Laboratory for Ocean Dynamics and Climate, Pilot Qingdao National Laboratory for Marine Science and Technology, Qingdao 266237, China
  • 6. Department of Atmospheric and Oceanic Sciences and Institute of Atmospheric Sciences, Fudan University, Shanghai 200438, China

Abstract: Based on a simple coupled Lorenz model, we investigate how to assess a suitable initial perturbation scheme for ensemble forecasting in a multiscale system involving slow dynamics and fast dynamics. Four initial perturbation approaches are used in the ensemble forecasting experiments: the random perturbation (RP), the bred vector (BV), the ensemble transform Kalman filter (ETKF), and the nonlinear local Lyapunov vector (NLLV) methods. Results show that, regardless of the method used, the ensemble averages behave indistinguishably from the control forecasts during the first few time steps. Due to different error growth in different time-scale systems, the ensemble averages perform better than the control forecast after very short lead times in the fast subsystem but only after a relatively long period of time in the slow subsystem. Due to the coupled dynamic processes, adding perturbations to either the fast variables or the slow variables contributes to an improvement in the forecasting skill for both fast and slow variables. Regarding the initial perturbation approaches, the NLLVs show higher forecasting skill than the BVs or RPs overall; the NLLVs and ETKFs have nearly equivalent prediction skill, with the NLLVs performing best by a narrow margin. In particular, when perturbations are added to the slow variables, the independent perturbations (NLLVs and ETKFs) perform much better in the ensemble prediction. These results can be simply extended to a real coupled air–sea model: for the prediction of oceanic variables, using independent perturbations (NLLVs) and adding perturbations to oceanic variables are expected to result in better ensemble prediction performance.

Chinese abstract (translated): Based on a simple coupled Lorenz model, this paper investigates how to construct initial perturbations for ensemble forecasting in a multiscale model. Four initial perturbation methods are used in the ensemble forecasting experiments: random perturbations (RP), bred vectors (BV), the ensemble transform Kalman filter (ETKF), and nonlinear local Lyapunov vectors (NLLV). The results show that, regardless of the method used, the ensemble mean is close to the control forecast in the initial stage of the forecast. The coupled Lorenz model consists of a slow system coupled with a fast system. Because errors grow differently in systems of different time scales, the ensemble mean begins to outperform the control forecast after a relatively short time in the fast system, whereas in the slow system the ensemble forecast takes effect only after a relatively long time. In addition, owing to the mutual feedback between the different scales, adding perturbations to either the fast variables or the slow variables helps to improve the forecast skill for both the slow and the fast systems. Comparing the different methods of generating initial ensemble perturbations, the NLLVs outperform the BVs and RPs overall, and the forecast skill of the NLLVs and ETKFs is nearly equivalent. When perturbations are added to the slow variables, the independent perturbations (NLLVs and ETKFs) show better skill in the ensemble forecasts. Extending the results of the simple model to a real coupled air–sea model, we infer that, for the prediction of oceanic variables, using independent perturbations (NLLVs) and adding perturbations to the oceanic variables would yield better ensemble forecast performance.

    1.   Introduction
    • In recent years, air–sea coupled models, which describe the interactions between the atmosphere and the ocean, have been applied more extensively to simulate weather and climate phenomena (Bender et al., 2007; Zou et al., 2016; Larson and Kirtman, 2017; Mogensen et al., 2017). Air–sea coupling plays an important role in the simulation of weather and climate (Dong et al., 2017; Thompson et al., 2019). At the air–sea interface, material and energy are exchanged, accompanied by many complex physical processes (Soloviev et al., 2014). Coupled models can describe these coupled feedback processes better than atmosphere-only models (Perlin et al., 2020). Hence, the simulation of weather and climate phenomena can be improved by using a coupled air–sea model (Fu and Wang, 2004; Wang et al., 2005; Ratnam et al., 2009; Dong et al., 2017).

      However, the simulation of weather and climate phenomena using coupled air–sea models involves many uncertainties, including initial condition uncertainty (Lorenz, 1969, 1982) and model uncertainty (Leutbecher and Palmer, 2008). Ensemble prediction has been developed to account for these uncertainties (Leith, 1974; Ehrendorfer, 1997; Demeritt et al., 2007): it generates ensemble members by adding perturbations to the analysis state (Magnusson et al., 2008). The mean of the ensemble members can reduce the errors relative to a single forecast, and a finite number of ensemble members allows a quantitative estimate of the probability density of the forecast state (Froude et al., 2007; Leutbecher and Palmer, 2008; Feng et al., 2014).

      Here, we focus mainly on ensemble prediction related to initial condition uncertainty. The key to constructing initial perturbations is to generate several initial states that can represent the initial uncertainty (Zhang and Krishnamurti, 1999). Many ensemble initial perturbation methods have been developed, such as the Monte Carlo method [also called the random perturbation (RP) method (Leith, 1974)], the bred vector (BV) method (Toth and Kalnay, 1993, 1997), the singular vector (SV) method (Palmer et al., 1992), the ensemble transform Kalman filter (ETKF) method (Wang and Bishop, 2003), the ensemble transform with rescaling (ETR) method (Wei et al., 2006, 2008), the conditional nonlinear optimal perturbation (CNOP) method (Mu and Jiang, 2008), and the nonlinear local Lyapunov vector (NLLV) method (Feng et al., 2014, 2016, 2018; Ding et al., 2017).

      Many studies have focused on ensemble prediction in atmosphere-only or ocean-only models, but it has not been explored extensively in coupled air–sea models. Ensemble prediction in coupled models is more complex because of the different time scales of the ocean and the atmosphere (Liu et al., 2013), and an initial error can evolve on different time scales (Vannitsem, 2017). In addition, the feedback processes between the coupled components make the system highly sensitive to errors (Zhang et al., 2005). Hence, important issues remain to be explored for ensemble forecasting in coupled models, which contain feedback processes across different time scales.

      Therefore, this paper investigates how to add appropriate ensemble initial perturbations to a multiscale system, based on multiple initial perturbation methods. The system used is the coupled Lorenz model, characterized by a slow subsystem coupled with a fast subsystem (Boffetta et al., 1998; Ding and Li, 2012). The fast subsystem fluctuates approximately 10 times faster than the slow subsystem, which is close to the relative time scale between the atmosphere and the ocean (Wang et al., 2002). Therefore, the coupled Lorenz model can be regarded as a toy coupled air–sea model.

      The remainder of this paper is organized as follows. Section 2 introduces the coupled Lorenz model and the algorithms to obtain the BVs, ETKFs, and NLLVs. Section 3 presents the properties of RPs, BVs, ETKFs, and NLLVs in the multiscale system. Section 4 is a summary and discussion of our major findings.

    2.   Model and methodology
    • The model used in this study is the coupled Lorenz model, which couples two simple Lorenz-63 models (Lorenz, 1963) with different time scales: the first characterizes the slow dynamics, and the second characterizes the fast dynamics (Boffetta et al., 1998; Ding and Li, 2012). The governing equations are as follows:

      where the superscripts $({\rm{s}})$ and $({\rm{f}})$ denote the slow dynamics and the fast dynamics, respectively. The physical parameters of the above equations are listed in Table 1. The relative time scale $c$ is a constant set to 10, indicating that the fast dynamics fluctuate approximately 10 times faster than the slow dynamics; this is close to the relative time scale between the ocean and the atmosphere, which is about 9 (Wang et al., 2002). The fast variables fluctuate much more rapidly than the slow variables (Fig. 1). The uncoupled slow and fast Lorenz models (coupling coefficients ${\varepsilon _{\rm{s}}} = 0$, ${\varepsilon _{\rm{f}}} = 0$) exhibit chaotic dynamics, with their Lyapunov exponents greater than zero. With ${\varepsilon _{\rm{s}}} = {10^{ - 2}}$ and ${\varepsilon _{\rm{f}}} = 10$, the maximal Lyapunov exponent of the coupled Lorenz model is 11.5, close to the value for the uncoupled fast Lorenz model (Boffetta et al., 1998), indicating that it is the error growth of the fast subsystem that determines the maximal Lyapunov exponent of the coupled Lorenz model.

      Parameter | Description | Value
      $ \sigma $ | Prandtl number | 10
      $ b $ | Physical dimensions of the layer | 8/3
      $ c $ | Relative time scale | 10
      $ {r_{\rm{s}}} $ | Rayleigh number of the slow dynamics | 28
      ${r_{\rm{f}}}$ | Rayleigh number of the fast dynamics | 45
      $ {\varepsilon _{\rm{s}}} $ | Coupling coefficient of the slow dynamics | ${10^{ - 2}}$
      $ {\varepsilon _{\rm{f}}} $ | Coupling coefficient of the fast dynamics | 10

      Table 1.  Physical parameters used in the coupled Lorenz model

      Figure 1.  Time evolution of variables for the coupled Lorenz model: (a) slow variables and (b) fast variables.

      With the physical parameters given in Table 1, the attractor of the coupled system shows interesting structure. Two-dimensional projections of the attractor are shown in Fig. 2. The slow dynamics resemble a typical Lorenz attractor (Figs. 2a–c), whereas the fast dynamics appear much more chaotic (Figs. 2d–f): the attractor orbit of the fast dynamics is denser and is traversed more rapidly.

      Figure 2.  Projections of the coupled Lorenz model on three two-dimensional planes: (a)–(c) for the slow variables and (d)–(f) for the fast variables.
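
      To make the setup concrete, the sketch below (in Python) integrates a six-variable coupled Lorenz system with the fourth-order Runge–Kutta scheme and the 0.005-tus step used in this paper, taking the parameters from Table 1. The form and placement of the coupling terms here (a simple linear coupling through the y-equations) are only placeholders for illustration and should be replaced by the paper's governing equations.

```python
import numpy as np

# Parameters from Table 1
SIGMA, B, C = 10.0, 8.0 / 3.0, 10.0      # Prandtl number, layer dimension, relative time scale
R_S, R_F = 28.0, 45.0                     # Rayleigh numbers of the slow and fast dynamics
EPS_S, EPS_F = 1e-2, 10.0                 # coupling coefficients

def coupled_lorenz(state):
    """Tendencies of a six-variable coupled Lorenz model.

    NOTE: the coupling terms below (linear coupling through the y-equations)
    are placeholders for illustration only; substitute the paper's governing
    equations for the actual form of the coupling.
    """
    xs, ys, zs, xf, yf, zf = state
    dxs = SIGMA * (ys - xs)
    dys = R_S * xs - ys - xs * zs - EPS_S * yf        # slow dynamics + placeholder coupling
    dzs = xs * ys - B * zs
    dxf = C * SIGMA * (yf - xf)
    dyf = C * (R_F * xf - yf - xf * zf) + EPS_F * ys  # fast dynamics (c times faster) + placeholder coupling
    dzf = C * (xf * yf - B * zf)
    return np.array([dxs, dys, dzs, dxf, dyf, dzf])

def rk4_step(state, dt=0.005):
    """One fourth-order Runge-Kutta step; 1 step = 0.005 tus as in the paper."""
    k1 = coupled_lorenz(state)
    k2 = coupled_lorenz(state + 0.5 * dt * k1)
    k3 = coupled_lorenz(state + 0.5 * dt * k2)
    k4 = coupled_lorenz(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Spin-up: discard the first 10 000 steps so the trajectory settles onto the attractor
state = np.ones(6)
for _ in range(10_000):
    state = rk4_step(state)
```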

    • We use four methods to generate initial perturbations: RP, BV, ETKF, and NLLV. A brief description of the BV, NLLV, and ETKF methods follows.

    • The BV method is based on the rationale that any initial random errors in the basic flow will evolve toward the fastest-growing directions (the leading Lyapunov vectors) in phase space (Toth and Kalnay, 1993, 1997; Feng et al., 2014). The generation of BVs proceeds as follows. First, a group of small initial random perturbations is added to the analysis state. After a period of integration (a breeding cycle), the differences between the control and perturbed forecasts are rescaled to the size of the initial perturbations, and the rescaled difference fields are added to the subsequent analysis. After several breeding cycles, the perturbation evolves into a fast-growing perturbation, yielding the BVs. The repeated process can be described mathematically as:

      $${{\boldsymbol{x}}_{\rm{p}}}({t_{i + 1}}) = {{\boldsymbol{x}}_{\rm{c}}}({t_{i + 1}}) + {\varepsilon _0}\,{{\boldsymbol{p}}}/{\left\| {\boldsymbol{p}} \right\|} ,$$

      where ${{\boldsymbol{x}}_{\rm{c}}}$ and ${{\boldsymbol{x}}_{\rm{p}}}$ represent the control trajectory and the perturbed trajectory, respectively. The term ${\varepsilon _0}\,{\boldsymbol{p}}/\left\| {\boldsymbol{p}} \right\|$ represents the rescaling, where ${\varepsilon _0}$ is a scaling factor and ${\boldsymbol{p}}$ is the difference between the control and perturbed forecasts at the end of the breeding cycle.
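
      As an illustration of the breeding cycle just described, the minimal sketch below (reusing rk4_step and NumPy from the model sketch above) integrates a control and a perturbed trajectory over 0.05-tus cycles and rescales the evolved difference back to the initial amplitude. Using a free control run as the basic flow is a simplification; in the paper, breeding is performed about the assimilated analysis trajectory.

```python
def breed_vector(x0, random_pert, eps0=1e-2, n_cycles=20, steps_per_cycle=10, dt=0.005):
    """Generate one bred vector by repeated integrate-and-rescale cycles.

    x0          : analysis state at the start of breeding
    random_pert : small initial random perturbation (same shape as the state)
    A breeding cycle of 0.05 tus (10 steps of 0.005 tus) repeated 20 times
    matches the setup described in the text.
    """
    xc = x0.copy()                                     # control (basic-flow) trajectory
    xp = x0 + eps0 * random_pert / np.linalg.norm(random_pert)
    for _ in range(n_cycles):
        for _ in range(steps_per_cycle):               # integrate both trajectories one cycle
            xc = rk4_step(xc, dt)
            xp = rk4_step(xp, dt)
        p = xp - xc                                    # evolved difference field
        xp = xc + eps0 * p / np.linalg.norm(p)         # rescale to the initial size
    bv = xp - xc
    return bv / np.linalg.norm(bv)                     # unit-norm bred vector
```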

    • NLLVs are a nonlinear extension of the Lyapunov vectors (LVs) and are similar to BVs (Feng et al., 2014; Hou et al., 2018). Unlike BVs, however, different NLLVs are independent of one another and represent the fastest directions of error growth in different subspaces of the phase space. The generation of NLLVs is as follows (Feng et al., 2014, 2016). As shown in Fig. 3, the leading NLLV (NLLV1), the fastest-growing direction, is obtained via a breeding process similar to that used to create a BV. The remaining NLLVs are obtained in each breeding cycle via a Gram–Schmidt re-orthonormalization (GSR) process (Wolf et al., 1985; Feng et al., 2014): the evolved perturbations (grey dashed lines in Fig. 3) are orthogonalized with respect to the leading NLLVs (NLLVn is orthogonalized with respect to NLLV1, NLLV2, NLLV3, …, NLLVn−1). The orthogonalized perturbations are then rescaled to the initial size and enter the next breeding cycle. After multiple breeding cycles, the NLLVs are produced. In this paper, the breeding cycle for generating BVs and NLLVs lasts 0.05 time units (tus) and is repeated 20 times.

      Figure 3.  Schematic diagram of the generation of NLLVs [adapted from Hou et al. (2018)]. The creation of NLLV1 is similar to the creation of BV. To acquire the NLLV2, a pair of RPs is initially added to the analysis state. The evolved perturbations (grey dashed line) are orthogonalized with the NLLV1 (blue dashed line) to produce the NLLV2 (green dashed line) using a Gram–Schmidt re-orthonormalization (GSR) procedure. Similarly, the vectors NLLVn are orthogonalized with NLLV1, NLLV2, NLLV3, …, NLLVn–1.
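
      The GSR step itself is a short computation; a minimal sketch (NumPy assumed, as above) is given below, in which each evolved perturbation is orthogonalized against the directions already obtained and rescaled to the initial amplitude before the next breeding cycle.

```python
def gsr(evolved_perts, eps0=1e-2):
    """Gram-Schmidt re-orthonormalization of evolved perturbations.

    evolved_perts : array of shape (k, n); row 0 is the leading (fastest-growing)
    direction, and row i is made orthogonal to rows 0..i-1, then rescaled to eps0.
    """
    v = np.array(evolved_perts, dtype=float)
    for i in range(len(v)):
        for j in range(i):
            v[i] -= np.dot(v[i], v[j]) / np.dot(v[j], v[j]) * v[j]  # remove projection onto earlier NLLVs
        v[i] *= eps0 / np.linalg.norm(v[i])                          # rescale to the initial size
    return v
```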

    • The ETKF method, initially introduced by Bishop et al. (2001), was derived from ensemble-based data assimilation theory, which is associated with Kalman filtering (Wang and Bishop, 2003; Wei et al., 2006; Wu et al., 2009). Similar to the ensemble Kalman filter (EnKF), the ETKF method applies Kalman filtering to generate a sample analysis ensemble. However, the ETKF only uses the forecast error covariance matrix to estimate the analysis error covariance through a transformation matrix, not updating the mean state (Wang and Bishop, 2003; Zhou et al., 2019). The equation for the ETKF algorithm is as follows:

      $${{\boldsymbol{X}}_{\rm{a}}} = {{\boldsymbol{X}}_{\rm{f}}}\,{\boldsymbol{T}} ,$$

      where ${{\boldsymbol{X}}_{\rm{a}}}$ and ${{\boldsymbol{X}}_{\text{f}}}$ denote the analysis perturbation matrix and the forecast perturbation matrix, respectively, and ${\boldsymbol{T}}$ is a transformation matrix. The detailed computation follows Hunt et al. (2007). Localization is not used, and a multiplicative covariance inflation factor (set to 1.3) is applied. Observations are produced by adding random perturbations (drawn from a standard Gaussian distribution) to the true state. We use an ensemble size of 20, assimilate observations every 0.05 tus, and run the assimilation for over 1 tus.

      Studies have shown that the ETKF can be used to generate ensemble perturbations and outperform most ensemble generation schemes in sampling the analysis uncertainties (Wei et al., 2006; Feng et al., 2016). One of the greatest attributes of ETKFs is that they are orthogonal in observational space (Wang and Bishop, 2003; Wei et al., 2006; Feng et al., 2016).
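
      A minimal sketch of the transform step is given below. It uses the symmetric square-root form T = C(Γ + I)^(−1/2) C^T obtained by eigendecomposition of the ensemble-space matrix (H X_f)^T R^(−1) (H X_f)/(k − 1); the paper follows Hunt et al. (2007), whose local formulation differs in detail, so the observation operator H, the observation-error covariance R, and this particular form of T are illustrative assumptions.

```python
def etkf_perturbations(Xf, H, R_inv, inflation=1.3):
    """Analysis perturbations Xa = Xf @ T via a symmetric square-root ETKF transform.

    Xf    : (n, k) forecast perturbations (deviations from the ensemble mean)
    H     : (p, n) observation operator
    R_inv : (p, p) inverse observation-error covariance
    """
    k = Xf.shape[1]
    HXf = H @ Xf
    A = HXf.T @ R_inv @ HXf / (k - 1)                    # (k, k) matrix in ensemble space
    gamma, Cmat = np.linalg.eigh(A)                      # eigenvalues and eigenvectors
    T = Cmat @ np.diag(1.0 / np.sqrt(gamma + 1.0)) @ Cmat.T
    return inflation * (Xf @ T)                          # multiplicative inflation (1.3 in the paper)
```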

    • To characterize the evolution of the initial perturbations in a multiscale system as thoroughly as possible, we performed several ensemble forecasting experiments in the coupled Lorenz model based on the RP, BV, ETKF, and NLLV methods. In all experiments, the model is integrated with a fourth-order Runge-Kutta scheme and a time step of 0.005 tus. The procedure for the ensemble forecasting experiments is shown in Fig. 4. The first 10 000 steps are a spin-up of the coupled Lorenz model. After spin-up, we use a 200-step ensemble Kalman filter (EnKF) data assimilation scheme (Evensen, 2003, 2004) to create the initial analysis state. The parameter settings of the EnKF assimilation procedure are the same as for the ETKF scheme. The assimilation cycle is 0.05 tus, which conveniently maps onto a 6-hour assimilation window in the real world; hence, 1 tus in this paper corresponds to roughly 5 days in the real world. During the assimilation process, the BV and NLLV perturbations are calculated using the assimilated data as the basic flow. The ensemble perturbations created by the RP, BV, ETKF, and NLLV methods are then added to the analysis state in pairs (both positive and negative perturbations are added), with the perturbation vectors scaled to 1 × 10−2. The integration from the unperturbed analysis state is the control forecast, and the perturbed forecasts are the ensemble members. Increasing the number of ensemble members improves the prediction skill of the ensemble forecasts driven by BVs, NLLVs, and ETKFs (not shown); in this paper the ensemble size is six pairs (positive and negative perturbations superimposed in pairs). We ran 10 000 samples of the ensemble forecast (repeating the assimilation/breeding and forecasting processes), with the initial states of consecutive samples separated by one integration step, so that the 10 000 samples cover a representative range of coupled-model states.

      Figure 4.  Illustration of the initialization and forecasting procedure. Numbers represent the integration steps, and 1 step = 0.005 tus.
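
      The forecasting stage of Fig. 4 can be sketched as below (reusing rk4_step and NumPy from the model sketch): paired positive and negative copies of the scaled perturbations are added to the analysis, and the control and ensemble members are integrated forward. The spread definition used here (the standard deviation about the ensemble mean) is an assumption about the paper's convention.

```python
def forecast_sample(x_analysis, perts, n_steps=2000, dt=0.005):
    """Run a control forecast and a paired-perturbation ensemble from one analysis.

    perts : (m, 6) perturbation vectors already scaled to 1e-2; positive and
    negative copies give 2*m members (six pairs = 12 members in the paper).
    Returns trajectories of the control, ensemble mean, and ensemble spread.
    """
    members = np.array([x_analysis + s * p for p in perts for s in (+1.0, -1.0)])
    control = x_analysis.copy()
    ctrl_traj, mean_traj, spread_traj = [], [], []
    for _ in range(n_steps):
        control = rk4_step(control, dt)
        members = np.array([rk4_step(m, dt) for m in members])
        ctrl_traj.append(control.copy())
        mean_traj.append(members.mean(axis=0))
        spread_traj.append(members.std(axis=0, ddof=1))   # spread about the ensemble mean
    return np.array(ctrl_traj), np.array(mean_traj), np.array(spread_traj)
```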

    • To evaluate the reliability of the ensemble predictions, a classical Brier score is applied to assess the relative skill of the BV compared with that of NLLV and ETKF. For any event $\phi $, the Brier score (Brier, 1950) is computed as:

      $${\rm{BS}} = \frac{1}{N}\sum\limits_{i = 1}^N {{{({f_i} - {o_i})}^2}} ,$$

      where $N$ is the number of samples, ${f_i}$ denotes the forecast probability of event $\phi $ for the i-th sample, and ${o_i}$ denotes the probability of event $\phi $ actually occurring for the i-th sample (which can take only the values 0 or 1).
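
      The score itself is straightforward to compute, as sketched below; the helper that turns an ensemble into a forecast probability uses an assumed reading of the events ϕ1 and ϕ2 (a one-standard-deviation departure from the climatological mean).

```python
def brier_score(forecast_probs, outcomes):
    """Brier score: mean squared difference between the forecast probability f_i
    and the binary outcome o_i over N samples."""
    f = np.asarray(forecast_probs, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return np.mean((f - o) ** 2)

def event_probability(member_values, clim_mean, clim_std):
    """Fraction of ensemble members for which the predicted variable departs from
    its climatological mean by more than one standard deviation (assumed reading
    of the events phi_1 and phi_2)."""
    return np.mean(np.abs(np.asarray(member_values) - clim_mean) > clim_std)
```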

    3.   Results
    • Before evaluating the quality of the ensemble predictions, we investigated the errors of the control forecast. We assumed that the model is perfect and took the true state from a long model run for each sample. As shown in Fig. 5, a large difference exists between the control forecast and the true state. Over the whole system, the control and true trajectories exhibit rapid fluctuations (Fig. 5a). When the coupled Lorenz system is separated into fast and slow subsystems, the fast subsystem shows characteristics similar to those of the whole system (Fig. 5b). In contrast, the two time series in the slow subsystem fluctuate slowly and do not show a clear separation until about 4 tus into the simulation (Fig. 5c). Given the relatively large difference between the control forecast and the true state, we use the error growth rate in the form of a Lyapunov exponent to measure the evolution of the control-run forecast error. The initial analysis error grows steadily over time, and the forecast error of the control run comes mainly from the fast subsystem. The error growth also differs between the two subsystems, with the error growing much faster in the fast subsystem than in the slow subsystem (Fig. 5d). The error growth rate in Lyapunov-exponent form is defined as follows:

      Figure 5.  Panels (a)–(c) Evolution of control forecasts (light blue) against the true state (light red) as a function of lead time for (a) the whole system, (b) the fast subsystem, and (c) the slow subsystem (in the Euclidean norm). (d) Mean growth rate in the form of Lyapunov exponent (value × 100) of 10000 samples as a function of lead time from the coupled Lorenz model for the control run [the whole system (light purple), fast subsystem (light orange), and slow subsystem (light blue)].

      $$\lambda (t) = \frac{1}{{t - {t_0}}}\ln \frac{{\left\| {\Delta {\boldsymbol{x}}(t)} \right\|}}{{\left\| {\Delta {\boldsymbol{x}}({t_0})} \right\|}} ,$$

      where ${t_0}$ is the initial time and $\left\| {\Delta {\boldsymbol{x}}(t)} \right\|$ denotes the error size in the Euclidean norm at time $t$.
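
      As a sketch, the mean growth-rate curves in Fig. 5d can be computed from the sampled error norms as follows (the array layout is an assumption).

```python
def mean_growth_rate(error_norms, lead_times):
    """Mean error growth rate in Lyapunov-exponent form, averaged over samples.

    error_norms : (n_samples, n_times) Euclidean error norms; column 0 holds
                  the initial error at t0.
    lead_times  : (n_times,) array of times, with lead_times[0] = t0.
    """
    err = np.asarray(error_norms, dtype=float)
    t = np.asarray(lead_times, dtype=float)
    lam = np.log(err[:, 1:] / err[:, :1]) / (t[1:] - t[0])  # lambda(t) for each sample
    return lam.mean(axis=0)                                  # average over the 10 000 samples
```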

      Studies have proven that ensemble forecasts improve on the control forecast (Toth and Kalnay, 1997; Ndione et al., 2020). Running an ensemble of forecasts by adding perturbations to the initial conditions allows the ensemble mean to improve the prediction by filtering out unpredictable components, and the spread among the forecasts provides a probabilistic prediction (Toth and Kalnay, 1993). To explore an appropriate configuration of ensemble initial perturbations in a multiscale system, a series of ensemble forecast experiments is conducted in this section with multiple perturbation methods (RP, BV, ETKF, and NLLV). The root-mean-square error (RMSE) of the ensemble mean and the ensemble spread are used to measure the forecast skill of the experiments. In a "perfect ensemble", the ensemble spread is close to the RMSE of the ensemble mean at all forecast times (Palmer et al., 2006; Magnusson et al., 2008; Buckingham et al., 2010). Considering the different error growth in the fast and slow subsystems, we discuss them separately. In Fig. 6, the mean RMSE (solid lines) and ensemble spread (dashed lines) are plotted for the control (black), RP (red), BV (blue), ETKF (purple), and NLLV (green) forecasts after adding perturbations to all variables. The RMSE oscillates at short lead times, which may be related to the temporal scale of the system, analogous to a diurnal cycle. In the first 0.5 tus, the RMSE of the NLLV, ETKF, BV, and RP ensembles is similar to that of the control run, mainly because the positive and negative perturbations superimposed on the control run cancel each other out at the initial time [errors grow linearly at the initial time (Ding and Li, 2007)]. Soon thereafter, regardless of the perturbation method, the ensemble forecast effectively reduces the forecast errors relative to the control run. In terms of the RMSE of the ensemble mean, the NLLVs give the lowest values, followed by the ETKFs, BVs, RPs, and the control forecast. Among these, the NLLVs and ETKFs show nearly the same forecast ability. These two methods have clearly better predictive skill than the BVs in two main periods, 0.5–2 tus and 4.5–8 tus (a smaller RMSE of the ensemble mean and a larger ensemble spread) (Fig. 6a). During 0.5–2 tus, the better predictive skill of the NLLVs and ETKFs over the whole system is reflected mainly in the reduction of forecast errors in the fast subsystem (Fig. 6b), whereas during 4.5–8 tus it is reflected mainly in the reduction of forecast errors in the slow subsystem (Fig. 6c).

      Figure 6.  Mean RMSE (solid lines) and ensemble spread (dashed lines) of 10 000 samples as a function of lead time for the control run (black), RP method (red), BV method (blue), ETKF method (purple), and NLLV method (green) after adding perturbations to all variables: (a) the whole system, (b) the fast subsystem, and (c) the slow subsystem.
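
      The verification quantities plotted in Figs. 6–8 can be sketched as below; taking the RMSE over the variables of each sample before averaging over samples is an assumption about the exact averaging convention.

```python
def rmse_and_spread(truth, ensembles):
    """Sample-mean RMSE of the ensemble mean and sample-mean ensemble spread at one lead time.

    truth     : (n_samples, n_vars) true states
    ensembles : (n_samples, n_members, n_vars) ensemble forecasts
    """
    ens_mean = ensembles.mean(axis=1)
    rmse = np.sqrt(np.mean((ens_mean - truth) ** 2, axis=1)).mean()        # RMSE of the ensemble mean
    spread = np.sqrt(ensembles.var(axis=1, ddof=1).mean(axis=1)).mean()    # spread about the ensemble mean
    return rmse, spread
```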

      Next, we examine whether the improvements of the ETKFs and NLLVs over the BVs persist when perturbations are added to different variables of the system. Good ensemble perturbations should reflect the initial uncertainty of the analysis (Toth and Kalnay, 1993), and the ability to capture the initial uncertainties varies among the perturbation methods. Owing to the differing error growth of initial perturbations in the fast and slow subsystems (Fig. 5d), the prediction skill of the various perturbation methods may differ when perturbations are added to variables of different time scales. To clarify this issue, three perturbation-addition schemes are used in this study: adding perturbations to both fast and slow variables, adding perturbations only to fast variables, and adding perturbations only to slow variables. Figure 7 shows that whether the perturbations are added to the fast variables or to the slow variables, they contribute to an improvement in the forecasting skill for the fast variables, owing to the feedback process between the coupled components. When perturbations are added only to the fast variables, the ensemble skill of all perturbation methods improves for the prediction of fast variables after 0.4 tus (Fig. 7b). However, when perturbations are added only to the slow variables, only the NLLVs and ETKFs improve the prediction skill of the fast variables during the 0.4–0.8 tus forecast period (Fig. 7c). In other words, only more independent perturbations superimposed on the slow subsystem can improve the forecasting skill for the fast subsystem.

      Figure 7.  Mean RMSE (solid lines) and ensemble spread (dashed lines) of 10 000 samples in the fast subsystem as a function of lead time for the control run (black), random perturbation method (red), BV method (blue), ETKF method (purple), and NLLV method (green) after adding perturbations to different variables: (a) adding perturbations to both fast variables and slow variables, (b) adding perturbations only to fast variables, and (c) adding perturbations only to slow variables.

      The ensemble forecast of the slow variables behaves differently from that of the fast variables: the advantages of the ensemble forecast over the control forecast become apparent only at about 4 tus (Fig. 8). When perturbations are added only to the fast variables, the forecasting skills of the BVs, ETKFs, and NLLVs are equivalent (Fig. 8b). However, when perturbations are added only to the slow variables, large differences appear between the BVs on the one hand and the NLLVs and ETKFs on the other, indicating that more independent perturbations lead to better prediction skill for the slow subsystem (Fig. 8c). Because of the feedback process between the coupled components, adding perturbations to both fast and slow variables improves the forecasting skill for the slow variables.

      Figure 8.  Mean RMSE (solid lines) and ensemble spread (dashed lines) of 10 000 samples in the slow subsystem as a function of lead time for the control run (black), random perturbation method (red), BV method (blue), ETKF method (purple), and NLLV method (green) after adding perturbations to different variables: (a) adding perturbations to both fast variables and slow variables, (b) adding perturbations only to fast variables, and (c) adding perturbations only to slow variables.

      In general, the ensemble forecast performs differently in systems of different time scales. After some time, the ensemble forecast demonstrates better prediction skill than the control run; its advantages become apparent after a very short period in the fast subsystem but only after a relatively long period in the slow subsystem. The reason for this difference is the different error growth of the different time-scale systems: in the fast dynamics, errors in the analysis state grow quickly, whereas in the slow dynamics they grow relatively slowly. In addition, adding perturbations to both fast and slow variables contributes to an improvement in the forecasting skill for fast and slow variables alike, indicating that uncertainty in both fast and slow variables plays a role in their prediction. When perturbations are added to the slow variables, the independent perturbations (NLLVs and ETKFs) perform much better than the other types (BVs or RPs) in predicting both fast and slow variables, most likely because highly independent perturbations better capture the initial uncertainty information. Additionally, the NLLVs appear to outperform the ETKFs by a narrow margin (Figs. 6–8). To further confirm this, we conduct an independent-sample t-test on the RMSEs of the 10 000 samples for the ETKF and NLLV methods, using the same experiments as in Fig. 6a. The mean RMSE of the NLLVs is smaller than that of the ETKFs (a difference of –0.1429), significant at the 90% confidence level (probability value of 0.0812; not shown). Therefore, of the two independent perturbation methods, the NLLV performs better than the ETKF.
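
      The significance test mentioned above can be sketched with SciPy as follows; whether a pooled-variance or Welch test and a one- or two-sided p-value were used is not stated in the text, so those choices here are assumptions.

```python
from scipy import stats

def compare_methods(rmse_a, rmse_b):
    """Independent-sample t-test on per-sample RMSEs of two ensemble schemes
    (e.g., NLLV vs. ETKF over the 10 000 samples)."""
    diff = np.mean(rmse_a) - np.mean(rmse_b)                                # mean RMSE difference
    t_stat, p_two_sided = stats.ttest_ind(rmse_a, rmse_b, equal_var=False)  # Welch's t-test (assumed)
    return diff, t_stat, p_two_sided
```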

      Other evidence also shows that the independent perturbations (NLLVs) have better forecasting skill than the BVs. Figure 9 shows the distribution of the RMSEs and ensemble spreads of the 10 000 samples for the NLLV and BV predictions. At the beginning of the ensemble forecast, the forecast errors of the NLLVs and BVs are concentrated mainly around the diagonal, indicating that the forecasting skill of the NLLVs is roughly equal to that of the BVs (Fig. 9a). The number of samples in which the prediction error of the NLLVs is smaller than that of the BVs increases over time, reaching 58% of the total samples at 6 tus (Fig. 9b). The ensemble spread of the NLLVs is larger than that of the BVs in the majority of samples at all three times (Figs. 9d–f). We conclude that, compared with the BVs, the NLLVs tend to have a smaller RMSE of the ensemble mean and a larger ensemble spread, indicating better ensemble prediction performance.

      Figure 9.  Panels (a)–(c) RMSE of 10 000 samples based on NLLV and BV methods at (a) 3 tus, (b) 6 tus, and (c) 9 tus in the slow subsystem. The upper right-hand corner indicates the ratio of samples where RMSE for the NLLV method is smaller than the RMSE for the BV method in (a)–(c). Panels (d)–(f) are the same as (a)–(c), but for an ensemble spread of 10 000 samples. The upper right-hand corner indicates the ratio of samples where the ensemble spread for the NLLV method is larger than for the BV method in (d)–(f). Panels (a)–(f) are based on the experiments which add perturbations to both fast and slow variables.

      The Brier score (BS) is commonly used to evaluate the quality of probabilistic forecasts generated by ensembles (Stephenson et al., 2008). We choose the event ϕ1 ($ X_3^{({\text{f}})} $ departs from its climatological mean by more than one standard deviation) (Fig. 10a) and the event ϕ2 ($ X_3^{({\text{s}})} $ departs from its climatological mean by more than one standard deviation) (Fig. 10b) to calculate the basic BS averaged over the 10 000 samples. The smaller the BS, the better the skill of the ensemble forecast. As shown in Fig. 10, the NLLVs are more skillful than the BVs and RPs, and their performance is similar to that of the ETKFs.

      Figure 10.  (a) Basic Brier score (BS) for the event ${\phi _1}$ (${\phi _1}$: $ X_3^{({\rm{f}})} $ departs from its climatological mean by more than one standard deviation) of ensemble forecasts based on NLLVs (green line), ETKFs (purple line), BVs (blue line), and RPs (red line) as a function of lead time. Panel (b) is the same as (a), but for the event ${\phi _2}$ (${\phi _2}$: $ X_3^{({\rm{s}})} $ departs from its climatological mean by more than one standard deviation).

      The other verification method used is the Talagrand diagram (also called the rank histogram), which characterizes the reliability of an ensemble forecast (Talagrand et al., 1997; Candille and Talagrand, 2005). For a reliable ensemble forecasting system, the observation should fall with equal probability into any of the N+1 intervals defined by the N ensemble forecast values (Talagrand et al., 1997). Considering an ensemble forecasting system with N members, the predicted value, $ X_3^{({\text{s}})} $, can be written as ${P_{i,j}}$, where $i$ denotes the i-th sample and $j$ denotes the j-th ensemble member. For each sample, we count the number of members whose predicted values are smaller than the true value, denoted n, which can take only the values 0–N. Then, for each value of n, we count the number of samples ${S_n}$ (out of the S = 10 000 ensemble forecast samples). The ideal frequency of ${S_n}$ is S/(N+1), for which the true value falls with equal probability into the N+1 intervals. We then calculate the relative frequency ${P_n} = {S_n}/\left[ {S/(N + 1)} \right]$. The distribution of ${P_n}$ is plotted in Fig. 11. It shows that the NLLV, ETKF, and BV ensembles are all under-dispersive, but the NLLVs show a flatter histogram, indicating the greater reliability of the NLLV ensemble system; the reliability of the NLLV and ETKF ensemble systems is comparable. The results from the BS and the Talagrand diagram are based on the ensemble experiment that adds perturbations to both fast and slow variables, for which we analyze the results for $X_3^{({\rm{f}})}$ and $X_3^{({\rm{s}})}$. Similar results are obtained with the other perturbation-addition schemes and variables (not shown). These results confirm that the NLLVs and ETKFs perform better than the BVs in ensemble prediction.

      Figure 11.  The histogram of the Talagrand distribution for different member intervals. The horizontal dashed lines denote the expected probability for the ensemble forecasts based on (a) BVs, (b) NLLVs, and (c) ETKFs at 2 tus. Panels (a)–(c) are based on the experiment which adds perturbations to both fast and slow variables and predicts the variable $X_3^{({\rm{f}})}$. Panels (d)–(f) are the same as (a)–(c), but at 6 tus and predicted variable is $X_3^{({\rm{s}})}$.
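
      The rank-histogram construction described above translates directly into code; the sketch below follows the counting procedure in the text (NumPy assumed, as above).

```python
def talagrand_histogram(truths, ensembles):
    """Relative frequencies P_n of the Talagrand (rank) histogram.

    truths    : (S,) true values of the predicted variable for S samples
    ensembles : (S, N) ensemble forecasts of the same variable
    Returns P_n = S_n / [S / (N + 1)] for n = 0, ..., N, where n counts the
    members whose predicted value is smaller than the true value.
    """
    S, N = ensembles.shape
    n = (ensembles < truths[:, None]).sum(axis=1)   # rank of the truth within each ensemble
    S_n = np.bincount(n, minlength=N + 1)
    return S_n / (S / (N + 1))
```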

    4.   Summary and discussion
    • Ensemble prediction remains a huge challenge in multiscale systems (Vannitsem and Duan, 2020). One important issue is how to generate appropriate perturbations for different time-scale variables. This issue has been addressed here by considering different time-scale initial perturbations in predicting different time-scale variables, noting that the selection of ensemble generation schemes is very important. The NLLV method has proven advantageous in ensemble forecasting (Feng et al., 2014, 2016; Hou et al., 2018). Therefore, we have explored how to add appropriate ensemble initial perturbations to a multiscale system based on multiple initial perturbation methods. The results are given below.

      Compared to the control forecast, the ensemble forecast can effectively reduce forecasting errors in the coupled model. Owing to the different error growth in systems of different time scales, the advantages of the ensemble forecast become apparent after a very short period in the fast subsystem but only after a relatively long period in the slow subsystem. We also found that the error dynamics of the slow and fast variables are strongly coupled. This became evident when perturbations were added separately to the fast subsystem and the slow subsystem: regardless of whether the perturbations were added to the fast or the slow variables, there was an overall improvement in the forecasting skill for both the fast and the slow variables.

      In terms of initial perturbation methods, it is evident that the independent perturbations (NLLVs and ETKFs) are superior to the other kinds (BVs or RPs). The NLLVs and ETKFs have nearly equivalent prediction skill, with the NLLVs performing best by a narrow margin, and the ensemble forecasting systems based on NLLVs or ETKFs are of higher quality than those based on BVs. In particular, when perturbations are added to the slow variables, the highly independent perturbations (NLLVs and ETKFs) can better capture the initial uncertainty information, giving them better prediction skill in the coupled system.

      We may deduce that, in a coupled ocean–atmosphere model, for the prediction of fast-scale variables (e.g., atmospheric variables) the ensemble forecast begins to reduce the errors of the control forecast after a short time, whereas for slow-scale variables (e.g., oceanic variables) the ensemble forecast may mainly improve the medium- and long-term forecasts. Owing to air–sea coupling, adding perturbations to oceanic variables can improve the forecasting skill for atmospheric variables, and adding perturbations to atmospheric variables can also improve the forecasting skill for oceanic variables. When perturbations are added to oceanic variables, independent perturbations may perform better in the ensemble forecast of both atmospheric and oceanic variables. These results may have important implications for the development of ensemble forecasts with coupled models in the future.

      In general, NLLVs and ETKFs perform better than BVs and RPs in a coupled model. The computations of ETKFs require extensive computing resources. Fortunately, ETKF perturbations can be a byproduct of data assimilation. Compared to ETKFs, NLLVs are completely independent (orthogonal) and easy to calculate. Both NLLVs and ETKFs are expected to have a wide potential application in coupled models.

      However, since we obtained these results through a toy model, further research is needed to expand these results to realistic air–sea coupled models. Our research team has attempted to apply NLLVs in the Weather Research and Forecasting (WRF) model. It is expected that NLLVs will exhibit a good performance in realistic air–sea coupled models. Besides, Vannitsem and Duan (2020) discovered that the fastest backward Lyapunov vectors are not optimal for initializing a multiscale ensemble forecasting system. Thus, choosing the appropriate NLLV modes in a multiscale ensemble forecasting system may be important. Both issues will be addressed in the near future.

      Acknowledgements. This work was jointly supported by the National Natural Science Foundation of China (Grant Nos. 42225501, 42105059).
