
Citation: Fuqing ZHANG, Wei LI, Michael E. MANN, 2016: Scale-dependent Regional Climate Predictability over North America Inferred from CMIP3 and CMIP5 Ensemble Simulations. Adv. Atmos. Sci., 33(8), 905-918, https://doi.org/10.1007/s00376-016-6013-2
There is widespread scientific consensus that the accumulation of greenhouse gas concentrations from fossil fuel burning and other human activities is leading to a warming of the globe and other associated changes in large-scale climate (IPCC, 2013). Most assessments indicate that the cost of the resulting damage from climate change will rise to several percent of the global economy in the decades ahead if left unchecked. Yet, our ability to assess the regional impacts of climate change, which is critical both to quantifying the damage caused by climate change and to implementing adaptive strategies, remains hampered by substantial uncertainties in regional climate projections (Murphy et al., 2004; Tebaldi et al., 2005; Hawkins and Sutton, 2009; Deser et al., 2012; Watterson et al., 2014).
Regional climate projections are typically derived by one of two methods: statistical downscaling or dynamical downscaling (IPCC, 2013). In the former case, statistical relationships between coarse and fine scales derived from modern climate data are used to take coarse-scale climate model predictions/projections and estimate the likely impact on climate statistics at finer spatial and temporal scales. In the latter case, information from coarse climate models is used as boundary constraints on a finer resolution model (a regional climate model) that resolves the smaller spatiotemporal scales of interest. In either case, there is an assumption of a predictable relationship between the large scales captured in the coarse climate model projection and the local scales sought by the downscaling method.
Downscaled climate model projections have increasingly been used as guidance for policymakers and stakeholders at the local, national, and international level in assessing potential impacts and risks associated with human-caused climate change (von Storch et al., 1993; Mearns et al., 1999; Jones et al., 2011). However, the reliability of these projections continues to be debated. There is clearly skill in the largest-scale quantities; for example, the observed increase in global mean temperature (and even continental mean temperatures) can be detected and attributed to anthropogenic climate change (IPCC, 2007). However, confidence in regional-scale projections of surface temperature and precipitation is considerably lower (Whetton et al., 2007; Separovic et al., 2008; Watterson and Whetton, 2011; Deser et al., 2012; Li et al., 2012).
In the current study, we seek to quantify the predictability of regional-scale climate change, with an emphasis on surface temperature and precipitation, across the coterminous United States and surrounding areas, through analysis of the multimodel ensembles of coupled model simulations collected from both CMIP3 and CMIP5. In section 2 we describe the data and methods used in the study. In section 3 we present an analysis of regional climate predictability based on CMIP5 multimodel historical simulations. In section 4 we provide a complementary analysis of CMIP3 multimodel simulations, analyzing both historical simulations and future projections. Conclusions are presented in section 5.
We analyze surface temperature and precipitation fields across the coterminous United States and neighboring oceanic regions (15°-60°N, 70°-130°W), as derived from both observational data and an ensemble of climate model simulations.
Observational data analyzed include monthly mean 5° latitude × 5° longitude grid-box near-surface temperature anomalies over 1850-2014 from HadCRUT4. We add the reference period climatology over 1961-90 to yield absolute surface temperatures. Precipitation data are taken from the 1° latitude × 1° longitude GPCC dataset over the period 1979-2004.
For the climate model simulations, monthly mean surface (2 m) air temperature and precipitation spanning the period 1979-2004 are available for 38 climate models in the CMIP5 historical (late 19th to early 21st century) simulation archives, and for 18 models in the corresponding CMIP3 (20C3M) archives (Taylor et al., 2012) (Table 1). Where multiple simulations are available for a given model, a single ensemble mean is calculated to ensure that each distinct model is represented equally in the ensuing analysis.
We focus our analysis on the boreal summer (June-August) climatological period during the 1979-2004 period of overlap between observations and model simulations. Both observational and model data are interpolated to a common (T85, ∼1.4° latitude × 1.4° longitude) spatial resolution prior to analysis. For the purpose of the ensuing analyses, we define the following terms:
(1) Ensemble mean: the uniform arithmetic mean of all ensemble members;
(2) Ensemble spread: the uniform arithmetic mean of the absolute difference between any two members across all ensemble members;
(3) Ensemble mean error or bias: the (signed) difference between the model ensemble mean and the observational analysis.
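As a minimal sketch, the three diagnostics defined above can be computed along the following lines (function and variable names are illustrative, not from the paper; the ensemble is assumed stored as a NumPy array of shape (M, ny, nx)):

```python
import itertools
import numpy as np

def ensemble_stats(members, obs):
    """Ensemble mean, spread, and mean error (bias) as defined in the text.

    members : array of shape (M, ny, nx), one climatological field per model
    obs     : array of shape (ny, nx), the observational analysis
    """
    # (1) Ensemble mean: uniform arithmetic mean over all members
    mean = members.mean(axis=0)
    # (2) Ensemble spread: mean absolute difference over all M*(M-1)/2
    #     distinct member pairs (703 pairs for M = 38)
    pairs = itertools.combinations(range(len(members)), 2)
    spread = np.mean([np.abs(members[i] - members[j]) for i, j in pairs],
                     axis=0)
    # (3) Ensemble mean error (bias): signed ensemble-mean-minus-observation
    bias = mean - obs
    return mean, spread, bias
```

Note that the spread is a pairwise statistic, not a standard deviation about the ensemble mean, so it can be computed without reference to the observations.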
In addition to evaluating the ensemble mean, ensemble spread, and error for the observational and model fields themselves, we perform a power spectral analysis (using the fast Fourier transform) of the fields to evaluate the power spectral density (PSD) characteristics of the various quantities in wavenumber space. In these analyses:
(1) The ensemble mean power spectrum P is defined as the PSD of the unweighted arithmetic mean V_m of all M = 38 ensemble members:
P = PSD(V_m), where V_m = (1/M) Σ_i V_i;
(2) The PSD of the ensemble spread is defined as:
ΔP = (1/N) Σ_{k=1}^{N} P_d, where P_d = PSD(V_i - V_j), i and j index any pair of distinct ensemble members, and N = 703 is the number of such pairs;
(3) The PSD of the ensemble mean error (bias) is defined as the power spectrum of the difference between the ensemble mean and the observations:
P' = PSD(V_m - V_o), where V_m and V_o refer to the ensemble mean and observational variables, respectively.
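A minimal one-dimensional sketch of these spectral diagnostics along a single circle of latitude might look as follows (names are illustrative; in the paper the analysis is applied to fields interpolated to the common T85 grid):

```python
import itertools
import numpy as np

def psd(field):
    """Power spectral density of a 1-D field via the real FFT."""
    coeffs = np.fft.rfft(field)
    return np.abs(coeffs) ** 2 / len(field)

def spectral_diagnostics(members, obs):
    """PSDs of ensemble mean (P), spread (Delta P), and bias (P')
    as functions of wavenumber.

    members : array of shape (M, n), one latitude-circle transect per model
    obs     : array of shape (n,), the observed transect
    """
    v_m = members.mean(axis=0)                 # ensemble mean V_m
    p_signal = psd(v_m)                        # P = PSD(V_m)
    # Delta P: mean PSD of all pairwise member differences V_i - V_j
    pair_psds = [psd(members[i] - members[j])
                 for i, j in itertools.combinations(range(len(members)), 2)]
    p_noise = np.mean(pair_psds, axis=0)
    p_error = psd(v_m - obs)                   # P' = PSD(V_m - V_o)
    return p_signal, p_noise, p_error
```

The scale-dependent signal-to-noise ratio discussed next is then simply p_signal / p_noise, evaluated wavenumber by wavenumber.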
The ratio of the PSD of the ensemble mean (signal) to that of the ensemble spread (noise) as a function of wavenumber defines a scale-dependent signal-to-noise ratio (SNR) measure (Bei and Zhang, 2007). A ratio smaller than unity indicates that the noise amplitude is greater than the signal amplitude, and implies that model estimates of mean changes are unreliable at that spatial scale. Wavenumber 1 corresponds to a single sinusoidal fluctuation over the entire circle of latitude, i.e., it is the coarsest possible measure of zonal variability (wavenumber 0 represents the zonal mean). Since the selected U.S. domain spans precisely 1/6 of a circle of latitude, the lowest resolvable wavenumber for that domain is global wavenumber 6. The horizontal scale (wavelength) for a given wavenumber is the length of the circle of latitude divided by the wavenumber; e.g., the length scales corresponding to wavenumbers 1, 2, 3, 4, 5, 6 and 9 are ∼36 000, 18 000, 12 000, 9000, 7200, 6000 and 4000 km, respectively.
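The wavenumber-to-wavelength correspondence quoted above is just the latitude-circle length (∼36 000 km for the circles considered here) divided by the global wavenumber:

```python
import numpy as np

# Approximate length of the circle of latitude used in the text (km);
# this is the paper's nominal figure, not the equatorial circumference.
circle_km = 36000.0
wavenumbers = np.array([1, 2, 3, 4, 5, 6, 9])
wavelengths_km = circle_km / wavenumbers
# -> [36000, 18000, 12000, 9000, 7200, 6000, 4000] km
```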
We form estimates of the internal variability in the climate means of summertime precipitation and surface air temperature during the period 1979-2004 using the multimodel ensemble of 38 CMIP5 historical simulations (Fig. S1 in electronic supplementary material). The mean difference between any two models within the 38-model ensemble is defined as the ensemble spread; a measure of uncertainty of any deterministic prediction assuming the truth (as well as any single deterministic prediction) is a random draw out of the multimodel ensemble. The ensemble spread can be regarded as a lower bound on the model uncertainty, since it neither accounts for the potential bias due to deficiencies in model physics that are common among models, nor uncertainties in forcing (both anthropogenic and natural).
Figures 1a and b show the resulting mean and ensemble spread of surface temperature and monthly precipitation over the U.S. domain. The largest uncertainty for the surface temperature field is found over the western U.S., with the ensemble spread as high as 5°C-6°C; followed by the central U.S., with a spread of 3°C-5°C; and the eastern U.S., with a spread of ∼ 1.5°C-3°C (all higher than the surrounding oceans, at ∼ 0.5°C-1.5°C). It is worth noting that the warming trend over the U.S. during the past century is on the order of 1°C (Ji et al., 2014), though it is beyond the scope of this study to examine the scale-, variable- and location-dependent predictability of the climate trend.
The large uncertainty in the mean surface temperature field (Fig. 1a) can be interpreted through a parallel assessment using the observational (HadCRUT4) surface temperature during the overlapping time period (Fig. 1c). The error/bias can be estimated as the model ensemble mean minus the observations. The domain-averaged ensemble spread (∼2.1°C) is found to be larger but grossly comparable to the domain-averaged root-mean-square of this estimate of error/bias (1.2°C). Moreover, the spatial pattern of the error/bias is similar to that of the ensemble spread, with the western U.S. displaying the largest mean error, followed by the Great Plains in the central U.S., and finally the eastern U.S. There are, however, some notable differences as well. Of particular interest is the relatively low spread over the North American west and east coasts and neighboring ocean regions (Fig. 1a), which contrasts with the large error/bias estimates over these same regions (Fig. 1c). This suggests the presence of a systematic bias that is common to most or all of the climate models, perhaps associated with deficiencies in the models' representations of land-sea contrast or continental sea-breeze circulations.
The region of maximum uncertainty (ensemble spread) for precipitation (Fig. 1b) is found over lower latitudes (the south central U.S. and Latin America), with an ensemble spread exceeding 2.5 mm d-1, roughly half the amplitude of the observed mean (signal). A second uncertainty maximum in precipitation is located over the northern Great Plains in the lee of the Rockies, with an ensemble spread exceeding 1.5 mm d-1. The spatial pattern of the ensemble mean error (Fig. 1d) is once again grossly consistent with that for the ensemble spread (uncertainty), in that regions of peak amplitude are similar (e.g., common maxima along the southern edge of the domain and northern Great Plains), though the ensemble mean errors of approximately -2.5 mm d-1 are considerably larger than the ensemble spread for the Gulf Coast and Florida Peninsula. In addition, the North American domain-mean absolute ensemble mean error and spread are also comparable in magnitude (0.58 mm d-1 and 0.97 mm d-1, respectively).
Unlike surface temperature, which is primarily determined by large-scale processes, precipitation is heavily influenced by smaller-scale processes including moist convection, land-sea contrast, and orographic lifting. This distinction is exemplified by the local maxima for both ensemble mean error and spread over the Gulf of Mexico (hot spot of convection) and the mountainous areas of the western U.S. (where orographic effects are important). Comparing Figs. 1b and d suggests that, for the CMIP5 simulations of climatological mean precipitation, the ensemble spread can be used qualitatively to assess the uncertainty in the ensemble mean estimate. To place the U.S. results in a broader perspective, we also compare the ensemble mean, spread, and error for the global domain (not shown). The basic results discussed above appear to apply at this larger scale as well (though a detailed analysis of the global domain is beyond the scope of the current study).
The spatial scale-dependence of the predictability of surface temperature and precipitation is quantified by evaluating the PSD along both global circles of latitude and a latitudinal/longitudinal sub-region containing the coterminous U.S. (15°-60° N, 70°-130°W). Figure 2 shows the ensemble mean (left) and ensemble spread (middle) PSD for the CMIP5 surface temperature and precipitation fields, along with the ratio of the ensemble mean to the ensemble spread, i.e., the SNR (right) as a function of global wavenumber. For the global circle of latitude ensemble mean temperature (Fig. 2a), the PSD exhibits a peak at lower wavenumbers (1-3) for the midlatitudes (40°-60°N); while for the subtropics (20°-40°N), three distinct spectral peaks are observed (wavenumbers 1, 3 and 5). By contrast, for the ensemble spread, the PSD (Fig. 2b) decreases quite gradually in both the midlatitudes and the subtropics, though greater amplitudes are found across all wavenumbers for the former. The SNR (Fig. 2c) exceeds unity at all latitude and wavenumber ranges, with the exception of (1) wavenumber 4 between 40°-60°N, (2) wavenumber 6 poleward of 50°N, and (3) wavenumber 9 between 45°-55°N. Given the SNRs, surface temperature projections can be considered most reliable for wavenumbers 1-2 in the midlatitudes, and wavenumbers 1, 3 and 5 within the subtropics. Therefore, meaningful surface temperature predictions (SNR>1) appear possible over a somewhat broad range of latitudes and wavenumbers.
For the more limited U.S. sub-region, the ensemble mean (Fig. 2d) and spread (Fig. 2e) are both larger at lower wavenumbers than for their global counterparts, but the SNR (Fig. 2f) falls below unity for global wavenumber 12 (horizontal scale of 30° longitudinal variation, i.e., distances of ∼3000 km) over the central latitudes of the U.S. (35°-45°N), and for nearly all wavenumbers greater than 36 (scales less than 10° in longitudinal distance, i.e., distances less than ∼1000 km). This observation implies that state-of-the-art (i.e., CMIP5) climate model projections are likely to exhibit very limited skill in predicting regional variations in surface temperature at scales below 1000 km. It is noteworthy that wavenumber 18 (∼20° or ∼2000 km in longitudinal distance) exhibits the maximum SNR at nearly all latitudes for the U.S. domain. We interpret this observation as indicative of the influence of topographical features in the U.S. that induce enhanced predictability at this characteristic spatial scale.
The findings for precipitation (Figs. 2g and h) are quite different from those for surface temperature (Figs. 2a and b). Precipitation exhibits greater spectral amplitude in the subtropics relative to the midlatitudes, especially for lower (1-2) wavenumbers. SNRs at the global scale (Fig. 2i) are generally lower, substantially exceeding unity only for wavenumbers 1-2 between 20°N and 50°N, and wavenumber 4 between 40°N and 60°N. Low predictability (SNR<1) is observed even at wavenumbers 1-2 poleward of 50°N, implying considerable challenges in predicting regional-scale variations in precipitation at high latitudes. Interestingly, however, for the U.S. regional sub-domain (Figs. 2j-l), there are apparently predictable signals (SNR>1) for global wavenumbers 6-12 at nearly all latitudes, and for even higher wavenumbers (24-60, i.e., scales as small as 600 km) in the central U.S. latitudes (35°-45°N). The larger signals over these latitudes in the North American domain may be related to regional-scale terrain effects and land-ocean contrasts, although some models may still have deficiencies in simulating these effects.
To further assess the scale and latitude dependence of surface temperature and precipitation predictability over the U.S. sub-domain, we average the fields over three representative latitude ranges (low latitude, 15°-30°N; midlatitude, 30°-45°N; and high latitude, 45°-60°N; see Fig. 3). Given that the observations represent a single realization drawn from a larger distribution of possible climate histories, if the model ensemble accurately reflects the true climate, the PSD of the observations should be similar to that of individual ensemble members, and the ensemble mean should reflect the approximate mode of the distribution. On the other hand, the PSD of the ensemble spread (representing the uncertainty) should closely resemble that of the difference between the ensemble mean and observations (error/bias) across wavenumbers.
For the global domain, the PSD of the ensemble mean and observational mean are indeed similar for all latitude ranges for both surface temperature and precipitation. An exception is the anomalously low PSD values for surface temperature at wavenumber 2 and those exceeding ∼50, the latter of which we attribute to the low spatial density of surface temperature observations over the open ocean. The PSD for the ensemble spread generally exceeds that of the ensemble error/bias at most wavenumbers, and especially at lower wavenumbers (<10) and for surface temperature. Consistent with our earlier findings (Fig. 2), the PSD for both the ensemble mean and observations (i.e., the signals) exceed those for the ensemble spread and error (noise or uncertainties) for wavenumbers 1-20 for all three latitude ranges for surface temperature (Figs. 3a-c), implying predictability across the associated spatial scales. For precipitation, by contrast, predictability is only evident (Figs. 3g-i) for wavenumbers 1-3 for the low-latitude (15°-30°N) and midlatitude (30°-45°N) zone, and for almost no wavenumbers for the high-latitude (45°-60° N) zone.
For the U.S. regional sub-domain, the PSD for the ensemble mean is generally consistent with that for the observations for both surface temperature and precipitation at low and intermediate wavenumbers. However, for the high-latitude zone (45°-60°N) the ensemble-mean PSD considerably exceeds that of the observations for higher (>24 for surface temperature and >12 for precipitation) wavenumbers. The discrepancy between the ensemble spread and the ensemble mean error/bias is considerably greater for the U.S. regional domain than for the global domain as well.
The inferred predictability of surface temperature and precipitation for the U.S. regional domain varies considerably between the two variables and three latitude ranges (Figs. 3d-f, j-l). For example, the SNR for surface temperature exceeds unity for all wavenumbers lower than 36 (spatial scales as small as ∼1000 km) for the low-latitude (15°-30°N) zone, but the SNR is close to the "no predictability" value of unity for nearly all wavenumbers for the midlatitude (30°-45°N) and high-latitude (45°-60°N) zones. For precipitation, only for the midlatitude zone (30°-45°N) is there evidence of predictability, and only at fairly low (6-12) global wavenumbers (i.e., spatial scales no smaller than ∼3000 km). These examples highlight the challenge for regional-scale climate predictability in North America with existing state-of-the-art global climate models.
To further investigate the robustness of our findings based on the CMIP5 historical simulations (Figs. 1-3) we perform parallel analyses using the CMIP3 (Meehl et al., 2007; Watterson et al., 2014) (Table 2) multimodel ensemble simulations using both (1) the same historical period (1979-2004) (Zhou and Yu, 2006; Timm and Diaz, 2009) and (2) the CMIP3 ("A2" scenario) 21st century climate change projections.
Our conclusions regarding the predictability of regional-scale climate over North America with the CMIP5 historical simulations (Figs. 1-3) are similar to those obtained with the CMIP3 historical multimodel simulations (Figs. 4-6), with only one minor discrepancy: slightly lower SNR values are found for both the global surface temperature and precipitation fields over the midlatitudes of North America for global wavenumbers 6-12. The fact that little-to-no improvement in regional predictability results from the substantial model development reflected in the roughly five years between CMIP3 and CMIP5 suggests that, even with increasingly refined and detailed climate models, our conclusions regarding an apparent scale limit for regional-scale climate predictability are likely to hold.
Similar conclusions are also obtained for the 21st century projections for the period 2074-99 under the "A2" emissions scenario using the CMIP3 multimodel ensemble simulations (Figs. 7-9). Although we obviously lack observations against which to verify these projections, our conclusions again remain largely unchanged: there is an apparent scale limit beyond which the uncertainty in the prediction (noise) becomes greater than the ensemble mean prediction (signal). This further highlights the limited predictability of climate models at regional scales across different climate scenarios.
In summary, through an analysis of surface temperature and precipitation variability in the CMIP5 historical simulations and comparisons with observational data during the overlapping (1979-2004) interval of the late 20th/early 21st century, we have found that there appears to be a fundamental scale limit below which refinement of climate model predictions may not be possible. While the predictability limit depends on the variable, region, averaging period and season analyzed, a seemingly robust result is that, for North America, the uncertainty due to intrinsic noise approaches in magnitude the amplitude of the climate change signal at horizontal scales below about 1000 km for surface temperature, and 2000 km for precipitation.
Our findings generalize beyond the specifics of the CMIP5 historical simulation ensemble. Parallel analyses of both (i) the earlier generation CMIP3 historical simulation ensemble and (ii) 21st century climate projections based on the CMIP3 "A2" emissions scenario yield qualitatively very similar conclusions. Given that downscaling methods (whether based on statistical or dynamical approaches) require information from large-scale climate model simulations as boundary conditions and/or reference states, the lack of predictability at these larger scales likely translates to a lack of predictability at local scales. One apparent exception, based on our findings, is cases where smaller-scale orographic forcing or land-sea contrasts provide additional predictability at smaller scales.
Given the importance of future projections of surface temperature and precipitation for assessing climate change impacts such as heat stress, flooding potential and drought magnitude, duration and extent, our findings suggest great challenges in assessing climate change risk and damage at regional scales most important to stakeholders and policymakers. One potential implication of our findings is that regional adaptation efforts might, in some circumstances, be better focused on reducing vulnerability to climate change in general, rather than planned adaptation to specific projected climate changes.
Bei, N., and F. Q. Zhang, 2007: Impacts of initial condition errors on mesoscale predictability of heavy precipitation along the Mei-Yu front of China. Quart. J. Roy. Meteor. Soc., 133, 83-99.
Deser, C., R. Knutti, S. Solomon, and A. S. Phillips, 2012: Communication of the role of natural variability in future North American climate. Nature Clim. Change, 2, 775-779.
Hawkins, E., and R. Sutton, 2009: The potential to narrow uncertainty in regional climate predictions. Bull. Amer. Meteor. Soc., 90, 1095-1107.
IPCC, 2007: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press.
IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press.
Ji, F., Z. H. Wu, J. P. Huang, and E. P. Chassignet, 2014: Evolution of land surface air temperature trend. Nature Clim. Change, 4.
Jones, C., F. Giorgi, and G. Asrar, 2011: The Coordinated Regional Downscaling Experiment: CORDEX, an international downscaling link to CMIP5. CLIVAR Exchanges, 16, 34-39.
Li, W., C. E. Forest, and J. Barsugli, 2012: Comparing two methods to estimate the sensitivity of regional climate simulations to tropical SST anomalies. J. Geophys. Res., 117, D20103.
Mearns, L. O., I. Bogardi, F. Giorgi, I. Matyasovszky, and M. Palecki, 1999: Comparison of climate change scenarios generated from regional climate model experiments and statistical downscaling. J. Geophys. Res., 104, 6603-6621.
Meehl, G. A., C. Covey, T. Delworth, M. Latif, B. McAvaney, J. F. B. Mitchell, R. J. Stouffer, and K. E. Taylor, 2007: The WCRP CMIP3 multimodel dataset: A new era in climate change research. Bull. Amer. Meteor. Soc., 88, 1383-1394.
Murphy, J. M., D. M. H. Sexton, D. N. Barnett, G. S. Jones, M. J. Webb, M. Collins, and D. A. Stainforth, 2004: Quantification of modelling uncertainties in a large ensemble of climate change simulations. Nature, 430, 768-772.
Separovic, L., R. de Elía, and R. Laprise, 2008: Reproducible and irreproducible components in ensemble simulations with a regional climate model. Mon. Wea. Rev., 136, 4942-4961.
Taylor, K. E., R. J. Stouffer, and G. A. Meehl, 2012: An overview of CMIP5 and the experiment design. Bull. Amer. Meteor. Soc., 93, 485-498.
Tebaldi, C., R. L. Smith, D. Nychka, and L. O. Mearns, 2005: Quantifying uncertainty in projections of regional climate change: A Bayesian approach to the analysis of multimodel ensembles. J. Climate, 18, 1524-1540.
Timm, O., and H. F. Diaz, 2009: Synoptic-statistical approach to regional downscaling of IPCC twenty-first-century climate projections: Seasonal rainfall over the Hawaiian Islands. J. Climate, 22, 4261-4280.
von Storch, H., E. Zorita, and U. Cubasch, 1993: Downscaling of global climate change estimates to regional scales: An application to Iberian rainfall in wintertime. J. Climate, 6, 1161-1171.
Watterson, I. G., and P. H. Whetton, 2011: Distributions of decadal means of temperature and precipitation change under global warming. J. Geophys. Res., 116, D07101.
Watterson, I. G., J. Bathols, and C. Heady, 2014: What influences the skill of climate models over the continents? Bull. Amer. Meteor. Soc., 95, 689-700.
Whetton, P., I. Macadam, J. Bathols, and J. O'Grady, 2007: Assessment of the use of current climate patterns to evaluate regional enhanced greenhouse response patterns of climate models. Geophys. Res. Lett., 34, L14701.
Zhou, T.-J., and R. C. Yu, 2006: Twentieth-century surface air temperature over China and the globe simulated by coupled climate models. J. Climate, 19.
|