-
Bulk microphysics parameterizations (BMPs) seek to simplify the representation of microphysical processes and their associated HSDs and hydrometeor types with a minimal number of prognostic quantities/parameters. Kessler (1969) pioneered the development of BMPs for warm rain processes by partitioning the whole liquid drop population into small cloud droplets and large raindrops. He represented the initial formation of embryonic raindrops through the so-called autoconversion process, which he further parameterized as a linear function of cloud liquid water content. Raindrops follow the Marshall–Palmer negative exponential size distribution and grow by collecting cloud droplets through continuous accretion. The Kessler BMP is a simple one-moment scheme for warm rain processes that only predicts liquid water mixing ratio. Subsequent studies have applied similar ideas to develop BMPs for ice processes (Wisner et al., 1972; Cotton et al., 1982; Lin et al., 1983; Rutledge and Hobbs, 1984). Early ice BMPs assume that all hydrometeors are of spherical shape, and these BMPs simply partition the complex solid hydrometeor types into ice and snow (e.g., Lin et al., 1983).
The early one-moment BMPs have been extended to predict two moments (the zeroth moment for number concentration in addition to water mixing ratio) since the late 1980s and early 1990s (e.g., Hu and He, 1987; Murakami, 1990; Ferrier, 1994; Meyers et al., 1997; Reisner et al., 1998; Khairoutdinov and Kogan, 2000; Seifert and Beheng, 2001; Lou et al., 2003; Chen and Liu, 2004; Milbrandt and Yau, 2005a; Morrison et al., 2005; Morrison and Grabowski, 2007; Phillips et al., 2007; Lim and Hong, 2010; Thompson and Eidhammer, 2014). Motivation for the development of two-moment BMPs stemmed from scientific interest in aerosol indirect effects on climate and climate change (Twomey, 1977; Albrecht, 1989), together with the recognition of obvious shortcomings in one-moment BMPs and growing computational power. The prediction of both number concentration and water mixing ratio permits effective radius and other cloud properties to be represented more realistically, allowing for better consideration of aerosol–cloud interactions than one-moment BMPs that only predict water mixing ratio. Such two-moment BMPs have been increasingly used in CRMs and LES models (e.g., Ferrier, 1994; Meyers et al., 1997; Seifert and Beheng, 2001; Morrison et al., 2005) and GCMs (e.g., Ghan et al., 1997; Lohmann et al., 1999; Ming et al., 2007; Morrison and Gettelman, 2008). Although two-moment BMPs still dominate as of now, the development and use of three-moment BMPs have been under way since 2005 and are a focus of this paper (see section 3.2 for detailed discussion). Considering more moments in BMPs improves process representation in principle, but it also requires representing more processes/quantities and thus increases the computational cost. Take liquid-phase clouds as an example.
According to the idea of partitioning the total drop population into cloud droplets and raindrops, four different conversion mechanisms are involved in the collection equation: (1) autoconversion, the formation of embryonic raindrops by collection of cloud droplets among themselves; (2) accretion, the process of raindrops collecting cloud droplets; (3) self-collection of cloud droplets, whereby the resultant droplets remain in the category of cloud droplets; and (4) self-collection of raindrops, whereby the resultant drops remain in the category of raindrops. Only autoconversion and accretion need to be considered in predicting changes of water mixing ratios in one-moment BMPs, since self-collection of cloud droplets or raindrops does not affect the corresponding water contents. However, self-collection changes the number concentration and spectral shape of the corresponding hydrometeor populations and thus needs to be seriously considered in multi-moment BMPs (Beheng, 1994).
Another extension of BMP development is concerned with the treatment of hydrometeor types. To represent the multiple hydrometeor types within the constraint of computational resources, most existing BMPs use a limited number of hydrometeor categories defined by a set of prescribed physical hydrometeor characteristics (e.g., morphology, bulk density, and terminal velocity) that broadly describe a given “typical” hydrometeor type. Most liquid phase BMPs have followed Kessler (1969) to partition the whole liquid phase into two categories of cloud droplets and raindrops, probably with the exception reported in Saleeby et al. (2015) and related studies whereby a second mode of large cloud droplets with diameters from 40 μm to 80 μm is added. The main complexity of representing hydrometeor types lies with solid-phase hydrometeors, with the trend of an ever-increasing number of ice categories being considered in BMPs. For example, the simplest BMPs assume two solid categories of ice and snow analogous to liquid clouds. A rimed ice category (“graupel” or “hail”) was added in Lin et al. (1983) and Rutledge and Hobbs (1984). Other BMPs considered graupel and hail as separate categories to allow for both slower- and faster-falling rimed ice particles (Ferrier, 1994; Milbrandt and Yau, 2005b; Mansell et al., 2010). Walko et al. (1995) and Meyers et al. (1997) further added separate snow and aggregates categories. Straka and Mansell (2005) developed a BMP with 10 different categories of solid particles: column ice crystal, plate ice crystal, rimed cloud ice, snow (ice crystal aggregates), three categories of graupel with different densities and intercepts, frozen drops, small hail, and large hail.
Regardless of their specific differences, BMPs have mostly followed a paradigm of two pillars. The first pillar assumes some analytical HSD function, such as the three-parameter Gamma function, to close the set of equations for predicting HSD moments. The second pillar assumes that the diverse variety of hydrometeors can be grouped into several hydrometeor types characterized by a set of relationships (e.g., power laws) between hydrometeor properties (e.g., mass, area, mass density, and terminal velocity) and hydrometeor maximum radius. Note that maximum dimension has been commonly used to describe non-spherical hydrometeors in the literature, but maximum radius is used throughout this paper to avoid confusion with the fractal dimension discussed in section 3.5. Figure 2 presents a BMP complexity diagram based on this two-pillar paradigm to illustrate the different levels of BMP complexity. In this diagram, the x-axis and y-axis denote the number of HSD moments and the number of considered hydrometeor categories, respectively. Accordingly, we recommend a BMP notation system “MiCj” to classify a specific BMP, wherein the subscripts “i” and “j” denote the number of prognostic moments and the number of hydrometeor categories, respectively. Use of this classification system facilitates a first-order differentiation among the ever-increasing number of BMPs in the literature. Table 1 provides some examples of BMPs available in the WRF4.1.2 model that are coupled with the radiative scheme to provide key microphysical properties (e.g., liquid water path and effective radius) for calculating cloud radiative properties.
Table 1. Examples of the bulk microphysics parameterizations in WRF4.1.2

BMP Scheme              Reference
Thompson                Thompson et al. (2008)
Thompson aerosol aware  Thompson and Eidhammer (2014)
WSM6                    Lim and Hong (2010)
WDM6                    Lim and Hong (2010)
NSSL double moment      Mansell et al. (2010)
P3 single moment        Morrison and Milbrandt (2015)
P3 double moment        Morrison and Milbrandt (2015)
Figure 2. Classification diagram to illustrate the complexities of bulk microphysics schemes. A bulk scheme is characterized by the number of distribution moments (x-axis) and the number of hydrometeor categories (y-axis). The notation MxCy is used in representing the microphysics schemes. The scheme complexity increases with either or both numbers, with the extremes approaching the explicit microphysics and particle-resolved DNS discussed in sections 4 and 5, respectively. Note that the classifications of the exemplary schemes here should be viewed as approximate because not all the schemes consider the same hydrometeor categories with the same number of moments.
Three extreme “BMP” scenarios are worth mentioning. First, when the number of prognostic HSD moments is sufficiently high, the moment-based BMPs become essentially equivalent to explicit bin microphysics according to the equivalence between the HSD moment and bin discretization:
$ {M}_{p}={\int }_{0}^{\infty }{r}^{p}n(r)dr $  (1)

where n(r) denotes the HSD; r is the hydrometeor radius; and $ {M}_{p} $ is the p-th HSD moment. This extreme BMP is denoted by i = ∞ in the BMP classification system. Note that there exist mathematical situations where size distributions are not uniquely determined by their moments, i.e., the so-called dissimilar iso-momental distributions (White, 1990); however, they are less of a concern in reality (McGraw et al., 1998). Second, there has been a recently emerging fundamental change in representing hydrometeor types that moves from the discrete classification of different hydrometeor types to the representation of continuous hydrometeor properties (hereafter “continuous” representation for convenience; see section 3.4 for more discussion). Likewise, this continuous representation of hydrometeor types is denoted by j = ∞ in the BMP classification system. Evidently, the ultimate representation of cloud microphysics is a combination of explicit microphysics with “continuous” hydrometeor types, or explicit “continuous” microphysics for short. These extreme scenarios of explicit microphysics are embodied in the schemes of bin microphysics and particle-based microphysics detailed in section 4, and further in the particle-resolved direct numerical simulation (PR-DNS) models detailed in section 5. In addition, discussions on plausible theoretical foundations buttressing BMPs are deferred to section 3.5.
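The equivalence between HSD moments and bin discretization can be checked numerically. The sketch below assumes a Gamma HSD with illustrative parameters (not taken from any particular scheme) and compares the analytic moments with their bin-summed counterparts:

```python
import numpy as np
from math import gamma

# Minimal sketch: moments of an assumed Gamma HSD n(r) = N0 r^mu exp(-lam r),
# computed both analytically and from a "bin" discretization of the HSD.
# All parameter values are illustrative only.
N0, mu, lam = 1.0e8, 2.0, 5.0e4          # lam in 1/m; other units arbitrary

def analytic_moment(p):
    # Closed form: M_p = N0 Gamma(mu + p + 1) / lam**(mu + p + 1)
    return N0 * gamma(mu + p + 1) / lam ** (mu + p + 1)

r = np.linspace(0.0, 2e-3, 400_001)      # radius bin edges (m)
n = N0 * r ** mu * np.exp(-lam * r)      # HSD evaluated on the bin grid

def binned_moment(p):
    # Trapezoidal quadrature over bins approximates the integral defining M_p
    f = r ** p * n
    return float(np.sum((f[1:] + f[:-1])) * 0.5 * (r[1] - r[0]))
```

With a fine enough bin grid the two agree to within quadrature error for the low-order moments (p = 0, 3, 6) used by BMPs.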
-
Similar to Milbrandt and Yau (2005a, b), the prognostic equation for the pth HSD moment of the jth hydrometeor category can be generically written as
$ \frac{\partial {M}_{p}}{\partial t}=-\nabla \cdot (\mathbf{u}{M}_{p})+{\left(\frac{\partial {M}_{p}}{\partial t}\right)}_{\mathrm{turb}}+\frac{\partial ({V}_{p}{M}_{p})}{\partial z}+{S}_{p} $  (2)

where u is the 3D air velocity vector (the category subscript j is omitted for brevity), and the terms on the right-hand side of Eq. (2) represent, respectively, advection/divergence, turbulent mixing representing the subgrid turbulent transport (note its difference from the turbulent entrainment-mixing processes discussed in section 3.3), sedimentation, and microphysical sources (or sinks when Sp is negative). The source term is computed as a sum of the tendencies resulting from individual microphysical processes. The p-th-moment-weighted terminal velocity is given by:

$ {V}_{p}=\frac{{\int }_{0}^{\infty }v(r){r}^{p}n(r)dr}{{\int }_{0}^{\infty }{r}^{p}n(r)dr} $  (3)
For three-moment BMPs, the three common prognostic moments are number concentration (p = 0), water mixing ratio (p = 3), and radar reflectivity (p = 6), and the corresponding equations are often closed by assuming that the HSD follows a three-parameter analytical form (e.g., Gamma distribution n(r) = N0rμexp(−λr), Weibull distribution, or lognormal distribution) and relating the prognostic moments to the HSD parameters (see Appendix for the different forms of analytical HSDs and the corresponding relationships). For two-moment BMPs, the spectral shape parameter, such as μ in the Gamma HSD, is assumed to be a prespecified constant. The addition of M6 allows in principle for explicit consideration of the effect of HSD spectral shape, which is neglected in both one-moment and two-moment BMPs. In practice, parameterizing the effects of HSD spectral shape on individual microphysical processes in BMPs is not trivial, and existing studies are limited in scope, some of which are highlighted next.
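The closure step for a Gamma HSD can be made concrete: the dimensionless ratio M3²/(M0 M6) depends only on μ, after which λ and N0 follow in closed form. The sketch below is ours (function name, bisection bounds, and tolerance are illustrative, not from any scheme):

```python
from math import gamma

# Invert (M0, M3, M6) -> (N0, mu, lam) for the Gamma HSD n(r) = N0 r^mu exp(-lam r).
def gamma_params_from_moments(M0, M3, M6):
    target = M3 ** 2 / (M0 * M6)
    # M3^2/(M0 M6) = (mu+1)(mu+2)(mu+3) / [(mu+4)(mu+5)(mu+6)], increasing in mu
    def ratio(mu):
        return ((mu + 1) * (mu + 2) * (mu + 3)) / ((mu + 4) * (mu + 5) * (mu + 6))
    lo, hi = -0.99, 100.0
    for _ in range(200):                 # simple bisection for the shape parameter
        mid = 0.5 * (lo + hi)
        if ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    mu = 0.5 * (lo + hi)
    lam = ((mu + 1) * (mu + 2) * (mu + 3) * M0 / M3) ** (1.0 / 3.0)
    N0 = M0 * lam ** (mu + 1) / gamma(mu + 1)
    return N0, mu, lam
```

A round trip (analytic moments of a known Gamma HSD fed back through the inversion) recovers the original parameters, which is the essence of the three-moment closure.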
-
Raindrop sedimentation is arguably one of the first topics on which the effects of spectral shape were extensively investigated, in efforts to address the issue of size sorting that results from large drops falling faster than small ones. One-moment BMPs obviously cannot capture size sorting because they sediment all quantities with a single mass-weighted fall speed. Wacker and Seifert (2001) mathematically analyzed the equations of raindrop sedimentation resulting from the treatments of a one-moment BMP, a two-moment BMP, and a reference bin microphysics model, and investigated the resultant differences in vertical profiles of rainwater and surface precipitation. They showed that the two-moment BMP, while producing a solution closer to the reference bin microphysics than the one-moment BMP, overestimated sedimentation of large raindrops due to the assumed inverse-exponential HSD. They also found that the treatment of moments transformed the original linear equation of bin microphysics into quasi-linear moment equations, resulting in spurious shock waves. Using a similar approach, Milbrandt and Yau (2005a) showed that the two-moment BMP could mimic the effects of size sorting when distinct number-weighted and mass-weighted mean fall speeds are used to sediment M0 and M3, respectively. However, when the HSD was constrained to an inverse-exponential function, the differential sedimentation led to excessive size sorting in the two-moment BMP. They also examined the limitation of an inverse-exponential HSD by using a three-parameter Gamma HSD function, by comparing different fixed values of μ, and by diagnosing μ through an empirical relationship derived from a bin microphysics model. Milbrandt and Yau (2005b) further extended the study to include a prognostic equation for M6 (radar reflectivity), and thus μ, and developed the first three-moment BMP.
Their results demonstrated that by allowing μ to vary, either diagnostically or prognostically, the problems of excessive size sorting associated with two-moment BMPs became manageable, because the difference between the number- and mass-weighted fall speeds increased with broadening HSDs. It is evident that this important feedback between the spectral shape and different moment-weighted fall speeds cannot be adequately represented with one- or two-moment BMPs.
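The link described above, where a broader HSD widens the gap between number- and mass-weighted fall speeds, can be sketched for a Gamma HSD with a power-law fall speed; the coefficients below are rough raindrop-like values chosen for illustration:

```python
from math import gamma

# For a Gamma HSD n(r) = N0 r^mu exp(-lam r) and power-law fall speed
# v(r) = a r^b, the p-th-moment-weighted fall speed has a closed form.
# a, b, and lam are illustrative, not taken from any scheme.
a, b = 842.0, 0.8          # rough raindrop-like power law (SI units)
lam = 2.0e3                # slope parameter (1/m)

def Vp(p, mu):
    # V_p = a Gamma(mu + p + b + 1) / (lam**b Gamma(mu + p + 1))
    return a * gamma(mu + p + b + 1) / (lam ** b * gamma(mu + p + 1))

# Ratio of mass-weighted (p=3) to number-weighted (p=0) fall speed:
sorting_strength = {mu: Vp(3, mu) / Vp(0, mu) for mu in (0.0, 2.0, 6.0)}
# Larger mu (narrower HSD) -> ratio closer to 1 -> weaker differential
# sedimentation, consistent with the size-sorting behavior discussed above.
```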
Although most BMPs assume that cloud droplets do not fall, some recent modeling studies revealed that cloud droplet sedimentation plays a crucial role in determining cloud properties as well, through affecting cloud-top entrainment and droplet evaporation. For example, by simulating marine stratocumulus clouds with a bin-microphysics LES, Ackerman et al. (2004) found that reductions in droplet sedimentation and precipitation due to increased droplet concentration enhanced cloud-top entrainment and suppressed surface precipitation, and that the response of cloud water to increased droplet concentrations was determined by competition between moistening from suppressed surface precipitation and drying from increased entrainment of overlying air. When the overlying air was dry, the response of cloud water to aerosol-induced suppression of precipitation was dominated by enhanced entrainment of overlying dry air, reducing cloud water as droplet concentration increases. A complete suppression of sedimentation accelerated entrainment, drying the boundary layer and thinning the cloud layer as the cloud base rises faster than the cloud top. They also pointed out that the traditional concept of “non-precipitating clouds” could be misleading, since it was the change in the precipitation flux from droplet sedimentation within clouds that modulates the drying of the boundary layer by cloud-top entrainment. Ackerman et al. (2009) even found that liquid water path responded more strongly to cloud water sedimentation than to drizzle in the models. Bretherton et al. (2007) examined the effect of cloud droplet sedimentation on entrainment rate and liquid water path of a nocturnal non-drizzling stratocumulus layer using an LES with bulk microphysics. In agreement with Ackerman et al. (2004), they found that consideration of droplet sedimentation decreased entrainment rate and increased liquid water path.
However, instead of attributing the sedimentation–entrainment link to boundary-layer turbulence as Ackerman et al. did, they suggested that droplet sedimentation reduced entrainment by removing liquid water from the entrainment zone, thus inhibiting two mechanisms that promote the sinking of entrained air into the cloud layer: entrainment-induced evaporative cooling and longwave radiative cooling. A sensitivity study showed that the radiative effect is less important than the reduced evaporation. Caldwell and Bretherton (2009) further found that the impact of droplet concentration on entrainment is primarily through droplet sedimentation feedback rather than through drizzle processes. These results suggest that the impacts of droplet sedimentation on cloud-top entrainment should be considered in climate model simulations of aerosol indirect effects.
Note that these studies on droplet sedimentation have assumed the Stokes terminal velocity and a lognormal cloud droplet size distribution with a fixed geometric standard deviation σg (or, equivalently, relative dispersion). The effect of a varying spectral shape of the cloud droplet size distribution through droplet sedimentation has not yet been explored, although some qualitative effect can be gleaned from the fact that σg has been used as a tunable parameter to prevent the model from over-entraining (too little droplet sedimentation). Together with the studies on raindrop sedimentation, these studies have demonstrated that adequate consideration of the HSD spectral shape effect is important for treating sedimentation of not only large raindrops but also small cloud droplets. Further extension to solid hydrometeors such as ice crystals merits investigation as well.
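A minimal sketch of how the assumed spectral shape enters cloud droplet sedimentation under the Stokes and lognormal assumptions mentioned above (the Stokes coefficient and radii are illustrative order-of-magnitude values):

```python
from math import exp, log

# Stokes fall speed v(r) = k r^2 and a lognormal HSD with geometric mean
# radius rg and geometric standard deviation sigma_g (values illustrative).
k = 1.19e8     # approximate Stokes coefficient for cloud droplets (m^-1 s^-1)
rg = 10e-6     # geometric mean radius (m)

def mass_weighted_fall_speed(sigma_g):
    # Lognormal moments obey <r^p> = rg^p exp(p^2 ln^2(sigma_g) / 2), so the
    # mass-weighted Stokes speed is V = k <r^5>/<r^3> = k rg^2 exp(8 ln^2 sigma_g)
    return k * rg ** 2 * exp(8.0 * log(sigma_g) ** 2)

# Broadening the HSD at fixed rg increases the sedimentation speed of cloud
# water, which is why a tuned sigma_g can modulate simulated entrainment.
```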
-
Autoconversion is another important process in which spectral shape has been demonstrated to play a critical role. Briefly, all the autoconversion parameterizations that have been developed so far can be generically written as a two-part function given by:
$ R={R}_{0}T $  (4)

where R is the autoconversion rate; R0 represents the conversion rate after the onset of the autoconversion process and is called the rate function; and T represents the threshold function describing the threshold behavior of the autoconversion process. The rate function R0 has been the primary focus of most studies of autoconversion over the last few decades (e.g., Kessler, 1969; Manton and Cotton, 1977; Liou and Ou, 1989; Baker et al., 1993; Liu and Daum, 2004).
The threshold function T, however, has received less attention and is thus focused on here. Liu et al. (2006a) pointed out that according to how the threshold function is specified, existing autoconversion parameterizations can be grouped into three major types: Kessler-type, Sundqvist-type, and Berry-type, named after the key original researchers. Briefly, Kessler (1969) assumed that the autoconversion process exhibited an abrupt threshold behavior described by:
$ T=H(\mathrm{LWC}-{\mathrm{LWC}}_{c}) $  (5a)

where the Heaviside step function H(LWC − LWCc) indicates that no autoconversion occurs when the liquid water content (LWC) is less than the threshold value LWCc. Later Kessler-type parameterizations (Manton and Cotton, 1977; Liou and Ou, 1989; Baker, 1993; Liu and Daum, 2004) replace LWC with some measure of droplet size such that:

$ T=H({r}_{m}-{r}_{c}) $  (5b)
where rm and rc denote the driving and critical radius, respectively. Although it has been agreed that the threshold process in the Kessler-type parameterizations is driven by some mean radius, the exact rm differs among the different Kessler-type parameterizations. For example, rm represents the mean volume radius (r3) in the parameterization of Manton and Cotton (1977), the mean radius of the fourth moment (r4) in the parameterizations of Liou and Ou (1989), Baker (1993), and Boucher et al. (1995), and the mean radius of the sixth moment (r6) in Liu and Daum (2004). The essential discrepancy among the different driving radii lies in their role in representing the effect of HSD spectral shape; for monodisperse HSDs, all the different mean radii are equal. Furthermore, the critical radius rc had been treated as an empirical parameter tuned in modeling studies until Liu et al. (2004), in which a theoretical expression was derived to relate rc to LWC and cloud droplet concentration. To overcome the deficiency of all-or-nothing autoconversion with the Kessler-type parameterization, Sundqvist (1978) proposed another ad hoc threshold function given by

$ T=1-\exp \left[-{\left(\mathrm{LWC}/{\mathrm{LWC}}_{c}\right)}^{2}\right] $  (5c)
Del Genio et al. (1996) introduced a slightly different threshold function
Equation (5d) exhibits a cloud-to-rain transition sharper than Eq. (5c), but still smoother than the Heaviside function. Liu et al. (2006b) presented a more general Sundqvist-type equation that covers more threshold behaviors and explicitly considers droplet concentration as well:
It can be readily shown that Eq. (5e) covers the three types of autoconversion threshold functions by varying the empirical parameter ν ≥ 0: it approaches the Kessler-type and Berry-type when ν approaches $\infty$ and 0, respectively. Berry-type refers to the BMPs that do not consider the threshold behavior or implicitly assume T = 1 (Berry, 1967; Beheng, 1994; Khairoutdinov and Kogan, 2000). Nevertheless, these discrepant expressions lack clear physics, thus precluding a sound choice among them. Furthermore, they are only for the conversion rate of liquid water content (third moment). These deficiencies were eliminated in Liu et al. (2006a, 2007), in which a general autoconversion threshold function is theoretically derived such that:

where Γ and $\gamma$ represent the complete and incomplete Gamma functions, respectively, and xc is the ratio of the critical to mean masses of the droplet population. The parameter q is the spectral shape parameter in the Weibull droplet size distribution and has a unique relationship with the relative dispersion εc of the cloud droplet size distribution given by:

$ {\varepsilon }_{c}={\left[\frac{\Gamma (1+2/q)}{{\Gamma }^{2}(1+1/q)}-1\right]}^{1/2} $

The parameter p denotes the order of the HSD moment, and Eq. (6) describes the threshold behavior of droplet concentration, liquid water content, and radar reflectivity when p = 0, 3, and 6, respectively. Note that the use of εc, instead of an HSD shape parameter such as q in the Weibull HSD, has the advantage of encompassing the representation of HSD spectral shape embodied in the commonly used analytical functions (e.g., Gamma, Weibull, and lognormal distributions; see Appendix for details) and different processes. Following these ideas, Xie and Liu (2011) derived the formulation by use of the four-parameter modified Gamma function that covers the three-parameter Gamma and Weibull HSDs as special cases.
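The contrast between the abrupt Kessler-type cutoff and the smooth Sundqvist-type behavior discussed above can be sketched directly. The Sundqvist form is written here as the commonly cited 1 − exp[−(LWC/LWCc)²], and the threshold value is illustrative:

```python
from math import exp

# Sketch of the two threshold behaviors: the Kessler-type Heaviside cutoff
# and the smooth Sundqvist-type form. LWCc is an illustrative threshold.
LWCc = 0.5e-3    # threshold liquid water content (kg m^-3)

def T_kessler(lwc):
    return 1.0 if lwc >= LWCc else 0.0          # T = H(LWC - LWCc)

def T_sundqvist(lwc):
    return 1.0 - exp(-((lwc / LWCc) ** 2))      # T = 1 - exp[-(LWC/LWCc)^2]

# Kessler switches on all at once at LWCc; Sundqvist rises smoothly through
# it, illustrating the "steepness" that the relative dispersion controls in
# the theoretical formulation.
```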
Inspection of Eq. (6) reveals that the autoconversion threshold behavior is in fact determined by two dimensionless numbers: the critical mass ratio xc and εc (Fig. 3a). The critical mass ratio xc determines where the “abrupt” conversion occurs, and εc determines the conversion steepness, or threshold type. Most importantly, Eq. (6) reveals that the different types of autoconversion threshold functions essentially reflect the variation of εc: as εc increases from 0 to $\infty$, the autoconversion threshold function changes from the Kessler-type through the Sundqvist-type to the Berry-type. The theoretical autoconversion parameterization eliminates the need for tunable empirical parameters (e.g., rc and ν) at microphysical scales. Different versions of the theoretical threshold formulation have been applied to climate modeling studies (Rotstayn and Liu, 2009; Xie et al., 2018), CRM studies (Guo et al., 2008), the threshold radar reflectivity separating precipitating from non-precipitating clouds (Liu et al., 2008a), the dependence of rain initiation height on aerosol properties and updraft velocity (Chen et al., 2018a), and fog investigations (Niu et al., 2010; Lu et al., 2013a). It is noteworthy that the current expression for the critical radius (thus xc) does not explicitly account for turbulence effects and εc; further research is needed to address these issues. Also worth mentioning is that the autoconversion process has conventionally been viewed as a practical convenience for representing the droplet collection process in models; the theoretical formulation, however, reveals that it is a real physical threshold process that links droplet condensation, evaporation, and collection processes.

Figure 3. Illustration of the effects of relative dispersion (ε). The left panel, adapted from Liu et al. (2008b), shows the dispersion-induced error in cloud albedo (left y-axis) and cloud radiative forcing (right y-axis). The right panel, adapted from Liu et al. (2006a), shows that the different ad hoc types of autoconversion parameterization reflect the variation of relative dispersion.
-
The impact of HSD spectral shape on cloud radiative properties has been investigated primarily through its effect on the cloud droplet effective radius (re), defined as the ratio of the third to the second moment of the cloud droplet size distribution (Hansen and Travis, 1974; Slingo, 1989; Liu and Daum, 2000). Like the autoconversion parameterization, early parameterizations expressed effective radius as either a linear or a cubic-root function of LWC without considering the dependence on droplet concentration and εc (Stephens, 1978; Fouquart et al., 1989). A serious deficiency of such a “one-moment” re parameterization is its inability to represent the first aerosol indirect effect. Later, Bower and Choularton (1992) and Bower et al. (1994) proposed an expression that relates re to the ratio of LWC to droplet number concentration for a monodisperse cloud droplet size distribution, or εc = 0. However, monodisperse droplet size distributions seldom occur in real clouds; Martin et al. (1994) empirically estimated εc for continental and marine clouds, and a similar idea has been used to distinguish the difference in εc between continental and marine clouds in climate models (Ghan et al., 1997; Lohmann et al., 1999; Rotstayn, 1999). However, such a two-moment re parameterization does not fully consider the influence of spectral shape. Subsequent studies (Liu and Hallett, 1997; Liu and Daum, 2000; Lu et al., 2007) have shown that re can be generally parameterized through the “1/3” power-law:
$ {r}_{e}={\beta }_{e}{\left(\frac{3\mathrm{LWC}}{4\pi {\rho }_{w}N}\right)}^{1/3} $

where N denotes the droplet number concentration, ${\rho }_{w}$ is the density of liquid water, and the dimensionless parameter βe is an increasing function of εc that has a unique correspondence with the commonly used Gamma, Weibull, and lognormal size distributions (see Appendix for a list of the expressions). A larger εc leads to a larger βe, which in turn leads to a larger re, a smaller cloud optical depth, a smaller cloud albedo, and a smaller cloud radiative forcing, other factors being the same. The bias induced by an incorrect representation of εc is called the dispersion bias by Liu et al. (2008b), who showed that an incorrect εc could lead to a bias in cloud radiative forcing comparable in magnitude to the climate forcing caused by doubled CO2 (Fig. 3b).
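The "1/3" power-law can be sketched numerically; the βe values below are illustrative stand-ins for narrow versus broad spectra, not values from any cited study:

```python
from math import pi

# The "1/3" power-law: re = beta_e * (3 LWC / (4 pi rho_w N))**(1/3)
RHO_W = 1000.0   # density of liquid water (kg m^-3)

def effective_radius(lwc, N, beta_e):
    return beta_e * (3.0 * lwc / (4.0 * pi * RHO_W * N)) ** (1.0 / 3.0)

lwc, N = 0.3e-3, 100.0e6     # LWC (kg m^-3) and droplet concentration (m^-3)
re_narrow = effective_radius(lwc, N, 1.05)   # small relative dispersion
re_broad = effective_radius(lwc, N, 1.25)    # larger relative dispersion
# At fixed LWC and N, a broader HSD (larger beta_e) gives a larger re and
# hence a smaller cloud optical depth and albedo.
```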
The significance of considering spectral shape is further reinforced by the fact that a perturbation in aerosol properties can concurrently alter cloud droplet concentration and εc, and the aerosol-induced dispersion effect acts to buffer the better-studied aerosol indirect effect through aerosol-induced change in droplet concentration (number effect hereafter) such that it negates the strong number effect in the aerosol-limited regime (Liu and Daum, 2002; Wood et al., 2002; Peng and Lohmann, 2003; Rotstayn and Liu, 2003, 2005, 2009; Yum and Hudson, 2005; Lu et al., 2007; Peng et al., 2007; Chen et al., 2012; Ching et al., 2012; Pandithurai et al., 2012; Xie et al., 2018; Wang et al., 2020), but enhances the weak number effect in the updraft-limited regime (Martins and Dias, 2009; Ma et al., 2010; Berg et al., 2011; Hudson et al., 2012; Chen et al., 2016a, 2018a; Xie et al., 2017). Igel et al. (2017) showed that εc significantly affects cloud fraction as well.
-
Although the importance of considering HSD spectral shape in BMPs has been increasingly recognized, representing it adequately in BMPs remains extremely challenging. A few studies have examined the dispersion effect in climate and weather models by use of empirical expressions derived from measurements, including the Australian CSIRO Mk3.0 (Rotstayn and Liu, 2003, 2005, 2009), the ECHAM (Peng and Lohmann, 2003), the Community Atmosphere Model (CAM) (Xie et al., 2017; Wang et al., 2020), the Institute of Atmospheric Physics’s atmospheric GCM (IAP AGCM) (Xie et al., 2018), and the Weather Research and Forecasting (WRF) model (Xie and Liu, 2011, 2015; Wang et al., 2013, 2018; Xie et al., 2013). Among the many findings from these modeling studies, two points are worth emphasizing. First, the discrepancy among the different empirical expressions is large enough to cause significant differences in simulated clouds, precipitation, and radiative forcing. Second, these empirical expressions are based on limited sets of observational data collected from certain types of clouds; their generalization is thus questionable.
To overcome such deficiencies, there have been two rare attempts at formulating theoretical expressions that can be used to diagnose relative dispersion in two-moment BMPs (Liu et al., 2006c; Liu and Li, 2015). Briefly, both formulations start with the approximate equation for regular condensational growth given by (Rogers and Yau, 1989; Pruppacher and Klett, 1997):
$ \frac{dr}{dt}=\frac{GS}{r} $

where r is the droplet radius, S is the fractional water vapor supersaturation, and G is a function of the air temperature and pressure. The Liu et al. (2006c) formulation is based on the relationship
along with a steady-state equation for diagnosing the mean radius r1. The Liu-Li formulation first diagnoses the mean radius from the condensation-induced change of LWC and supersaturation S based on
Relative dispersion ${\varepsilon }_{c}$ is then diagnosed from the relationship between r1 and the mean volume radius r3 for the Gamma droplet size distribution. Wang et al. (2020) compared the impacts of the two diagnostic formulations and other empirical representations on climate model simulations and found significant differences.
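A simple consequence of the regular condensational growth equation quoted above (dr/dt = GS/r) is that r(t) = sqrt(r0² + 2GSt), so small droplets gain radius faster than large ones and the spectrum narrows. The sketch below uses illustrative order-of-magnitude values for G, S, and the initial radii:

```python
from math import sqrt

# Regular condensational growth narrows the droplet spectrum:
# r(t) = sqrt(r0^2 + 2 G S t). G, S, and radii are illustrative.
G = 1.0e-10   # m^2 s^-1, order-of-magnitude value for warm clouds
S = 0.005     # 0.5% fractional supersaturation

def grow(r0, t):
    return sqrt(r0 ** 2 + 2.0 * G * S * t)

def rel_dispersion(radii):
    m = sum(radii) / len(radii)
    var = sum((r - m) ** 2 for r in radii) / len(radii)
    return sqrt(var) / m

r0 = [4e-6, 6e-6, 8e-6, 10e-6]                        # initial radii (m)
eps0 = rel_dispersion(r0)
eps1 = rel_dispersion([grow(r, 100.0) for r in r0])   # after 100 s of growth
# eps1 < eps0: condensation alone reduces relative dispersion, part of why
# observed broad spectra motivate mechanisms beyond regular condensation.
```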
It is worth mentioning that several studies have tried to obtain empirical relationships between the parameters of raindrop size distributions by analyzing measurements (Ulbrich, 1983; Zhang et al., 2001; Brandes et al., 2003; Cao and Zhang, 2009) or model simulations with bin microphysics (Milbrandt and Yau, 2005a; Seifert, 2008).
-
Although the above-mentioned diagnostic expressions are valuable for improving two-moment BMPs with fixed spectral shape, they are not universal and may not generalize beyond the conditions or assumptions on which they are based. To some extent, this deficiency manifests itself in the diversity of the empirical expressions for ${\varepsilon }_{c}$. To allow for a more general consideration of HSD spectral shape, increasing efforts have been devoted to developing prognostic three-moment BMPs, in which an additional moment (usually M6, which is proportional to radar reflectivity for spherical particles of constant density) is added to predict the spectral shape parameter (Milbrandt and Yau, 2005b; Szyrmer et al., 2005; Shipway and Hill, 2012; Dawson et al., 2014; Loftus et al., 2014; Naumann and Seifert, 2016). The Milbrandt and Yau (2005b) scheme is a three-moment scheme with six hydrometeor categories: cloud, rain, ice, snow, graupel, and hail. All hydrometeors but ice crystals are assumed to have the power-law mass–radius relationship with a power exponent of 3; ice crystals are assumed to be bullet rosettes following Ferrier (1994). The primary process benefiting from the prognostic shape parameter is hydrometeor sedimentation, mitigating the excessive size sorting of two-moment schemes with a prescribed constant value of μ. Treatment of other microphysical processes largely follows previous studies (e.g., Ferrier, 1994) to balance M6 budgets. Naumann and Seifert (2016) developed a three-moment BMP for warm rain processes, including evaporation and collection processes. Paukert et al. (2019) designed a three-moment BMP for rain microphysics by adding prognostic M6 to replace the empirical shape–slope relationship based on Cao et al. (2008) in the so-called P3 scheme (see section 3.4 for more discussion of the P3 scheme). A unique feature of this work, compared to existing three-moment schemes, is the explicit consideration of the influences of raindrop self-collection and collisional breakup on HSDs. Chen and Tsai (2016) developed a three-moment BMP for depositional growth of cloud ice crystals. The crystal size distribution was described with a three-parameter Gamma function; the geometric shape of ice crystals was represented using the volume-weighted aspect ratio.
It was found that failure to consider the ice crystal shape led to a 45% underestimation of mass growth, while using only two moments to describe the crystal size distribution led to a 37% underestimation in mass and a 28% underestimation in the bulk aspect ratio of the ice crystals. The proposed scheme was able to capture the shape memory effect and the gradual adaptation of ice crystal aspect ratios to a new growth habit regime. Note that the treatment of cloud droplets in most existing so-called three-moment BMPs is actually two-moment, for example, assuming a Gamma size distribution with a fixed value of μ = 3 (Cohard and Pinty, 2000). Clark (1974) might be the first attempt at developing a three-moment BMP for the cloud droplet population under the assumption of a Gamma or lognormal droplet size distribution, but he only tested the BMP in highly simplified one- and two-dimensional cloud models. Deng et al. (2018) recently took up a similar task and tested their three-moment BMP with the WRF model. These rare three-moment BMPs for cloud droplets likely face challenges in representing both droplet concentration and εc in the updraft-limited regime, similar to the empirical or analytical expressions (Ghan et al., 2011; Chen et al., 2016a).
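The benefit of carrying M6 can be made concrete for a gamma HSD, $ n(D)=N_0 D^{\mu}e^{-\lambda D} $: every moment is $ M_k = N_0\Gamma(\mu+k+1)/\lambda^{\mu+k+1} $, so the dimensionless combination $ M_0M_6/M_3^2 $ depends on μ alone and can be inverted for the shape parameter. A minimal sketch (function names are illustrative, not taken from any of the cited schemes):

```python
import math

def moment_ratio(mu):
    """G(mu) = M0*M6/M3**2 for a gamma DSD n(D) = N0*D**mu*exp(-lam*D);
    N0 and lam cancel, leaving Gamma(mu+7)*Gamma(mu+1)/Gamma(mu+4)**2."""
    lg = math.lgamma
    return math.exp(lg(mu + 7.0) + lg(mu + 1.0) - 2.0 * lg(mu + 4.0))

def shape_from_moments(m0, m3, m6, lo=-0.99, hi=50.0):
    """Invert the monotonically decreasing map mu -> G(mu) by bisection."""
    target = m0 * m6 / m3**2
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if moment_ratio(mid) > target:
            lo = mid          # G too large -> mu must be larger
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, the moments of a gamma DSD with μ = 2 and λ = 1 (M0 = Γ(3), M3 = Γ(6), M6 = Γ(9)) are inverted back to μ = 2.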
It is noteworthy that similar multi-moment modeling efforts with assumed mathematical size distribution functions can be found in studies of aerosol dynamics (Williams, 1986). It would be valuable to compare the work in the two fields, especially in seeking to improve the representation of aerosol–cloud interactions (Wang et al., 2013).
-
The above-mentioned BMPs and investigations into spectral shape effects are based primarily on the adiabatic assumption, without (adequately) considering the effects of turbulence–microphysics interactions, including the turbulent entrainment-mixing processes and stochastic condensation discussed in this subsection.
In seeking explanations for the discrepancies between observed and predicted droplet size distributions and rain initiation, a few studies as far back as the 1970s proposed the concepts of various turbulent entrainment-mixing mechanisms and investigated their impacts on cloud droplet size distributions (Warner, 1973; Latham and Reed, 1977; Baker and Latham, 1979; Baker et al., 1980). Figure 4 schematically illustrates the general characteristics of the various processes in terms of the commonly used microphysical mixing diagram (Burnet and Brenguier, 2007). Briefly, in homogeneous entrainment-mixing, entrained dry air is assumed to mix quickly with cloudy air such that all droplets experience the same condition for evaporation, leading to decreased droplet sizes; the droplet number in the cloudy volume is unchanged, but the droplet number concentration decreases because the entrained air increases the cloud volume through dilution (positive correlation between the mean droplet size and droplet number concentration). In the extreme inhomogeneous entrainment-mixing process, turbulent mixing is so slow that some droplets can completely evaporate to saturate the entrained dry air before mixing occurs, leading to decreased droplet concentration but unchanged mean droplet sizes. Fewer droplets mean less competition for water vapor; therefore, the remaining droplets can grow bigger if the cloud parcel is lifted upward again after entrainment-mixing. This process is described as inhomogeneous mixing with subsequent ascent and leads to a negative correlation between the mean droplet size and droplet concentration. In real clouds, any scenario between these extremes can occur, depending on the coupled dynamical and microphysical conditions characterized by the dimensionless Damköhler number defined as:
Figure 4. Microphysical mixing diagram of mean volume radius vs. droplet concentration to illustrate major turbulent entrainment-mixing mechanisms and the corresponding microphysical relationships. See text for detailed explanation.
$ Da = \tau_{mix}/\tau_{r} $, with $ \tau_{mix} = \left(L^{2}/\varepsilon\right)^{1/3} $
where τmix, τr, ε, and L are the turbulent mixing time, microphysical time, turbulent dissipation rate, and entrained dry eddy size, respectively.
Several lines of progress have recently been made in unifying the quantification of the different entrainment-mixing processes and developing parameterizations to represent their microphysical effects. The first line of progress lies in finding microphysical measures that unify the various entrainment-mixing processes. In a modeling study, Morrison and Grabowski (2008) proposed an equation to estimate the effects of turbulent entrainment-mixing processes:
$ N/N_{0} = \left(q_{c}/q_{c0}\right)^{\alpha} $
where the subscript “0” denotes the variable after entrainment but before evaporation, and 0 ≤ α ≤ 1 is an empirical parameter determining the type of turbulent entrainment-mixing process (α = 0 for homogeneous entrainment-mixing and α = 1 for extreme inhomogeneous entrainment-mixing). In a series of studies (Lu et al., 2013a, b; Lu et al., 2014a), we further defined various microphysical measures called homogeneous mixing degrees, such that a larger homogeneous mixing degree indicates a stronger degree of homogeneous mixing.
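The limiting behaviors of the α relation are easy to check numerically; a minimal sketch (the function name is illustrative; q and N denote the cloud water mixing ratio and droplet number concentration):

```python
def droplet_number_after_mixing(n0, q0, q, alpha):
    """N = N0*(q/q0)**alpha: alpha = 0 keeps the droplet number unchanged
    (homogeneous mixing; droplets shrink), while alpha = 1 reduces number
    in proportion to mass (extreme inhomogeneous mixing; the mean droplet
    mass q/N, and hence the mean size, is conserved)."""
    return n0 * (q / q0) ** alpha
```

For instance, halving q with α = 1 halves N, so the mean droplet mass q/N is conserved, consistent with the unchanged mean size of extreme inhomogeneous mixing.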
Recognizing that Da requires knowledge of entrained eddy sizes, which are not unique in turbulent clouds and are difficult to determine, Lehmann et al. (2009) suggested using the transition length (L*) in the calculation of the turbulent mixing time; L* is the turbulent eddy size that corresponds to Da = 1 and is given by
$ L^{*} = \varepsilon^{1/2}\tau_{r}^{3/2} $
They showed that a smaller L* leads to stronger homogeneous mixing. Note that a similar transition length was actually introduced earlier (Kabanov et al., 1971; Baker et al., 1980), but Lehmann et al. (2009) were the first to examine it systematically. Lu et al. (2011) further introduced the dimensionless transition scale number (NL) given by:
$ N_{L} = L^{*}/\eta $, with $ \eta = \left(\nu^{3}/\varepsilon\right)^{1/4} $
where η is the Kolmogorov turbulence scale and ν is the kinematic viscosity. The simplification of the Damköhler number to the transition scale length and then to the transition scale number is the second line of progress toward developing a parameterization to represent all the turbulent entrainment-mixing mechanisms in atmospheric models.
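The progression from Da to L* to NL can be summarized in a few lines (a sketch assuming SI units; the default ν is a typical kinematic viscosity of air, an illustrative value):

```python
def tau_mix(L, eps):
    """Turbulent mixing time (s) of an eddy of size L: (L**2/eps)**(1/3)."""
    return (L**2 / eps) ** (1.0 / 3.0)

def damkoehler(L, eps, tau_r):
    """Da = tau_mix/tau_r; Da >> 1 favors inhomogeneous mixing."""
    return tau_mix(L, eps) / tau_r

def transition_length(eps, tau_r):
    """L* such that Da(L*) = 1, i.e. L* = eps**(1/2) * tau_r**(3/2)."""
    return eps**0.5 * tau_r**1.5

def transition_scale_number(eps, tau_r, nu=1.5e-5):
    """N_L = L*/eta, with Kolmogorov scale eta = (nu**3/eps)**(1/4)."""
    eta = (nu**3 / eps) ** 0.25
    return transition_length(eps, tau_r) / eta
```

By construction, evaluating Da at L = L* returns exactly 1, which is the defining property of the transition length.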
The final line of progress lies in building empirical relationships between the homogeneous mixing degree and the transition scale number that can be used to parameterize the microphysical influences of entrainment-mixing processes on cloud microphysical properties (Lu et al., 2013b, 2014a; Gao et al., 2018; Luo et al., 2020; Desai et al., 2021). Such a parameterization was recently implemented in the LES version of WRF-Solar, a WRF version tailored for solar irradiance forecasting (Endo et al., 2015; Jimenez et al., 2016), and examined for its influence on simulated cloud and radiative properties (Xu et al., 2022). There have also been a few modeling studies with different and more complicated treatments of entrainment-mixing processes (Jarecka et al., 2009, 2013; Hoffmann and Feingold, 2019).
Besides the framework of homogeneous/inhomogeneous mechanisms, other ideas based on vertical mixing have been proposed to explain the microphysical variations during entrainment-mixing processes, including entity-type entrainment-mixing (Telford, 1996) and vertical circulation mixing (Wang et al., 2009). However, it is not clear how to represent these vertical mixing ideas in atmospheric models, or how to reconcile them with the homogeneous/inhomogeneous mixing framework, despite some efforts (Yum et al., 2015; Yeom et al., 2017).
Another related topic is stochastic condensation, which considers the growth of a droplet population as a stochastic process and relates the cloud droplet size distribution to multiscale supersaturation fluctuations associated with turbulence. Stochastic condensation was pursued in the 1960s, especially by Chinese and former Soviet Union scientists (Gu, 1962; Zhou, 1963; Levin and Sedunov, 1966; Sedunov, 1974). These early studies often replaced the full growth equations with simplified versions of kinetic equations amenable to analytical analysis, assumed Gaussian fluctuations, and claimed that turbulence fluctuations lead to spectral broadening of droplet size distributions. In contrast, by numerically solving the growth equations under Gaussian fluctuating environments generated by Monte-Carlo simulations, Warner (1969) and Bartlett and Jonas (1972) predicted that turbulent fluctuations only slightly broaden droplet size distributions. They argued that supersaturation and updraft were so closely coupled to one another that a droplet experiencing a higher supersaturation and growing faster was likely in a stronger updraft and had a shorter time to grow. Manton (1979) demonstrated that turbulent mixing could break the link between supersaturation and updraft, leading to spectral broadening. Khvorostyanov and Curry (1999a, b) extended these early studies by presenting a more general set of stochastic kinetic equations and pointed out that the early low-frequency theories of stochastic condensation generally yielded droplet size distributions of the Gaussian type, while observations tended to follow positively skewed distributions. They derived a more general mean-field equation and showed that the Gamma distribution was the solution to their equation under certain assumptions in the low-frequency regime. Khvorostyanov and Curry (2008a, b, c) further extended their formulation to include ice particles.
By formulating stochastic condensation in terms of the stochastic Langevin equation and the Fokker–Planck equation with known supersaturation fluctuations, McGraw and Liu (2006) derived a special type of Weibull droplet size distribution.
Several points are noteworthy. First, although addressing the issue of spectral shape is a primary motivation for invoking different turbulent entrainment-mixing processes, previous studies have focused mostly on LWC and droplet concentration and thus only modify two-moment BMPs; incorporation of εc into the framework remains elusive (Korolev et al., 2016; Pinsky et al., 2016a, b; Luo et al., 2021). Second, it is not trivial to represent supersaturation fluctuations/variations in atmospheric models that commonly assume saturation adjustment. Finally, turbulent entrainment-mixing and stochastic condensation have been investigated largely in isolation, although entrainment-mixing and fluctuations are actually closely connected and act together on droplets over a range of scales (Su et al., 1998). Research is needed to consider entrainment-mixing processes and stochastic condensation together. Fully addressing these issues calls for particle-resolved DNS models with sufficiently large model domain sizes (see section 5 for more discussion on particle-resolved DNS).
-
In addition to choosing the optimal number of HSD moments, the other challenge in designing BMPs concerns the other BMP pillar: the diversity of the above-mentioned hydrometeor categories. The traditional way to handle this diversity is to introduce more hydrometeor categories into BMPs, as illustrated in Fig. 2. Although it seems natural, the ever-increasing number of hydrometeor categories is clearly limited by computational resources and by the additional burden of parameterizing the additional physical processes involved. Moreover, this traditional paradigm likely suffers from some fundamental difficulties. Take the simpler BMPs in GCMs as examples (e.g., Rasch and Kristjánsson, 1998; Morrison and Gettelman, 2008; Gettelman and Morrison, 2015). GCM BMPs often assume two ice categories: small “cloud ice” and large precipitating “snow”, with the effects of rimed particles (graupel and hail) being neglected. Conversion of mass between the cloud ice and snow categories is parameterized by analogy to warm bulk microphysics schemes. Ice-to-snow autoconversion is treated in an ad hoc way that varies among schemes, while accretion is formulated by assuming continuous collection with a gravitational collection kernel, neglecting the fall speed of cloud ice. One-moment BMPs (e.g., Rasch and Kristjánsson, 1998) represent ice autoconversion by converting mass in the cloud ice category to the snow category when it exceeds a threshold mass mixing ratio. Two-moment BMPs (e.g., Morrison and Gettelman, 2008; Gettelman and Morrison, 2015) represent ice autoconversion by converting both the mass and number of cloud ice particles larger than a critical size. As with liquid clouds, model simulations are highly sensitive to the value of the critical ice size (Zhang et al., 2013; Eidhammer et al., 2014).
However, unlike in warm clouds, ice-to-snow autoconversion may not represent any single physical process, because small ice particles can grow into snow through several pathways (e.g., deposition, aggregation, and riming). Furthermore, the physical properties of cloud ice and snow (density, shape, and terminal fall speed) vary widely. Thus, the separation of solid hydrometeors into cloud ice and snow as distinct categories, and the introduction of an autoconversion process between them, may not have as strong a physical basis as their liquid counterparts, and the abrupt (discrete) transitions between ice properties may be physically unrealistic. The situation could change for three-moment BMPs if a theoretical ice autoconversion threshold function could be derived, similar to the autoconversion process for warm rain discussed in section 3.2. However, no such work has been reported in the literature, and it may not be possible in view of the fundamental problems just discussed. Another deficiency in the conventional treatment of hydrometeor categories is that empirical hydrometeor mass–size, projected area–size, and fall speed–size relationships are often used, and these may not be self-consistent. This inconsistency likely causes unphysical results. Self-consistency of these relationships and quantities is clearly desirable because they are physically coupled in nature (Mitchell et al., 2011).
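For concreteness, the ad hoc one-moment treatment described above amounts to a simple threshold relaxation; a sketch with illustrative threshold and timescale values (not taken from any particular scheme):

```python
def ice_to_snow_autoconversion(qi, qi_crit=1.0e-4, tau=600.0):
    """One-moment style ice-to-snow autoconversion rate (kg/kg/s):
    cloud-ice mass in excess of a critical mixing ratio qi_crit is
    relaxed to snow over a fixed timescale tau. The hard threshold is
    precisely the ad hoc element criticized above: it corresponds to
    no single physical growth pathway."""
    return max(qi - qi_crit, 0.0) / tau
```

Below the threshold the rate is identically zero, which illustrates the abrupt (discrete) transition in ice properties discussed above.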
An entirely different paradigm has been emerging since the early 2000s that continuously tracks hydrometeor properties in time and space, instead of separating solid hydrometeors into different predefined categories. This emerging paradigm is hereafter called the “continuous paradigm”, in contrast to the traditional “discrete paradigm”, and key studies are briefly discussed here. The idea of the continuous paradigm originated with the bin ice microphysics of Hashino and Tripoli (2007). Harrington et al. (2013a, b) and Sulia et al. (2014) developed a bulk scheme that predicts particle habit evolution by including the crystal a- and c-axis mixing ratios as prognostic variables, thereby allowing the crystal axis ratio to be predicted from vapor depositional growth. Other schemes use a diagnostic approach to include variability in ice particle properties. Lin and Colle (2011) included separate categories for cloud and precipitating ice and diagnosed the degree of riming and the ice particle properties (mass–size and fall speed–size relationships) for precipitating ice as a function of the ratio of the riming rate to the sum of the riming and deposition growth rates. Such a diagnostic approach is computationally efficient because it does not require additional prognostic variables, but the particle properties are calculated locally and are not tracked in time and space. Morrison and Milbrandt (2015) generalized the approach and presented a method to predict several bulk physical properties of ice particles, eliminating the need for artificial conversion between ice categories and forming the basis for a conceptually new bulk microphysics scheme (the P3 scheme, for Predicted Particle Properties; see also Milbrandt and Morrison (2016)).
To represent the evolution of various physical properties in space and time, the P3 scheme includes a single solid-phase category with four prognostic mixing ratio variables: the total ice mass qi, the ice number Ni, the ice mass from rime growth qrim, and the bulk rime volume Brim. The scheme distinguishes between prognostic variables, which are tracked through all the important mechanisms of change and growth, including dynamical tendencies from advection and subgrid-scale mixing as well as microphysical tendencies (growth/decay processes and sedimentation), and predicted variables, which are derived from the prognostic variables and include the rime mass fraction, bulk density, and mean particle size. Eidhammer et al. (2017) implemented the P3 scheme in CAM5 by modifying the two-moment bulk scheme (Morrison and Gettelman, 2008; Gettelman and Morrison, 2015). The mass–size and projected area–size relationships vary with particle size following Erfani and Mitchell (2016) and Morrison and Milbrandt (2015). Different lookup tables were used for the integration over the HSD because of the size-dependent mass–size and area–size relationships. Riming effects on the physical properties of ice particles were neglected in Eidhammer et al. (2017). In a similar effort, Zhao et al. (2017) merged cloud ice and snow into a single prognostic variable, i.e., total ice, following Lin and Colle (2011). They incorporated the impacts of shape and riming on ice particle properties through an environment-dependent riming intensity. In this BMP, eight microphysical processes (autoconversion of cloud ice to snow, accretion of cloud ice by snow, accretion of cloud water by snow, the Bergeron process between cloud water and snow, melting of snow, fast freezing of rain, deposition, and sublimation of snow) are no longer needed or are consolidated.
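The split between prognostic and derived variables can be illustrated in a few lines (a sketch; per-kilogram-of-air units are assumed, and because diagnosing a mean particle size would additionally require a mass–size relationship, only the mean particle mass is shown):

```python
def p3_derived_properties(qi, ni, qrim, brim):
    """Diagnose derived quantities from P3-style prognostic variables:
    total ice mass qi (kg/kg), ice number ni (#/kg), rime mass qrim
    (kg/kg), and bulk rime volume brim (m^3/kg)."""
    frim = qrim / qi       # rime mass fraction (0 = unrimed, 1 = fully rimed)
    rho_rim = qrim / brim  # bulk rime density (kg/m^3)
    m_mean = qi / ni       # mean particle mass (kg)
    return frim, rho_rim, m_mean
```

Because these quantities are diagnosed locally from advected prognostic variables, particle properties evolve continuously in space and time rather than jumping between predefined categories.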
For liquid-phase clouds, it is generally thought that category classification is much less of a problem, since the partition between cloud droplets and raindrops corresponds clearly to distinct growth modes (vapor diffusion for cloud droplets and collision–coalescence for rain), and cloud droplets and small raindrops are approximated reasonably well by liquid spheres. Nevertheless, some studies have shown that additional drop modes may be needed to accurately represent warm-rain processes (Saleeby et al., 2015). Kogan and Belochitski (2012) applied a similar one-category idea to develop a BMP for liquid-phase processes by predicting five moments of the full drop size distribution, which consists of both small cloud droplets and large raindrops. The five prognostic moments are the zeroth, second, third, fourth, and sixth, corresponding respectively to number concentration, surface area, water content, drizzle flux, and radar reflectivity. The process rates and key microphysical properties are then diagnosed from empirical multiple power-law fits to LES simulations of stratocumulus clouds with detailed bin microphysics. Thus, this effort can be characterized as M5C1 according to the parameterization classification suggested in section 2. It is interesting to note that, to some extent, this scheme trades hydrometeor categories for HSD moments.
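The physical meaning of the five chosen moments follows from $ M_k=\int r^{k}n(r)dr $, which can be evaluated directly on a binned size distribution; a sketch (the exponential test distribution in the usage note is purely illustrative):

```python
import numpy as np

def dsd_moments(r, n, orders=(0, 2, 3, 4, 6)):
    """Trapezoid-rule moments M_k of a binned drop size distribution n(r):
    M0 ~ number concentration, M2 ~ surface area, M3 ~ water content,
    M4 ~ sedimentation (drizzle) flux for Stokes-like fall speeds, and
    M6 ~ radar reflectivity."""
    dr = np.diff(r)
    moments = {}
    for k in orders:
        y = r**k * n
        moments[k] = float(np.sum(0.5 * (y[1:] + y[:-1]) * dr))
    return moments
```

For an exponential distribution n(r) = e^(-r), the moments approach k! (e.g., M3 → 6, M6 → 720), which makes a convenient numerical check.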
Although the continuous paradigm of hydrometeor categories is conceptually attractive, and perhaps has practical benefits according to the above-mentioned studies, many questions remain to be answered. For example, as discussed above, the computational cost associated with any BMP is likely proportional to the product of the number of HSD moments and the number of hydrometeor categories. Will the benefit of reducing the latter come at the cost of adding the former, and to what extent? More studies are recommended on this promising new paradigm, with the high hope of a transformative change like the celebrated Scientific Revolution in astronomy from the Ptolemaic geocentric system to the Copernican heliocentric system.
-
Regardless of their detailed differences, virtually all BMPs are built on two pillars: (1) the HSD is described by some known function, perhaps most commonly the Gamma distribution; and (2) the relationships between hydrometeor properties and hydrometeor radius (e.g., the mass–radius relationship) can be described by power laws generically given by:
$ z = a_{z}r^{b_{z}} $ (17)
where az and bz are the prefactor and power exponent for the corresponding variable z, respectively. However, the ansatz of the two pillars is largely empirical or heuristic. In searching for a theoretical foundation for the common form of empirical HSDs, and for why observed droplet size distributions are generally broader than those predicted by the classical theory, a theoretical formalism has been developed by integrating into cloud physics the ideas of statistical physics and information theory (Zhang and Zeng, 1994; Liu et al., 1995; Liu, 1995, 1997; Liu and Hallett, 1997; Yano et al., 2016; Wu and McFarquhar, 2018). This theoretical formalism considers atmospheric particles as a system instead of following individual particles and is thus referred to as systems theory. We argue that systems theory can provide a theoretical foundation for the HSD function assumed in BMPs, because a BMP essentially seeks to represent the collective behavior of stochastic subgrid microphysical processes over the model grid, which is many orders of magnitude larger than the scale at which cloud microphysics acts. Furthermore, the assumed power laws are closely related to the self-affine fractal geometry of individual hydrometeors and the scale invariance properties of cloud turbulence fields (Mandelbrot, 1983; Liu, 1997). Below is a summary of the key equations involved in the theoretical formulations presented in those studies. In the framework developed by Liu and his coauthors over the years (Liu formalism hereafter), the particle system is constrained by two equations:
$ \int_{0}^{\infty}n(z)dz = N $ (18a)
$ \int_{0}^{\infty}z\,n(z)dz = Z $ (18b)
where z is called the restriction variable and is related to the physical conservation laws controlling the particle system; Z is the total amount of z per unit volume; N is the total particle concentration; n(z) is the particle concentration per unit volume per unit z interval; and ρ(z) = n(z)/N can be considered the probability that a particle of z occurs. Just as the Boltzmann energy distribution describes the most probable energy distribution of a molecular system and the Maxwell velocity distribution characterizes its most probable velocity distribution, there exists a characteristic HSD that occurs most probably among all the possible HSDs. The most probable HSD is obtained by maximizing the spectral entropy, defined after the Shannon entropy for complex systems:
$ H = -k\int_{0}^{\infty}\rho(z)\ln\rho(z)dz $ (19)
where k is a proportionality constant that has no effect on the derivation of the most probable droplet size distribution. Maximization of the spectral entropy subject to the constraints given by Eqs. (18a, b) yields the most probable distribution with respect to z:
$ \rho(z) = \frac{1}{\alpha}\exp\left(-\frac{z}{\alpha}\right) $ (20)
where α = Z/N represents the mean Z per particle (analogous to the physical meaning of $ k_{B}T $ in the Boltzmann energy distribution, which represents the mean energy per molecule). A combination of Eqs. (17) and (20) leads to the general most probable HSD of Weibull form:
$ n(r) = N N_{0} r^{b_{z}-1}\exp\left(-\lambda r^{b_{z}}\right) $ (21)
where the parameters are $ N_{0} = a_{z}b_{z}/\alpha $ and $ \lambda = a_{z}/\alpha $.
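A quick numerical check confirms that this most probable HSD satisfies both constraints of Eqs. (18a, b): it integrates to the total number N and carries total restriction variable Z = αN (a sketch with arbitrary illustrative parameter values):

```python
import numpy as np

def weibull_hsd(r, N, a_z, b_z, alpha):
    """Most probable HSD n(r) = N*N0*r**(b_z-1)*exp(-lam*r**b_z),
    with N0 = a_z*b_z/alpha and lam = a_z/alpha (alpha = Z/N)."""
    N0 = a_z * b_z / alpha
    lam = a_z / alpha
    return N * N0 * r ** (b_z - 1.0) * np.exp(-lam * r**b_z)

def trapz(y, x):
    """Simple trapezoid rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# illustrative parameters; a_z, b_z come from the power law z = a_z * r**b_z
N, a_z, b_z, alpha = 100.0, 2.0, 3.0, 4.0
r = np.linspace(1e-6, 20.0, 400001)
n = weibull_hsd(r, N, a_z, b_z, alpha)
total_number = trapz(n, r)            # constraint (18a): should recover N
total_z = trapz(a_z * r**b_z * n, r)  # constraint (18b): should recover alpha*N
```

The change of variables z = a_z r^(b_z) maps the exponential distribution in z to the Weibull distribution in r, which is what the check verifies.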
It is worth emphasizing that the restriction variable z, and thus the parameters az and bz, are related to the conservation laws controlling the particle system, such as those associated with turbulent entrainment-mixing processes and particle geometrical shapes. Wu and McFarquhar (2018) presented a similar formulation based on relative entropy (hereafter the WM formalism), instead of the Shannon information entropy. They derived the four-parameter generalized Gamma distribution as the most probable size distribution:
$ n(r) = N_{0} r^{\mu}\exp\left(-\lambda r^{b}\right) $
The spectral shape parameter μ is related to the prior distribution, which satisfies the power law:
$ \rho_{0}(r) \propto r^{\mu} $
Note that $ \rho_{0}(r) $ was called the invariant measure in Wu and McFarquhar (2018). The power-law form of the prior distribution in the WM formalism is derived from the scale invariance principle that two different cloud volumes have the same shape of prior distribution. It can be readily shown that the generalized Gamma distribution covers the Weibull distribution and the Gamma distribution as special cases when μ = b−1 and b = 1, respectively. The primary distinction between the Liu formalism and the WM formalism lies in their use of the prior distribution. The Liu formalism implicitly assumes a uniform prior distribution, whereas the WM formalism assumes that the prior distribution obeys a power-law relationship to hydrometeor radius, independent of the power-law relationship associated with the conservation law. Thus, the Liu formalism is equivalent to the WM formalism under the assumption that the exponent in the power-law prior is related to the power-law relationship associated with the conservation law such that μ = b−1. It remains an open question whether this equation holds as a result of some deeper scale-invariance principles, in addition to the common conservation laws used to build the kinetic equations.
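The two special cases can be verified numerically with SciPy's generalized gamma distribution, whose pdf is proportional to $ x^{ca-1}e^{-x^{c}} $; in the notation above, b corresponds to c and μ to ca − 1, so μ = b − 1 means a = 1 (Weibull) and b = 1 means c = 1 (Gamma):

```python
import numpy as np
from scipy import stats

x = np.linspace(0.1, 5.0, 200)

# b = 1 (c = 1): the generalized gamma reduces to the Gamma distribution
gamma_case = stats.gengamma.pdf(x, a=3.0, c=1.0)

# mu = b - 1 (a = 1): the generalized gamma reduces to the Weibull distribution
weibull_case = stats.gengamma.pdf(x, a=1.0, c=2.5)
```

Both reductions hold pointwise, confirming that the generalized Gamma distribution nests the two special cases discussed above.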
The other BMP pillar is that various hydrometeor properties (e.g., mass, volume, surface area, density, and terminal velocity) assume power-law relationships with hydrometeor radius. Most power-law relationships are readily understood for homogeneous spherical particles such as cloud droplets. After analyzing the empirical results scattered in the literature since 1935 on atmospheric aerosol particles and hydrometeors, Liu (1995, 1997) showed that the variety of power-law relationships for atmospheric particles could be unified as self-affine fractals (Mandelbrot, 1983), which cover as special cases irregularly shaped self-similar fractals and regular Euclidean geometric shapes. In practice, unlike a self-similar fractal, which requires only one fractal dimension to quantify its morphology, characterization of self-affine fractals requires at least two fractal dimensions and thus simultaneous quantification (e.g., measurement) of particle mass, surface area, and size. More studies on the power-law relationships for fractal-like ice crystal aggregates (both measured and simulated) can be found in Schmitt and Heymsfield (2010), Ishimoto et al. (2012), Letu et al. (2016), Lawson et al. (2019), Schmitt et al. (2019), Li et al. (2022), and Przybylo et al. (2019, 2022a, b). However, the self-consistency of the different power-law relationships has not been rigorously investigated in the context of self-affine fractals. A systematic investigation of the various power laws and their relationships under different conditions also holds promise for shifting the paradigm from discrete BMPs to continuous BMPs (Lin and Colle, 2011; Lin et al., 2011; Zhao et al., 2021).
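In practice, such power-law relationships are obtained by regression in log–log space, and the fitted mass–size exponent plays the role of a (mass) fractal dimension, with b = 3 recovering compact spheres of fixed density; a minimal sketch with synthetic data:

```python
import numpy as np

def fit_power_law(sizes, values):
    """Least-squares fit of z = a * D**b in log-log space; returns (a, b)."""
    b, log_a = np.polyfit(np.log(sizes), np.log(values), 1)
    return float(np.exp(log_a)), float(b)
```

Synthetic aggregate-like masses m = 0.05 D^2.1 (an exponent below 3, typical of open, fractal-like geometry) are recovered exactly; characterizing a self-affine particle would require repeating such fits for at least two properties (e.g., mass and projected area).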
Heuristic analysis suggests that the power-law relationships between the restriction variable and hydrometeor radius are likely related to the physical laws controlling the processes and to hydrometeor morphology. All else being equal, more irregular (smaller fractal dimension) hydrometeors (Liu, 1995; Schmitt and Heymsfield, 2010) and stronger turbulent entrainment-mixing (Liu et al., 2002) tend to produce broader HSDs. For solid hydrometeors, hydrometeor morphology and the associated power laws may be related to turbulence and entrainment-mixing processes as well; however, such a connection has not been investigated adequately. The link between the scale invariance of individual fractal hydrometeors and the scale invariance of the hydrometeor population merits further investigation.
It should be noted that the above discussion is for a system that is either constrained by a single physical law or whose physical laws can be characterized by a single power law between the restriction variable z and hydrometeor radius r, i.e., the single-mode HSD case. The general formulation of the systems theory that covers multiple physical constraints and more complex relationships between the restriction variable z and particle radius can be found in Liu et al. (1995). Some preliminary attempts have also been made to extend the systems theory to address the least probable distribution and the scale dependence of cloud droplet size distributions in turbulent clouds (Liu and Hallett, 1998; Liu et al., 2002), and to formulate a kinetic potential theory of rain initiation (McGraw and Liu, 2003, 2004). More research along these lines is warranted.
-
A literature survey indicates that six different approaches have been used to develop BMPs. The first is based on either educated guesses or empirical observations. The Kessler BMP is a typical example of this approach. Such empirical parameterizations suffer from the major deficiency of lacking clear physics. The second approach is theoretical formulation. The advantage of the theoretical approach lies in its clear underlying physics, and thus it should be used whenever possible. For example, Manton and Cotton (1977) used simple dimensional analysis arguments to extend the Kessler BMP to include the influence of varying cloud droplet concentrations. Verlinde et al. (1990) showed that an analytical solution to the full stochastic collection equation could be obtained if the collection efficiencies are held constant. In a series of publications, Liu and his coauthors (Liu et al., 2004, 2005, 2006a, b, 2007) derived a sequence of theoretical autoconversion parameterizations. Their theoretical analysis revealed the implicit assumptions underlying some commonly used autoconversion schemes and eliminated some empirical parameters (e.g., the critical radius). Despite the high desirability of clear physics and a reduced number of tunable parameters, it remains challenging to derive analytical expressions describing all the microphysical processes and hydrometeor types involved. This is particularly true for more complex BMPs (e.g., with increasing numbers of moments and/or hydrometeor categories). The third approach relies on curve-fitting of numerical simulations by detailed models with explicit microphysics. Most such BMPs are built on parcel or one-dimensional models with bin microphysics (Berry, 1967, 1968; Berry and Reinhardt, 1974a, b, c, d; Beheng, 1994; Chen and Liu, 2004). Khairoutdinov and Kogan (2000) derived their BMP from statistical analysis of stratiform clouds simulated by an LES with bin microphysics. Noh et al.
(2018) explored using simulations with a particle-based cloud model to develop BMPs. The fourth approach is also based on numerical simulations with bin microphysics. However, instead of establishing simple empirical expressions describing individual processes, this approach relies on lookup tables (LUTs) generated directly from bin microphysics models to circumvent the difficulties of fitting complex nonlinear relationships (Feingold et al., 1988, 1999; Saleeby et al., 2015). Feingold et al. (1988) replaced the autoconversion formulation in the Regional Atmospheric Modeling System (RAMS) with full stochastic collection solutions considering droplet self-collection and rain (drizzle) drop collection of cloud droplets, with more realistic collection kernels from Long (1974) and Hall (1980) rather than the constant collection efficiencies assumed in earlier versions of RAMS. Feingold et al. (1999) applied a similar LUT approach to emulate the representation of collection and drop sedimentation. Walko et al. (1995) described the implementation of this approach in RAMS for the prediction of hydrometeor mixing ratios. Meyers et al. (1997) extended this approach to mixing ratio and number concentration. Saleeby et al. (2015) added explicit nucleation of cloud droplets as well as a second cloud mode of large cloud droplets with diameters from 40 μm to 80 μm. The addition of a new droplet category makes the BMP in RAMS an M2C8 BMP that considers small cloud droplets, large cloud droplets, rain, pristine ice, snow, aggregates, graupel, and hail. A similar LUT approach was used by Paukert et al. (2019) in their three-moment BMP for rain processes. The fifth approach—the machine learning approach—is emerging as a viable candidate as a result of advances in computational sciences, machine learning, and big data.
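The essence of the LUT approach underlying the fourth category is precomputing process rates offline and interpolating at run time; real tables are multi-dimensional (e.g., over cloud and rain mixing ratios), but a one-dimensional sketch conveys the idea (axis and rate values here are illustrative, not from any bin model):

```python
import bisect

class CollectionLUT:
    """Minimal 1-D lookup table with linear interpolation: in practice
    the axis and rates would be generated offline by a bin microphysics
    model, and the table would span several dimensions."""

    def __init__(self, axis, rates):
        self.axis = list(axis)    # must be sorted ascending
        self.rates = list(rates)

    def __call__(self, x):
        i = min(max(bisect.bisect_left(self.axis, x), 1), len(self.axis) - 1)
        x0, x1 = self.axis[i - 1], self.axis[i]
        w = (x - x0) / (x1 - x0)
        return (1.0 - w) * self.rates[i - 1] + w * self.rates[i]
```

Queries off the tabulated nodes are answered by interpolation, which is what makes the approach viable for nonlinear process rates that resist simple curve fits.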
The machine learning approach can be viewed as an extension of the third approach based on statistical regression models, using more advanced machine learning models to fit more complex nonlinear relationships, or as a mathematical formalization of the LUT approach. The data used to develop the machine learning models may come from numerical models (virtual data) and/or measurements. For example, Rothenberg et al. (2018) built a polynomial chaos expansion emulator of an adiabatic parcel model with bin microphysics that can be used to diagnose the droplet number at the maximum supersaturation over a wide range of dynamic, thermodynamic, and aerosol conditions. They found that the emulator requires much less computational time to build, store, and evaluate than a high-dimensional lookup table. Liu et al. (2018) emulated their parcel model simulations with a multilayer perceptron (MLP) model to estimate liquid water content, droplet concentration, and εc at the level of maximum supersaturation. Seifert and Rasp (2020) explored the use of neural networks for parameterizing the processes of autoconversion, accretion, and self-collection in a two-moment BMP based on particle-based simulations of the collision–coalescence equation. Chiu et al. (2021) based their neural network modeling on numerical simulations of the stochastic collection equation with bin microphysics constrained by measured droplet size distributions. Among other things, both studies suggested that autoconversion parameterizations may be improved by incorporating information about rain, perhaps because, although rain has no direct impact on autoconversion by definition, it contains information on the spectral shape of the cloud droplet size distribution, which is known to affect the autoconversion rate (e.g., Liu et al., 2006a). Gettelman et al.
(2021) replaced a bin microphysical model with several neural networks designed to emulate the autoconversion and accretion rates that the bin model produces. Despite its advantages, the machine learning approach aggravates the lack of physical transparency and interpretability that already besets the traditional empirical and LUT approaches. It is interesting to note that the activation functions commonly used in machine-learning models resemble the functions describing such microphysical threshold processes as the activation of aerosol particles into cloud droplets and the autoconversion of cloud droplets into raindrops. Such functional similarities may prove valuable for developing interpretable machine learning BMPs. In practice, hybrid approaches that combine two or more of the above approaches have often been, and will likely continue to be, used as the sixth approach. For example, LUTs are partly used for some complex processes in some popular BMP packages (Morrison and Gettelman, 2008; Thompson et al., 2008).
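The functional resemblance between activation functions and microphysical thresholds can be made concrete. The sketch below is illustrative only (the threshold value and steepness are arbitrary choices, not taken from any published scheme): it compares a Kessler-type hard autoconversion threshold with the softplus function, a smooth ramp widely used as a neural-network activation, which recovers the hard threshold as its steepness parameter grows.

```python
import math

def hard_ramp(qc, qcrit=5e-4):
    """Kessler-type threshold: no autoconversion below qcrit (kg/kg),
    linear growth above it (rate prefactor omitted for clarity)."""
    return max(qc - qcrit, 0.0)

def softplus_ramp(qc, qcrit=5e-4, k=2.0e4):
    """Softplus activation as a smooth analogue of the same threshold;
    approaches hard_ramp in the limit k -> infinity."""
    x = k * (qc - qcrit)
    # log(1 + exp(x)) written stably for large |x|
    return (max(x, 0.0) + math.log1p(math.exp(-abs(x)))) / k
```

A trained network composed of such units can therefore represent threshold-like process rates naturally, which is one reason the analogy may help in building interpretable machine learning BMPs.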
-
Despite many efforts and achievements, much remains to be done in the development of multi-moment schemes. The following points are particularly worth mentioning. First, existing studies have been concerned mainly with simple hydrometeor categories and a limited number of microphysical processes; more comprehensive and systematic investigation is needed. Second, as the number of predicted moments increases, the optimal choice of moments to use is worth exploring (Wacker and Lüpkes, 2009; Milbrandt and McTaggart-Cowan, 2010; Shan, 2020). Third, so far, we have discussed BMPs without specific differentiation between the types and spatiotemporal resolutions of the numerical models wherein the BMPs are used (e.g., GCM vs. CRM or LES). However, BMPs are expected to be scale-dependent owing to the fundamental nature of the multiscale fluctuations associated with turbulent clouds (Liu et al., 2002). Although such issues are often dealt with by tuning empirical parameters embedded in BMPs, the need for scale-aware BMPs will become more acute for models with an adaptive or unstructured mesh. Finally, as discussed above, detailed models with explicit microphysics, especially bin microphysics, have been instrumental in developing BMPs; however, many aspects of explicit microphysics modeling are in dire need of improvement. Furthermore, although it is well known that small-scale turbulence (e.g., at sub-LES scales) and related processes play a critical role in determining cloud microphysics, the effects of turbulence-related processes have not been considered in most BMPs, and serious knowledge gaps remain in understanding turbulence–microphysics interactions. Modeling explicit microphysics and the effects of turbulence are the foci of sections 4 and 5, respectively.
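As a concrete illustration of the moment-closure question, the sketch below (a minimal example under stated assumptions, not any published scheme's closure) shows how one particular three-moment set, M0, M3, and M6 (proxies for number, mass, and radar reflectivity), determines all three parameters of a gamma HSD: a dimensionless ratio of these moments is monotone in the shape parameter and can be inverted by bisection.

```python
import math

def gamma_moment(p, n0, mu, lam):
    """p-th moment of the gamma HSD n(r) = n0 * r**mu * exp(-lam*r)."""
    return n0 * math.gamma(mu + p + 1) / lam**(mu + p + 1)

def fit_gamma_from_moments(m0, m3, m6):
    """Recover (n0, mu, lam) of a gamma HSD from its 0th, 3rd, and 6th moments.
    For this distribution the dimensionless ratio
        G = M3**2 / (M0 * M6) = (mu+1)(mu+2)(mu+3) / ((mu+4)(mu+5)(mu+6))
    is monotone increasing in mu, so it can be inverted by bisection."""
    g_target = m3**2 / (m0 * m6)
    def g(mu):
        return ((mu + 1) * (mu + 2) * (mu + 3)) / ((mu + 4) * (mu + 5) * (mu + 6))
    lo, hi = -0.99, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) < g_target:
            lo = mid          # mu too small: ratio below target
        else:
            hi = mid
    mu = 0.5 * (lo + hi)
    # Slope parameter from M3/M0, then intercept from M0
    lam = ((mu + 1) * (mu + 2) * (mu + 3) * m0 / m3) ** (1.0 / 3.0)
    n0 = m0 * lam**(mu + 1) / math.gamma(mu + 1)
    return n0, mu, lam
```

A different choice of predicted moments changes which moment ratio must be inverted and how sensitive the fit is to errors in those moments, which is one reason the optimal moment combination remains an open question (cf. Milbrandt and McTaggart-Cowan, 2010).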
-
This Appendix summarizes and compares the key relationships between the HSD parameters and HSD moments for the commonly used Gamma size distribution, Weibull size distribution, and Lognormal size distribution. Also summarized are some empirical expressions.
For the modified Gamma size distribution given by $n\left(r\right)={N}_{0}{r}^{\mu }{\exp}\left(-\lambda {r}^{q}\right)$, we have

$M_{p}=\displaystyle\int_{0}^{\infty }n\left(r\right){r}^{p}{\rm{d}}r=\dfrac{ {N}_{0} }{q}\dfrac{ {\varGamma }\left(\dfrac{\mu +p+1}{q}\right)}{ {\lambda }^{\left(\mu +p+1\right)/q} } ,\qquad {\mu }_{p}=\dfrac{ {M}_{p} }{ {M}_{0} } ,$

where $M_p$ ($\mu_p$) denotes the (normalized) $p$th moment, and $N_0$, $\mu$, $\lambda$, and $q$ are the distribution parameters. The Gamma and Weibull size distributions correspond to $q = 1$ and $\mu = q-1$, respectively. For the lognormal distribution given by
$n\left(r\right)= $ $ \dfrac{N}{{\ln}(\chi )\sqrt{2\pi }} \dfrac{1}{r} \text{exp}\left(-\dfrac{1}{2}{\left(\dfrac{{\ln}\left(r\right)-{\ln}\left({r}_{0}\right)}{{\ln}(\chi )}\right)}^{2}\right)$ , we have

${\mu }_{p}=\dfrac{ {M}_{p} }{ {M}_{0} }={r}_{0}^{p}\,{\exp}\left(\dfrac{ {p}^{2}{ {\ln} }^{2}\chi }{2}\right),$

where the parameters $r_0$ and $\chi$ are the geometric mean radius and geometric standard deviation, respectively.
Applying the equations for the normalized moments, together with the definitions of commonly used statistics such as relative dispersion, readily yields the most commonly used expressions in terms of the distribution parameters for the Gamma, Weibull, and Lognormal distributions; these expressions can in turn be rewritten in terms of relative dispersion, a parameter with clear physical meaning. This variable transformation not only permits straightforward comparison between parameterizations derived from the distinct assumed distribution forms, but also unifies the treatment of spectral effects on microphysical processes and properties such as effective radius and autoconversion rate. As an example, Table A1 summarizes the key expressions used in the parameterization of effective radius for the commonly used HSD functions. The monodisperse HSD is also provided as a reference.
Table A1. Summary of equations for parameterizing the spectral effect on effective radius.

| Distribution | Effective Radius Ratio $\,\beta_{\rm{e} }$ | Relative Dispersion $\varepsilon$ |
|---|---|---|
| Monodisperse | $\,\beta_{\rm{e} }$ = 1 | $\varepsilon$ = 0 |
| Weibull | $\,\beta_{\rm{e} }\text{=1.04}\dfrac{ { {\varGamma } }^{2/3}\left(\text{3/}{b}\right)}{ {\varGamma }\left(\text{2/}{b}\right)}{ {b} }^{\text{1/3} }$ | ${\varepsilon }\text{=}{\left[\dfrac{2{b}{\varGamma }\left({2/b}\right)}{ { {\varGamma } }^{2}\left({1/b}\right)}-1\right]}^{1/2}\approx \dfrac{1}{ {b} }$ |
| Gamma | $\,\beta_{\rm{e} }=\dfrac{ {\left(1+2{ {\varepsilon } }^{2}\right)}^{2/3} }{ {\left(1+{ {\varepsilon } }^{2}\right)}^{1/3} }$ | ${\varepsilon }=\dfrac{1}{ {\left(1+{\mu }\right)}^{1/2} }$ |
| Lognormal | $\,\beta_{\rm{e} }=\left(1+{ {\varepsilon } }^{2}\right)$ | ${\varepsilon }={\left[{\exp}\left({ {\ln} }^{2}{\chi }\right)-1\right]}^{1/2}$ |
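The Gamma and Lognormal entries of Table A1 can be checked numerically from the moment expressions above. The sketch below assumes the common definitions $\varepsilon$ = (standard deviation)/(mean radius) and $\beta_{\rm e} = r_{\rm e}/r_{\rm v}$, with effective radius $r_{\rm e}=M_3/M_2$ and mean-volume radius $r_{\rm v}=(M_3/M_0)^{1/3}$; these definitions are inferred here rather than stated in the table.

```python
import math

def gamma_eps_beta(mu):
    """Relative dispersion and effective radius ratio for a Gamma HSD
    n(r) ~ r**mu * exp(-lam*r), from analytic moments M_p ~ Gamma(mu+p+1)
    (the slope parameter lam cancels in both dimensionless ratios)."""
    m = lambda p: math.gamma(mu + p + 1)
    mean = m(1) / m(0)
    eps = math.sqrt(m(2) / m(0) - mean**2) / mean
    beta = (m(3) / m(2)) / (m(3) / m(0)) ** (1.0 / 3.0)
    return eps, beta

def lognormal_eps_beta(chi):
    """Same quantities for a Lognormal HSD with geometric standard
    deviation chi, using M_p ~ r0**p * exp(p**2 * ln(chi)**2 / 2)
    (r0 cancels in both ratios)."""
    s2 = math.log(chi) ** 2
    m = lambda p: math.exp(p * p * s2 / 2.0)
    mean = m(1) / m(0)
    eps = math.sqrt(m(2) / m(0) - mean**2) / mean
    beta = (m(3) / m(2)) / (m(3) / m(0)) ** (1.0 / 3.0)
    return eps, beta
```

Evaluating these functions reproduces the closed forms in Table A1, e.g. $\varepsilon=(1+\mu)^{-1/2}$ and $\beta_{\rm e}=(1+2\varepsilon^{2})^{2/3}/(1+\varepsilon^{2})^{1/3}$ for the Gamma case, and $\beta_{\rm e}=1+\varepsilon^{2}$ for the Lognormal case.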
| BMP Scheme | Reference |
|---|---|
| Thompson | Thompson et al. (2008) |
| Thompson aerosol aware | Thompson and Eidhammer (2014) |
| WSM6 | Lim and Hong (2010) |
| WDM6 | Lim and Hong (2010) |
| NSSL double moment | Mansell et al. (2010) |
| P3 single moment | Morrison and Milbrandt (2015) |
| P3 double moment | Morrison and Milbrandt (2015) |