Tower-Based Greenhouse Gas Measurement Network Design——The National Institute of Standards and Technology North East Corridor Testbed


doi: 10.1007/s00376-017-6094-6

References

    Brèon, F. M., and Coauthors, 2015: An attempt at estimating Paris area CO2 emissions from atmospheric concentration measurements. Atmos. Chem. Phys., 15, 1707-1724.
    Brioude, J., and Coauthors, 2012: A new inversion method to calculate emission inventories without a prior at mesoscale: Application to the anthropogenic CO2 emission from Houston, Texas. J. Geophys. Res., 117(D5), D05312, doi: 10.1029/2011JD016918.
    Cambaliza, M. O. L., and Coauthors, 2014: Assessment of uncertainties of an aircraft-based mass balance approach for quantifying urban greenhouse gas emissions. Atmos. Chem. Phys., 14, 9029-9050.
    Coniglio, M. C., J. Correia, P. T. Marsh, and F. Y. Kong, 2013: Verification of convection-allowing WRF model forecasts of the planetary boundary layer using sounding observations. Wea. Forecasting, 28, 842-862.
    Duren, R. M., and C. E. Miller, 2012: Measuring the carbon emissions of megacities. Nature Clim. Change, 2, 560-562.
    Forgy, E. W., 1965: Cluster analysis of multivariate data: Efficiency versus interpretability of classifications. Biometrics, 21, 768-769.
    Gerbig, C., J. C. Lin, S. C. Wofsy, B. C. Daube, A. E. Andrews, B. B. Stephens, P. S. Bakwin, and C. A. Grainger, 2003: Toward constraining regional-scale fluxes of CO2 with atmospheric observations over a continent: 2. Analysis of COBRA data using a receptor-oriented framework. J. Geophys. Res., 108(D24), 4757.
    Hartigan, J. A., and M. A. Wong, 1979: Algorithm AS 136: A K-means clustering algorithm. Applied Statistics, 28, 100-108.
    Hungershoefer, K., F.-M. Breon, P. Peylin, F. Chevallier, P. Rayner, A. Klonecki, S. Houweling, and J. Marshall, 2010: Evaluation of various observing systems for the global monitoring of CO2 surface fluxes. Atmos. Chem. Phys., 10, 10503-10520.
    IPCC, 2013: Climate Change 2013: The Physical Science Basis. Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Stocker et al., Eds., Cambridge University Press, Cambridge, 1552 pp.
    Janjić, Z. I., 1994: The step-mountain Eta coordinate model: Further developments of the convection, viscous sublayer, and turbulence closure schemes. Mon. Wea. Rev., 122, 927-945.
    Kain, J. S., 2004: The Kain-Fritsch convective parameterization: An update. J. Appl. Meteor., 43, 170-181.
    Kort, E. A., W. M. Angevine, R. Duren, and C. E. Miller, 2013: Surface observations for monitoring urban fossil fuel CO2 emissions: Minimum site location requirements for the Los Angeles Megacity. J. Geophys. Res., 118, 1577-1584.
    Lauvaux, T., A. E. Schuh, M. Bocquet, L. Wu, S. Richardson, N. Miles, and K. J. Davis, 2012: Network design for mesoscale inversions of CO2 sources and sinks. Tellus B, 64, 17980.
    Lauvaux, T., and Coauthors, 2016: High-resolution atmospheric inversion of urban CO2 emissions during the dormant season of the Indianapolis Flux Experiment (INFLUX). J. Geophys. Res., 121, 5213-5236.
    Lin, J. C., C. Gerbig, S. C. Wofsy, A. E. Andrews, B. C. Daube, K. J. Davis, and C. A. Grainger, 2003: A near-field tool for simulating the upstream influence of atmospheric observations: The Stochastic Time-Inverted Lagrangian Transport (STILT) model. J. Geophys. Res., 108(D16), 4493.
    Lorenc, A. C., 1986: Analysis methods for numerical weather prediction. Quart. J. Roy. Meteor. Soc., 112, 1177-1194.
    Loveland, T. R., and A. S. Belward, 1997: The IGBP-DIS global 1 km land cover data set, DISCover: First results. Int. J. Remote Sens., 18, 3289-3295.
    McKain, K., S. C. Wofsy, T. Nehrkorn, J. Eluszkiewicz, J. R. Ehleringer, and B. B. Stephens, 2012: Assessment of ground-based atmospheric observations for verification of greenhouse gas emissions from an urban region. Proceedings of the National Academy of Sciences of the United States of America, 109, 8423-8428.
    Mellor, G. L., and T. Yamada, 1982: Development of a turbulence closure model for geophysical fluid problems. Rev. Geophys., 20, 851-875.
    Mlawer, E. J., S. J. Taubman, P. D. Brown, M. J. Iacono, and S. A. Clough, 1997: Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave. J. Geophys. Res., 102(D14), 16663-16682.
    Mueller, K. L., S. M. Gourdji, and A. M. Michalak, 2008: Global monthly averaged CO2 fluxes recovered using a geostatistical inverse modeling approach: 1. Results using atmospheric measurements. J. Geophys. Res., 113(D21), D21114.
    Nakanishi, M., and H. Niino, 2006: An improved Mellor-Yamada Level-3 model: Its numerical stability and application to a regional prediction of advection fog. Bound.-Layer Meteor., 119, 397-407.
    Nehrkorn, T., J. Eluszkiewicz, S. C. Wofsy, J. C. Lin, C. Gerbig, M. Longo, and S. Freitas, 2010: Coupled weather research and forecasting-stochastic time-inverted Lagrangian transport (WRF-STILT) model. Meteor. Atmos. Phys., 107, 51-64.
    Patil, M. N., R. T. Waghmare, S. Halder, and T. Dharmaraj, 2011: Performance of Noah land surface model over the tropical semi-arid conditions in western India. Atmos. Res., 99, 85-96.
    Rosenzweig, C., W. Solecki, S. A. Hammer, and S. Mehrotra, 2010: Cities lead the way in climate-change action. Nature, 467, 909-911.
    Ruiz-Arias, J. A., J. Dudhia, F. J. Santos-Alamillos, and D. Pozo-Vázquez, 2013: Surface clear-sky shortwave radiative closure intercomparisons in the Weather Research and Forecasting model. J. Geophys. Res., 118, 9901-9913.
    Shiga, Y. P., A. M. Michalak, S. Randolph Kawa, and R. J. Engelen, 2013: In-situ CO2 monitoring network evaluation and design: A criterion based on atmospheric CO2 variability. J. Geophys. Res., 118, 2007-2018.
    Skamarock, W. C., and Coauthors, 2008: A description of the Advanced Research WRF version 3. NCAR Technical Note NCAR/TN-475+STR.
    Strahler, A., D. Muchoney, J. Borak, M. Friedl, S. Gopal, E. Lambin, and A. Moody, 1999: MODIS land cover product algorithm theoretical basis document (ATBD) version 5.0: MODIS land cover and land-cover change. USGS, NASA. [Available online at http://modis.gsfc.nasa.gov/data/atbd/atbd_mod12.pdf]
    Thompson, G., R. M. Rasmussen, and K. Manning, 2004: Explicit forecasts of winter precipitation using an improved bulk microphysics scheme. Part I: Description and sensitivity analysis. Mon. Wea. Rev., 132, 519-542.
    Turnbull, J. C., and Coauthors, 2015: Toward quantification and source sector identification of fossil fuel CO2 emissions from an urban area: Results from the INFLUX experiment. J. Geophys. Res., 120, 292-312.
    Wu, L., G. Broquet, P. Ciais, V. Bellassen, F. Vogel, F. Chevallier, I. Xueref-Remy, and Y. L. Wang, 2016: What would dense atmospheric observation networks bring to the quantification of city CO2 emissions? Atmos. Chem. Phys., 16, 7743-7771.
    Ziehn, T., A. Nickless, P. J. Rayner, R. M. Law, G. Roff, and P. Fraser, 2014: Greenhouse gas network design using backward Lagrangian particle dispersion modelling-Part 1: Methodology and Australian test case. Atmos. Chem. Phys., 14, 9363-9378.


Manuscript History

Manuscript received: 09 April 2016
Manuscript revised: 08 November 2016
Manuscript accepted: 16 January 2017


  • 1. National Institute of Standards and Technology, Gaithersburg, MD 20899, USA

Abstract: The North-East Corridor (NEC) Testbed project is the third of three NIST (National Institute of Standards and Technology) greenhouse gas emissions testbeds designed to advance greenhouse gas measurement capabilities. A design approach for a dense observing network combined with atmospheric inversion methodologies is described. The Advanced Research Weather Research and Forecasting model, coupled with the Stochastic Time-Inverted Lagrangian Transport model, was used to derive the sensitivity of hypothetical observations to surface greenhouse gas emissions (footprints). Unlike other network design algorithms, an iterative selection algorithm, based on a k-means clustering method, was applied to minimize the similarity between the temporal responses of the sites and to maximize sensitivity to the urban emissions contribution. Once a network was selected, a synthetic inversion with a Bayesian Kalman filter was used to evaluate observing system performance. We present the performance of various measurement network configurations consisting of differing numbers of towers and tower locations. Results show that an overly compact network sacrifices spatial coverage, because the spatial information added per site is then suboptimal for covering the largest possible area, whereas a network dispersed too broadly loses the capability to constrain flux uncertainties. In addition, we explore the possibility of using a very high density network of lower-cost, lower-performance sensors characterized by larger uncertainties and temporal drift. Analysis convergence is faster with a large number of observing locations, reducing the response time of the filter. Larger uncertainties in the observations imply lower values of uncertainty reduction. The drift, on the other hand, is a bias in nature: added to the observations, it biases the retrieved fluxes.


1. Introduction
  • Carbon dioxide (CO2) is the major long-lived anthropogenic greenhouse gas (GHG); its atmospheric concentration has increased substantially since the industrial revolution due to human activities, raising serious climate and sustainability issues (IPCC, 2013). Developing methods for determining GHG flows to and from the atmosphere, independent of those used to produce GHG inventory data and reports, will strengthen the scientific basis of those data and thereby increase confidence in them.

    Cities play an important role in emissions mitigation and sustainability efforts because they intensify energy utilization and greenhouse gas emissions in geographically small regions. Urban areas are estimated to be responsible for over 70% of global energy-related carbon emissions (Rosenzweig et al., 2010). This percentage is anticipated to grow as urbanization trends continue; cities will likely contain 85%-90% of the U.S. population by the end of the current century. Urban carbon studies have increased in recent years, with diverse motivations ranging from urban ecology research to testing methods for independently verifying GHG emissions inventory reports and estimates. Examples of these are Salt Lake City (McKain et al., 2012), Houston (Brioude et al., 2012), Paris (Brèon et al., 2015), Los Angeles (Duren and Miller, 2012) and Indianapolis (INFLUX; Cambaliza et al., 2014; Turnbull et al., 2015; Lauvaux et al., 2016). Different measurement approaches have been used to independently measure GHG emissions. These have included aircraft mass balance, isotope ratios, satellite observations, and tower-based observing networks coupled with atmospheric inversion analysis. A common conclusion is that greater geospatial resolution is needed to support urban GHG monitoring and source attribution, hence the need for measurement capabilities and networks of higher spatial density.

    The North-East Corridor (NEC) Testbed project is the third of three NIST (National Institute of Standards and Technology) greenhouse gas emissions testbeds designed to advance greenhouse gas measurement capabilities and provide the means to assess the performance of new or advanced methods as they reach an appropriate state of maturity. The first two testbeds are the INFLUX experiment (Cambaliza et al., 2014; Lauvaux et al., 2016) and the LA Megacities project (Duren and Miller, 2012). As with the other testbeds, the NEC project will use atmospheric inversion methods to quantify sources of GHG emissions in urban areas. Its initial phase is located at the southern end of the northeast corridor in the Washington D.C. and Baltimore area, and is focused on attaining a spatial resolution of approximately 1 km2. The aim of this, and the other NIST GHG testbeds, is to establish reliable measurement methods for quantifying and diagnosing GHG emissions data, independent of the inventory methods used to obtain them. Since atmospheric inversion methods depend on observations of the GHG mixing ratio in the atmosphere, deploying a suitable network of ground-based measurement stations is a fundamental step in estimating emissions and their uncertainty from the perspective of the atmosphere. A fundamental goal of the testbed effort is to develop methodologies that permit quantification of levels of uncertainty in such determinations.

    Several studies have focused on designing global (Hungershoefer et al., 2010), regional (Lauvaux et al., 2012; Ziehn et al., 2014) and urban (Kort et al., 2013; Wu et al., 2016) GHG observing networks, relying on inverse modeling or observing system simulation experiments (OSSEs). Unlike other network design algorithms, we applied an iterative selection algorithm, based on a k-means clustering method (Forgy, 1965; Hartigan and Wong, 1979), to minimize the similarities between the temporal responses of the sites and maximize sensitivity to the urban contribution. Thereafter, a synthetic inversion Bayesian Kalman filter (Lorenc, 1986) was used to evaluate the performance of the observing system based on the merit of the retrieval over time and the amount of a priori uncertainty reduced by the network measurements and analysis. In addition, we explore the possibility of using a very high density network of low-cost, low-accuracy sensors characterized by larger uncertainties and drift over time. As with all network design methods based on inversion modeling, our approach is dependent on specific choices made in configuring the estimation problem, such as the resolution at which fluxes are estimated or how the error statistics are represented. However, Lauvaux et al. (2016) showed that, for INFLUX, the uncertainty reduction in real working conditions is about 30% for the urban area, leading to estimated uncertainties of about 25% of the total city emissions. We consider this an acceptable level of uncertainty for our domain, and it will be taken as the target uncertainty reduction, even though Wu et al. (2016) proposed that more restrictive levels of uncertainty are required for city-scale long-term trend detection.

    The structure of the paper is as follows. Section 2 describes the transport model, the network selection algorithm and the inversion method employed. Section 3 presents and discusses the results obtained for the various measurement network configurations analyzed, discussing the impact of the network compactness, the impact of additional observing points, and the observation uncertainties and drift. Lastly, section 4 highlights the main conclusions obtained.

2. Methodology
  • In this work we employ high-resolution simulations to derive the sensitivity of hypothetical observations to surface GHG emissions in the Washington D.C./Baltimore area. Specifically, we performed two separate month-long simulations for 2013 (February and July) to capture the different meteorological behavior in winter and summer. Afterwards, we used an iterative selection algorithm to design and investigate the performance of potential observing networks. These were evaluated by means of an inversion algorithm within an OSSE.

    For logistical considerations, the potential observation locations were obtained from existing communications antennas registered with the FCC (Federal Communications Commission). We selected the antennas located on towers, placed in urban locations, currently in service, and having a height between 50 and 150 m above ground level. This pre-selection criterion resulted in 98 candidate towers.
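As a toy illustration of this pre-selection step, the four criteria can be expressed as a simple filter. The record fields below ("structure_type", "urban", "in_service", "height_m") are hypothetical stand-ins, not the actual FCC registration schema.

```python
# Illustrative sketch of the tower pre-selection step: filter a table of
# FCC-registered antennas down to candidate sites. Field names are
# hypothetical, not the FCC database schema.
towers = [
    {"id": "T01", "structure_type": "tower", "urban": True,  "in_service": True,  "height_m": 120.0},
    {"id": "T02", "structure_type": "mast",  "urban": True,  "in_service": True,  "height_m": 80.0},
    {"id": "T03", "structure_type": "tower", "urban": False, "in_service": True,  "height_m": 95.0},
    {"id": "T04", "structure_type": "tower", "urban": True,  "in_service": False, "height_m": 110.0},
    {"id": "T05", "structure_type": "tower", "urban": True,  "in_service": True,  "height_m": 45.0},
]

def is_candidate(t):
    """Apply the pre-selection criteria from the text: on a tower,
    in an urban location, currently in service, and 50-150 m AGL."""
    return (t["structure_type"] == "tower" and t["urban"]
            and t["in_service"] and 50.0 <= t["height_m"] <= 150.0)

candidates = [t["id"] for t in towers if is_candidate(t)]
print(candidates)  # only T01 satisfies all four criteria
```

Applied to the actual FCC registry, the same filter yielded the 98 candidate towers mentioned above.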

  • The footprint (the sensitivity of an observation to surface emissions, in units of ppm μmol-1 m2 s) for every potential observing location was estimated using the Stochastic Time-Inverted Lagrangian Transport model (STILT; Lin et al., 2003), driven by meteorological fields generated by the Weather Research and Forecasting (WRF) model (WRF-STILT; Nehrkorn et al., 2010). Five hundred particles were released from each potential observation site hourly and were tracked as they moved backwards in time for 24 h. The footprint can be calculated from the particle density and residence time in the layer that is sensitive to surface emissions, defined as the layer below 0.5 × PBLH (planetary boundary layer height) (Gerbig et al., 2003).
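A minimal sketch of how such a footprint can be gridded from back-trajectory particle output, under the 0.5 × PBLH assumption above. The array layout and the unit-conversion factor `conv` are assumptions for illustration, not WRF-STILT's actual interface.

```python
import numpy as np

# Sketch of a STILT-style footprint: accumulate the residence time of
# back-trajectory particles while they are below 0.5 * PBLH (the layer
# assumed to feel surface emissions), then average over the ensemble.
def footprint(lon, lat, z, pblh, dt_hours, grid_lon, grid_lat, conv=1.0):
    """lon, lat, z, pblh: (n_particles, n_steps) arrays of particle
    position, height AGL and local boundary-layer height along the
    back trajectories; dt_hours is the trajectory time step;
    grid_lon, grid_lat are cell-edge coordinates. 'conv' is an assumed
    unit-conversion factor to ppm per (umol m-2 s-1)."""
    n_particles = lon.shape[0]
    fp = np.zeros((grid_lat.size - 1, grid_lon.size - 1))
    in_layer = z < 0.5 * pblh            # particle "sees" the surface
    ix = np.digitize(lon, grid_lon) - 1  # grid-cell index per sample
    iy = np.digitize(lat, grid_lat) - 1
    for p in range(n_particles):
        for s in range(lon.shape[1]):
            if (in_layer[p, s]
                    and 0 <= ix[p, s] < fp.shape[1]
                    and 0 <= iy[p, s] < fp.shape[0]):
                fp[iy[p, s], ix[p, s]] += dt_hours
    return conv * fp / n_particles       # ensemble average
```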

    The Advanced Research WRF (WRF-ARW) model, version 3.5.1, which is a state-of-the-art Numerical Weather Prediction simulator, was used to simulate the meteorological fields. The ARW core uses fully compressible, non-hydrostatic Eulerian equations on an Arakawa C-staggered grid with conservation of mass, momentum, entropy, and scalars (Skamarock et al., 2008).

    The initial (0000 UTC) and boundary conditions (every three hours) were taken from North America Regional Reanalysis (NARR) data provided by the National Centers for Environmental Prediction. Simulations were run continuously for the 28 days of February 2013 and the 31 days of July 2013.

    A two-way nesting strategy (with feedback) was selected for downscaling the three telescoping domains, which had horizontal resolutions of 9, 3 and 1 km, on a Lambert conformal conic projection with 40°N and 60°N as reference parallels. These domains were centered on the Washington/Baltimore area (39.079°N, 76.865°W), with 101 × 101, 121 × 121, and 130 × 121 horizontal (latitude × longitude) grid cells, respectively. This domain configuration was chosen to limit the influence of the NARR-provided boundary conditions on the area of interest. A configuration of 60 vertical levels, with higher resolution between the surface and 3 km, was selected to better reproduce the boundary layer dynamics. To ensure model stability, the time-step size was defined dynamically using a CFL (Courant-Friedrichs-Lewy) criterion of 1.

    Accurately reproducing the planetary boundary layer (PBL) structure is a key point in atmospheric transport models, since the species mixing within the boundary layer is primarily driven by the turbulent structures found there. Therefore, the Mellor-Yamada-Nakanishi-Niino 2.5-level (MYNN2; Nakanishi and Niino, 2006) PBL parametrization was selected: this local PBL scheme diagnoses potential temperature variance, water vapor mixing ratio variance, and their covariances in order to solve a prognostic equation for the turbulent kinetic energy. It is an improved version of the former Mellor-Yamada-Janjic (MYJ) scheme (Mellor and Yamada, 1982; Janjić, 1994), in which the stability functions and mixing length formulations are based on large eddy simulation results instead of observational datasets. MYNN2 has been shown to be nearly unbiased in PBL depth, moisture and potential temperature in convection-allowing configurations of WRF-ARW, alleviating the typical cool, moist bias of the MYJ scheme in convective boundary layers upstream from convection (Coniglio et al., 2013). For the radiative heat transfer scheme, the RRTMG scheme (Mlawer et al., 1997) for shortwave and longwave radiation was selected, since good performance has been reported for it (Ruiz-Arias et al., 2013). To model the microphysics, we selected the Thompson scheme (Thompson et al., 2004) because of its improved treatment of the water/ice/snow effective radius coupled to the radiative scheme (RRTMG). For the cumulus cloud scheme, the widely used Kain-Fritsch scheme (Kain, 2004) was selected only for the outermost domain (9 km). The Noah model was selected as the land surface model (LSM), since Patil et al. (2011) showed that the skin temperature and energy fluxes simulated by Noah-LSM are reasonably comparable with observations and, thus, an acceptable feedback to the PBL scheme can be expected.
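The nesting and physics choices above can be summarized in an illustrative WRF `namelist.input` fragment. The option indices follow standard WRF-ARW conventions (e.g. `mp_physics = 8` for Thompson, `bl_pbl_physics = 5` for MYNN2), but this is a reconstruction for orientation only, not the authors' actual namelist; note also that WRF's `e_we`/`e_sn` count staggered grid points, so exact values may differ by one from the grid-cell counts quoted in the text.

```
&domains
 max_dom                = 3,
 e_we                   = 101, 121, 130,
 e_sn                   = 101, 121, 121,
 e_vert                 = 60,  60,  60,
 dx                     = 9000, 3000, 1000,
 dy                     = 9000, 3000, 1000,
 parent_grid_ratio      = 1,   3,   3,
 feedback               = 1,                  ! two-way nesting
 use_adaptive_time_step = .true.,
 target_cfl             = 1.0, 1.0, 1.0,
/

&physics
 mp_physics             = 8,  8,  8,          ! Thompson microphysics
 ra_lw_physics          = 4,  4,  4,          ! RRTMG longwave
 ra_sw_physics          = 4,  4,  4,          ! RRTMG shortwave
 bl_pbl_physics         = 5,  5,  5,          ! MYNN 2.5-level PBL
 sf_sfclay_physics      = 5,  5,  5,          ! MYNN surface layer
 sf_surface_physics     = 2,  2,  2,          ! Noah LSM
 cu_physics             = 1,  0,  0,          ! Kain-Fritsch, outer domain only
/
```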

  • By using the footprints computed for each potential observation site along with Eq. (1), we simulated the hourly mixing ratio for each site assuming a uniform unit flux (1 μmol m-2 s-1) over the domain. We simulated the urban land use response by assuming a unit flux for the urban category definition provided by MODIS 2012, MCD12Q1 (Loveland and Belward, 1997; Strahler et al., 1999). The urban response was then normalized by the total response and averaged in order to obtain a weight representing the urban contribution observed at each potential observing location. Afterwards, an iterative selection algorithm was applied in order to minimize the similarities between the temporal response of each site and maximize the urban contribution. This method uses the k-means algorithm, a widely used method in cluster analysis, which aims to partition n vectors into N (N ≤ n) clusters so as to minimize the within-cluster sum of squares (Forgy, 1965; Hartigan and Wong, 1979). For each iteration, the algorithm groups the sites into N clusters based on the similarities of the logarithmic temporal response and removes from each cluster the site with the smallest urban contribution. If a cluster is singular, that site is kept if the urban contribution is larger than a user-defined threshold (the minimum urban CO2 contribution for a given tower candidate). This process is iterated until the N clusters are singular, and then the network is evaluated as described in the next section. Different networks were computed by using different numbers of clusters and threshold values, covering a wide range of possible configurations.
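The iterative selection step can be sketched with a hand-rolled Lloyd/Forgy k-means. Here `responses` (site × hour simulated mixing ratios) and `urban_weight` are synthetic stand-ins for the WRF-STILT-derived quantities; this is an interpretation of the procedure described above, not the authors' code.

```python
import numpy as np

def kmeans_labels(X, k, n_iter=50, seed=0):
    """Plain Forgy-initialized k-means (Lloyd iterations); returns labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = np.argmin(d, axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels

def select_network(responses, urban_weight, n_clusters, threshold):
    """Cluster sites on their log temporal responses; drop from every
    non-singular cluster the member with the weakest urban contribution;
    keep singular-cluster sites only above the urban threshold."""
    active = np.arange(len(responses))
    while active.size > n_clusters:
        labels = kmeans_labels(np.log(responses[active] + 1e-12), n_clusters)
        drop = []
        for c in range(n_clusters):
            members = active[labels == c]
            if members.size > 1:          # prune the least urban member
                drop.append(members[np.argmin(urban_weight[members])])
        if not drop:                      # all clusters already singular
            break
        active = np.setdiff1d(active, np.array(drop))
    return active[urban_weight[active] >= threshold]
```

With two near-identical sites in a cluster, the one with the lower urban weight is pruned, so the surviving network maximizes both temporal diversity and urban sensitivity.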

  • Once the network locations have been selected, a synthetic inversion experiment within a Bayesian framework is conducted in order to evaluate the performances of the observing system.

    The CO2 flux at the surface is related to the measurements by the following equation: \begin{equation} \label{eq1} {y}={Hx}+{\varepsilon}_r , \ \ (1)\end{equation} where y is the observations vector (n× 1) (here, the CO2 concentrations measured at different tower locations, heights and times); x is the state vector (m× 1, where m is the total number of pixels in the domain), which we aim to optimize (here, the CO2 fluxes); H is the observation operator (n× m), which converts the model state to observations, constructed by using the footprints previously computed, and εr is the uncertainty in the measurements and in the modeling framework (model-data mismatch).

    Optimum posterior estimates of fluxes are obtained by minimizing the cost function J: \begin{equation} \label{eq2} J({x})=\frac{1}{2}[({x}-{x}_b)^{\rm T}{P}_b^{-1}({x}-{x}_b)+({Hx}-{y})^{\rm T}{R}^{-1}({Hx}-{y})] ,\ \ (2) \end{equation} where xb is the first guess or a priori state vector; Pb is the a priori error covariance matrix, which represents the uncertainties in our a priori knowledge about the fluxes; and R is the error covariance matrix, which represents the uncertainties in the observation operator and the observations, also known as the model-data mismatch.

    We aim to use observations distributed in time to constrain a state vector potentially evolving with time. Thus, the minimization of Eq. (2) leads to the well-known Kalman filter equations (Lorenc, 1986): \begin{eqnarray} \label{eq3} {x}_{a}&=&{x}_b+{K}({y}-{Hx}_b) ,\ \ (3)\\[1mm] \label{eq4} {K}&=&{P}_{b}{H}^{\rm T}({HP}_b{H}^{\rm T}+{R})^{-1} ,\ \ (4)\\[1mm] \label{eq5} {P}_{a}&=&({I}-{KH}){P}_b ,\ \ (5)\\[1mm] \label{eq6} {P}_{b,t+1}&=&{MP}_a{M}^{\rm T}+{Q} ,\ \ (6)\\[1mm] \label{eq7} {x}_{b,t+1}&=&{Mx}_a .\ \ (7)\\[-7mm]\nonumber \end{eqnarray}

    Here, xa is the analysis state vector or the optimized fluxes and Pa is the analysis error covariance matrix. K is the Kalman gain matrix, and it modulates the correction being applied to the a priori state vector as well as the a priori error covariance matrix.

    Equations (6) and (7) are the evolution equations of the filter. Here, we assume persistence for x and, thus, the evolution operator M is the identity matrix, with zero evolution error covariance Q. Therefore, the new background state vector and error covariance are the analysis state vector and error covariance calculated in the previous time step.
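    A single filter cycle under these assumptions can be sketched in a few lines. This is an illustrative implementation of Eqs. (3)-(5); with persistence, Eqs. (6) and (7) reduce to carrying the analysis forward. The function and variable names are ours, not from the original system:

```python
import numpy as np

def kalman_update(x_b, P_b, y, H, R):
    """One analysis step: Eqs. (3)-(5).

    With persistence (M = I, Q = 0), Eqs. (6)-(7) simply carry the
    analysis forward: the returned (x_a, P_a) become the next (x_b, P_b).
    """
    K = P_b @ H.T @ np.linalg.inv(H @ P_b @ H.T + R)  # Kalman gain, Eq. (4)
    x_a = x_b + K @ (y - H @ x_b)                     # analysis state, Eq. (3)
    P_a = (np.eye(len(x_b)) - K @ H) @ P_b            # analysis covariance, Eq. (5)
    return x_a, P_a
```

For example, with a two-pixel state, unit prior variances, and a single observation of the first pixel with error variance 0.25, the gain for that pixel is 1/(1 + 0.25) = 0.8 and its variance shrinks from 1 to 0.2, while the unobserved pixel is left untouched.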

    It is commonly assumed that the initial covariance Pb follows an exponential model (Mueller et al., 2008; Shiga et al., 2013), where σi represents the uncertainty for pixel i (considered here as 100% of the pixel emissions), di,j represents the distance between pixels i and j, and L is the correlation length of the spatial field: \begin{equation} \label{eq8} P_{b,i,j}=\sigma_i\sigma_je^{-d_{i,j}/L} . \ \ (8)\end{equation}
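    Building Pb from Eq. (8) is straightforward; a short sketch with hypothetical argument names, assuming pixel coordinates are given in the same units as the correlation length L:

```python
import numpy as np

def exponential_covariance(coords, sigma, L):
    """P_b[i, j] = sigma_i * sigma_j * exp(-d_ij / L), Eq. (8).

    coords : (m, 2) pixel-center coordinates (e.g. km)
    sigma  : (m,) per-pixel prior uncertainty (here, 100% of pixel emissions)
    L      : correlation length, in the same units as coords
    """
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    return np.outer(sigma, sigma) * np.exp(-d / L)
```

The diagonal is simply σi², and the off-diagonal terms decay with distance: two pixels separated by exactly L retain a fraction e⁻¹ ≈ 0.37 of the full cross-covariance, which is how L controls how far each observation's correction spreads.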

    This analytical framework allows us to evaluate each network configuration by selecting a reasonable state vector x, which is considered to be the actual emissions and is propagated through Eq. (1) to generate synthetic observations (pseudo-observations) with added statistically independent Gaussian random errors εr consistent with a diagonal R. The standard deviation (SD) of the added Gaussian errors is 5 ppm, chosen to reproduce the uncertainties in the observations (0.1 ppm) and the modeling framework (4.999 ppm). When considering the case of low-cost, low-accuracy mixing ratio sensors, the uncertainty was increased to 7 ppm, the result of assuming a sensor uncertainty of 4.9 ppm. Drift in these sensors is treated as a bias in the observations that increases linearly with time, with all sensors drifting at the same rate during one month. We simulated total drifts of 1, 3 and 5 ppm over the course of one month. We recognize that real sensors would probably drift differently, at instrument-specific, non-constant rates, causing biases (or compensating biases) and spurious spatial gradients. However, the assumption of a linear drift increasing with time at a rate equal for all sensors allows us to estimate the impact of drift in a simple way, avoiding possible compensation between the drifts of different sensors.
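    Generating the pseudo-observations described above amounts to applying Eq. (1) and, optionally, superimposing the linear drift. A sketch with hypothetical names, where one month is approximated as 720 h:

```python
import numpy as np

def synthetic_observations(H, x_true, hours, sd=5.0, drift_ppm=0.0, seed=0):
    """Pseudo-observations y = H x_true + eps [Eq. (1)], plus optional drift.

    H         : (n_obs, m) observation operator (rows ordered in time)
    x_true    : (m,) assumed true fluxes
    hours     : (n_obs,) elapsed hour of each observation
    sd        : SD of the independent Gaussian errors (diagonal R)
    drift_ppm : total bias accumulated linearly over one month (720 h),
                identical for all sensors
    """
    rng = np.random.default_rng(seed)
    y = H @ x_true + rng.normal(0.0, sd, size=H.shape[0])
    y += drift_ppm * np.asarray(hours) / 720.0  # linear drift as a pure bias
    return y
```

Because the drift term is deterministic and identical for all sensors, it enters the observations as a pure bias rather than random noise, which is exactly why it biases the retrieved fluxes instead of merely inflating their spread.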

    It is worth noting that in the current standard, high accuracy networks (0.1 ppm) require very expensive sensors, costly infrastructure, and recurrent calibration strategies. The technical requirements for operating a low accuracy sensor network (4.9 ppm) are less stringent than the current standards and, therefore, the cost (sensors, installation and maintenance/calibration) will be much lower. Nevertheless, the feasibility of such a network is still an open question and it has to be further studied and demonstrated.

    We apply the filter forwards in time with a six-hour time window, advancing one hour in each iteration and using an initial estimate equal to half the true emissions. We then compare the analysis state vector retrieved in each iteration with the assumed true value to assess the merit of the observing system (bias and SD of the differences between the retrieved and true emissions). In addition, we compare the analysis error covariance matrix Pa for the last time step with the initial error covariance matrix Pb, to evaluate the capability of the system to reduce the uncertainty in our estimates [uncertainty reduction: Eq. (9)]: \begin{equation} R_{un}=100\left(1-\frac{\sqrt{{\rm diag}({P}_{a})}}{\sqrt{{\rm diag}({P}_{b})}}\right) .\ \ (9) \end{equation}
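    The uncertainty reduction metric of Eq. (9) compares only the diagonals of the two covariance matrices; a one-line sketch (function name ours):

```python
import numpy as np

def uncertainty_reduction(P_a, P_b):
    """Per-pixel uncertainty reduction (%), Eq. (9)."""
    return 100.0 * (1.0 - np.sqrt(np.diag(P_a)) / np.sqrt(np.diag(P_b)))
```

For instance, a pixel whose prior SD of 2 shrinks to a posterior SD of 1 has a 50% uncertainty reduction, while R_un = 0 means the observations did not constrain that pixel at all.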

    The actual emissions assumed here aim to represent three kinds of sources: area sources, transportation sources, and point sources. Two area sources are defined: urban emissions are based on the urban fraction computed from the MODIS-IGBP land cover, multiplied by 5 μmol m-2 s-1, with the remaining area assumed to emit 1 μmol m-2 s-1. Emissions from main roads are assumed to be 30 μmol m-2 s-1. Point source emissions are taken from the EPA GHG inventory, normalized by the area of one pixel (∼ 1 km2). This inventory allows us to test the capabilities of the network for recovering a wide range of emissions (1-2400 μmol m-2 s-1) spatially distributed on a high-resolution grid (Fig. 1). The CO2 enhancements simulated using this inventory provided time series at the different stations with mean enhancements ranging from 3.5 to 8.5 ppm, medians from 1 to 4.4 ppm, and SDs between 4 and 14 ppm, resembling the atmospheric variability observed in other urban settings (McKain et al., 2012; Lauvaux et al., 2016).

    The diurnal cycle of photosynthesis and respiration significantly influences atmospheric mixing ratios of CO2, especially in summer, when photosynthesis draws down atmospheric CO2 during the daytime hours in which fossil fuel CO2 is maximized. The presence of biogenic fluxes will, therefore, reduce the CO2 enhancements measured at the towers, increasing the relative importance of the uncertainties over the signal and thus significantly weakening the ability to estimate fossil fuel emissions in an urban environment. Although the a posteriori error covariance depends on neither the enhancements nor the a priori fluxes, the inclusion of biogenic fluxes will affect the overall uncertainty reduction, because the a priori error covariance will account for the presence of those fluxes. This limitation will be important in determining the actual performance of a network configuration, especially in summer (July), but it will not have any impact on the comparison between network configurations.

    Figure 1.  Assumed CO2 emissions inventory.

    Figure 2.  The (a) bias and (b) standard deviation (SD) evolution of the CO2 flux retrieval for the urban land use in February (afternoon hours) by using 12 towers, a threshold of 25%, a model-data mismatch of 5 ppm, and five values of correlation length. The legend in (a) means the same for (b).

    Figure 3.  Uncertainty reduction using threshold values of (a) 15% and (b) 45%, and the (c) bias and (d) standard deviation (SD) evolution of the CO2 flux retrieval for the urban land use with four threshold values employed in the selection of the network for the month of February (afternoon hours) by using 12 towers, a correlation length 10 km, and a model-data mismatch of 5 ppm. The legend in (d) means the same for (c).

3. Results and discussion
  • The correlation length used in the initial covariance Pb impacts the observing system's capability to constrain fluxes (Fig. 2). With a correlation length equal to the grid resolution (1 km), the inversion system struggles to constrain the fluxes: most of the correction is applied near the tower locations, as reflected in the bias and SD shown in Fig. 2. Larger correlation lengths positively impact retrieval quality, considerably reducing the bias and SD. However, as the correlation length increases to 20 km, the SD increases again, indicating that for this case a correlation length of ∼ 10 km is appropriate (Fig. 2b). Similar behavior in terms of the bias is observed in Fig. 2a. This is in agreement with Lauvaux et al. (2012), who showed that including correlations that are too large can lead to an overly constrained system. Published values of the correlation length range from a few kilometers (less than 10 km) to hundreds or thousands of kilometers (Mueller et al., 2008; Lauvaux et al., 2012, 2016). Typically, small correlation lengths are associated with high-resolution studies conducted over small domains with relatively high density networks, as in INFLUX (Lauvaux et al., 2016) and the present study, while large correlation lengths are found in low-resolution inversions over regional to global domains with sparse networks (Mueller et al., 2008).

    The user-defined threshold controls the compactness of the network, with higher threshold values resulting in a more compact network (Figs. 3a and b). The Baltimore area presents a denser network in all cases, probably because of higher variability in the meteorological conditions between stations caused by local effects of the nearby Chesapeake Bay. The presence of more towers in the same area increases the uncertainty reduction in that specific area, leading to a better local flux constraint. However, better performance in terms of total retrieved CO2 fluxes is achieved with lower threshold values (Figs. 3c and d), owing to the increased spatial coverage of the network. In addition, the mean uncertainty reduction in February for the urban land use is 29.5%, 30.5%, 30.2% and 28.2% for threshold values of 15%, 25%, 35% and 45%, respectively. This indicates that too compact a network loses spatial coverage, whilst too dispersed a network loses capability in constraining flux uncertainties. The selected threshold values covered the full range of urban contributions for the candidate towers. We do not expect network performance to change significantly within the 25%-35% threshold range, given the discrete nature of the candidate set and the small differences in uncertainty reduction between those two threshold values.

    Figure 4 compares the mean estimated CO2 flux uncertainty for the urban area between randomly selected networks and the proposed clustering approach. Both months, February and July, show a similar declining trend as the number of towers increases. However, July presents larger uncertainties, related to its weaker sensitivities. The significant difference between the random and clustering methods shows that the latter selects observing locations more effectively. For instance, the random method could select two towers too close together, producing highly overlapping footprints and reducing the spatial coverage of the network. It could also pick locations too close to the edge of the urban area, or with too little urban contribution, limiting the impact of those locations on the inversion. From the results shown here, the proposed tower selection approach outperforms the random method, with the benefits of the clustering approach largest for July and for low tower numbers. Preselection of potential tower locations in the first stage improves the performance of the randomly selected networks, as only towers within the urban land use region are available for selection, so every candidate contributes to the network's measurement capability.

    Figure 4.  Mean CO2 flux estimated uncertainty for the urban area after a month of assimilation (afternoon hours only: 1700-2100 UTC) as a function of the number of towers employed using a randomly generated network (dashed lines) and the proposed clustering method (solid lines).

    Figure 5.  Average (a, b) sensitivity and (c, d) uncertainty reduction for the designed networks for the months of (a, c) February and (b, d) July, with 12 towers, a 25% threshold, and a 10 km correlation length. Red contours in the sensitivity plots correspond to 3.33× 10-4 ppm (μmol m-2 s-1)-1.

    Figure 5 shows the average sensitivity and uncertainty reduction for the designed networks for the months of February and July using 12 towers, a 25% threshold (minimum urban CO2 contribution for a given tower candidate), and a 10 km correlation length. It is worth noting that during summer (July), greater surface heating produces deeper boundary layers and enhanced mixing, which reduce the towers' sensitivity to the surface fluxes (footprints). Weaker sensitivities, in turn, lead to smaller uncertainty reductions. Winter (February) shows the opposite, with larger sensitivities and uncertainty reductions. During spring and autumn, we can assume the situation will lie between the two, as these two months represent the extreme cases of the year. The tower distribution also differs between the two months because of the different meteorological conditions; however, up to six stations are coincident. In both cases, the networks show high sensitivity values over most of the MODIS-defined urban area. The average uncertainty reduction for the urban land use achieved by these networks is 31% and 25%, and the 95th percentile is 54% and 42%, for February and July, respectively.

    The average uncertainty reduction for urban land use increases as the number of towers increases, as expected (Fig. 6a). However, the uncertainty reduction is not additive and, therefore, the impact of adding towers decreases as the number of towers increases (Fig. 6b). The average uncertainty reduction per unit cost (performance/cost ratio) decreases proportionally to \(1/\sqrt{N}\) (N being the number of towers). As a consequence, a low number of towers would seem to be the more efficient selection; nevertheless, in absolute terms, more towers mean more uncertainty reduction, as well as more spatial coverage. Similar results were shown by Wu et al. (2016). On the other hand, with low-accuracy sensors the trend is conserved but shifted to lower values of average uncertainty reduction. For instance, 12 low-cost, low-accuracy sensors would perform the same as 7 high-accuracy sensors (Fig. 6a). However, assuming the high-accuracy network would cost 10 times more than the low-accuracy network (including the price of the sensors, installation and maintenance/calibration), the benefits of using the low-cost sensors are clear: the average uncertainty reduction per unit cost (performance/marginal cost ratio) of a 96 low-cost-sensor network is comparable to that of a network with 3-4 high-accuracy sensors (Fig. 6b). This affects both the network's capability to reduce uncertainty and its spatial coverage.

    Figure 6.  The (a) average uncertainty reduction and (b) average uncertainty reduction per unit cost for the urban land use as a function of the number of towers after a month of assimilation (afternoon hours) for the months of February and July. The unit cost per tower for the 5 ppm case is 10× that for the 7 ppm case. The legend in (b) means the same for (a).

    Figure 7.  Uncertainty reduction for networks using (a) 4, (b) 14, (c) 48 and (d) 96 observing points, with a correlation length 10 km and model-data mismatch of 5 ppm for July (afternoon hours).

    Despite the decreased efficiency per tower, adding new towers allows us to "see" areas near the added towers (Fig. 7). Thus, a network with too few stations (Fig. 7a), or one of only moderate density (Fig. 7b), would not be sufficient to constrain the fluxes for the whole urban area of Washington D.C. and Baltimore. Networks with larger numbers of observing points (Figs. 7c and d) would cover the urban areas with considerably improved values of uncertainty reduction.

    The addition of observing points also impacts the quality of the retrieved CO2 fluxes, as lower bias and SD are obtained with a larger number of towers (Fig. 8). In addition, the convergence to the true values is faster with a large number of towers, reducing the response time of the filter (Fig. 8a). With 12 towers, the spin-up time of the filter is on the order of 100 h, i.e., 20 days when only the 5 afternoon hours per day are assimilated. With 96 towers, the spin-up time is reduced to just 10 h (2 days). Reducing the spin-up time of the filter directly extends the analysis system's ability to constrain time-dependent fluxes to shorter time scales.

    Figure 8.  The (a) bias and (b) standard deviation (SD) evolution of the CO2 flux retrieval for the urban land use in July (afternoon hours) by using 2, 4, 12, 48 and 96 observing points, with a correlation length of 10 km and model-data mismatch of 5 ppm. The legend in (a) means the same for (b).

    Figure 9.  The (a) bias and (b) standard deviation (SD) evolution of the CO2 flux retrieval for the urban land use in February (afternoon hours) by using 24 observing points, with a correlation length of 10 km, model-data mismatch of 5 ppm, and 0, 1, 3 and 5 ppm total drift. The legend in (a) means the same for (b).

    Sensor drift is, by nature, a bias; it violates the Gaussian error assumptions and therefore biases the retrieved fluxes. A low drift (1 ppm) results in a small bias that increases slowly with time, and the SD remains low, comparable to the case with no drift (Fig. 9). On the other hand, larger drifts (3 and 5 ppm) produce bias and SD that increase dramatically with time. This renders the inversion results unusable, and periodic calibration would be required to minimize the impact of the drift. From these results, an optimum calibration period for large drifts would be on the order of half the spin-up time of the filter (∼ 20 h in the case of February, with 24 observing points).

4. Conclusions
  • High-resolution simulations along with clustering analysis were used to design a network of surface stations for the Washington D.C./Baltimore area. Thereafter, a Kalman filter within an OSSE was employed to evaluate the performance of networks consisting of differing numbers and locations of towers. Additionally, we explored the possibility of using a very high density network of low-cost, low-accuracy sensors characterized by larger uncertainties and drift over time.

    The results show that overly compact networks lose spatial coverage, whilst overly spread-out networks lose capability in constraining uncertainties in the fluxes. July presents weaker sensitivities and, therefore, lower uncertainty reduction. The tower distribution is also different for each season, caused by the different meteorological conditions. However, there are up to six coincident stations. In both cases, the networks show high sensitivity values and uncertainty reduction for most of the urban land use.

    The average uncertainty reduction for the urban land use increases as the number of towers increases. However, the uncertainty reduction is not additive and, therefore, the impact of adding towers decreases as the number of towers increases. With low-accuracy sensors, the trend is conserved but shifted to lower values of average uncertainty reduction. For instance, 12 low-cost, low-accuracy sensors would perform the same as 7 high-accuracy sensors. Nevertheless, the benefits of using low-cost sensors are clear: the average uncertainty reduction per unit cost (performance/marginal cost ratio) of a 96 low-cost-sensor network is comparable to that of a network with 3-4 high-accuracy sensors. This affects both the network's capability to reduce uncertainty and its spatial coverage. In addition, convergence to the true values is faster with a large number of towers, reducing the response time of the filter, which directly impacts our ability to constrain time-dependent fluxes at shorter time scales.

    Drift is, by nature, a bias added to the observations, and it therefore biases the retrieved fluxes. A low drift results in bias and SD comparable to the case with no drift. On the other hand, larger drifts result in bias and SD that increase dramatically with time. This renders the inversion results unusable, and periodic calibration would be required to minimize the impact of the drift.
