Introduction

  • Wind sets the number. In most downwind methods, wind speed scales the emission-rate estimate and wind direction steers the plume. Small wind errors → big flux errors.[1]
  • Winds often come from models (“reanalysis”). Errors in wind speed (WS) & direction (WD) propagate directly to flux:
    • Emissions (Flux) ~ Conc_meas × WS × Plume Width (a small numeric sketch follows this list)
  • Emissions scale linearly with speed. A +1 m/s wind bias increases the rate → if true wind = 5 m/s, error ~ +20%.[2]
  • When is reanalysis “good enough” to trust? Here we test an example workflow: a global workhorse, ERA5, stacked up against a short record of METEC’s winds.
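A quick numeric check of that scaling (a minimal sketch; variable names and values are illustrative, not tied to any specific tool):

```python
# Minimal sketch: linear propagation of a wind-speed bias to a downwind flux
# estimate, using the simplified relation Flux ~ Conc_meas * WS * Plume_Width.
# All names and values are illustrative only.

def flux_estimate(conc_meas, wind_speed, plume_width):
    """Simplified downwind flux: concentration x wind speed x plume width."""
    return conc_meas * wind_speed * plume_width

true_ws = 5.0               # m/s, "true" wind speed
biased_ws = true_ws + 1.0   # +1 m/s wind bias

true_flux = flux_estimate(conc_meas=1.0, wind_speed=true_ws, plume_width=1.0)
biased_flux = flux_estimate(conc_meas=1.0, wind_speed=biased_ws, plume_width=1.0)

rel_error = (biased_flux - true_flux) / true_flux
print(f"Relative flux error: {rel_error:+.0%}")   # -> +20%
```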

Overall Objective: Help you—

  • See where reanalysis fits day-to-day
  • Interpret uncertainty vs. site data
  • Apply a quick, repeatable, easy-to-use workflow and decide whether reanalysis is useful for your site

Site Meteorological Data


Site Description

Meteorological data were obtained from the Methane Emissions Technology Evaluation Center (METEC), operated by the Energy Institute at Colorado State University (CSU) in Fort Collins, CO. The site terrain is relatively flat shortgrass prairie. To the west, Horsetooth Reservoir is held in by a series of four dams connecting the Dakota Hogback, which rises 400–500 ft above METEC and is oriented NNW-SSE. At Soldier Canyon Dam, 1.9 km WSW of METEC (Fig. 1), the elevation begins to rise and climbs roughly 2,000 ft (~0.6 km) within a few km. These features, from the WSW to NNW, produce a strong bimodal wind pattern throughout the day (Fig. 1, inset), based on long-term data from nearby Christman Field[3] (CSU Atmospheric Science).

Fig. 1. Terrain influencing winds at METEC. METEC is closest to the ridge at Soldier Canyon Dam (yellow, lower left). Star indicates Christman Field Weather (Wx) Station[3] (lower right); annual wind rose inset.
Photo: scientific instrument tower at the METEC site.

Weather Data at METEC

METEC 6m Weather Station (Fig. 2)

  • 1 Hz output
  • RM Young 81000 3D sonic anemometer (WS, WD) @6.7 m
  • Temp/RH, Surface Pressure

ZX300 Lidar (Fig. 2, lower right)

  • 9 heights, 10 scans^a
  • Output: U, w, q (8 Hz)
  • Compact weather station
  • Future work—use in workflow

^a Heights (in m): 10, 15, 20, 30, 38, 10, 75, 100, 200, 300

ERA5 Data (see Reanalysis Data, below)


Reanalysis Data


What it is

A long, gridded record built by blending observations with one fixed model: an always-on backdrop for your sites.

Always-on backdrop?

  • Continuous, ready-to-query weather record
  • Hourly coverage spanning decades
  • Everywhere (global)
  • Multi-variable
  • Sensor-independent

What it’s for

Plan inspections, pick higher-yield windows, convert plumes to rates, and sanity-check when instruments are down.

What is higher-yield?

  • Wind “Sweet Spot” U ~ 2–6 m/s
  • Well-mixed BL
  • Steady wind
  • High visibility
  • Geometry (clear fetch & downwind alignment)
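Below is a minimal sketch of screening an hourly wind record for these higher-yield windows. The file name, column names (time, ws, wd), and the specific cutoffs (2–6 m/s, 09–17 local as a crude well-mixed proxy, ~1 m/s steadiness) are assumptions for illustration only:

```python
import pandas as pd

# Hypothetical hourly table with columns "time", "ws" (m/s), "wd" (deg).
df = pd.read_csv("era5_site_hourly.csv", parse_dates=["time"])

in_sweet_spot = df["ws"].between(2.0, 6.0)             # wind "sweet spot"
is_daytime = df["time"].dt.hour.between(9, 17)         # crude proxy for a well-mixed BL
steady = df["ws"].rolling(3, center=True).std() < 1.0  # steadiness over ~3 h

candidates = df[in_sweet_spot & is_daytime & steady]
print(f"{len(candidates)} of {len(df)} hours look favorable for a survey")
```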

What it is not

A fence-line sensor; treat as a calibrated backdrop and validate for high-stakes calls.

Calibrated backdrop?

  • Background meteorology
  • Validate locally
  • Adjust, flag small biases (low RMSE^b & MAE^c), or apply pattern corrections
  • Documentation

^b Root Mean Squared Error
^c Mean Absolute Error


Where it varies

By product, region, and provider/method—we make that sensitivity visible.

Visible sensitivity?

  • Side-by-side metrics
  • Directional error-by-sector
  • Region toggle (onshore/offshore)
  • Evaluate methods
  • By case/scenario

Table 1. Model reanalysis products with details (not a complete listing). Rows 5 and 6 are used in this analysis.

Row Number | Product | Coverage | Horiz. Grid Spacing | Δt | Heights (type)† | Includes BL?‡ | Notes
1 | 20CRv3 | GLM | ~75 km | 3 h | Model/P-levels (sfc winds via SLP assimilation) | — | Long historical record; surface-pressure-only data assimilation
2 | ASRv2 | Arctic*** | 15 km | 3 h (typ.) | 10 m diagnostic; model/P-levels | PBLH | Arctic-focused regional reanalysis (2000–2016)
3 | CERRA | Europe (pan-Euro domain) | 5.5 km | 1 h** | 10 m diagnostic; model levels | PBLH | European regional reanalysis; ensemble (CERRA-EDA)
4 | CFSR/CFSv2 | GLM | ~38-100 km | 6 h* | 10 m diagnostic; P-levels | PBLH | Coupled atmos-ocean-ice system (NCEP)
5 | ERA5 | GLM | ~31 km | 1 h | 10 m & 100 m diagnostic; full model/P-levels | BLH | Widely used global baseline; ensemble available for uncertainty
6 | ERA5-Land | GL | 9 km | 1 h | 10 m diagnostic (land points only) | — | Land-surface focused; pair w/ ERA5 for BLH over land
7 | HRRR-Reanalysis | CONUS (+ near-shore) | 3 km | 1 h | 10 m & ~80 m diagnostic; model levels | PBLH | High-res U.S. regional; useful in complex terrain
8 | JRA-3Q | GLM | ~55 km | 3-6 h | 10 m diagnostic; P-levels | PBLH | Newer JMA reanalysis, 1947→; successor to JRA-55
9 | JRA-55 | GLM | ~55 km | 6 h | 10 m diagnostic; P-levels | PBLH | JMA legacy global baseline, 1958→
10 | MERRA-2 | GLM | ~50 km | 1 h | 10 m diagnostic; model/P-levels | PBLH | Long, stable NASA record; aerosol-coupled
† “10 m” & “100 m” winds diagnosed at fixed heights AGL—not pressure levels (P-levels). Profiles are extrapolated to sensor heights (e.g., 6.7 m sonic).
‡ BLH vs PBLH: model-computed boundary layer heights (m AGL); the label differs by system. Useful as regime/stratification diagnostics for binning and mismatch checks.
* Varies by stream; ** Analyses 3-hourly; *** High-latitude domain

Acronyms/Abbrevs: AGL (Above Ground Level), Atmos (Atmospheric), BL (Boundary Layer), BLH (BL Height), CONUS (Continental U.S.), EDA (Ensemble Data Assimilation), HRRR (High Resolution Rapid Refresh), GL (Global, Land only), GLM (Global, Land & Marine), JMA (Japan Meteorological Agency), NASA (National Aeronautics and Space Administration), NCEP (Nat’l Centers for Environ. Prediction), P-level (Pressure level), PBLH (Planetary BL Height), Sfc (Surface), SLP (Sea Level Pressure)

Example Workflow: Is Reanalysis “Good Enough”?

Workflow: Ingest → Align → Extrapolate → Compare → Diagnose → Decide

Preliminary Workflow

Ingest

  1. Pull ERA5 (time-series CSV) for your site → 10 m wind, 2 m temperature & dewpoint (a minimal ingest sketch follows this list)
  2. Record product, version, variables, latitude, longitude, dates, and units
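A minimal ingest sketch, assuming the ERA5 extract was saved as a CSV with hypothetical column names (time, u10, v10, t2m, d2m); the metadata values are placeholders:

```python
import numpy as np
import pandas as pd

# Hypothetical ERA5 time-series CSV for one grid point near the site.
era5 = pd.read_csv("era5_metec.csv", parse_dates=["time"]).set_index("time")

# 10 m speed and meteorological direction (degrees FROM which the wind blows).
era5["ws10"] = np.hypot(era5["u10"], era5["v10"])
era5["wd10"] = np.degrees(np.arctan2(-era5["u10"], -era5["v10"])) % 360

# Record provenance so the comparison is reproducible (values are placeholders).
meta = {
    "product": "ERA5",
    "variables": ["u10", "v10", "t2m", "d2m"],
    "lat": 40.6, "lon": -105.1,            # approximate site coordinates (placeholder)
    "dates": "2025-09-04 to 2025-09-13",   # the September sample days
    "units": {"wind": "m/s", "temp": "K"},
}
```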

Align

  1. Clean site data → 10-min means; flag calms (< 0.5 m/s^d) and intervals with insufficient n
  2. For each hour, use a centered ±30 min window and a circular mean for direction (see the sketch below)
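A minimal alignment sketch, assuming a 1 Hz site record (site) indexed by timestamp with columns ws (m/s) and wd (deg); the thresholds follow the bullets above:

```python
import numpy as np
import pandas as pd

def circular_mean_deg(wd_deg):
    """Vector-average wind direction in degrees (NaNs ignored)."""
    rad = np.deg2rad(pd.Series(wd_deg).dropna())
    return np.rad2deg(np.arctan2(np.sin(rad).mean(), np.cos(rad).mean())) % 360

# "site" is an assumed 1 Hz DataFrame indexed by timestamp with "ws" (m/s), "wd" (deg).
ten_min = pd.DataFrame({
    "ws": site["ws"].resample("10min").mean(),
    "n":  site["ws"].resample("10min").count(),
    "wd": site["wd"].resample("10min").apply(circular_mean_deg),
})
ten_min["calm"] = ten_min["ws"] < 0.5         # calms: direction unreliable
ten_min["low_n"] = ten_min["n"] < 0.7 * 600   # <70% of the 600 samples expected at 1 Hz

good = ten_min[~ten_min["calm"] & ~ten_min["low_n"]]

# Hourly values from a centered ±30 min window (bins run :30 to :30, labeled at the hour).
hourly = good[["ws"]].resample("1h", offset="30min").mean()
hourly["wd"] = good["wd"].resample("1h", offset="30min").apply(circular_mean_deg)
hourly.index = hourly.index + pd.Timedelta("30min")
```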

Extrapolate

  1. Match heights: 10 m → sensor height using a power-law vertical wind profile (see the sketch below)
  2. Clearly label estimates and work on copies (i.e., keep all originals to avoid unrecoverable errors)
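A minimal sketch of the power-law adjustment. The exponent p ≈ 0.14 is a common neutral-conditions assumption for open terrain, not a value derived from this study:

```python
def power_law_adjust(ws_ref, z_ref=10.0, z_target=6.7, p=0.143):
    """Adjust wind speed from z_ref to z_target with a power-law profile:
    u(z_target) = u(z_ref) * (z_target / z_ref)**p."""
    return ws_ref * (z_target / z_ref) ** p

ws10 = 5.0                              # example ERA5 10 m wind speed (m/s)
ws_sonic_est = power_law_adjust(ws10)   # clearly labeled as an estimate at 6.7 m
print(round(ws_sonic_est, 2))           # -> ~4.72 m/s
```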

Compare

  1. Compute bias and error metrics (bias, MAE, RMSE, FAC2; see the sketch below)
  2. Split by day, night, and wind sector; display sample counts for ERA5 vs METEC
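A minimal sketch of the comparison metrics on the aligned hourly pairs (obs = METEC, mod = ERA5); direction differences are wrapped to ±180° before computing the circular metrics:

```python
import numpy as np

def wind_metrics(obs_ws, mod_ws, obs_wd, mod_wd):
    """Bias, MAE, RMSE, FAC2 for speed; circular MAE/RMSE for direction (deg)."""
    obs_ws, mod_ws = np.asarray(obs_ws, float), np.asarray(mod_ws, float)
    d = mod_ws - obs_ws
    fac2 = np.mean((mod_ws >= 0.5 * obs_ws) & (mod_ws <= 2.0 * obs_ws)) * 100
    dd = (np.asarray(mod_wd, float) - np.asarray(obs_wd, float) + 180) % 360 - 180
    return {
        "bias": d.mean(),
        "MAE": np.abs(d).mean(),
        "RMSE": np.sqrt((d ** 2).mean()),
        "FAC2_%": fac2,
        "cMAE": np.abs(dd).mean(),
        "cRMSE": np.sqrt((dd ** 2).mean()),
    }

# Tiny example with made-up numbers; split the real pairs by day/night and 30-deg sector.
print(wind_metrics(obs_ws=[3.0, 5.0], mod_ws=[3.5, 4.0],
                   obs_wd=[350.0, 20.0], mod_wd=[10.0, 40.0]))
```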

Diagnose

  1. Identify patterns: day/night, sector bias, wind-speed dependence, ERA5 vs METEC matches or mismatches
  2. Note sensitivities: Stability, “Sweet Spot”?

Decide

  • (Green) Low speed error, direction error ≤ 20°
  • (Yellow) One fails or large sector bias—add met
  • (Red) Both fail at night/in stable conditions or with complex fetch (a simple classification sketch follows the footnote below)

^d Calm winds in AERMOD and CALPUFF are both defined as < 0.5 m/s (calms cause unstable/unreliable wind directions).
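A minimal sketch of the decision step. The cutoffs mirror the Decide bullets and the sector discussion below; treat them as illustrative starting points, not fixed standards:

```python
def decide(rmse_ws, fac2_pct, dir_err_deg):
    """Traffic-light call per sector: green = use ERA5 as-is, yellow = anchor with
    short-term met, red = needs more anchor or site-level data. Thresholds illustrative."""
    if rmse_ws <= 1.1 and fac2_pct >= 80 and dir_err_deg <= 20:
        return "green"
    if rmse_ws > 1.3 or fac2_pct <= 41:
        return "red"
    return "yellow"

print(decide(rmse_ws=0.8, fac2_pct=89, dir_err_deg=15))   # -> "green"
```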


Ingest → Align → Extrapolate (not shown)

Table 2. Sample completeness and filtering for the September 2025 analysis days.

Sep 2025 Days | Dataset | n | Out of | Completeness | Calms (%) | Low n (<70%) | Total Filtered
4, 5, 6, 7, 9, 12, 13 | 10 min | 910 | 1008 | 90.3% | 9.0% | 0.7% | 9.7%
4, 5, 6, 7, 9, 12, 13 | Hourly | 151 | 168 | 89.9% | 4.8% | 5.4% | 10.1%
Fig 3. METEC vs ERA5 hourly winds for the 7 days: wind speed (top) and wind direction (bottom), with METEC on the y-axes and ERA5 on the x-axes.
  • METEC’s “Completeness” is fair/good (Table 2), ~90%.
  • Many WS values fall within FAC2 (FAC2 = factor of 2), but WD values cluster (Fig. 3).
  • METEC’s 7-day* wind rose shows major contributions from 1-3 mph, NW & SE (Fig. 4; much like in Fig. 1).
    *Not continuous (Table 2).
Fig 4. Radar plot of hourly METEC winds for the 7 days (not stacked).

Preliminary Results for METEC Sample



Compare → Diagnose

  • Height-matched WS bias is small (<±1 m/s); all height-mismatched comparisons are biased high (Fig. 5).
  • Day Bias → WS underestimated
  • Night Bias → WS overestimated
  • Overall → Low bias (Fig. 6, Table 3)
  • WD shows systematic deviation >90° in cMAE & cRMSE (c means circular, Table 3). Terrain-induced errors?
  • RMSE is ~40% > MAE (1.0 m/s, Table 3).
Fig 6. WS bias (m/s) for day, night, and overall, grouped by height-matched vs height-mismatched.

Overall, height-matched:

  • Bias = −0.06 m/s
  • MAE = 0.98 m/s
  • RMSE = 1.3 m/s
  • FAC2 69% → not a universal metric. Use sector filtering & additional met for tough decisions.
Fig 5. Hourly WS bias (m/s) by wind sector for height-matched and height-mismatched comparisons (green indicates n > 12). Bars show the standard deviation of the bias for each sector’s n.

Decide

  • (Green) Use ERA5 for sectors (Table 3, green):
    • (Green) RMSE ~ 0.7-1.1 m/s, FAC2 ~ 80-100% → “Good Enough” for scheduling/triage
  • (Yellow) Caution/anchor (Table 3; yellow or red):
    • (Yellow) Anchor with short-term met → RMSE >1.1 m/s, FAC2 ≤71%
    • (Red) Weakest sectors, 270-360° → RMSE >1.3 m/s, FAC2 ≤41%
       → Needs more anchor data, or use site-level data

Table 3. Summary statistics. FAC2 is the % of n within a factor of 2. Bias, MAE, and RMSE are in m/s and given as height-matched / height-mismatched (M / MM); cMAE and cRMSE are circular wind-direction errors (deg).

Color | Wind Sector | n | Bias (M / MM) | MAE (M / MM) | RMSE (M / MM) | FAC2 % (M / MM) | cMAE | cRMSE
Green | 0 | 3 | 1.0 / 1.1 | 1.4 / 1.5 | 1.5 / 1.6 | 100 / 100 | 1.8/21 | 12.5
Green | 30 | 5 | -0.4 / -0.3 | 0.7 / 0.7 | 1.1 / 1.0 | 80 / 80 | 116.4 | 117.4
Green | 60 | 9 | -0.1 / 0.0 | 0.7 / 0.7 | 0.8 / 0.8 | 89 / 89 | 114.0 | 120.2
Yellow | 90 | 17 | 0.3 / 0.4 | 0.8 / 0.9 | 1.1 / 1.1 | 71 / 76 | 109.7 | 112.0
Yellow | 120 | 24 | -0.5 / -0.4 | 1.0 / 1.0 | 1.3 / 1.3 | 71 / 71 | 138.8 | 143.9
Yellow | 150 | 13 | -0.7 / -0.6 | 0.8 / 0.9 | 1.2 / 1.2 | 62 / 62 | 134.9 | 140.9
Green | 180 | 10 | 0.1 / 0.2 | 0.5 / 0.6 | 0.7 / 0.7 | 80 / 80 | 93.7 | 105.8
Yellow | 210 | 12 | -0.6 / -0.5 | 1.3 / 1.2 | 1.5 / 1.5 | 50 / 58 | 100.4 | 114.8
Green | 240 | 10 | -0.4 / -0.3 | 1.0 / 1.0 | 1.1 / 1.1 | 80 / 80 | 108.6 | 115.6
Red | 270 | 27 | 0.5 / 0.6 | 1.0 / 1.0 | 1.3 / 1.4 | 41 / 41 | 104.7 | 112.4
Red | 300 | 18 | 0.2 / 0.3 | 1.2 / 1.3 | 1.7 / 1.8 | 61 / 56 | 124.1 | 133.2
Red | 330 | 3 | -0.3 / -0.1 | 3.1 / 3.2 | 3.5 / 3.6 | 33 / 0 | 128.3 | 137.9
 | OVERALL | | | 1.0 / 1.0 | 1.3 / 1.4 | 69 / 68 | 115.7 | 124.9
 | Day | 78 | -0.44 / -0.35 | 0.98 / 0.98 | 1.3 / 1.3 | 69 / 71 | 121.0 | 129.9
 | Night | 73 | 0.35 / 0.44 | 0.98 / 1.02 | 1.4 / 1.5 | 68 / 66 | 110.0 | 119.2
 | OVERALL | | | 0.98 / 1.00 | 1.3 / 1.4 | 69 / 68 | 115.7 | 124.9

Future Work

More bins, comparisons with other models (HRRR-Reanalysis), and, especially, more data.


Let’s Help Each Other

Uncertainty from reanalysis-based wind errors in emissions attribution requires further research [1]. Technological advances keep pushing detection boundaries, but they exist in narrow markets where detection uncertainties are not well understood across the range of equipment, methods, and weather conditions encountered globally. There’s still a lot to do!

We are looking for collaborators!

We are consulting with experts in the field and receiving valuable input, and we aim to develop tools and materials for technical staff in the energy sector that fill gaps, streamline processes, and ultimately save you time and effort.

Wanted: Site-Level Weather Data

  • What wind datasets are available that we can prototype through our workflow? → Compare anonymized data against other datasets[1] (onshore/offshore)
  • How much uncertainty? → Which areas matter most to you?
  • How does this impact dispersion modeling that uses reanalysis?
  • This work is most useful with a wide range of locations; help us better quantify uncertainty in ways that help you!

References, Acknowledgments and Contact Information

[1] Conrad BM, Johnson MR. 2025. Accounting for spatiotemporally correlated errors in wind speed for remote surveys of methane emissions. Atmos. Meas. Tech. [Manuscript in review]

[2] NIST (overview guide): Reanalysis Data Comparison Methods for Emissions Measurement Campaigns. https://nvlpubs.nist.gov/nistpubs/ir/2025/NIST.IR.8575.pdf

[3] Christman Field, CSU Atmospheric Science. www.atmos.colostate.edu/fccwx/fccwx_latest.php and https://coagmet.colostate.edu/station/fcc01_main.html


Contact Information:

Kira Shonkwiler, PhD | Research Scientist | CSU Energy Institute | [email protected]


Acknowledgments:

This work is in preparation for future data collection efforts onshore at METEC and other project locations, as well as offshore wind data measurement deployments managed by METEC.

Equipment at the METEC Site