The fifth National Climate Assessment (NCA5) is out, primarily based on high-resolution, statistically downscaled climate projections of CMIP6 Global Climate Models (GCMs). From being a poor man's methodology for getting around the scale mismatch between GCMs and policy-relevant decision-making merely a decade ago to taking center stage in a national climate assessment as the data for analyzing regional and local climate change and its impacts across the United States, statistical downscaling has had a stellar rise. But fame invites greater scrutiny, so there is a need to get under the hood and highlight some yeas and nays of statistical downscaling.
Downscaling in Climate Science
The most common definition of downscaling is the spatial refinement of a dataset. In climate science, dynamical downscaling refers to refining GCM simulations by using them as forcing at the lateral and lower boundaries of a higher-resolution regional climate model (RCM). RCMs are physical models configured over a limited area of interest and offer the benefit of high resolution. Moreover, parametrizations within an RCM can be tuned for specific regions, making it possible to have unique configurations optimized for distinct regions across the globe. However, generating a large multi-RCM ensemble is challenging due to computational cost and data archival needs.
Statistical downscaling, on the other hand, is strictly a mathematical procedure that involves correcting the biases in model simulations and spatially refining them to a grid spacing comparable to the available observations. Compared to dynamical downscaling, statistical downscaling requires far fewer resources, and therefore, generating a large ensemble of statistically downscaled datasets is relatively easy. However, it is limited to the few variables for which long time series of observations are available.
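For the curious, the spatial-refinement step of a BCSD-style procedure can be sketched in a few lines. This is a minimal, hypothetical illustration, not an operational implementation: the function name and array shapes are assumptions, the fine grid is assumed to be an integer refinement of the coarse grid, and real BCSD interpolates smoothly and works through quantiles rather than the simple ratio scaling shown here.

```python
import numpy as np

def bcsd_spatial_disaggregation(coarse_precip, coarse_clim, fine_clim):
    """Disaggregate a bias-corrected coarse precipitation field by scaling
    a fine-scale observed climatology (BCSD-style sketch).

    coarse_precip : (ny_c, nx_c) bias-corrected coarse field for one step
    coarse_clim   : (ny_c, nx_c) coarse-grid observed climatology
    fine_clim     : (ny_f, nx_f) fine-grid observed climatology
    """
    # Multiplicative anomaly on the coarse grid (precipitation is non-negative);
    # guard against division by zero in dry-climatology cells.
    ratio = np.divide(coarse_precip, coarse_clim,
                      out=np.zeros_like(coarse_precip), where=coarse_clim > 0)
    fy = fine_clim.shape[0] // ratio.shape[0]
    fx = fine_clim.shape[1] // ratio.shape[1]
    # Nearest-neighbour replication of the coarse ratio onto the fine grid;
    # an operational implementation would interpolate smoothly.
    ratio_fine = np.repeat(np.repeat(ratio, fy, axis=0), fx, axis=1)
    # The fine-scale spatial pattern comes entirely from the observations.
    return ratio_fine * fine_clim
```

Note how the refined field inherits all of its fine-scale structure from the observed climatology, which is precisely why statistical downscaling cannot add storm structures the driving GCM never produced.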
Out of the box, the climatological representation in statistically downscaled data is often more impressive than in its dynamically downscaled counterpart, as it contains little to no bias in the historical period. In contrast, RCM simulations still suffer from biases and will most likely need bias correction before use in downstream climate change impact analyses.
This brief introduction raises a few questions: Why would we prefer one approach over the other? If RCM simulations still need bias correction, why not simply limit ourselves to statistically downscaling coarse-resolution GCMs to the grid spacing of observations?
To answer these questions convincingly, one needs supporting evidence beyond scientific conjecture. Therefore, the rest of the discussion derives its support from four data sources: a GCM (ACCESS-CM2) from CMIP6, two statistically downscaled versions of that GCM, and one dynamically downscaled version of the same GCM. The statistically downscaled datasets are from NASA NEX (Bias Correction Spatial Disaggregation; BCSD) and LOcalized Constructed Analogs version 2 (LOCA2). LOCA2 is one of the two datasets used in NCA5. The dynamically downscaled data is our in-house ORNL product using RegCM, an RCM. The ACCESS-CM2 grid spacing is 1.85° x 1.25°. The RCM and NASA NEX downscaled data are at ~0.25° x 0.25° (~1/4°) grid spacing, and LOCA2 is at ~0.0625° x 0.0625° (~1/16°).
The need for spatial refinement
The need for spatially resolved climate change simulations is motivated by the desire to identify the vulnerabilities of natural and human systems to anthropogenic climate change. To this end, the accuracy of the spatial distribution of impact-relevant climate model outputs is critical. Take the example of a weather system dumping precipitation over California. The strongest band of precipitation typically falls over the Sierra Nevada mountains, where cold-season accumulation is a major source of water for irrigation, water supply, and power generation. These mountains are unresolved at typical GCM grid spacing; hence, the distribution of precipitation across California is unrealistic. Using this GCM data as-is in hydrological impact assessments and future water resource planning is not meaningful. Downscaling the GCM resolves this issue.
Figure illustration: Precipitation event in a GCM and its downscaled versions using statistical and dynamical downscaling approaches
Better representation of fine-scale processes
Resolving precipitation spatial biases does not necessarily improve precipitation-generating processes. In the United States, most storms move from west to east. East of the Rockies, a significant fraction of annual precipitation is associated with mesoscale convective systems (MCSs) that often move diagonally in a line known as a squall line. More severe forms of these storms may have a bow echo structure. At the synoptic scale, significant weather activity comes through mid-latitude cyclones that can travel long distances and last several days.
At a grid spacing of >100 km, GCMs can reasonably represent mid-latitude cyclone movement and precipitation; however, such coarse resolution is insufficient to resolve mesoscale structures associated with MCSs. Therefore, the sharp bands of squall lines or vortices in MCS may be poorly defined or nonexistent in a GCM simulation.
Statistical downscaling corrects the magnitudes of GCM precipitation at finer grid spacing. However, it cannot solve the issue of poorly defined mesoscale storm structures. If a GCM produces precipitation east of the Rockies without having squall lines and MCS vortices, then a mathematically refined version of its precipitation through statistical downscaling would likely also suffer from the same issues. On the other hand, RCM-based dynamical downscaling can significantly improve the accuracy of fine-scale precipitation-generating processes.
An animation showcasing 100 cases from two years of GCM data and its dynamically and statistically downscaled versions provides a visually appealing illustration supporting this point.
Animation illustration: This animation shows 100 distinct precipitation events in a GCM and its downscaled versions using statistical and dynamical downscaling approaches. These events are taken from two simulated years in the GCM. Note the error in precipitation units. The correct units are mm d⁻¹.
Grid spacing versus resolved scales
Grid-based observations primarily rely on data from unevenly distributed meteorological stations. Therefore, their grid spacing often does not represent the actual resolvable scales at that spacing. This issue can be problematic for statistical downscaling, which relies on reference observations for training and correction.
Consequently, in a comparison at comparable grid spacing, dynamically downscaled data will likely have better-resolved scales than statistically downscaled data. For instance, the RCM-based dynamically downscaled data and the NASA NEX statistically downscaled (BCSD) data both have approximately 1/4° horizontal grid spacing, yet the RCM data represents the precipitation distribution in much greater detail (see figures and animation). The same outcome would be expected in the RCM and LOCA2 comparison if an RCM version at 1/16° grid spacing were available.
Future climate change
Understanding the fine-scale responses, feedbacks, and impacts of climate change is crucial in developing resilient socio-ecological systems, which requires having fine-scale climate change information on stakeholder-relevant climate variables, such as temperature and precipitation. For instance, in the western United States, rising temperatures have already caused a decrease in seasonal snow accumulation. Changes in snow hydrology under continued warming can exacerbate hydrological changes through the snow-albedo feedback: melting snow exposes darker surfaces that reflect less sunlight back to the atmosphere, so more solar energy is absorbed, leading to further warming. Therefore, regions currently receiving significant snow accumulation will likely experience pronounced warming in the coming decades if substantial changes occur in snow hydrology.
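A back-of-the-envelope calculation makes the snow-albedo feedback concrete. The albedo and insolation values below are typical textbook numbers (fresh snow ~0.8, bare ground ~0.2), not values taken from the datasets discussed here:

```python
# Toy snow-albedo illustration: shortwave energy absorbed by the surface
# before and after snow loss, using typical textbook albedo values.
INSOLATION = 250.0  # W m^-2, illustrative mean surface insolation

def absorbed_shortwave(albedo, insolation=INSOLATION):
    """Shortwave energy absorbed by a surface with the given albedo."""
    return (1.0 - albedo) * insolation

snow_covered = absorbed_shortwave(0.8)  # ~50 W m^-2 absorbed
snow_free = absorbed_shortwave(0.2)     # ~200 W m^-2 absorbed
print(f"Extra absorbed energy after snow loss: "
      f"{snow_free - snow_covered:.0f} W m^-2")
```

A fourfold jump in absorbed shortwave energy over a newly snow-free surface is why getting the timing and elevation of snow loss right matters so much for projected warming.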
A typical GCM resolution cannot accurately represent the diverse landscapes across the western United States, resulting in biases in the topography-dependent precipitation and temperature distributions. Statistical downscaling effectively corrects these issues. However, a GCM's coarse resolution also limits its ability to simulate spatially heterogeneous threshold responses to anthropogenic forcings, such as those related to snow hydrology, which in turn affects the accuracy of projected temperature changes in the valleys and at higher elevations across the western United States.
Although statistical downscaling adjusts temperatures in historical and future simulations using reference observations, it cannot correct physical response biases in GCM simulations. Thus, statistically downscaled datasets are ineffective in correcting GCM errors in topography-dependent snow-hydrology-driven temperature responses, which may result in inconsistent outcomes in downstream fine-scale impact analyses using these datasets. On the other hand, dynamical downscaling can reduce errors in the simulation of these threshold responses by improving the spatial heterogeneity and associated fine-scale Earth system feedbacks.
A comparison of simulated climate change in spring, summer, and fall across the four datasets nicely illustrates this point. A continued increase in anthropogenic forcing and warming under the higher-end SSP5 scenario will likely reduce snow accumulation in the western United States by mid-century. Consequently, an elevation-dependent response in temperature changes is expected, particularly at the mid-level elevations that will become snow-free earlier in the spring. This physically consistent response is present in the RCM's simulated temperature changes in spring but nonexistent in the other datasets. It appears later, in summer, in the GCM simulation, most likely due to excessive snow biases, but it does not exhibit physically consistent elevation-dependent variations owing to the GCM's coarse resolution.
Unfortunately, statistical downscaling techniques cannot correct this bias in the GCM's simulated temperature response, as they are constrained not to substantially alter the GCM's simulated climate change signal. Although LOCA2 adds some spatial variation to the GCM signal, these enhancements are hard to explain physically.
Figure caption: Mid-century future (2041–2060) seasonal maximum temperature changes in the western United States. The black and purple line contours represent the approximate location of elevated surfaces above 1500 and 2500 meters above sea level. Changes are with reference to 1995–2014.
Biases in dynamical downscaling
Despite the advantages of dynamical downscaling over statistical downscaling listed above in representing fine-scale physical processes and Earth system responses, RCM simulations still suffer from significant biases in precipitation and temperature distributions. Therefore, it is imperative to avoid the direct use of dynamically downscaled data in climate change impact assessments as well. Bias correction is necessary regardless of the methodology used for downscaling GCM simulations.
Summary
Although statistical downscaling has made significant progress, process-based dynamical downscaling is still necessary to improve the representation of climate system processes and feedbacks at finer scales. High-resolution statistical downscaling will not improve the physical consistency of GCM-simulated precipitation-producing storms or correct spatial errors in the GCM's simulated threshold responses to anthropogenic forcing. Therefore, with abundant statistically downscaled data available, it is crucial to systematically understand which investigations should and should not be carried out using such datasets. This blog post provides a few illustrative examples and covers only a subset of the aspects that require further investigation.
For downscaling GCMs, a hybrid framework that combines the best of both worlds could be a solution, where RCMs dynamically downscale GCMs at intermediate resolution, and statistical downscaling further corrects and refines the spatial scales comparable to available observations.
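As a concrete example of the statistical-correction step in such a framework, empirical quantile mapping, a widely used bias-correction technique, can be sketched in a few lines. The function name and toy setup are illustrative; production implementations handle distribution tails, trends, and wet-day frequencies far more carefully.

```python
import numpy as np

def quantile_map(model_hist, obs, model_vals):
    """Empirical quantile mapping: transform model values so their
    distribution matches that of the reference observations.

    model_hist : 1-D sample of model values over the training period
    obs        : 1-D sample of observations over the same period
    model_vals : values to correct (historical or future)
    """
    # Non-exceedance probability of each value under the model's own
    # empirical CDF (linear interpolation between sorted training values)...
    q = np.interp(model_vals, np.sort(model_hist),
                  np.linspace(0.0, 1.0, model_hist.size))
    # ...then mapped back through the observed quantile function.
    return np.quantile(obs, q)
```

Because the mapping is built from the historical distributions only, it removes systematic offsets while largely preserving the model's simulated change signal, which is exactly the constraint discussed above: it corrects distributions, not physics.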
Nice blog, Moetasim! To underline the point made in your last paragraph, the TRANSLATE program in Ireland (www.met.ie/science/translate, or https://www.frontiersin.org/articles/10.3389/fclim.2023.1166828 ) used a hybrid downscaling approach, with dynamical downscaling by ensembles of RCMs to get to a 4km grid, and then statistical downscaling (using quantile mapping, and "degraded observations" corrections) to get down to the "observational" grid of ~1km. Quantile mapping is particularly good insofar as it can do both bias correction and statistical downscaling at the same time.
It wasn't so long ago when any kind of downscaling at all was regarded with skepticism at best. Roger Pielke wrote an EOS article in 2012 titled "Regional climate downscaling: what's the point?" (https://agupubs.onlinelibrary.wiley.com/doi/epdf/10.1029/2012EO050008 ), which includes the statement: "... dynam…