Tropical cyclone rapid intensification events often cause destructive hurricane landfalls because they are associated with the strongest storms and with the largest forecast errors. Multi-decade observational datasets of tropical cyclone behavior have recently enabled documentation of upward trends in tropical cyclone rapid intensification in several basins. However, a robust anthropogenic signal in global intensification trends, and the physical drivers of those trends, have yet to be identified. To address these knowledge gaps, here we compare the observed trends in intensification and tropical cyclone environmental parameters to simulated natural variability in a high-resolution global climate model. In multiple basins and the global dataset, we detect a significant increase in intensification rates with a positive contribution from anthropogenic forcing. Furthermore, thermodynamic environments around tropical cyclones have become more favorable for intensification, and climate models show anthropogenic warming has significantly increased the probability of these changes.
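The detection step described above amounts to asking whether an observed intensification trend lies outside the range of trends generated by the model's unforced internal variability. A minimal sketch of that kind of comparison follows; the function and variable names are hypothetical, and the study's actual statistics and bias corrections are more involved.

```python
import numpy as np

def linear_trend(series):
    """Least-squares slope of an annual series (units per year)."""
    years = np.arange(len(series))
    return np.polyfit(years, series, 1)[0]

def compare_to_internal_variability(obs_series, control_series, n_segments=1000, seed=0):
    """Compare an observed trend to trends from equal-length segments of a
    long unforced control simulation (a simple detection-style test)."""
    rng = np.random.default_rng(seed)
    n = len(obs_series)
    obs_trend = linear_trend(obs_series)
    starts = rng.integers(0, len(control_series) - n, size=n_segments)
    control_trends = np.array([linear_trend(control_series[s:s + n]) for s in starts])
    # Fraction of unforced segments whose trend is at least as large as observed
    p_value = np.mean(control_trends >= obs_trend)
    return obs_trend, p_value
```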
Statistical downscaling (SD) methods used to refine future climate change projections produced by physical models have been applied to a variety of variables. We evaluate four empirical distribution-based SD methods as applied to daily precipitation, which presents a special challenge because of its binary nature (wet vs. dry days) and its tendency toward a long right tail. Using data over the Continental U.S., we adopt a ‘Perfect Model’ approach in which data from a large-scale dynamical model are used as a proxy for both observations and model output. This experimental design allows for an assessment of the expected performance of SD methods under a future high-emissions climate-change scenario. We find that performance is tied much more to configuration options than to the choice of SD method. In particular, proper handling of dry days (i.e., those with zero precipitation) is crucial to success. Although SD skill in reproducing day-to-day variability is modest (~15–25%), about half that found for temperature in our earlier work, skill is much greater with regard to reproducing the statistical distribution of precipitation (~50–60%). This disparity results from the stochastic nature of precipitation, as pointed out by other authors. Distributional skill in the tails is lower overall (~30–35%), and in some regions and seasons it is small to non-existent. Even when SD skill in the tails is reasonably good, in some instances, particularly in the southeastern United States during summer, absolute daily errors at some gridpoints can be large (~20 mm or more), highlighting the challenges in projecting future extremes.
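Because the abstract singles out dry-day handling as the crucial configuration choice, the sketch below illustrates one common way empirical distributional methods treat it: threshold the model series so its historical wet-day frequency matches observations, then quantile-map only the wet-day amounts. This is an illustrative sketch with hypothetical names (precipitation assumed in mm/day), not necessarily one of the exact configurations evaluated in the paper.

```python
import numpy as np

def quantile_map_precip(model_hist, obs_hist, model_fut, wet_threshold=0.1):
    """Empirical quantile mapping for daily precipitation with explicit
    dry-day handling; precipitation assumed in mm/day."""
    model_hist, obs_hist, model_fut = map(np.asarray, (model_hist, obs_hist, model_fut))

    # Pick a model threshold so the model's historical wet-day fraction
    # matches the observed wet-day fraction (corrects a common drizzle bias).
    obs_wet_frac = np.mean(obs_hist > wet_threshold)
    model_threshold = np.quantile(model_hist, 1.0 - obs_wet_frac)

    obs_wet = obs_hist[obs_hist > wet_threshold]
    model_wet = model_hist[model_hist > model_threshold]

    downscaled = np.zeros_like(model_fut, dtype=float)
    wet = model_fut > model_threshold

    # Quantile-map wet-day amounts: empirical CDF under the model's wet days,
    # inverted through the observed wet-day distribution.
    ranks = np.searchsorted(np.sort(model_wet), model_fut[wet]) / len(model_wet)
    downscaled[wet] = np.quantile(obs_wet, np.clip(ranks, 0.0, 1.0))
    return downscaled
```

Future days falling below the model threshold are simply set to zero here; more elaborate dry-day treatments exist, and the choice matters, as the results above indicate.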
Statistical downscaling methods are extensively used to refine future climate change projections produced by physical models. Distributional methods, which are among the simplest to implement, are also among the most widely used, either by themselves or in conjunction with more complex approaches. Here, building on earlier work, we evaluate the performance of seven methods in this class that range widely in their degree of complexity. We employ daily maximum temperature over the Continental U.S. in a "Perfect Model" approach in which the output from a large-scale dynamical model is used as a proxy for both observations and model output. Importantly, this experimental design allows one to estimate expected performance under a future high-emissions climate-change scenario.
We examine skill over the full distribution as well as in the tails, seasonal variations in skill, and the ability to reproduce the climate change signal. Viewed broadly, there are generally modest differences in performance across the majority of the methods. However, the philosophical paradigms used to define the downscaling algorithms divide the seven methods into two classes of better versus poorer overall performance. In particular, the bias-correction-plus-change-factor approach performs better overall than the bias-correction-only approach. Finally, we examine the performance of some special tail treatments, introduced in our earlier work, that are based on extensions of a widely used existing scheme. We find that these tail treatments provide a further enhancement in downscaling extremes.
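To make the two paradigms concrete, the sketch below contrasts a bias-correction-only quantile mapping with one additive bias-correction-plus-change-factor formulation (sometimes described as a quantile-delta approach) for a variable such as daily maximum temperature. It is a schematic under our own naming, not a reproduction of any of the seven methods evaluated here.

```python
import numpy as np

def ecdf(sample, values):
    """Empirical CDF of `sample` evaluated at `values`."""
    return np.searchsorted(np.sort(sample), values) / len(sample)

def inv_ecdf(sample, probs):
    """Empirical quantile function of `sample`."""
    return np.quantile(sample, np.clip(probs, 0.0, 1.0))

def bias_correction_only(model_hist, obs_hist, model_fut):
    """Map future model values through the historical model-to-obs
    transfer function: x -> F_obs_hist^-1(F_mod_hist(x))."""
    return inv_ecdf(obs_hist, ecdf(model_hist, model_fut))

def bias_correction_plus_change_factor(model_hist, obs_hist, model_fut):
    """Bias-correct at each future value's own quantile, then add back the
    model-projected change at that quantile (additive change factor)."""
    probs = ecdf(model_fut, model_fut)                 # quantile of each future value
    corrected = inv_ecdf(obs_hist, probs)              # observed value at that quantile
    change = model_fut - inv_ecdf(model_hist, probs)   # model-projected shift
    return corrected + change
```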
Tropical cyclones that rapidly intensify are typically associated with the highest forecast errors and cause a disproportionate amount of human and financial losses. Therefore, it is crucial to understand if, and why, there are observed upward trends in tropical cyclone intensification rates. Here, we utilize two observational datasets to calculate 24-hour wind speed changes over the period 1982–2009. We compare the observed trends to natural variability in bias-corrected, high-resolution, global coupled model experiments that accurately simulate the climatological distribution of tropical cyclone intensification. Both observed datasets show significant increases in tropical cyclone intensification rates in the Atlantic basin that are highly unusual compared to model-based estimates of internal climate variations. Our results suggest a detectable increase in Atlantic intensification rates with a positive contribution from anthropogenic forcing, and reveal a need for more reliable data before a robust trend can be detected at the global scale.
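For reference, the basic quantity analyzed, the 24-hour change in maximum sustained wind, can be computed from 6-hourly best-track fixes roughly as follows. This is a bare-bones sketch with illustrative inputs; the study's processing (dataset homogenization and bias correction of the model experiments, among other steps) is considerably more involved.

```python
import numpy as np

def intensification_rates_24h(wind_kt, hours_between_fixes=6):
    """24-hour forward changes in maximum sustained wind (kt) from a single
    storm's best-track record; assumes regularly spaced fixes."""
    steps = 24 // hours_between_fixes            # number of fixes spanning 24 hours
    wind_kt = np.asarray(wind_kt, dtype=float)
    return wind_kt[steps:] - wind_kt[:-steps]

# Example with illustrative 6-hourly fixes; rapid intensification is often
# defined as an increase of at least 30 kt in 24 hours.
winds = [45, 50, 55, 65, 75, 90, 100, 105]
rates = intensification_rates_24h(winds)
ri_events = rates >= 30
```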
The cumulative distribution function transform (CDFt) downscaling method has been used widely to provide local-scale information and bias correction to output from physical climate models. The CDFt approach belongs to the category of statistical downscaling methods that operate via transformations between statistical distributions. Although numerous studies have demonstrated that such methods provide value overall, much less effort has focused on their performance with regard to values in the tails of distributions. We evaluate the performance of CDFt-generated tail values based on four distinct approaches, two native to CDFt and two of our own creation, in the context of a "Perfect Model" setting in which global climate model output is used as a proxy for both observational and model data. We find that the native CDFt approaches can have sub-optimal performance in the tails, particularly with regard to the maximum value. However, our alternative approaches provide substantial improvement.
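For orientation, the core CDFt idea can be written as estimating the future local-scale CDF by composing historical CDFs, F_obs_fut = F_obs_hist ∘ F_mod_hist^-1 ∘ F_mod_fut, and then quantile-mapping future model values onto that estimate. The empirical sketch below is schematic only; published CDFt implementations handle out-of-range and tail values with additional care, which is precisely the behavior examined in this paper.

```python
import numpy as np

def ecdf(sample, values):
    """Empirical CDF of `sample` evaluated at `values`."""
    return np.searchsorted(np.sort(sample), values) / len(sample)

def inv_ecdf(sample, probs):
    """Empirical quantile function of `sample`."""
    return np.quantile(sample, np.clip(probs, 0.0, 1.0))

def cdft_schematic(obs_hist, model_hist, model_fut):
    """Schematic empirical CDFt: downscaled x = F_obs_fut^-1(F_mod_fut(x)),
    where F_obs_fut = F_obs_hist o F_mod_hist^-1 o F_mod_fut; the inverse
    composes into the three-step chain below."""
    probs = ecdf(model_fut, model_fut)                     # F_mod_fut(x)
    mapped = ecdf(model_hist, inv_ecdf(obs_hist, probs))   # F_mod_hist(F_obs_hist^-1(.))
    return inv_ecdf(model_fut, mapped)                     # F_mod_fut^-1(.)
```

Note that when the future and historical model distributions coincide, this reduces to ordinary quantile mapping of the model onto observations.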
Statistical downscaling is used widely to refine projections of future climate. Although generally successful, in some circumstances it can lead to highly erroneous results.
Statistical downscaling (SD) is commonly used to provide information for the assessment of climate change impacts. Using as input the output from large-scale dynamical climate models together with observation-based data products, it aims to provide finer-grained detail and to mitigate systematic biases. It is generally recognized as providing added value. However, one of the key assumptions of SD is that the relationships used to train the method during a historical time period remain unchanged in the future, in the face of climate change. The validity of this assumption is typically quite difficult to assess in the normal course of analysis, as observations of future climate are lacking. We approach this problem using a “Perfect Model” experimental design in which high-resolution dynamical climate model output is used as a surrogate for both past and future observations.
We find that while SD in general adds considerable value, in certain well-defined circumstances it can produce highly erroneous results. Furthermore, the breakdown of SD in these contexts could not have been foreseen during the typical course of evaluation based only on available historical data. We diagnose and explain the reasons for these failures in terms of physical, statistical, and methodological causes. These findings highlight the need for caution in the use of statistically downscaled products, as well as the need for further research to uncover other hitherto unknown pitfalls, perhaps utilizing more advanced “Perfect Model” designs than the one we have employed.
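As a concrete illustration of the “Perfect Model” logic, one can degrade the high-resolution run to stand in for large-scale model output, train the SD method on the historical coarse/high-resolution pair, and then score it against the withheld high-resolution future fields, a comparison that is impossible with real observations. The sketch below shows only a simple block-averaging coarsening step and the evaluation outline; the grid factor and names are illustrative, not those of the experiments described here.

```python
import numpy as np

def coarsen(field, factor):
    """Block-average a 2-D high-resolution field onto a coarser grid,
    mimicking large-scale model output in a Perfect Model design."""
    ny, nx = field.shape
    ny, nx = ny - ny % factor, nx - nx % factor
    blocks = field[:ny, :nx].reshape(ny // factor, factor, nx // factor, factor)
    return blocks.mean(axis=(1, 3))

# Outline of the evaluation:
#   1. Treat the high-resolution run as "truth" for both past and future periods.
#   2. Coarsen it (e.g., coarsen(field, factor=8)) to create proxy "model" output.
#   3. Train the SD method on the historical pair (coarse -> high-resolution).
#   4. Apply it to the coarse future fields and compare against the withheld
#      high-resolution future fields to measure skill under climate change.
```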