Climate models under pressure: problems with predictive performance give cause for concern

The 2015/16 El Niño is over, and so are the alarmists' celebrations that came with it. It is becoming ever clearer that the climate models' forecasts were wildly exaggerated. As early as April 2015, Duke University announced in a press release that the worst-case IPCC temperature projections should be shelved without delay:

Global Warming More Moderate Than Worst-Case Models
A new study based on 1,000 years of temperature records suggests global warming is not progressing as fast as it would under the most severe emissions scenarios outlined by the Intergovernmental Panel on Climate Change (IPCC).  

“Based on our analysis, a middle-of-the-road warming scenario is more likely, at least for now,” said Patrick T. Brown, a doctoral student in climatology at Duke University’s Nicholas School of the Environment. “But this could change.” The Duke-led study shows that natural variability in surface temperatures — caused by interactions between the ocean and atmosphere, and other natural factors — can account for observed changes in the recent rates of warming from decade to decade. The researchers say these “climate wiggles” can slow or speed the rate of warming from decade to decade, and accentuate or offset the effects of increases in greenhouse gas concentrations. If not properly explained and accounted for, they may skew the reliability of climate models and lead to over-interpretation of short-term temperature trends.

The research, published today in the peer-reviewed journal Scientific Reports, uses empirical data, rather than the more commonly used climate models, to estimate  decade-to-decade variability. “At any given time, we could start warming at a faster rate if greenhouse gas concentrations in the atmosphere increase without any offsetting changes in aerosol concentrations or natural variability,” said Wenhong Li, assistant professor of climate at Duke, who conducted the study with Brown. The team examined whether climate models, such as those used by the IPCC, accurately account for natural chaotic variability that can occur in the rate of global warming as a result of interactions between the ocean and atmosphere, and other natural factors.

To test how accurate climate models are at accounting for variations in the rate of warming, Brown and Li, along with colleagues from San Jose State University and the USDA, created a new statistical model based on reconstructed empirical records of surface temperatures over the last 1,000 years. “By comparing our model against theirs, we found that climate models largely get the ‘big picture’ right but seem to underestimate the magnitude of natural decade-to-decade climate wiggles,” Brown said. “Our model shows these wiggles can be big enough that they could have accounted for a reasonable portion of the accelerated warming we experienced from 1975 to 2000, as well as the reduced rate in warming that occurred from 2002 to 2013.”  

Further comparative analysis of the models revealed another intriguing insight. “Statistically, it’s pretty unlikely that an 11-year hiatus in warming, like the one we saw at the start of this century, would occur if the underlying human-caused warming was progressing at a rate as fast as the most severe IPCC projections,” Brown said. “Hiatus periods of 11 years or longer are more likely to occur under a middle-of-the-road scenario.” Under the IPCC’s middle-of-the-road scenario, there was a 70 percent likelihood that at least one hiatus lasting 11 years or longer would occur between 1993 and 2050, Brown said.  “That matches up well with what we’re seeing.” There’s no guarantee, however, that this rate of warming will remain steady in coming years, Li stressed. “Our analysis clearly shows that we shouldn’t expect the observed rates of warming to be constant. They can and do change.” 

Paper: Patrick T. Brown, Wenhong Li, Eugene C. Cordero and Steven A. Mauget. Comparing the Model-Simulated Global Warming Signal to Observations Using Empirical Estimates of Unforced Noise. Scientific Reports, April 21, 2015. DOI: 10.1038/srep09957
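How likely a multi-year hiatus is under a given forced warming rate can be illustrated with a small Monte Carlo experiment along the lines of Brown's argument: superimpose decade-scale "wiggles" on a steady forced trend and count how often an 11-year stretch shows no warming. The sketch below is not the authors' code; the trend rates, noise amplitude and persistence are illustrative assumptions.

# A minimal sketch of the hiatus-frequency argument quoted above. All numbers
# (trend rates, noise amplitude, AR(1) persistence) are illustrative
# assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

def has_hiatus(series, window=11):
    """Return True if any `window`-year stretch has a non-positive linear trend."""
    years = np.arange(window)
    for start in range(len(series) - window + 1):
        slope = np.polyfit(years, series[start:start + window], 1)[0]
        if slope <= 0.0:
            return True
    return False

def hiatus_frequency(trend_per_year, n_years=58, n_runs=2000, sigma=0.11, phi=0.6):
    """Monte Carlo frequency of at least one hiatus in a forced trend + AR(1) noise series."""
    count = 0
    for _ in range(n_runs):
        noise = np.zeros(n_years)
        for t in range(1, n_years):
            noise[t] = phi * noise[t - 1] + rng.normal(0.0, sigma)
        series = trend_per_year * np.arange(n_years) + noise
        if has_hiatus(series):
            count += 1
    return count / n_runs

# 1993-2050 spans 58 years; compare a moderate and a steep assumed forced rate.
print("moderate (0.02 K/yr):", hiatus_frequency(0.02))
print("steep    (0.04 K/yr):", hiatus_frequency(0.04))

With such toy settings the hiatus frequency drops markedly as the assumed forced rate increases, which is the qualitative point made in the press release; the actual percentages depend entirely on the assumed noise model.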

The modellers like to pat each other on the back: nicely modelled, dear colleague! Part of this, of course, are calibration tests against the past. In most studies, however, these only begin in the Little Ice Age, the coldest phase of the last 10,000 years. When the models then appear to reproduce the subsequent rewarming, the delight is great: look, everything works. The main driver of that warming, however, remains unclear. Isn't it only logical that a natural cold phase is followed by rewarming? Is it coincidence or necessity that CO2 rose during this phase? Calibration tests reaching back to the Medieval Warm Period would be more honest. Only once the pre-industrial warm phases can be reproduced successfully are the models validated.
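What such a test could look like in practice is sketched below: score a simulation against a reconstruction once over a window starting in the Little Ice Age and once over a window reaching back to the Medieval Warm Period. The file names, column layout and the simple correlation score are placeholder assumptions; any annual-mean reconstruction and simulation on a common time axis would do.

# A minimal sketch of the proposed check: compare model-reconstruction agreement
# over two calibration windows. File names and layout are hypothetical.
import numpy as np

def skill(years, recon, sim, start, end):
    """Pearson correlation between reconstruction and simulation over a window."""
    mask = (years >= start) & (years <= end)
    return np.corrcoef(recon[mask], sim[mask])[0, 1]

# Hypothetical two-column files: year, annual-mean temperature anomaly (K).
years, recon = np.loadtxt("reconstruction.txt", unpack=True)
_, sim = np.loadtxt("simulation.txt", unpack=True)

print("1500-1990 (Little Ice Age onward):      ", skill(years, recon, sim, 1500, 1990))
print("1000-1990 (incl. Medieval Warm Period): ", skill(years, recon, sim, 1000, 1990))

A correlation is of course only a crude score; the point is merely that the verification window should include a pre-industrial warm phase rather than just the recovery from a cold one.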

In 2015, Gómez-Navarro and colleagues made use of this Little Ice Age trick. They started their validation in AD 1500, i.e. during the cold phase in question. The result is no surprise: the broad trend is "confirmed", but the details do not work out. Abstract from Climate of the Past:

A regional climate palaeosimulation for Europe in the period 1500–1990 – Part 2: Shortcomings and strengths of models and reconstructions
This study compares gridded European seasonal series of surface air temperature (SAT) and precipitation (PRE) reconstructions with a regional climate simulation over the period 1500–1990. The area is analysed separately for nine subareas that represent the majority of the climate diversity in the European sector. In their spatial structure, an overall good agreement is found between the reconstructed and simulated climate features across Europe, supporting consistency in both products. Systematic biases between both data sets can be explained by a priori known deficiencies in the simulation. Simulations and reconstructions, however, largely differ in the temporal evolution of past climate for European subregions. In particular, the simulated anomalies during the Maunder and Dalton minima show stronger response to changes in the external forcings than recorded in the reconstructions. Although this disagreement is to some extent expected given the prominent role of internal variability in the evolution of regional temperature and precipitation, a certain degree of agreement is a priori expected in variables directly affected by external forcings. In this sense, the inability of the model to reproduce a warm period similar to that recorded for the winters during the first decades of the 18th century in the reconstructions is indicative of fundamental limitations in the simulation that preclude reproducing exceptionally anomalous conditions. Despite these limitations, the simulated climate is a physically consistent data set, which can be used as a benchmark to analyse the consistency and limitations of gridded reconstructions of different variables. A comparison of the leading modes of SAT and PRE variability indicates that reconstructions are too simplistic, especially for precipitation, which is associated with the linear statistical techniques used to generate the reconstructions. The analysis of the co-variability between sea level pressure (SLP) and SAT and PRE in the simulation yields a result which resembles the canonical co-variability recorded in the observations for the 20th century. However, the same analysis for reconstructions exhibits anomalously low correlations, which points towards a lack of dynamical consistency between independent reconstructions.

In January 2017, Benjamin Santer and colleagues then tried to justify the validity of the models. In the Journal of Climate they compared satellite data with simulations of the temperature evolution of the last 18 years. Their result: the models calculate a warming more than one and a half times as large as that actually measured. Abstract:

Comparing Tropospheric Warming in Climate Models and Satellite Data
Updated and improved satellite retrievals of the temperature of the mid-to-upper troposphere (TMT) are used to address key questions about the size and significance of TMT trends, agreement with model-derived TMT values, and whether models and satellite data show similar vertical profiles of warming. A recent study claimed that TMT trends over 1979 and 2015 are 3 times larger in climate models than in satellite data but did not correct for the contribution TMT trends receive from stratospheric cooling. Here, it is shown that the average ratio of modeled and observed TMT trends is sensitive to both satellite data uncertainties and model–data differences in stratospheric cooling. When the impact of lower-stratospheric cooling on TMT is accounted for, and when the most recent versions of satellite datasets are used, the previously claimed ratio of three between simulated and observed near-global TMT trends is reduced to approximately 1.7. Next, the validity of the statement that satellite data show no significant tropospheric warming over the last 18 years is assessed. This claim is not supported by the current analysis: in five out of six corrected satellite TMT records, significant global-scale tropospheric warming has occurred within the last 18 years. Finally, long-standing concerns are examined regarding discrepancies in modeled and observed vertical profiles of warming in the tropical atmosphere. It is shown that amplification of tropical warming between the lower and mid-to-upper troposphere is now in close agreement in the average of 37 climate models and in one updated satellite record.
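The correction step at the heart of this dispute can be illustrated as follows: damp the stratospheric-cooling imprint on TMT with a linear combination of TMT and lower-stratospheric temperature (TLS), then compare modelled and observed trends. The sketch below is not the Santer et al. code; the weights 1.1 and -0.1 follow the commonly quoted "corrected TMT" recipe and are used here as an assumption, as are the file names.

# A minimal sketch of the TMT correction and the model/observation trend ratio.
# Weights and file names are assumptions, not taken from the paper.
import numpy as np

def trend_per_decade(years, series):
    """Least-squares linear trend in K per decade."""
    return 10.0 * np.polyfit(years, series, 1)[0]

def corrected_tmt(tmt, tls, a=1.1, b=-0.1):
    """Linear combination that damps the stratospheric contribution to TMT."""
    return a * tmt + b * tls

# Hypothetical monthly anomaly files: columns = decimal year, anomaly (K).
yrs, tmt_obs = np.loadtxt("tmt_satellite.txt", unpack=True)
_, tls_obs = np.loadtxt("tls_satellite.txt", unpack=True)
_, tmt_mod = np.loadtxt("tmt_model_mean.txt", unpack=True)
_, tls_mod = np.loadtxt("tls_model_mean.txt", unpack=True)

obs_trend = trend_per_decade(yrs, corrected_tmt(tmt_obs, tls_obs))
mod_trend = trend_per_decade(yrs, corrected_tmt(tmt_mod, tls_mod))

print("observed corrected-TMT trend:", round(obs_trend, 3), "K/decade")
print("model    corrected-TMT trend:", round(mod_trend, 3), "K/decade")
print("model/observation ratio:     ", round(mod_trend / obs_trend, 2))

Whether the resulting ratio lands nearer 3 or 1.7 then depends on which satellite dataset versions and which correction go in, which is exactly the point of contention.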

See also the commentary on WUWT.

Judith Curry reports on a doctoral thesis from the Netherlands in which a practitioner who works with the output of climate models on a daily basis voices harsh criticism. From the preface of Alexander Bakker's thesis:

In 2006, I joined KNMI to work on a project “Tailoring climate information for impact assessments”. I was involved in many projects often in close cooperation with professional users. In most of my projects, I explicitly or implicitly relied on General Circulation Models (GCM) as the most credible tool to assess climate change for impact assessments. Yet, in the course of time, I became concerned about the dominant role of GCMs. During my almost eight year employment, I have been regularly confronted with large model biases. Virtually in all cases, the model bias appeared larger than the projected climate change, even for mean daily temperature. It was my job to make something ’useful’ and ’usable’ from those biased data. More and more, I started to doubt that the ’climate modelling paradigm’ can provide ’useful’ and ’usable’ quantitative estimates of climate change.

After finishing four peer-reviewed articles, I concluded that I could not defend one of the major principles underlying the work anymore. Therefore, my supervisors, Bart van den Hurk and Janette Bessembinder, and I agreed to start again on a thesis that intends to explain the caveats of the ’climate modelling paradigm’ that I have been working in for the last eight years and to give direction to alternative strategies to cope with climate related risks. This was quite a challenge. After one year hard work a manuscript had formed that I was proud of and that I could defend and that had my supervisors’ approval. Yet, the reading committee thought differently. 

According to Bart, he has never supervised a thesis that received so many critical comments. Many of my propositions appeared too bold and needed some nuance and better embedding within the existing literature. On the other hand, working exactly on the data-related intersection between the climate and impact community may have provided me a unique position where contradictions and nontrivialities of working in the ’climate modelling paradigm’ typically come to light. Also, not being familiar with the complete relevant literature may have been an advantage. In this way, I could authentically focus on the ’scientific adequacy’ of climate assessments and on the ’non-trivialities’ of translating the scientific information to user applications, solely biased by my daily practice.
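The "tailoring" work Bakker describes typically boils down to bias-correcting model output before impact modellers can use it. A common workaround, sketched below purely for illustration and not necessarily the approach used at KNMI or in the thesis, is the delta-change method: apply the model's projected change to observations instead of using the biased absolute values. File names are hypothetical placeholders.

# A minimal sketch of the delta-change method, one common way to make biased
# GCM output "usable" for impact studies. Data files are hypothetical.
import numpy as np

def delta_change(obs_reference, model_reference, model_future):
    """Shift observed values by the model's projected mean change,
    sidestepping the model's absolute bias."""
    delta = model_future.mean() - model_reference.mean()
    return obs_reference + delta

# Hypothetical daily mean temperatures (K) for a reference and a future period.
obs_ref = np.loadtxt("obs_1981_2010.txt")
mod_ref = np.loadtxt("gcm_1981_2010.txt")
mod_fut = np.loadtxt("gcm_2071_2100.txt")

scenario = delta_change(obs_ref, mod_ref, mod_fut)
print("model bias in reference period:", round(mod_ref.mean() - obs_ref.mean(), 2), "K")
print("applied mean change (delta):   ", round(mod_fut.mean() - mod_ref.mean(), 2), "K")

The point of the quoted preface stands regardless: when the bias is larger than the projected change itself, any such correction rests on the untested assumption that the bias cancels out in the difference.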

Read more at Judith Curry.
