3.2. BRIEF TYPOLOGY OF SIMULATION & MODELING SYSTEMS
“I wanted to explain why observing the ocean was so difficult, and why it is so tricky to predict with any degree of confidence such important climate elements as its heat and carbon storage and transports in 10 or 100 years. I am distrustful of prediction scenarios for details of the ocean circulation that rely on extremely complicated coupled models that run out for decades to thousands of years. The science is not sufficiently mature to say which of the many complex elements of such forecasts are skillful.” Carl Wunsch278
We shall try to sketch the situations one may encounter, namely formal, convergent, divergent and totally chaotic systems. In fact, with the same underlying theory (e.g. Kepler’s laws) one can sometimes face one or the other of these situations. For example, one can compute the orbit of a double star (i.e. a solution to the differential equation the system obeys) with a quick convergent procedure, say some tens or hundreds of iterations as in the spreadsheet provided here (Poyet, 2017c), and, given the trigonometric parallax, one can immediately compute the sum of the masses of the binary system by using Kepler’s third law (the square of the orbital period is directly proportional to the cube of the semi-major axis of the orbit). One may also miss an important parameter (e.g. the trigonometric parallax) but overcome the situation by means of an approximate relation, e.g. the mass-luminosity relation (MLR) of main-sequence stars predicted by Eddington (1924), which leads to the calculation of dynamic parallaxes (Russell, 1928), (Kuiper, 1938), (Baize, 1943), (Baize and Romani, 1946), (Baize, 1947), (Couteau, 1971), and hence to the individual masses of the binary system. Using this method, coupled with simple iterative calculations in a spreadsheet, one derives in less than ten iterations stable absolute bolometric magnitudes for each star A and B, individual masses for A and B, the dynamic parallax and the sum of masses to serve as a cross-check.
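To make the iteration concrete, here is a minimal Python sketch of the dynamic-parallax loop just described, assuming a simple power-law mass-luminosity relation (L ∝ M^3.5) and ignoring bolometric corrections; the function name and the example orbital elements are purely illustrative and are not taken from the spreadsheet of (Poyet, 2017c).

```python
import math

def dynamic_parallax(P_years, a_arcsec, m_A, m_B,
                     alpha=3.5, Mbol_sun=4.74,
                     M_tot0=2.0, tol=1e-6, max_iter=50):
    """Iterative dynamic-parallax estimate for a visual binary.

    Kepler's third law in solar units:  (a/pi)^3 = M_tot * P^2,
    hence  pi = a / (M_tot * P^2)^(1/3)   (pi and a in arcsec).
    A power-law mass-luminosity relation L ~ M^alpha closes the loop.
    Bolometric corrections are ignored here for simplicity.
    """
    M_tot = M_tot0
    for i in range(max_iter):
        pi = a_arcsec / (M_tot * P_years ** 2) ** (1.0 / 3.0)
        # Absolute (here taken as bolometric) magnitudes from the current parallax
        Mb_A = m_A + 5.0 + 5.0 * math.log10(pi)
        Mb_B = m_B + 5.0 + 5.0 * math.log10(pi)
        # Invert the mass-luminosity relation  Mbol = Mbol_sun - 2.5*alpha*log10(M)
        mass_A = 10.0 ** ((Mbol_sun - Mb_A) / (2.5 * alpha))
        mass_B = 10.0 ** ((Mbol_sun - Mb_B) / (2.5 * alpha))
        new_M_tot = mass_A + mass_B
        if abs(new_M_tot - M_tot) < tol:
            break
        M_tot = new_M_tot
    return pi, mass_A, mass_B, M_tot, i + 1

# Example with illustrative orbital elements (not a real catalogue entry)
print(dynamic_parallax(P_years=80.0, a_arcsec=7.5, m_A=0.0, m_B=1.3))
```

In practice the loop stabilises in a handful of iterations, which is exactly the quick convergent behaviour referred to above.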
Thus, one has a formal and convergent means to compute the values of interest; this is the best situation. But when one faces an n-body problem (e.g. the solar system), the same theory no longer provides a formal solution and, by numerical integration with billions of small steps (i.e. iterations), one knows that the computation will unfortunately diverge over the very long run. This situation still makes very reliable solar system ephemerides possible over decades, but totally prevents knowing where the planets will be, say, in 100 million years: “The motion of the Solar System is thus shown to be chaotic, not quasi-periodic. In particular, predictability of the orbits of the inner planets, including the Earth, is lost within a few tens of millions of years” (Laskar, 1990). This hints at the limits of the theory and of the available knowledge (as one lacks a formal solution to the n-body problem), as well as at the frontier set by the technology used, since the billions of small increments required for a 100 million year simulation erase any reliable accuracy through the minimal rounding errors accumulated over such a very long term.
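A toy numerical integration illustrates the point. The sketch below (illustrative masses and circular starting orbits, not a real ephemeris code) integrates a miniature three-body system with a leapfrog scheme and compares a reference run to a run whose initial conditions differ by a rounding-sized 1e-10 AU; over long enough integrations such tiny differences, together with accumulated round-off, are precisely what destroys long-term predictability.

```python
import numpy as np

G = 4.0 * np.pi ** 2          # gravitational constant in AU^3 / (Msun * yr^2)

def accelerations(pos, masses):
    """Pairwise Newtonian accelerations (positions in AU, masses in Msun)."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def leapfrog(pos, vel, masses, dt, steps):
    """Kick-drift-kick integration; returns the final positions."""
    acc = accelerations(pos, masses)
    for _ in range(steps):
        vel = vel + 0.5 * dt * acc
        pos = pos + dt * vel
        acc = accelerations(pos, masses)
        vel = vel + 0.5 * dt * acc
    return pos

# A toy Sun plus two outer planets started on circular orbits (illustrative values)
masses = np.array([1.0, 9.5e-4, 2.9e-4])
r = np.array([0.0, 5.2, 9.5])
pos = np.column_stack([r, np.zeros(3)])
vcirc = np.zeros(3)
vcirc[1:] = np.sqrt(G * masses[0] / r[1:])
vel = np.column_stack([np.zeros(3), vcirc])

# Reference run versus a run whose second body is displaced by 1e-10 AU
pos_ref = leapfrog(pos.copy(), vel.copy(), masses, dt=0.01, steps=20000)   # ~200 yr
pos2 = pos.copy()
pos2[1, 0] += 1e-10
pos_pert = leapfrog(pos2, vel.copy(), masses, dt=0.01, steps=20000)
print("separation of the two solutions after ~200 yr (AU):",
      np.linalg.norm(pos_ref - pos_pert))
```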
Then there are chaotic systems, as in meteorology or, worse, climate (the same physics but over longer timescales), and the situation gets a lot more desperate. This is the Lorenz effect, which designates the instability of the solutions of certain (non-linear) systems of equations with respect to the initial conditions; it can mean that the system the equations are meant to describe is actually "chaotic", or that the equations used do not correctly describe the system. This instability of the discretization programs of the fluid equations limits the quality of meteorological forecasts to a few days… and they are inapplicable in climatology. As reported by Snider (2016): «It's the proverbial butterfly effect, said Clara Deser, a senior climate scientist at the National Center for Atmospheric Research (NCAR). Could a butterfly flapping its wings in Mexico set off these little motions in the atmosphere that cascade into large-scale changes to atmospheric circulation?».
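The canonical illustration is the Lorenz-63 system itself. The minimal sketch below (forward-Euler stepping, standard textbook parameters) integrates two trajectories whose initial conditions differ by one part in a billion and prints their growing separation; after a few tens of model time units the two solutions have nothing in common.

```python
import numpy as np

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (illustration only)."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

dt, n_steps = 0.001, 40000                      # ~40 model time units
a = np.array([1.0, 1.0, 1.0])                   # reference initial condition
b = a + np.array([1e-9, 0.0, 0.0])              # same state, perturbed in x by 1e-9

for k in range(n_steps):
    a, b = lorenz_step(a, dt), lorenz_step(b, dt)
    if k % 10000 == 0:
        print(f"t = {k * dt:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
```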
It is also the CACE syndrome: Change Anything Changes Everything. Believing that averaging n runs, here n = 30, provides a «mean» having some significance is a total delusion; it is like thinking that throwing two (or more) dice 30 times and averaging the results of the draws would give any insight into what will come out next! Here we have left science and delved into beliefs and illusions, when one thinks that because the map was calculated by a supercomputer it bears some meaning, it contains information. When facing the wall of reality, as with meteorological forecasts, scientists know that they deal with a totally chaotic system that gives them no chance to make any meaningful prediction beyond two weeks; but when they are climate tinkerers they delude themselves into thinking that extending the timescales to decades, and furthermore adding complexity, would enable them to produce some reliable result, by wizardry. As reminded by Hansen (2016), «Averaging 30 results produced by the mathematical chaotic behavior of any dynamical system model does not average out the natural variability in the system modeled. It does not do anything even resembling averaging out natural variability. Averaging 30 chaotic results produces only the average of those particular 30 chaotic results».
278 http://www.realclimate.org/index.php/archives/2007/03/swindled-carl-wunsch-responds/
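A minimal sketch of that last point, here with the fully chaotic logistic map standing in for a model run (the map and the 30-run setup are purely illustrative): the mean of 30 runs started from nearly identical initial conditions summarises those particular 30 runs and nothing else.

```python
import numpy as np

def logistic_run(x0, n=200, r=4.0):
    """Fully chaotic logistic map x -> r*x*(1-x); returns the final value."""
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

# 30 "model runs" started from almost identical initial conditions
x0 = 0.3
finals = np.array([logistic_run(x0 + 1e-12 * k) for k in range(30)])
print("mean of the 30 runs :", finals.mean())
print("a 31st run          :", logistic_run(x0 + 1e-12 * 30))
# The ensemble mean describes those particular 30 trajectories; it carries
# no predictive information about the next run, nor about the "true" state.
```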
Think of it: just make 30 meteorological forecasts at six months letting the (super)computer run, average them, and see whether it makes any sense to claim that the fantasy maps you have colored represent any reality from which you are going to subtract the observed weather in order to assert that the natural variability can be assessed in this way. The situation is crystal clear: the six-month fantasy colored maps bear no information, have no relationship to any reality, have no intellectual nor economic value, and claiming otherwise is at best self-delusion and at worst intentional cheating.
But the amazing thing is that the IPCC goes into a circular deception in its main reports. First, Randall et al. (2007) state: “Note that the limitations in climate models’ ability to forecast weather beyond a few days do not limit their ability to predict long-term climate changes, as these are very different types of prediction”. They are not different types of prediction: the climate is the integral over time of the weather, and when one is unable to make forecasts beyond 15 days, because one deals with a chaotic system, one deludes oneself and deceives others by claiming to be able to “model” the climate thousands or tens of thousands of years ago or into the future. The very definition of climate is “the composite or generally prevailing weather conditions of a region, as temperature, air pressure, humidity, precipitation, sunshine, cloudiness, and winds, throughout the year, averaged over a series of years; a region or area characterized by a given climate”279 .
Then Randall et al. (2007) p. 601, when addressing the question “How Reliable Are the Models Used to Make Projections of Future Climate Change?”, affirm an incredible thing: “A third source of confidence comes from the ability of models to reproduce features of past climates and climate changes. Models have been used to simulate ancient climates, such as the warm mid-Holocene of 6,000 years ago or the last glacial maximum of 21,000 years ago (see Chapter 6)”, and do not provide a single reference to published work that would have claimed, proved and demonstrated how this was done. When one goes to Chapter 6, which is the only reference provided, i.e. (Jansen et al., 2007) p. 440, instead of finding references to recognized work one gets a circular reference to Chapter 8! and the following: “In principle the same climate models that are used to simulate present-day climate, or scenarios for the future, are also used to simulate episodes of past climate, using differences in prescribed forcing and (for the deep past) in configuration of oceans and continents. The full spectrum of models (see Chapter 8) is used (Claussen et al., 2002), ranging from simple conceptual models, through Earth System Models of Intermediate Complexity (EMICs) and coupled General Circulation Models (GCMs)”.
So, in order to demonstrate that climate “models”, “simulations” (call them what you like) have been able to render the climate back to the LGM, long before the Holocene, the trick is the circular reference: Chapter 8 says GO TO Chapter 6, which says GO TO Chapter 8! The only reference given, i.e. Claussen et al. (2002), starts by recalling the evidence that there is a close link between weather and climate: "Following the traditional concept of von Hann (1908), climate has been considered as the sum of all meteorological phenomena which characterize the mean state of the atmosphere at any point on Earth's surface”, so one can hardly see how one could make decent climatic forecasts out of the sum of forecasts that are unreliable beyond 15 days; the authors then address the typology of climate systems and the place of Earth System Models of Intermediate Complexity (EMICs) and certainly do not provide any proof that anyone has ever managed to reproduce the climate all the way back through the Holocene or even further. If one were to do so by cheating, the system would simply operate as a best fit to pre-recorded data and regurgitate the tape. In fact, this might not even be possible, since Hessler et al. (2014) state: “We present and examine a multi-sensor global compilation of mid-Holocene (MH) sea surface temperatures (SST), based on Mg/Ca and alkenone palaeothermometry and reconstructions obtained using planktonic foraminifera and organic-walled dinoflagellate cyst census counts. Overall, the uncertainties associated with the SST reconstructions are generally larger than the MH anomalies. Thus, the SST data currently available cannot serve as a target for benchmarking model simulations”; thus it will not even be possible to “read” and “regurgitate” the tape, as there is no such reliable tape.
What is extremely annoying, to say the least, is that once unproven statements have been written in an IPCC report, it is like “seen on TV”: people copy them word for word, not even checking the plausibility of what was asserted as a proof (which is not even a deception but a mere lie), and regurgitate the sentences without even putting quotes around them; this is what happened with Lloyd (2012) p. 395. One is reminded of the high impact of the various oscillations (e.g. ENSO, PDO, NAO, etc.), which really drive the weather as a response to the insolation triggers and to the long term heat storage capacity of the oceans, and one also remembers the trouble that even the latest generation of complex coupled Global Circulation Models (CGCMs) have in tackling the issue on a semi-meteorological timescale, i.e. just accurately forecasting what the next event will be. Actually, CGCMs fail at predicting with certainty whether one should expect an El Niño or a La Niña next, when and with which intensity, and also fail by contradicting each other on longer timescales, some forecasting more El Niño and others suggesting a tendency towards greater La Niña-like conditions (Steig et al., 2013).
279 https://www.dictionary.com/browse/climate
So let's not speak of decades, millennial timescales, the Holocene or more; let's be reasonable: science would benefit from honest reports of where we stand. CGCMs are remarkable pieces of software, but nobody should deceive the public and others by making fanciful assertions with respect to their capabilities, as the IPCC and Lloyd (2012) p. 395 did. In fact, as stated by Collins et al. (2010), many reasons simply make CGCMs unable even to project ENSO events decades into the future, such as “because of limitations in: (1) computer resources, which typically restrict climate model resolutions to fewer grid cells than are needed to adequately resolve relevant small-scale physical processes; (2) our ability to create parameterization schemes or include some relevant physical and biological processes that are not explicitly resolved by climate models; (3) the availability of relevant high-quality observational data; and (4) our theoretical understanding of ENSO, which evolves constantly”. All these limitations turn any assertion that CGCMs, or any other software system, could reproduce the climate back to the LGM into a fantasy that would make any science-fiction novelist blush.
In the end, not only does Chapter 6, i.e. (Jansen et al., 2007) p. 481, not provide any hint as to how models would magically reproduce the climate back through the Holocene and further, but it more realistically and modestly states: “It is difficult to constrain the climate sensitivity from the proxy records of the last millennium (see Chapter 9). As noted above, the evidence for hemispheric temperature change as interpreted from the different proxy records, and for atmospheric trace greenhouse gases, inferred solar forcing and reconstructed volcanic forcing, is to varying degrees uncertain”, and furthermore, p. 483, concludes: “Even though a great deal is known about glacial-interglacial variations in climate and greenhouse gases, a comprehensive mechanistic explanation of these variations remains to be articulated. Similarly, the mechanisms of abrupt climate change (for example, in ocean circulation and drought frequency) are not well enough understood, nor are the key climate thresholds that, when crossed, could trigger an acceleration in sea level rise or regional climate change. Furthermore, the ability of climate models to simulate realistic abrupt change in ocean circulation, drought frequency, flood frequency, ENSO behaviour and monsoon strength is uncertain. Neither the rates nor the processes by which ice sheets grew and disintegrated in the past are known well enough”.
So, how could computerized simulations, which are, as per Jansen et al. (2007), unable to render any abrupt climate change in any area of interest, be it oceanic circulation or anything else, be capable of accounting for the climate changes that naturally happened all throughout the Holocene and further back to the LGM? Explain it to me! Show me! Prove it to me!
Well, it seems that the best of these GCMs are not only not covering the Holocene or properly backtracking to the LGM, but actually and more realistically are still unable to make any valuable forecast for the next season to come, summer or winter as you wish. What follows comes from a report to the Australian House of Representatives with respect to future collaboration between the Aussie BoM and CSIRO and the United Kingdom Meteorological Office (UKMO). Let's observe that there are two main approaches to seasonal climate forecasting: 1) statistical methods using statistical relationships between atmospheric or oceanic indicators and seasonal climate variables such as rainfall or temperature (a sketch of this approach is given below), and 2) dynamical methods using global atmospheric and oceanic circulation models. It is then reported that the direction being taken by most weather forecasting groups internationally, as in Australia, is to replace existing empirically based statistical schemes with systems based on dynamic models, once the dynamic systems have comparable or better skill than the existing statistical systems. It is in that context that a collaboration with the UKMO is presented to the MPs. The UKMO Unified Model is a high-powered computer-based climate and weather prediction program considered the best in the world (2009), a sophisticated coupled GCM released by the Hadley Centre. Let's see how well it fares with respect to six-month forecasts, as stated in (HRC, 2009), not the Holocene! Here you go: “The Committee heard evidence that the UK model has not had a high success rate with long term weather forecasts. John McLean, an information technology specialist who has applied his skills in analysis to various issues relating to climate change, provided written evidence of the lack of success of the model from 2007 until 2009. He told the Committee: … in the UK the Met Office has been using modelling for seasonal forecasts over the last few years. 2007 was one of the wettest summers since, I think, 1913 and they had predicted a very hot summer. They tried again the next year and it was, again, a very wet summer. Last winter they predicted quite a mild and dry winter, and they had very heavy snow. They ran out of salt and grit for the roads”.
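For the record, the "statistical" approach mentioned above amounts, in its simplest form, to regressing a seasonal variable on an oceanic indicator. The sketch below uses entirely synthetic numbers (an invented Niño3.4-like index and rainfall anomalies) just to show the shape of such a scheme; it is not any operational method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic illustration of the "statistical" approach: regress a seasonal
# variable (e.g. a rainfall anomaly) on an oceanic predictor (e.g. a
# Nino3.4-like index).  All numbers here are made up for the sake of the sketch.
n_years = 40
nino_index = rng.normal(0.0, 1.0, n_years)                      # predictor
rain_anom = -0.6 * nino_index + rng.normal(0.0, 0.8, n_years)   # predictand + noise

# Least-squares fit of rain_anom = a * nino_index + b
a, b = np.polyfit(nino_index, rain_anom, 1)
print(f"fitted slope {a:+.2f}, intercept {b:+.2f}")

# "Forecast" for a season with an observed index value of +1.5
print("forecast rainfall anomaly:", a * 1.5 + b)
```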
Figure 98. The dramatic cold spell that left dozens of people dead in Texas alone shows the abysmal failure of the “seasonal forecast” (top figure) made on Jan 21, 2021 @ 1:16 PM, as compared to the real “temperature map only” observed on Feb 13, 2021 @ 7:35 AM.
So, instead of making climate-change forecasts that span decades or centuries in the realm of fantasy land, based on computerized over-gifted soothsayers, and daring to pretend that they account for the Holocene, that they go back to the LGM and, why not, the Eemian or even the entire Quaternary, they just ran out of salt and grit for the roads for the next winter in the UK. But things went even worse in the US because, as the winters were supposed to get warmer due to global warming, electricity production had been shifted to renewables. Texas, instead of the mild weather anticipated by the “seasonal forecast”, faced record-low temperatures in February 2021; snow and ice made roads impassable, the state’s electric grid collapsed, leaving millions without access to electricity. As the blackouts extended from hours to days, people died by the dozens, and weeks will be required to make a complete assessment of the final death toll. Before thinking of making climate so-called scenarios for 2100, it would be good enough not to produce fantasy seasonal forecasts such as the one displayed in Figure 98, where the temperature outlook for February 2021, announced on Jan 21, 2021 @ 1:16 PM, was expected to be “much above average”, whereas the reality that struck the Texans is displayed by the real temperature map of Feb 13, 2021 @ 7:35 AM.
When climate-illusionists claim that weather and climate predictions are unrelated, when they can fail abysmally on a 20-day “seasonal forecast” and yet dare pretend that they know what the climate will be in 30 years or in 2100, it is not only an ugly deception but an outright fraud. Turning the CO2 knob and adjusting a so-called “climate sensitivity” is no recipe for any credible climate scenario; that is totally delusional. Indeed, would you place any credence in such dismal systems? That does not bode well for future climate policies based on these lunacies.
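For clarity, the "CO2 knob" recipe alluded to boils down to very simple arithmetic: the commonly used logarithmic forcing approximation ΔF ≈ 5.35 ln(C/C0) W/m², scaled by whatever equilibrium climate sensitivity one chooses to dial in. The sketch below simply shows that the answer is essentially the dialled-in sensitivity (the 1.5, 3.0 and 4.5 K values span the range usually quoted); nothing here is a model of the actual climate.

```python
import math

def delta_T(co2_ppm, co2_ref=280.0, ecs=3.0):
    """The simple 'CO2 knob' arithmetic: a logarithmic forcing
    Delta F = 5.35 ln(C/C0) W/m^2 scaled by an assumed equilibrium
    climate sensitivity (ecs, in K per CO2 doubling)."""
    forcing = 5.35 * math.log(co2_ppm / co2_ref)        # W/m^2
    forcing_2x = 5.35 * math.log(2.0)                   # ~3.7 W/m^2 per doubling
    return ecs * forcing / forcing_2x                   # K

for ecs in (1.5, 3.0, 4.5):
    print(f"ECS = {ecs} K  ->  delta T(560 ppm) = {delta_T(560.0, ecs=ecs):.2f} K")
```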
I must commend Janusz Pudykiewicz280 for his constructive, wise and lucid comments on that matter, made public on ResearchGate on December 11, 2020: “The essential task is to predict extremes in the extended range weather prediction (from weeks to a season) as well as in the sub-climatic range of forecast. As soon as we leave the deterministic or quasi-deterministic range of prediction that is of the order of 10 days, we have to develop the methods to deal with randomness. Perhaps the Fokker-Planck equation to describe the evolution of probabilities can supplement the traditional fluid dynamics and thermodynamics equations? We still don't know how to develop better methods. The projection of the state on a combination of empirical eigenfunctions can be another way of addressing the problem. Based on the observation of trends in science in general, I think that solution will be surprising and ingenious”. Pudykiewicz and Brunet (2008) remind us of the great achievement that 10-day meteorological forecasts have represented and the extraordinary benefits for society that arose from them. Now, the emphasis, as underlined above, is more on delineating the risks coming with extreme events (Sillmann et al., 2017). Climate forecasting remains beyond reach and better, original methods need to be devised.
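Purely to illustrate the idea Pudykiewicz evokes (this is not his proposal, and the coefficients below are invented), here is a one-dimensional drift-diffusion Fokker-Planck sketch in which one evolves a probability density rather than a single deterministic state:

```python
import numpy as np

# Minimal 1-D Fokker-Planck sketch:  dp/dt = -d/dx[ mu(x) p ] + D d2p/dx2,
# with a linear restoring drift mu(x) = -k x (all coefficients illustrative).
x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]
k, D = 1.0, 0.5                       # drift strength and diffusion coefficient
dt = 0.2 * dx ** 2 / D                # small explicit time step for stability

p = np.exp(-(x - 2.0) ** 2 / 0.1)     # sharply peaked initial density
p /= p.sum() * dx

for _ in range(20000):                # integrate to t ~ 20 (several relaxation times)
    drift = np.gradient(k * x * p, dx)            # -d/dx[ mu p ] with mu = -k x
    lap = np.zeros_like(p)
    lap[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx ** 2
    p = p + dt * (drift + D * lap)
    p = np.clip(p, 0.0, None)
    p /= p.sum() * dx                             # keep the density normalised

mean = (x * p).sum() * dx
var = ((x - mean) ** 2 * p).sum() * dx
print(f"mean ~ {mean:.3f}, variance ~ {var:.3f} (stationary variance D/k = {D / k})")
```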
Hopefully Janusz Pudykiewicz will be right and climate science will not follow the fate of another, sadly still unresolved, issue. Another domain which bears some resemblance to the chaotic behaviour displayed by meteorological models is earthquake prediction, which unfortunately offers dire perspectives since the work of Geller et al. (1997); actually, climate tinkering is much worse, as it does not attempt to forecast weeks ahead but to tell us what the temperature, the pattern of precipitation and the sea level will be in 2100! Alas, the central hypothesis contended by Geller et al. (1997) has not been refuted since: «Citing recent results from the physics of nonlinear systems “chaos theory,” they argue that any small earthquake has some chance of cascading into a large event. According to research cited by the authors, whether or not this happens depends on unmeasurably fine details of conditions in Earth's interior. Earthquakes are therefore inherently unpredictable. Geller et al. suggest that controversy over prediction lingers because prediction claims are not stated as objectively testable scientific hypotheses, and due to overly optimistic reports in the mass media». We are back to chaotic systems, and it would be wise to learn from Geller’s savvy recommendation that whenever science is not based on «objectively testable scientific hypotheses» one embarks on futile attempts, such as forecasting long-term mean temperatures, sea levels, rainfall, droughts, etc. Refuting that CO2 is the culprit of observed climate changes, based on all the scientific evidence, is what has been honestly done here, and if it were not for a purely ideological stance which has lasted too long, it should logically lead to trying to assess what the causes of these climate changes are.
Finally, some systems are not chaotic but are not entirely deterministic either, such as the trading expert-system developed by Poyet and Besse (2005a-b). The software ensures that the best decisions are taken at each stage of the reasoning process according to the situation encountered, here managing portfolios of listed securities on the stock markets. To benchmark and validate how such systems operate, one must make hundreds of «runs», each lasting several hours, as the logic of the expert-system is checked against decades of market data. The starting date of each validation run is shifted by, say, one week, and the software creates an initial portfolio which will somewhat differ from what would have been created one week later or one week earlier, because the best opportunities at the time might have been slightly different. After years of operation on the market, decisions tend to level out and performances converge towards the efficiency of the trading logic used, still showing some deviations with respect to a mean return. Such systems are inherently conscious of their limitations (seemingly making them different from climate tinkering) and enforce strong risk mitigation techniques to ensure that unforeseen events do not lead to extremely adverse results. Doing so has consequences on the performance (i.e. the returns), and finding the right balance between risk control and high performance is always a trade-off that can only be decided by the portfolio’s owners.
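The shifted-start validation procedure described above can be caricatured in a few lines. The toy backtest below uses a synthetic price series and a trivial trend-following rule (nothing to do with the actual trading logic of Poyet and Besse, 2005a-b); it merely shows how launching the same logic at start dates shifted by one week yields a distribution of long-run annualised returns rather than a single number.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily returns for one "market" (random walk with a small drift);
# purely illustrative, not real market data.
n_days = 252 * 20                                   # ~20 years of trading days
daily_ret = rng.normal(0.0003, 0.01, n_days)
prices = 100.0 * np.cumprod(1.0 + daily_ret)

def toy_strategy_return(prices, start):
    """Hold the market only when it trades above its trailing 100-day mean."""
    p = prices[start:]
    csum = np.cumsum(np.insert(p, 0, 0.0))
    roll = (csum[100:] - csum[:-100]) / 100.0       # roll[i] = mean of p[i:i+100]
    signal = p[99:-1] > roll[:-1]                   # decide with data up to day d
    ret = p[100:] / p[99:-1] - 1.0                  # next-day returns
    strat = np.where(signal, ret, 0.0)              # in the market or in cash
    years = len(strat) / 252.0
    return float(np.prod(1.0 + strat) ** (1.0 / years) - 1.0)

# Launch the same logic at start dates shifted by one week (5 trading days)
annualised = [toy_strategy_return(prices, 5 * k) for k in range(30)]
print("spread of annualised returns over 30 shifted starts:",
      round(min(annualised), 4), "to", round(max(annualised), 4))
```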
In fact, TEXSOL (Trading Expert-System On Line), developed by Poyet and Besse (2005a-b), shares some resemblance with the Large Ensemble techniques currently implemented by “climate groups”, but, unlike them (they claim to know what the climate will be decades ahead), it will never pretend to know what the balance of an account will be a decade ahead! γνῶθι σεαυτόν281
280 https://www.researchgate.net/profile/Janusz_Pudykiewicz
281 https://en.wikipedia.org/wiki/Know_thyself