Response to a comment by AJ on Tamino's blog

This expands on my one-word comment "[edit]", in reply to the immediately preceding post by AJ, on one of Tamino's threads. The part starting "Response: This is not your blog" explains why the full text of my reply is posted here instead of there, where the blog owner has replaced it by "[edit]".

@AJ: The rationale for this approach is that the ln(accumulated CO2) forcing curve can be well fitted quadratically, meaning that its rate of change (i.e. its derivative) is linear. If this forcing is being realized at a rate that can reasonably be estimated as linearly increasing, then a quadratic fit (i.e. its integral) might be a more appropriate method for removing the climate signal from the SST.

(I made the mistake of focusing on what Willard drew my attention to, namely Tamino's criticism of me, before reading the whole thread. I see now that several people whom Tamino has also criticized view things more or less my way, AJ's view being perhaps the closest to mine. I also see that I was mistaken in taking Tamino's "I don't want to argue with you" personally, since (a) I'm not the only one he dismisses with that sentiment and (b) he continues to argue even after saying he won't. My apologies to Tamino, therefore, for getting my back up for no good reason.)

@AJ: In preparation for my first post to this thread, which asked what was wrong with what I had said in the criticism made of me here before I'd even heard of this thread, I did essentially what you are suggesting, at this page. That page fits a quadratic to the HADCRUT3 data smoothed to running averages of 12, 55, 62.5, and 75 years (Figures 1-4) and then plots 1/(1 - r²) to show the progression of the quality of fit (Figure 5). Personally I find Figure 5 very striking, though if no one else does then perhaps I'm exaggerating its strangeness. Comments anyone? Pretty please?
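For anyone who would like to try this themselves, here is a minimal Python sketch of the procedure. The file name hadcrut3_annual.txt, its two-column layout, and the whole-year approximations to the smoothing windows are my assumptions here, not a description of the exact script behind those figures.

    import numpy as np

    def quadratic_fit_quality(years, temps, window):
        """Smooth temps with a centered running mean of width 'window' (years),
        fit a quadratic to the smoothed series, and return 1/(1 - r^2) as a
        measure of fit quality."""
        kernel = np.ones(window) / window
        smooth = np.convolve(temps, kernel, mode="valid")
        start = (window - 1) // 2
        t = years[start : start + len(smooth)]
        coeffs = np.polyfit(t, smooth, 2)             # the quadratic fit
        resid = smooth - np.polyval(coeffs, t)
        r2 = 1.0 - resid.var() / smooth.var()
        return 1.0 / (1.0 - r2)

    # Hypothetical usage with a two-column file of year and anomaly:
    # years, temps = np.loadtxt("hadcrut3_annual.txt", unpack=True)
    # for w in (12, 55, 62, 75):        # nearest whole-year smoothing windows
    #     print(w, quadratic_fit_quality(years, temps, w))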

Several months before that, however, I went one better than a quadratic fit, on the theory that there is no basis in physics for fitting a quadratic. Furthermore, doing so delivers nonsense results both well before the 20th century and well after the 21st: in particular, it would show intolerable heat in Shakespeare's day as well as beyond 2200.

Instead I used the late David Hofmann's 2009 formula, essentially H(y) = 280 + 2^((y - 1790)/32.5) for CO2 as a function of year y (Hofmann writes it a bit differently) to detrend HADCRUT3 by composing the Arrhenius formula A(c) = log(c) (for a suitable base of log) with H(y) to give what I've been calling the Arrhenius-Hofmann law AHL(y) = A(H(y)). Unlike a quadratic fit, this curve behaves very nicely in centuries both earlier and later. In combination with the observed cancelling of the dominant 56- and 75-year ocean oscillations during the 17th century, the curve predicts (postdicts?) no significant temperature variation between Shakespeare and Voltaire. And several centuries hence it predicts a steady rise of 1 °C every 18 years. Any quadratic fit must follow much steeper slopes at both extremes.
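For concreteness, here is a minimal sketch of AHL(y) in Python. The default base 1.458 is the value justified further below, and the detrending step at the end is schematic: the names years and hadcrut3_anomaly are placeholders for whatever arrays hold the data.

    import numpy as np

    def hofmann_co2(year, t0=1790.0, doubling=32.5):
        """Hofmann's 2009 formula: 280 ppmv preindustrial plus an anthropogenic
        component that doubles every 32.5 years, starting from 1790."""
        return 280.0 + 2.0 ** ((year - t0) / doubling)

    def arrhenius(co2, log_base=1.458):
        """Arrhenius's logarithmic dependence on CO2; base 1.458 works out to
        roughly 1.84 degrees C per doubling (see below)."""
        return np.log(co2) / np.log(log_base)

    def ahl(year):
        """The Arrhenius-Hofmann law AHL(y) = A(H(y))."""
        return arrhenius(hofmann_co2(year))

    # Detrending is then just subtraction, e.g.
    # detrended = hadcrut3_anomaly - (ahl(years) - ahl(years).mean())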

----------------------

It is natural to ask whether there is any physical or other basis for either Hofmann's or Arrhenius's laws. If one accepts a linear correspondence between anthropogenic CO2 and fossil fuel consumption, then the exponentially growing record of the latter here over the past two centuries supports Hofmann's law quite well, apart from suggesting that his 32.5-year doubling period might be a couple of years high. The Keeling curve, on the other hand, suggests it is a few years too low, so Hofmann's middle ground of 32.5 years timed from 1790 may be a reasonable compromise.
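One way to quantify this is to fit Hofmann's functional form directly to annual CO2 observations and see what doubling period comes out. Here is a sketch using scipy; the file co2_annual.txt and its layout are hypothetical placeholders for whichever CO2 record one prefers.

    import numpy as np
    from scipy.optimize import curve_fit

    def hofmann_form(year, doubling, t0):
        """Hofmann's form, with the 280 ppmv preindustrial level held fixed."""
        return 280.0 + 2.0 ** ((year - t0) / doubling)

    # Hypothetical usage with a two-column file of year and CO2 in ppmv
    # (e.g. annual means in the style of the Keeling curve):
    # years, ppmv = np.loadtxt("co2_annual.txt", unpack=True)
    # (doubling, t0), _ = curve_fit(hofmann_form, years, ppmv, p0=(32.5, 1790.0))
    # print("fitted doubling period:", doubling, "years, anchored at", t0)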

Arrhenius's law is a reasonable approximation to what actually happens when CO2 is increased. Absorption lines do not shut off (in the sense of exceeding unit optical thickness over the total atmosphere) at a strictly steady rate with increasing CO2, but over the range 50 ppmv to 3100 ppmv (0.31% by volume) the shut-off rate hovers around 60-70 lines per doubling, as the following table shows.

CO2 level      Closed lines   Increment
  0.8 ppmv              3          +3
  1.5 ppmv             17         +14
  3.0 ppmv             38         +21
  6.1 ppmv             54         +16
 12.2 ppmv             64          +9
 24.4 ppmv             77         +13
 48.8 ppmv            114         +37
 97.5 ppmv            192         +78
195.0 ppmv            250         +59
390.0 ppmv            311         +61
  0.08%               382         +71
  0.16%               467         +85
  0.31%               527         +59
  0.62%               659        +132
  1.25%               813        +154
  2.50%              1027        +214
  4.99%              1219        +192
  9.98%              1420        +201
 19.97%              1679        +259

In particular, the increase from 195 to 390 ppmv shut off 61 lines, while a further doubling can be expected to shut off an additional 71. And at 20%, roughly 500 times the present level, 1679 lines are shut off, leaving some 15,150 effective lines (defined below) still open.

Judging by both the HADCRUT3 and GISTEMP data, this rate of 60-70 lines shutting off per doubling of CO2 seems to correspond to taking the base of the log in the Arrhenius law A(c) = log(c) to be 1.458, or, expressed as a climate sensitivity, 1.838 degrees (= log(2)/log(1.458), in any base) per doubling of CO2.
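The conversion between the fitted log base and a per-doubling sensitivity is just a change of base; as a quick sanity check:

    import math
    log_base = 1.458                                   # fitted base of the Arrhenius log
    sensitivity = math.log(2) / math.log(log_base)     # degrees C per doubling of CO2
    print(round(sensitivity, 3))                       # prints 1.838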

Less relevant to Arrhenius's law but of independent interest, the distribution today (390 ppmv) of lines between totally closed (defined here as less than 10% of the photons escaping from Earth to space) and totally open (between 90% and 100% of photons doing so) is given by the following table. (For perspective, a typical line blocks about 0.07 cm⁻¹, or roughly 2.1 GHz, of the OLR spectrum of radiation leaving Earth's surface for outer space.)

Fraction escaping   Lines
0.0 - 0.1             207
0.1 - 0.2              32
0.2 - 0.3              23
0.3 - 0.4              23
0.4 - 0.5              26
0.5 - 0.6              31
0.6 - 0.7              34
0.7 - 0.8              63
0.8 - 0.9              70
0.9 - 1.0           16247

That is, 207 lines are now totally closed, 26 lines allow between 40% and 50% of their photons to escape, and 17,983 lines are still totally open.

Because the 84,963 ¹²C¹⁶O₂ lines relevant to 90% of the radiation from a 288 K planet (out of a total of 128,170 lines for that CO2 species) overlap strongly, I've counted only effective lines, defined as an obstruction of 0.07 cm⁻¹ of the spectrum, this being the average width of a line. Since the CO2 band occupies 1530 - 352 = 1178 cm⁻¹, this carves that spectrum up into 1178/0.07 ≈ 16,830 effective lines. But since there is relatively little overlap among the fewer than 2000 lines having any impact below 20% CO2, there is not much difference between effective lines and actual lines; I'm just being cautious here.
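Just to make the bookkeeping concrete, the following sketch shows the kind of binning involved. It is not the computation behind the tables above, and its inputs (line centers and column optical depths, presumably taken from a line list such as HITRAN and scaled to the CO2 column of interest) are assumed rather than supplied.

    import numpy as np

    def effective_line_histogram(line_centers, line_taus,
                                 lo=352.0, hi=1530.0, width=0.07):
        """Crude sketch of the binning: carve the band [lo, hi] cm^-1 into
        'effective lines' of the given width, dump each spectral line's column
        optical depth into the bin containing its center (a box-shaped line,
        no wings), and histogram the transmissions exp(-tau) into deciles."""
        edges = np.arange(lo, hi + width, width)
        tau = np.zeros(len(edges) - 1)
        idx = np.digitize(line_centers, edges) - 1
        keep = (idx >= 0) & (idx < len(tau))
        np.add.at(tau, idx[keep], line_taus[keep])
        transmission = np.exp(-tau)
        counts, _ = np.histogram(transmission, bins=np.linspace(0.0, 1.0, 11))
        return counts     # counts[0]: effectively closed ... counts[9]: open

    # line_centers and line_taus would be numpy arrays from a line list such
    # as HITRAN, with line_taus scaled to the CO2 concentration of interest.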

Caveat: these numbers assume a crude line shape model and can be expected to change slightly when I refine it to a pressure-sensitive Lorentz shape.

If this merely duplicates a more accurate calculation of the same information, I'd welcome pointers to the detailed numbers.

------------------

Having justified AHL(y) as a plausible model of CO2 warming, one can now examine the result of detrending with it. This is the cyan (pale blue) curve shown here.

Now one might naively say this looks like an oscillation. Yet we have Tamino's argument that it merely looks periodic, and that period analysis shows it is not.

I claim that being periodic and being an oscillation are not the same thing. I further claim that in noisy situations one is more likely to mistake a periodic curve for a nonperiodic one than to mistake an oscillation for a non-oscillation. Hence a computation showing that a curve is unlikely to be periodic does not immediately imply that it is unlikely to be an oscillation.

This might sound paradoxical: after all, every oscillation is periodic, but a periodic signal like Ray Ladbury's favorite example 18281828 is clearly not an oscillation of the kind the cyan curve seems to approximate. Hence a curve chosen at random is more likely to be periodic than to be an oscillation. So how could my claim be true?

To see this for a more easily analyzed situation, suppose we are able to observe points in the Euclidean plane with a precision such that 95% of our observations of a given point lie within a circle of area A centered on that point. We now pose the following decision problem. Let S and T be two sources of data: S produces lattice points (points with integer coordinates) of the plane chosen uniformly at random while T produces arbitrary points chosen uniformly. Decide whether the source is S or T on the basis of a single point delivered by it.

One plausible strategy is to declare the source to be S when the delivered point lies within a circle of area A centered on some lattice point, and T otherwise. If the source is S and A < 0.5, we will be wrong very slightly less than 5% of the time (slightly less because, when A is not much smaller than 0.5, there is a small chance that the observation will land near some other lattice point, so that we give the right answer for the wrong reason; for much smaller A this chance is negligible).

If the source is T, however, then we will be wrong a fraction exactly A of the time, because the circles of area A cover exactly a fraction A of each unit cell of the lattice, so a uniformly chosen point has chance A of landing near a lattice point.

So if A < 0.05 we will in either case be wrong less than 5% of the time. For A < 0.05 therefore this strategy meets the usual 2σ criterion for both Type I and Type II errors: incorrect positive and incorrect negative calls.

Now consider a third source U producing "superlattice" points defined as those whose coordinates are integer multiples of 10. How should we decide between sources T and U?

It should be immediately obvious from geometric considerations (just scale everything accordingly) that if the noise is also scaled up by a factor of 10, so that 95% of observations of a given point now lie within a circle of area 100A, then the same strategy amended to use circles of area 100A instead of A will be exactly as reliable for distinguishing T and U sources as the original strategy was for distinguishing S and T sources.

But this implies that the second decision problem can be solved just as well as the first in the presence of ten times the noise.
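For anyone who prefers simulation to geometry, here is a quick Monte Carlo sketch of that scaling claim. The particular radii and noise levels in the commented examples are just illustrative choices of mine that put both error rates near the 5% level; nothing in the argument depends on them.

    import numpy as np

    rng = np.random.default_rng(0)

    def error_rates(spacing, sigma, radius, trials=100_000, box=1000):
        """Monte Carlo estimate of the two error rates for the circle test:
        declare 'lattice' iff the observed point lies within 'radius' of some
        point of the lattice with the given spacing.  Lattice observations are
        blurred by Gaussian noise of standard deviation sigma per coordinate."""
        near = lambda p: np.hypot(*(p - spacing * np.round(p / spacing)).T) <= radius
        lattice_obs = (spacing * rng.integers(0, box, size=(trials, 2))
                       + sigma * rng.normal(size=(trials, 2)))
        false_neg = 1.0 - near(lattice_obs).mean()     # lattice source missed
        uniform_pts = rng.uniform(0, spacing * box, size=(trials, 2))
        false_pos = near(uniform_pts).mean()           # uniform source mistaken for lattice
        return false_pos, false_neg

    # Ten times the spacing tolerates ten times the noise with the same error rates:
    # print(error_rates(spacing=1.0,  sigma=0.04, radius=0.1))
    # print(error_rates(spacing=10.0, sigma=0.4,  radius=1.0))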

Without taking the trouble to compute exactly what proportion of periodic signals should count as oscillations under a suitable criterion, I hope the foregoing reasoning has made clear that a strategy for deciding whether a signal is an oscillation can be more tolerant of noise than one for deciding whether it is periodic.

Now I would claim that even if period analysis proves that the above cyan curve is not periodic, it should be obvious merely from eyeballing the cyan curve that spectral analysis is going to show that it is an oscillation. There is no contradiction here since, as we've observed above, a test for periodicity can fail where the corresponding test for an oscillation succeeds. If Tamino maintains that the cyan curve is not only not periodic but also not an oscillation, let him show the statistics that back this up. As far as I'm concerned it's obviously of an oscillatory character.

So much so, in fact, that we can ask whether it is a simple sinusoid or the sum of two sinusoids. Observing that the oscillation seems to die down slightly on each side of the zero crossing at 1925, and moreover by roughly the same amount on each side, it is reasonable to suppose that the oscillation is actually a sum of two sinusoids that drifted into phase in 1925. With this hypothesis a least-squares fit gives the two sinusoids respective periods of 56 and 75 years. Interestingly, the former figure appears in the literature as the period of one significant Atlantic oscillation; see e.g. Figure 2 on page 666 of Delworth and Mann (2000), "Observed and simulated multidecadal variability in the Northern Hemisphere", Climate Dynamics 16:661-676. Disappointingly, I have been unable to find as precise a figure in the literature for the 75-year period, which is spoken of only vaguely as lying in the vicinity of 70-80 years.
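Here is a sketch of that fit. The model builds in the in-phase-at-1925 hypothesis by giving both sinusoids a common rising zero crossing at phase_year; the data arrays and starting guesses in the commented usage are placeholders, not the exact values I used.

    import numpy as np
    from scipy.optimize import curve_fit

    def two_sinusoids(year, a1, a2, period1, period2, phase_year):
        """Sum of two sinusoids constrained to share a rising zero crossing
        (i.e. to be in phase) at phase_year."""
        return (a1 * np.sin(2 * np.pi * (year - phase_year) / period1)
              + a2 * np.sin(2 * np.pi * (year - phase_year) / period2))

    # Hypothetical usage, with 'resid' the HADCRUT3 series detrended by AHL(y)
    # (the cyan curve) and rough starting guesses for the parameters:
    # p0 = (0.1, 0.1, 56.0, 75.0, 1925.0)
    # popt, _ = curve_fit(two_sinusoids, years, resid, p0=p0)
    # print("fitted periods:", popt[2], popt[3], "years; in phase at", popt[4])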

Two such signals should cancel in the middle of the 17th century, and according to Figure 2 on page 2 of Gray et al (2004), "A tree-ring based reconstruction of the Atlantic Multidecadal Oscillation since 1567", Geophysical Research Letters, 31:L12205, this is exactly what happens.

For many juries this would be enough evidence to hang a man. It should surely be enough to show that there is a long-running pair of cycles in the past several centuries of temperature data obtained from various sources.

------------

Fitting data always runs the risk of overfitting. One test for overfitting is whether new data significantly shifts the parameters obtained from the old data: a significant shift indicates overfitting, while stable parameters suggest the fit is sound.

A simple test of whether more data is likely to shift the parameters significantly is to see whether less data does so. Applying this test to the temperature and CO2 data by deleting the last 30 years' worth, we obtain the parameters and associated models of both the AMO and global warming shown here.
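Continuing the sketch above, the truncation test itself is only a few lines; the model argument would be the two_sinusoids function from earlier (or whatever combined AMO-plus-AHL model one is fitting), and the data arrays are again placeholders.

    import numpy as np
    from scipy.optimize import curve_fit

    def truncation_shift(model, years, resid, p0, holdback=30):
        """Refit 'model' with the last 'holdback' years removed and report how
        far each fitted parameter moves; a crude check for overfitting."""
        keep = years <= years.max() - holdback
        full, _ = curve_fit(model, years, resid, p0=p0)
        part, _ = curve_fit(model, years[keep], resid[keep], p0=p0)
        return full, part, part - full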

Strikingly, the parameters barely move. Even more strikingly, even though the temperature had been essentially flat for the preceding quarter-century, the fitted models predict that the temperature is about to take an unprecedented rise!

Anyone on either side of the AGW debate who had predicted such a dramatic rise in 1981 would have been pilloried in the press as an alarmist of the worst kind. Just as LA made it illegal a few decades ago to predict earthquakes, and much as the First Amendment does not extend to crying "Fire!" in a crowded theater, the more concerned authorities would have passed laws making it illegal to predict global warming on such a terrifying scale.

Yet the HADCRUT, GISTEMP, and RSS/MSU data today all bear out this prediction, and moreover with striking precision.

The two main obstacles I see to sustaining the predictive power of this approach are (a) a significant reduction in CO2 emissions and (b) a release of Arctic methane well above present levels. Barring those two possibilities, along with megavolcanoes and giant meteor strikes, I am confident that 2041 will bear out this 2011 prediction much as 2011 has borne out the prediction that could and should have been made in 1981.