Aletho News

ΑΛΗΘΩΣ

Retracted Ocean Warming Paper & the IPCC

A new UN report relies on discredited research – and on academics who conceal vital information

By Donna Laframboise | Big Picture News | October 14, 2019

Last Halloween, the Washington Post ran a dramatic headline: Startling new research finds large buildup of heat in the oceans, suggesting a faster rate of global warming.

This story was huge news worldwide. Fortune magazine quoted Laure Resplandy, the Princeton University oceanographer who was the research paper’s lead author. “The planet warmed more than we thought,” she said. “It was hidden from us just because we didn’t sample it right.”

In fact, the problem wasn’t hiding in the ocean, but in the paper’s own mathematical calculations. Within days Nic Lewis, a UK private citizen and math whiz, had published the first of four detailed critiques of the paper’s statistical methodology (see here, here, here, and here).
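For readers curious about the kind of arithmetic at issue, the sketch below is a purely illustrative Python example – not Lewis’s actual code, and with invented numbers – of fitting a linear trend to an annual series by ordinary least squares and reporting the slope together with its standard error. Lewis’s critiques centred on how the paper derived this sort of trend and its uncertainty.

```python
# Illustrative sketch only: a generic least-squares trend and its standard
# error. This is NOT Nic Lewis's code, and the numbers are invented; it just
# shows the sort of trend-plus-uncertainty calculation his critiques examined.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1991, 2017, dtype=float)            # hypothetical annual series
values = 0.9 * (years - years[0]) + rng.normal(0, 2, years.size)

def ols_trend(x, y):
    """Return the slope and its standard error from an ordinary least-squares fit."""
    xc = x - x.mean()
    slope = np.sum(xc * y) / np.sum(xc ** 2)
    residuals = y - y.mean() - slope * xc
    se = np.sqrt(np.sum(residuals ** 2) / (len(x) - 2) / np.sum(xc ** 2))
    return slope, se

slope, se = ols_trend(years, values)
print(f"trend = {slope:.2f} ± {1.96 * se:.2f} per year (approx. 95% interval)")
```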

We’re told that research published in prestigious scientific journals is reliable, and that peer review is meaningful. Yet 19 days after those Halloween headlines, the journal announced the authors had acknowledged a number of errors.

Two weeks ago, presumably after months of attempting to rescue the paper, the journal threw in the towel and retracted it wholesale.

What happened in between? The Intergovernmental Panel on Climate Change (IPCC) released a 1,200-page report about oceans. Chapter 5 of that report cites this now-retracted research (see pages 5-27 and 5-183 here).

In fairness, this single citation may just be a typo. There’s a good chance the IPCC meant to cite a different 2018 paper, in which Resplandy was also the lead author.

But the matter doesn’t end there. The UK-based Global Warming Policy Foundation (GWPF) is now pointing out that a crucial conclusion of the IPCC’s report relies heavily on a second paper titled How fast are the oceans warming?

Written by Lijing Cheng and colleagues John Abraham, Zeke Hausfather, and Kevin Trenberth, it was published in January 2019 in Science. The journal classifies it as a ‘Perspective’ because, rather than being a research paper, it is more of an argument.

In three places, the Halloween research is cited to support its conclusions. Nowhere do Cheng and his colleagues acknowledge that the statistical methodology of the Halloween research had already been torn to shreds, or that the paper’s authors had already conceded it was flawed.

The bottom line? Chapters 4 and 5 of the IPCC’s ocean report rely on the 2019 Cheng ‘Perspective.’ The Cheng ‘Perspective’ relies on research that has now been officially retracted.

The even worse bottom line? Lijing Cheng – an academic who concealed vital information in an article published in Science this year – was intimately involved in the preparation of the IPCC’s ocean report. He was a lead author for Chapter 1. He was a contributing author for Chapters 3 and 5. And he helped draft the Summary for Policymakers.


Mysterious IPCC Expertise

The IPCC publishes the citizenship and gender of its authors – but says nothing about their scientific expertise

By Donna Laframboise | Big Picture News | October 17, 2018

The Intergovernmental Panel on Climate Change (IPCC) claims to be a scientific organization. But it’s really a political one.

An obvious tell is how it describes its personnel. In the old days, IPCC reports listed people according to their role and their country. Matters have improved since then.

Today, the IPCC gives us six data points about its personnel rather than three. A webpage associated with its latest report tells us each individual’s:

  1. name
  2. IPCC role (coordinating lead author, lead author, review editor)
  3. gender
  4. country of residence
  5. citizenship
  6. institutional affiliation

But this only looks like progress. In the real world, the additional info is irrelevant. Science doesn’t care where someone lives or what citizenship they hold. Science doesn’t care if they’re a man or a woman.

If the IPCC is a panel of experts, the critical issue is: What is each of these people an expert in? More than 30 years after its founding, the IPCC still thinks it doesn’t need to talk about this.

For the UN bureaucrats who run the show, some things are important. Some are not. The nature of an author’s scientific expertise clearly isn’t a burning issue. But lots of attention is being paid to checking diversity boxes.


The Dark Story Behind Global Warming aka Climate Change

By F. William Engdahl – New Eastern Outlook – 16.10.2018

The recent UN global warming conference under the auspices of the deceptively-named Intergovernmental Panel on Climate Change (IPCC) concluded its meeting in South Korea by discussing how to drastically limit global temperature rise. Mainstream media is predictably retailing various panic scenarios “predicting” catastrophic climate change because of man-made emissions of greenhouse gases, especially CO2, if drastic changes in our lifestyle are not urgently undertaken. There is only one thing wrong with all that. It’s based on fake science and corrupted climate modelers who have by now reaped [many] billions in government research grants to buttress the arguments for radical change in our standard of living. We might casually ask “What’s the point?” The answer is not positive.

The South Korea meeting of the UN IPCC discussed measures needed, according to their computer models, to limit global temperature rise to below 1.5°C above pre-industrial levels. Drew Shindell of Duke University, one of the panel members and authors of the latest IPCC Special Report on Global Warming, told the press that meeting the arbitrary 1.5-degree target will require world CO2 emissions to drop by a staggering 40% in the next 12 years. The IPCC calls for a draconian “zero net emissions” of CO2 by 2050. That would mean a complete ban on gas or diesel engines for cars and trucks, no coal power plants, and the transformation of world agriculture toward burning food as biofuels. As Shindell modestly put it, “These are huge, huge shifts.”

The new IPCC report, SR15, declares that global warming of 1.5°C will “probably” bring species extinction, weather extremes, and risks to food supply, health and economic growth. To avoid this, the IPCC estimates that the required energy investment alone will be $2.4 trillion per year. Could this explain the interest of major global banks, especially in the City of London, in pushing the Global Warming card?

This scenario takes on an even more incredible dimension when one considers that it is generated by fake science and doctored data from a tight-knit international group of climate scientists who have so polarized scientific discourse that they label fellow scientists who try to argue not as mere global warming skeptics, but as “Climate Change deniers.” What does that bit of neuro-linguistic programming suggest? Holocaust deniers? Talk about how to kill legitimate scientific debate, the essence of true science. Recently the head of the UN IPCC proclaimed, “The debate over the science of climate change is well and truly over.”

What the UN panel chose to ignore was the fact that the debate was anything but “over.” The Global Warming Petition Project, signed by over 31,000 American scientists, states: “There is no convincing scientific evidence that human release of carbon dioxide, methane, or other greenhouse gasses is causing or will, in the foreseeable future, cause catastrophic heating of the Earth’s atmosphere and disruption of the Earth’s climate. Moreover, there is substantial scientific evidence that increases in atmospheric carbon dioxide produce many beneficial effects upon the natural plant and animal environments of the Earth.”

‘Chicken Little’

What is most interesting about the dire warnings of global catastrophe if dramatic changes to our living standards are not urgently undertaken is that they are always attempts to frighten based on predictions about the future. When the “tipping point” of so-called irreversibility is passed with no evident catastrophe, they invent a new one further in the future.

In 1982 Mostafa Tolba, executive director of the UN Environment Programme (UNEP), warned that the “world faces an ecological disaster as final as nuclear war within a couple of decades unless governments act now.” He predicted that a lack of action would bring, “by the turn of the century, an environmental catastrophe which will witness devastation as complete, as irreversible as any nuclear holocaust.” In 1989 Noel Brown of UNEP said entire nations could be wiped off the face of the earth by rising sea levels if the global warming trend was not reversed by the year 2000. James Hansen, a key figure in the doomsday scenarios, declared at that time that 350 ppm of CO2 was the upper limit “to preserve a planet similar to that on which civilization developed and to which life on Earth is adapted.” Rajendra Pachauri, then the chief of the UN IPCC, declared that 2012 was the climate deadline by which it was imperative to act: “If there’s no action before 2012, that’s too late.” Today the measured level is 414 ppm.

As UK scientist Philip Stott notes, “In essence, the Earth has been given a 10-year survival warning regularly for the last fifty or so years. …Our post-modern period of climate change angst can probably be traced back to the late-1960s… By 1973, and the ‘global cooling’ scare, it was in full swing, with predictions of the imminent collapse of the world within ten to twenty years…Environmentalists were warning that, by the year 2000, the population of the US would have fallen to only 22 million. In 1987, the scare abruptly changed to ‘global warming’, and the IPCC (the Intergovernmental Panel on Climate Change) was established (1988)…”

Flawed Data

A central flaw in the computer models cited by the IPCC is that they are purely theoretical constructs, not validated against reality. The hypothesis depends entirely on computer models generating scenarios of the future, with no empirical record that can verify either the models or their flawed predictions. As one scientific study concluded, “The computer climate models upon which ‘human-caused global warming’ is based have substantial uncertainties and are markedly unreliable. This is not surprising, since the climate is a coupled, non-linear dynamical system. It is very complex.” Coupled refers to the fact that the oceans cause changes in the atmosphere and the atmosphere in turn affects the oceans, and both are complexly related to solar cycles. No model predicting global warming or 2030 “tipping points” is able, or even tries, to integrate the most profound influence on Earth’s climate and weather: the activity of the sun and its eruption cycles, which shape ocean currents, jet stream activity, El Niños and our daily weather.
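To make the quoted point about a ‘coupled, non-linear dynamical system’ concrete, here is a minimal Python sketch (a toy illustration, not anything drawn from the models or studies discussed): the classic Lorenz equations, a small coupled non-linear system, integrated twice from starting points that differ by one part in a million. The end states bear no relation to one another, which is the practical sense in which chaos limits long-range prediction.

```python
# Toy illustration (not a climate model): the Lorenz equations, a coupled,
# non-linear system, integrated twice from near-identical starting points.
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the coupled Lorenz system (x, y, z) by one Euler step."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

def trajectory(start, n_steps=10000):
    state = np.array(start, dtype=float)
    for _ in range(n_steps):
        state = lorenz_step(state)
    return state

# Two runs whose initial conditions differ by one part in a million.
final_a = trajectory([1.0, 1.0, 1.0])
final_b = trajectory([1.000001, 1.0, 1.0])

# The tiny initial difference is amplified until the end states are unrelated,
# which is the practical meaning of 'chaotic' for long-range prediction.
print("end-state separation:", np.linalg.norm(final_a - final_b))
```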

An Australian IT expert and independent researcher, John McLean, recently did a detailed analysis of the IPCC climate report. He notes that HadCRUT4 is the primary dataset used by the IPCC to make its dramatic claims about “man-made global warming” and to justify its demands for trillions of dollars to be spent on “combating climate change.” But McLean points to egregious errors in the HadCRUT4 data used by the IPCC. He notes, “It’s very careless and amateur. About the standard of a first-year university student.” Among the errors, he cites places where temperature “averages were calculated from next to no information. For two years, the temperatures over land in the Southern Hemisphere were estimated from just one site in Indonesia.” In another place he found that for the Caribbean island of St Kitts, the temperature was recorded at 0 degrees C for a whole month, on two occasions. The HadCRUT4 dataset is a joint production of the UK Met Office’s Hadley Centre and the Climatic Research Unit at the University of East Anglia. This was the group at East Anglia exposed several years ago in the notorious Climategate scandal of faking data and deleting embarrassing emails to hide it. Mainstream media promptly buried the story, turning attention instead to the question of who had illegally “hacked” the East Anglia emails.
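To see why “averages calculated from next to no information” matter, the following hypothetical Python sketch (not the Hadley Centre’s or McLean’s code; the latitude bands and anomaly values are invented) computes a simple area-weighted hemispheric average. With full coverage the result blends many readings; when coverage collapses to a single reporting site, the ‘hemispheric mean’ is simply whatever that one station happens to read.

```python
# Hypothetical sketch (not the Hadley Centre's or McLean's code): an
# area-weighted 'hemispheric' average over 5-degree latitude bands. When
# coverage collapses to a single reporting site, the average is simply
# whatever that one station reads, however unrepresentative it is.
import numpy as np

lat_centres = np.arange(-87.5, 0.0, 5.0)           # Southern Hemisphere bands
weights = np.cos(np.radians(lat_centres))           # crude area weighting

def hemispheric_mean(anomalies):
    """Area-weighted mean over bands that actually report data (NaN = missing)."""
    reporting = ~np.isnan(anomalies)
    if not reporting.any():
        return float("nan")
    return np.average(anomalies[reporting], weights=weights[reporting])

full_coverage = np.random.default_rng(0).normal(0.2, 0.5, lat_centres.size)
print("mean with full coverage:", round(hemispheric_mean(full_coverage), 3))

single_site = np.full(lat_centres.size, np.nan)
single_site[3] = 1.7                                # one station, one band
print("'mean' from a single site:", hemispheric_mean(single_site))
```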

Astonishingly enough, when we do a little basic research we find that the IPCC never carried out a true scientific inquiry into the possible causes of change in the Earth’s climate. Man-made causes were arbitrarily asserted, and the game was on.

Malthusian Maurice Strong

Few are aware, however, of the political and even geopolitical origins of Global Warming theories. How did this come about? So-called Climate Change, aka Global Warming, is a neo-Malthusian deindustrialization agenda originally developed by circles around the Rockefeller family in the early 1970s to prevent the rise of independent industrial rivals, much as Trump’s trade wars do today. In my book Myths, Lies and Oil Wars, I detail how the highly influential Rockefeller group also backed the creation of the Club of Rome, the Aspen Institute, the Worldwatch Institute and the MIT Limits to Growth report. A key early organizer of the Rockefeller ‘zero growth’ agenda in the early 1970s was David Rockefeller’s longtime friend, a Canadian oilman named Maurice Strong. Strong was one of the early propagators of the scientifically unfounded theory that man-made emissions from transportation vehicles, coal plants and agriculture cause a dramatic and accelerating global temperature rise that threatens civilization, so-called Global Warming.

As chairman of the 1972 UN Stockholm Conference on the Human Environment, Strong promoted an agenda of population reduction and the lowering of living standards around the world to “save the environment.” Some years later the same Strong restated his radical ecologist stance: “Isn’t the only hope for the planet that the industrialized civilizations collapse? Isn’t it our responsibility to bring that about?” Dr Alexander King, co-founder of the Rockefeller-tied Club of Rome, admitted the fraud in his book The First Global Revolution. He stated, “In searching for a new enemy to unite us, we came up with the idea that pollution, the threat of global warming, water shortages, famine and the like would fit the bill… All these dangers are caused by human intervention… The real enemy, then, is humanity itself.”

Please reread that, and let it sink in. Humanity, and not the 147 global banks and multinationals that de facto determine today’s environment, bears the responsibility.

Following the Earth Summit, Strong was named Assistant Secretary-General of the United Nations and Chief Policy Advisor to Kofi Annan. He was the key architect of the Kyoto Protocol (adopted in 1997, in force from 2005), which declared that man-made Global Warming was, according to “consensus,” real and that it was “extremely likely” that man-made CO2 emissions had predominantly caused it. In 1988 Strong was key in the creation of the UN IPCC, and later of the UN Framework Convention on Climate Change at the Rio Earth Summit, which he chaired and which approved his globalist UN Agenda 21.

The UN IPCC and its Global Warming agenda are a political, not a scientific, project. Their latest report is, like the previous ones, based on fake science and outright fraud. MIT Professor Richard S. Lindzen, in a recent speech, criticized politicians and activists who claim “the science is settled” and demand “unprecedented changes in all aspects of society.” He noted that it is totally implausible for such a complex “multifactor system” as the climate to be summarized by just one variable, global mean temperature change, and to be primarily controlled by just a 1-2 per cent variance in the energy budget due to CO2. Lindzen described how “an implausible conjecture backed by false evidence, repeated incessantly, has become ‘knowledge,’ used to promote the overturn of industrial civilization.” Our world indeed needs a “staggering transformation,” but one that promotes the health and stability of the human species instead.


IPCC Pretends the Scientific Publishing Crisis Doesn’t Exist

By Donna Laframboise | Big Picture News | October 8, 2018

The Intergovernmental Panel on Climate Change (IPCC) issued a press release today. It tells us the IPCC assesses “thousands of scientific papers published each year,” and that its latest report relies on “more than 6,000 references.”

That sounds impressive until one remembers that academic publishing is in the grips of a reproducibility crisis. A disturbing percentage of the research published in medicine, economics, computer science, psychology, and other fields simply doesn’t stand up. Whenever independent third parties attempt to reproduce/replicate this work – carrying out the same research in order to achieve the same findings – the success rate is dismal.

The influential 2005 paper, Why Most Published Research Findings Are False, is now very old news. Headlines declaring that ‘science is broken’ have become commonplace. In 2015, the editor-in-chief of The Lancet declared that “much of the scientific literature, perhaps half, may simply be untrue.”

So here’s the bottom line: We know that studies about promising drugs typically fail when strangers attempt to reproduce those studies. We know that flashy physics research published in Science and Nature has been wholly fraudulent. We know that half of economics papers can’t be replicated, even with assistance from their own authors. We know political bias distorts the peer-review process in psychology. (All of this is discussed in a report I wrote in 2016).

We therefore have no earthly reason to imagine that climate science is exempt from these kinds of problems.

If half of the scientific literature is untrue, it therefore follows that half of climate research is also untrue.

This means that 3,000 of the IPCC’s 6,000 references aren’t worth the paper they’re written on.

BACKGROUND: The IPCC is a UN bureaucracy. Governments select scientists to write climate reports – one of which has just been completed.

These scientists are further asked to summarize their work. But the scientist-crafted summary is only a draft. At the meeting that just ended in South Korea, the draft was re-written by politicians, diplomats, and bureaucrats representing the political establishments of various countries.

At that point, the summary forfeited any conceivable claim to be a scientific document and became, instead, a politically-negotiated statement.

Today’s press release announces that the politicized summary was “approved by governments” and has therefore been made public (download it here).

Please note: the report itself has not been made public. Nor has the draft summary containing the scientists’ own words. (Although the IPCC claims to be ultra-transparent, its website says the original/draft version of the Summary for Policymakers is available only to “authorised users” such as government officials.)

This is the IPCC’s standard MO. It controls the message by feeding the media a politically-negotiated Summary of its latest work. Then it stands back and lets gullible reporters mislead the public about what the science says.

LINKS:

The Delinquent Teenager Who Was Mistaken for the World’s Top Climate Expert


What is the Meaningful 97% in the Climate Debate?

By Dr. Tim Ball | Watts Up With That? | September 29, 2018

For a brief period, the New York Times added a column to its best-seller list identifying the percentage of readers who finished each book. As I recall, the outright winner for lowest percentage was Umberto Eco’s The Name of the Rose, at only 6%. It is an excellent and fascinating book if you understand the Catholic Church and its theological disputes, know something of medieval mythology and Catholic religious orders, and are familiar with the history of Italy in the Middle Ages. As one reviewer wrote, “I won’t lie to you. It is absolutely a slog at times.” This phrase struck me because it is exactly what a lawyer told me after reading my book “The Deliberate Corruption of Climate Science.”

I told him it was a slog to research because it required reading all the Reports of the Intergovernmental Panel on Climate Change (IPCC), a task that few, certainly fewer than 6%, ever complete – including most of the people involved in producing them. This is the tragedy. There are so many people with strong, definitive views – including skeptics and members of the general science community – who have never read the Reports at all. The challenge is made more difficult by the deliberate attempt to separate truth and reality from propaganda and the political agenda.

In media interviews or discussions with the public, the most frequent opening challenge is: “But don’t 97% of scientists agree?” It is usually put obliquely, the implication being: you may know a lot and I don’t understand the science, but I assume you are wrong because you are in the minority. I don’t attempt to refute the statistics. Instead, I explain the difference in definitions between science and society. Then I point out that the critical 97% figure is that at least 97% of scientists have never read the claims of the IPCC Reports. How many people reading this article have read all the IPCC Reports, or even just one of them? If you have, it is probably the deliberately deceptive Summary for Policymakers (SPM). Even fewer will have read the Report of Working Group I: The Physical Science Basis. Naively, people, especially other scientists, assume scientists would not falsify, mislead, misrepresent, or withhold information. It is worse than that, because the IPCC deliberately created the false claim of consensus.

I wrote earlier about the problem of communication between groups and the general public caused by differing definitions of terms. Among the most damaging, especially in the public debate, is the word consensus. Exploitation of the confusion was deliberate. On 22 December 2004, RealClimate, the website created to manipulate the global warming story, provided this insight:

We’ve used the term “consensus” here a bit recently without ever really defining what we mean by it. In normal practice, there is no great need to define it – no science depends on it. But it’s useful to record the core that most scientists agree on, for public presentation. The consensus that exists is that of the IPCC reports, in particular the working group I report (there are three WG’s. By “IPCC”, people tend to mean WG I).

In other words, it is what the creators of the Reports consider a consensus. This is classic groupthink on display, one characteristic of which is

“…a culture of uniformity where individuals censor themselves and others so that the facade of group unanimity is maintained.”

The source of the 97% claim in the public arena is a paper by John Cook et al., published in 2013 in Environmental Research Letters and titled “Quantifying the consensus on anthropogenic global warming in the scientific literature.” I point people to some of the brilliant dissections of this claim, such as Lord Monckton’s comment, “0.3% consensus, not 97.1%.” If I have time, I explain how the plan to exploit the idea of consensus was developed by the same people whose corrupted science was exposed in the emails leaked from the Climatic Research Unit (CRU) in November 2009.

Michael Crichton, a Harvard graduate, medical doctor, and world-famous science fiction writer, provides an excellent riposte.

“I want to pause here and talk about this notion of consensus, and the rise of what has been called consensus science. I regard consensus science as an extremely pernicious development that ought to be stopped cold in its tracks. Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you’re being had.

Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus.”

The attempt to deceive and divert was built into the structure, format, and procedures of the IPCC. Few people know that a major part of the deception is to identify all the problems with the science, but to do so only in the Report of Working Group I: The Physical Science Basis. They know most won’t read or understand it, and they can easily marginalize the few who do. In 2012 I created a list of several of these acknowledgments, but one alone is sufficient here to destroy the certainty of their claims about future climates. Section 14.2.2 of the scientific section of the Third IPCC Assessment Report (2001), titled “Predictability in a Chaotic System,” says:

“The climate system is particularly challenging since it is known that components in the system are inherently chaotic; there are feedbacks that could potentially switch sign, and there are central processes that affect the system in a complicated, non-linear manner. These complex, chaotic, non-linear dynamics are an inherent aspect of the climate system.”

“In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible” (My emphasis).

This is not reported in the Summary for Policymakers (SPM), which is deliberately different. David Wojick, an IPCC expert reviewer, explained:

“What is systematically omitted from the SPM are precisely the uncertainties and positive counter evidence that might negate the human interference theory. Instead of assessing these objections, the Summary confidently asserts just those findings that support its case. In short, this is advocacy, not assessment.”

He should have added that it is deliberate advocacy, as the RealClimate quote shows.

The SPM receives scant attention from the media and the public, except for the temperature predictions – and even then only the most extreme figure is selected. The Science Report receives even less attention, but that is by instruction, because it is released months later. All of this is why I quoted the German physicist and meteorologist Klaus Eckart Puls (in English translation) on the cover of both my books.

“Ten years ago, I simply parroted what the IPCC told us. One day I started checking the facts and data – first I started with a sense of doubt but then I became outraged when I discovered that much of what the IPCC and the media were telling us was sheer nonsense and was not even supported by any scientific facts and measurements. To this day I still feel shame that as a scientist I made presentations of their science without first checking it.” “Scientifically it is sheer absurdity to think we can get a nice climate by turning a CO2 adjustment knob.”

The real challenge of the 97% consensus claim is to get more of the 97% to do what Puls did: read the Reports and find out what the IPCC actually did and said. They need to do it because the misuse of science, and the resulting loss of credibility, are not restricted to the climate deception. From what I read and hear across all sectors of science and society, the problem is endemic (fake news) and potentially devastating. I think one of the most important achievements of my successful trial with Andrew Weaver was to go beyond the defamation charge, against my lawyer’s advice, and show that the misuse of science will and must elicit passionate reactions. So, next time you are confronted with the oblique 97% charge, simply ask the person whether they have read any of the IPCC Reports. Just be prepared for the invective.


BBC Ignores Widely Publicized IPCC Problems

By Donna Laframboise | Big Picture News | September 26, 2018

The BBC recently issued a document telling its journalists how to approach climate stories. That document treats the findings of a UN entity known as the Intergovernmental Panel on Climate Change (IPCC) as gospel.

The “best science on the issue,” it says, is expressed by the IPCC, “which drew on the expertise of a huge number of the world’s top scientists.”

Cripes. Out here in the real world, it’s 2018. But the last decade may as well not have happened as far as the BBC is concerned. In the bubble in which BBC bureaucrats reside it’s still 2007, the year Al Gore and the IPCC were each awarded half of the Nobel Peace Prize – not for their scientific prowess, but for their role in raising the alarm about climate change.

The world was more innocent back then. The InterAcademy Council (IAC) – an international collection of science entities – wouldn’t strike a committee to examine the IPCC’s internal workings until two years later.

The release of the IAC’s August 2010 report should have been a game changer. After all, the report identified “significant shortcomings in each major step of IPCC’s assessment process” (see the first paragraph of Chapter 2).

The New Scientist magazine considered the report so devastating it called for the resignation of the IPCC’s chairman in an article titled Time for Rajendra Pachauri to go.

The Financial Times similarly ran an editorial that urged Mr. Pachauri “to move on.”

Geoffrey Lean, then Britain’s longest-serving environmental correspondent, said the report revealed the IPCC to be an “amateurish, ramshackle operation.”

Louise Gray, environment correspondent for the Telegraph, began her account with these words: “In a damning report out earlier this week…”

Over at the Daily Mail, writer Fiona Macrae called it a “scathing report.”

Environmental studies professor Roger Pielke Jr. thought the report “remarkably hard hitting” – and was quoted by the Associated Press saying the IPCC might be redeemed via this flavour of “tough love.”

A headline in the London Times declared: This discredited science body must be purged. Two others – in India and America – used the word “slams” when characterizing the IAC’s conclusions.

Precious few improvements have occurred since then. Being a UN bureaucracy, the IPCC is essentially a law unto itself, an entrenched culture with no meaningful oversight mechanisms.

But the BBC wouldn’t know that. Because rather than performing due diligence to determine how much progress has been made since 2010, the BBC chooses to behave as though the IAC report doesn’t exist. The IPCC’s fall from grace simply never happened.


The BBC’s Naive View of the UN’s Climate Machine

Big Picture News | September 24, 2018

SPOTLIGHT: Bureaucracies put their trust in other bureaucracies.

BIG PICTURE: A few weeks back, Joanne Nova perfectly captured the position of the British Broadcasting Corporation (BBC) regarding the scandalous UN entity known as the Intergovernmental Panel on Climate Change (IPCC).

A recent internal document gives BBC journalists advice about how to report on climate matters. In Nova’s words, it declares that the “IPCC is God, can not be wrong.”

The document’s exact words:

What’s the BBC’s position?

  • Man-made climate change exists: If the science proves it we should report it. The BBC accepts that the best science on the issue is the IPCC’s position, set out above. [italics added]

Well, here’s the problem. The IPCC does not do science. The IPCC is a bureaucracy whose purpose is to write reports.

The primary function of those reports is to pave the way for UN climate treaties. A set of facts needs to be agreed upon by all parties in advance, so that negotiators can start from the same page.

IPCC reports get written by government-appointed scientists, according to predetermined guidelines. Portions of IPCC reports then get re-written by politicians, bureaucrats, and diplomats (in effect, this is an unofficial round of negotiating, in advance of the official negotiations that take place later).

International treaties are political instruments. The IPCC exists to make climate treaties possible. The ‘science’ involved has therefore been selected and massaged to serve a political purpose.

Let’s ditch the naiveté. How likely is it that experts appointed by governments that have spent billions fighting climate change would conclude that man-made climate change doesn’t exist?

TOP TAKEAWAY: Journalists are part of a system of checks and balances that help keep governments and large organizations honest. The BBC is a huge bureaucracy. The geniuses running it have declared another bureaucracy – the UN’s IPCC – a font of scientific truth. How pathetic.


IPCC to release “October surprise” on climate change

Watts Up With That? | September 24, 2018

With all the crazy talk about “Russian meddling” in the 2016 presidential election, one wonders if the same sort of crazy talk might be applied to the release of a special climate report just weeks before the U.S. mid-term elections. Given the timing, you can be sure that whatever is in the report will be front-page news and used by the left as a political tool. Here is a press release from the IPCC (h/t Dr. Willie Soon):


Save the Date: IPCC Special Report Global Warming of 1.5ºC

The Intergovernmental Panel on Climate Change (IPCC) will meet in Incheon, Republic of Korea, on 1-5 October 2018, to consider the Special Report Global Warming of 1.5ºC. Subject to approval, the Summary for Policymakers will be released on Monday 8 October with a live-streamed press conference.

The press conference, addressed by the IPCC Chair and Co-Chairs from the three IPCC Working Groups, will be open to registered media, and take place at 10:00 local time (KST), 03:00 CEST, 02:00 BST, 01:00 GMT and 21:00 (Sunday 7 October) EDT.

Registered media will also be able to access the Summary for Policymakers and press release under embargo, once they are available. They will also be able to attend the opening session of the meeting at 10:00-11:00 on Monday 1 October. All other sessions of the IPCC meeting are closed to the public and to media.

The opening session of the meeting will include statements by the Chair of the IPCC, senior officials of the IPCC’s two parent bodies, the World Meteorological Organization (WMO) and the United Nations Environment Programme (UN Environment), and of the United Nations Framework Convention on Climate Change (UNFCCC), and senior officials of the Republic of Korea.

The IPCC meetings and the press conference will take place at Songdo Convensia in Incheon.

Arrangements for media registration, submitting questions remotely, booking interviews, and broadcast facilities will be communicated in the coming weeks.

The report, whose full name is Global Warming of 1.5°C, an IPCC special report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty, is being prepared under the scientific leadership of all three IPCC Working Groups.

Formally, the meeting will start with the 48th Session of the IPCC. Next a joint session of the three Working Groups chaired by their Co-Chairs will consider the Summary for Policymakers line by line for approval. Then the 48th Session of the IPCC will resume to accept the Summary for Policymakers and overall report.

The IPCC decided to prepare the report in response to an invitation from the UNFCCC Conference of the Parties at its 21st meeting in December 2015, when the Paris Agreement was signed.

Source: http://www.ipcc.ch/news_and_events/ma-p48.shtml


Manufacturing consensus: the early history of the IPCC

By Judith Curry | Climate Etc. | January 3, 2018

Short summary: scientists sought political relevance and allowed policy makers to put a big thumb on the scale of the scientific assessment of the attribution of climate change.

Bernie Lewin has written an important new book:

SEARCHING FOR THE CATASTROPHE SIGNAL: The Origins of the Intergovernmental Panel on Climate Change

The importance of this book is reflected in its acknowledgements, in the context of assistance and contributions from early leaders and participants in the IPCC:

This book would not have been possible without the documents obtained via Mike MacCracken and John Zillman. Their abiding interest in a true and accurate presentation of the facts prevented my research from being led astray. Many of those who participated in the events here described gave generously of their time in responding to my enquiries, they include Ben Santer, Tim Barnett, Tom Wigley, John Houghton, Fred Singer, John Mitchell, Pat Michaels . . . and many more.

You may recall a previous Climate Etc. post, Consensus by Exhaustion, on Lewin’s five-part series on Madrid 1995: The last day of climate science.

Read the whole book; it is well worth reading. The focus of my summary is on Chapters 8-16, in the context of the themes of ‘detection and attribution’, ‘policy cart in front of the scientific horse’ and ‘manufacturing consensus’. Annotated excerpts from the book are provided below.

The 1970s energy crisis

In a connection that I hadn’t previously made, Lewin provides historical context for the focus on CO2 research in the 1970s, motivated by the ‘oil crisis’ and concerns about energy security. There was an important debate surrounding whether coal or nuclear power should be the replacement for oil. From Chapter 8:

But in the struggle between nuclear and coal, the proponents of the nuclear alternative had one significant advantage, which emerged as a result of the repositioning of the vast network of government-funded R&D laboratories within the bureaucratic machine. It would be in these ‘National Laboratories’ at this time that the Carbon Dioxide Program was born. This surge of new funding meant that research into one specific human influence on climate would become a major branch of climatic research generally. Today we might pass this over for the simple reason that the ‘carbon dioxide question’ has long since come to dominate the entire field of climatic research—with the very meaning of the term ‘climate change’ contracted accordingly.

This focus was NOT driven by atmospheric scientists:

The peak of interest in climate among atmospheric scientists was an international climate conference held in Stockholm in 1974 and a publication by the ‘US Committee for GARP’ [GARP is Global Atmospheric Research Programme] the following year. The US GARP report was called ‘Understanding climate change: a program for action’, where the ‘climate change’ refers to natural climatic change, and the ‘action’ is an ambitious program of research.

[There was] a coordinated, well-funded program of research into potentially catastrophic effects before there was any particular concern within the meteorological community about these effects, and before there was any significant public or political anxiety to drive it. It began in the midst of a debate over the relative merits of coal and nuclear energy production [following the oil crisis of the 1970’s]. It was coordinated by scientists and managers with interests on the nuclear side of this debate, where funding due to energy security anxieties was channelled towards investigation of a potential problem with coal in order to win back support for the nuclear option.

The emergence of ‘global warming’

In February 1979, at the first ever World Climate Conference, meteorologists would for the first time raise a chorus of warming concern. The World Climate Conference may have drowned out the cooling alarm, but it did not exactly set the warming scare on fire.

While the leadership of UNEP (UN Environmental Programme) became bullish on the issue of global warming, the bear prevailed at the WMO (World Meteorological Organization). When UNEP’s request for climate scenario modelling duly arrived with the WCRP (World Climate Research Programme) committee, they balked at the idea: computer modelling remained too primitive and, especially at the regional level, no meaningful results could be obtained. Proceeding with the development of climate scenarios would only risk the development of misleading impact assessments.

It wasn’t long before scientific research on climate change became marginalized in the policy process, in the context of the precautionary principle:

At Villach in 1985, at the beginning of the climate treaty movement, the rhetoric of the policy movement was already breaking away from its moorings in the science. Doubts raised over the wildest speculation were turned around, in a rhetoric of precautionary action: we should act anyway, just in case. With the onus of proof reversed, the research can continue while the question remains (ever so slightly) open.

Origins of the IPCC

With regards to the origins of the IPCC:

Jill Jäger gave her view that one reason the USA came out in active support for an intergovernmental panel on climate change was that the US Department of State thought the situation was ‘getting out of hand’, with ‘loose cannons’ out ‘potentially setting the agenda’, when governments should be doing so. An intergovernmental panel, so this thinking goes, would bring the policy discussion back under the control of governments. It would also bring the science closer to the policymakers, unmediated by policy entrepreneurs. After an intergovernmental panel agreed on the science, so this thinking goes, they could proceed to a discussion of any policy implications.

While the politics were already making the science increasingly irrelevant, Bert Bolin and John Houghton brought a focus back to the science:

Within one year of the first IPCC session, its assessment process would transform from one that would produce a pamphlet sized country representatives’ report into one that would produce three large volumes written by independent scientists and experts at the end of the most complex and expensive process ever undertaken by a UN body on a single meteorological issue. The expansion of the assessment, and the shift of power back towards scientists, came about at the very same time that a tide of political enthusiasm was being successfully channelled towards investment in the UN process, with this intergovernmental panel at its core.

John Houghton (Chair of Working Group I) moved the IPCC towards a model more along the lines of an expert-driven review: he nominated one or two scientific experts—‘lead authors’—to draft individual chapters and he established a process through which these would be reviewed at lead-author meetings.

The main change was that it shifted responsibility away from government delegates and towards practising scientists. The decision to recruit assessors who were leaders in the science being assessed also opened up another problem, namely the tendency for them to cite their own current work, even where unpublished.

However, the problem of marginalization of the science wasn’t going away:

With the treaty process now run by career diplomats, and likely to be dominated by unfriendly southern political agitators, the scientists were looking at the very real prospect that their climate panel would be disbanded and replaced when the Framework Convention on Climate Change came into force.

And many scientists were skeptical:

With the realisation that there was an inexorable movement towards a treaty, there was an outpouring of scepticism from the scientific community. This chorus of concern was barely audible above the clamour of the rush to a treaty and it is now largely forgotten.

At the time, John Zillman presented a paper to a policy forum that tried to give those engaged in the policy debate some insight into just how different the view was from inside the research community. Zillman stated:

. . . that the greenhouse debate has now become decoupled from the scientific considerations that had triggered it; that there are many agendas but that they do not include, except peripherally, finding out whether and how climate might change as a result of enhanced greenhouse forcing and whether such changes will be good or bad for the world.

To give some measure of the frustration rife among climate researchers at the time, Zillman quoted the director of WCRP. It was Pierre Morel, he explained, who had ‘driven the international climate research effort over the past decade’. A few months before Zillman’s presentation, Morel had submitted a report to the WCRP committee in which he assessed the situation thus:

The increasing direct involvement of the United Nations. . . in the issues of global climate change, environment and development bears witness to the success of those scientists who have vied for ‘political visibility’ and ‘public recognition’ of the problems associated with the earth’s climate. The consideration of climate change has now reached the level where it is the concern of professional foreign-affairs negotiators and has therefore escaped the bounds of scientific knowledge (and uncertainty).

The negotiators, said Morel, had little use for further input from scientific agencies including the IPCC ‘and even less use for the complicated statements put forth by the scientific community’.

There was a growing gap between the politics/policies and the science:

The general feeling in the research community that the policy process had surged ahead of the science often had a different effect on those scientists engaged with the global warming issue through its expanded funding. For them, the situation was more as President Bush had intimated when promising more funding: the fact that ‘politics and opinion have outpaced the science’ brought the scientists under pressure ‘to bridge the gap’.

In fact, there was much scepticism of the modelling freely expressed in and around the Carbon Dioxide Program in these days before the climate treaty process began. Those who persisted with the search for validation got stuck on the problem of better identifying background natural variability.

The challenge of ‘detection and attribution’

Regarding Jim Hansen’s 1988 Congressional testimony:

An article in Science the following spring gives some insight into the furore. In ‘Hansen vs. the world on greenhouse threat’, the science journalist Richard Kerr explained that while ‘scientists like the attention the greenhouse effect is getting on Capitol Hill’, nonetheless they ‘shun the reputedly unscientific way their colleague James Hansen went about getting that attention’.

Clearly, the scientific opposition to any detection claims was strong in 1989 when the IPCC assessment got underway.

Detection and attribution of the anthropogenic climate signal was the key issue:

During the IPCC review process (for the First Assessment Report), Wigley was asked to answer the question: When is detection likely to be achieved? He responded with an addition to the IPCC chapter that explains that we would have to wait until the half-degree of warming that had occurred already during the 20th century is repeated. Only then are we likely to determine just how much of it is human-induced. If the carbon dioxide driven warming is at the high end of the predictions, then this would be early in the 21st century, but if the warming was slow then we may not know until 2050.

The IPCC First Assessment Report didn’t help the policy makers’ ‘cause.’ In the buildup to the Rio Earth Summit:

To support the discussions of the Framework Convention at the Rio Earth Summit, it was agreed that the IPCC would provide a supplementary assessment. This ‘Rio supplement’ explains:

. . . the climate system can respond to many forcings and it remains to be proven that the greenhouse signal is sufficiently distinguishable from other signals to be detected except as a gross increase in tropospheric temperature that is so large that other explanations are not likely.

Well, this supplementary assessment didn’t help either. The scientists, under the leadership of Bolin and Houghton, are to be commended for not bowing to pressure. But the IPCC was risking marginalization in the treaty process.

In the lead up to CoP1 in Berlin, the IPCC itself was badgering the negotiating committee to keep it involved in the political process, but tensions arose when it refused to compromise its own processes to meet the political need.

However, the momentum for action in the lead-up to Rio remained sufficiently strong that these difficulties with the scientific justification could be ignored.

Second Assessment Report

In the context of the treaty activities, the IPCC’s Second Assessment Report was regarded as very important for justifying implementation of the Kyoto Protocol.

In 1995, the IPCC was stuck between its science and its politics. The only way it could save itself from the real danger of political oblivion would be if its scientific diagnosis could shift in a positive direction and bring it into alignment with policy action.  

The key scientific issue at the time was detection and attribution:

The writing of Chapter 8 (the chapter concerned with detection and attribution) got off to a delayed start due to the late assignment of its coordinating lead author. It was not until April that someone agreed to take on the role. This was Ben Santer, a young climate modeller at Lawrence Livermore Laboratory.

The chapter that Santer began to draft was greatly influenced by a paper principally written by Tim Barnett, but it also listed Santer as an author. It was this paper that held, in a nutshell, all the troubles for the ‘detection’ quest. It was a new attempt to get beyond the old stumbling block of ‘first detection’ research: to properly establish the ‘yardstick’ of natural climate variability. The paper describes how this project failed to do so, and fabulously so.

The detection chapter that Santer drafted for the IPCC makes many references to this study. More than anything else cited in Chapter 8, it is the spoiler of all attribution claims, whether from pattern studies, or from the analysis of the global mean. It is the principal basis for  the Chapter 8 conclusion that. . .

. . .no study to date has both detected a significant climate change and positively attributed all or part of that change to anthropogenic causes.

For the second assessment, the final meeting of the 70-odd Working Group 1 lead authors . . . was set to finalise the draft Summary for Policymakers, ready for intergovernmental review. The draft Houghton had prepared for the meeting was not so sceptical on the detection science as the main text of the detection chapter drafted by Santer; indeed it contained a weak detection claim.

This detection claim appeared incongruous with the scepticism throughout the main text of the chapter and was in direct contradiction with its Concluding Summary. It represented a change of view that Santer had only arrived at recently due to a breakthrough in his own ‘fingerprinting’ investigations. These findings were so new that they were not yet published or otherwise available, and, indeed, Santer’s first opportunity to present them for broader scientific scrutiny was when Houghton asked him to give a special presentation to the meeting of lead authors.

However, the results were also challenged at this meeting: Santer’s fingerprint finding and the new detection claim were vigorously opposed by several experts in the field.

On the first day of the Madrid session of Working Group 1 in November 1995, Santer again gave an extended presentation of his new findings, this time to mostly non-expert delegates. When he finished, he explained that because of what he had found, the chapter was out of date and needed changing. After some debate John Houghton called for an ad-hoc side group to come to agreement on the detection issue in the light of these important new findings and to redraft the detection passage of the Summary for Policymakers so that it could be brought back to the full meeting for agreement. While this course of action met with general approval, it was vigorously opposed by a few delegations, especially when it became clear that Chapter 8 would require changing, and resistance to the changes went on to dominate the three-day meeting. After further debate, a final version of a ‘bottom line’ detection claim was decided:

The balance of evidence suggests a discernible human influence on global climate.

All of this triggered accusations of ‘deception’:

An opinion editorial written by Frederick Seitz, ‘Major deception on “global warming”’, appeared in the Wall Street Journal on 12 June 1996.

This IPCC report, like all others, is held in such high regard largely because it has been peer-reviewed. That is, it has been read, discussed, modified and approved by an international body of experts. These scientists have laid their reputations on the line. But this report is not what it appears to be—it is not the version that was approved by the contributing scientists listed on the title page. In my more than 60 years as a member of the American scientific community, including service as president of both the NAS and the American Physical Society, I have never witnessed a more disturbing corruption of the peer-review process than the events that led to this IPCC report.

When comparing the final draft of Chapter 8 with the version just published, he found that key statements sceptical of any human attribution finding had been changed or deleted. His examples of the deleted passages include:

  • ‘None of the studies cited above has shown clear evidence that we can attribute the observed [climate] changes to the specific cause of increases in greenhouse gases.’
  • ‘No study to date has positively attributed all or part [of the climate change observed to date] to anthropogenic [manmade] causes.’
  • ‘Any claims of positive detection of significant climate change are likely to remain controversial until uncertainties in the total natural variability of the climate system are reduced.’

On 4 July, Nature finally published Santer’s human fingerprint paper. In Science, Richard Kerr quoted Barnett saying that he was not entirely convinced that the greenhouse signal had been detected and that there remained ‘a number of nagging questions’. Later in the year a critique striking at the heart of Santer’s detection claim would be published in reply.

The IPCC’s manufactured consensus

What we can see from all this activity by scientists in the close vicinity of the second and third IPCC assessments is the existence of a significant body of opinion that is difficult to square with the IPCC’s message that the detection of the catastrophe signal provides the scientific basis for policy action.

The scientific debate on detection and attribution was effectively quelled by the IPCC Second Assessment Report:

Criticism would continue to be summarily dismissed as the politicisation of science by vested interests, while the panel’s powerful political supporters would ensure that its role as the scientific authority in the on-going climate treaty talks was never again seriously threatened.

And of course the ‘death knell’ to scientific arguments concerned about detection was dealt by the Third Assessment Report, in which the MBH Hockey Stick analysis of Northern Hemisphere paleoclimates effectively eliminated the existence of a hemispheric Medieval Warm Period and Little Ice Age, ‘solving’ the detection conundrum.

JC reflections

Bernie Lewin’s book provides a really important and well-documented account of the context and early history of the IPCC.

I was discussing Lewin’s book with Garth Paltridge, who was involved in the IPCC during the early years; he emailed this comment:

I am a bit upset because I was in the game all through the seventies to early nineties, was at a fair number of the meetings Lewin talked about, spent a year in Geneva as one of the “staff” of the early WCRP, another year (1990) as one of the staff of the US National Program Office in the Washington DC, met most of the characters he (Lewin) talked about…… and I simply don’t remember understanding what was going on as far as the politics was concerned.  How naive can one be??  Partly I suspect it was because lots of people in my era were trained(??) to deliberately ignore, and/or laugh at, all the garbage that was tied to the political shenanigans of international politics in the scientific world. Obviously the arrogance of scientists can be quite extraordinary!

Scientific scepticism about AGW was alive and well prior to 1995; it took a nose-dive following publication of the Second Assessment Report, and then was dealt what was hoped to be a fatal blow by the Third Assessment Report and the promotion of the Hockey Stick.

A rather flimsy edifice for a convincing, highly-confident attribution of recent warming to humans.

I think Bernie Lewin is correct in identifying the 1995 meeting in Madrid as the turning point. It was John Houghton who inserted the attribution claim into the draft Summary for Policymakers, contrary to the findings in Chapter 8. Ben Santer typically gets ‘blamed’ for this, but it is clearly Houghton who wanted and enabled it, so that he and the IPCC could maintain a seat at the big policy table in the Treaty negotiations.

One might forgive the IPCC leaders for overplaying their hand in 1995, when they were dealing with new science and a very challenging political situation. However, it is the Third Assessment Report where Houghton’s shenanigans with the Hockey Stick really reveal what was going on (including the selection of recent Ph.D. recipient Michael Mann as lead author when he was not nominated by the U.S. delegation). The Hockey Stick got rid of that ‘pesky’ detection problem.

I assume that the rebuttal of the AGW ‘true believers’ to all this is that politics are messy, but look, the climate scientists were right all along, and the temperatures keep increasing. Recent research, on this view, increases confidence in an attribution that we have ‘known’ for decades.

Well, increasing temperatures say nothing about the causes of climate change. Scientists are still debating the tropical upper troposphere ‘hot spot’, which was the ‘smoking gun’ identified by Santer in 1995 [link]. And there is growing evidence that natural variability on decadal to millennial time scales is much larger than previously thought (and larger than climate model simulations) [link].

I really need to do more blog posts on detection and attribution; I will do my best to carve out some time.

And finally, this whole history seems to violate the Mertonian norm of universalism:

universalism: scientific validity is independent of the sociopolitical status/personal attributes of its participants

Imagine how all this would have played out if Pierre Morel or John Zillman had been Chair of WG1, or if Tom Wigley or Tim Barnett or John Christy had been Coordinating Lead Author of Chapter 8, and what climate science would look like today.

I hope this history of manufacturing consensus gives rational people reason to pause before accepting arguments from consensus about climate change.

January 3, 2018 Posted by | Book Review, Corruption, Deception, Nuclear Power, Science and Pseudo-Science, Timeless or most popular | | 1 Comment

A veneer of certainty stoking climate alarm

In private, climate scientists are much less certain than they tell the public. – Rupert Darwall

Rupert Darwall has written a tour-de-force essay “A Veneer of Certainty Stoking Climate Alarm“, which has been published by CEI [link to full essay].

Foreword

I was invited to write a Foreword, which provides context for the essay:

While the nations of the world met in Bonn to discuss implementation of the Paris Climate Agreement, the Trump administration was working to dismantle President Obama’s Clean Power Plan and to establish a climate “red team” to critically evaluate the scientific basis for dangerous human-caused climate change and the policy responses.

The mantra of “settled science” is belied by the inherent complexity of climate change as a scientific problem, the plethora of agents and processes that influence the global climate, and disagreements among scientists. Manufacture and enforcement of a “consensus” on the topic of human-caused climate change acts to the detriment of the scientific process, our understanding of climate change, and the policy responses. Indeed, it becomes a fundamentally anti-scientific process when debate, disagreement, and uncertainty are suppressed.

This essay by Rupert Darwall explores the expressions of public certainty by climate scientists versus the private expressions of uncertainty, in context of a small Workshop on Climate organized by the American Physical Society (APS). I was privileged to participate in this workshop, which included three climate scientists who support the climate change consensus and three climate scientists who do not—all of whom were questioned by a panel of distinguished physicists.

The transcript of the workshop is a remarkable document. It provides, in my opinion, the most accurate portrayal of the scientific debates surrounding climate change. While each of the six scientists agreed on the primary scientific evidence, we each had a unique perspective on how to reason about the evidence, what conclusions could be drawn and with what level of certainty.

Rupert Darwall’s essay provides a timely and cogent argument for a red/blue team assessment of climate change that provides both sides with an impartial forum to ask questions and probe the other side’s case. Such an assessment would both advance the science and open up the policy deliberations to a much broader range of options.

Excerpts

Here are some highlights from the full essay (but you’ll want to read the whole thing!):

Introduction. How dependable is climate science? Global warming mitigation policies depend on the credibility and integrity of climate science. In turn, that depends on a deterministic model of the climate system in which it is possible to quantify the role of carbon dioxide (CO2) with a high degree of confidence. This essay explores the contrast between scientists’ expressions of public confidence and private admissions of uncertainty on critical aspects of the science that undergird the scientific consensus.

Instead of debating, highlighting and, where possible, resolving disagreement, many mainstream climate scientists work in a symbiotic relationship with environmental activists and the news media to stoke fear about allegedly catastrophic climate change, providing a scientific imprimatur for an aggressive policy response while declining to air private doubts and the systematic uncertainties.

Two Statements, Two Perspectives. Two statements by two players in the climate debate illustrate the gap between the certainty that we are asked to believe and a branch of science shot through with uncertainty. “Basic physics explains it. If global warming isn’t happening, then virtually everything we know about physics is wrong,” states Jerry Taylor, president of a group that advocates for imposing a carbon tax on the United States. In so many words, Taylor says that the case for cutting carbon dioxide emissions is incontrovertible: Science demands conservatives support a carbon tax.

The second statement was made by an actual climate scientist, Dr. William Collins of the Lawrence Berkeley National Laboratory. Speaking in 2014 at an American Physical Society climate workshop, Collins, who was a lead author of the chapter evaluating climate models in the 2013 Intergovernmental Panel on Climate Change’s (IPCC) Fifth Assessment Report, talked of the challenges of dealing with several sources of uncertainty. “One of them is the huge uncertainties even in the historical forcings,” he said. Commenting on the “structural uncertainty” of climate models, he observed that there were “a number of processes in the climate system we just do not understand from basic physical principles. … We understand a lot of the physics in its basic form. We don’t understand the emergent behavior that results from it.”

The 2014 APS Climate Workshop: A Perfect Venue for Open Debate. Things are different when climate scientists are on the stand alongside their peers who know the science as well as they do, but disagree with the conclusions they draw from the same body of knowledge. Such open debate was on display at the 2014 American Physical Society climate workshop, which took place in Brooklyn and lasted just over seven hours. A unique event in the annals of the climate debate, it featured three climate scientists who support the climate change consensus and three climate scientists who do not. That format required an unusual degree of honesty about the limitations of the current understanding of the climate system. For the most part, circumspection, qualification, and candid admissions of lack of knowledge were the order of the day.

The IPCC’s Use and Abuse of Climate Models. The discussion in Brooklyn showed that putting the words “gold standard” and “IPCC” in the same sentence betrays a serious misunderstanding of the reliability of IPCC-sanctioned climate science.

“It’s clouds that prevent us from fundamentally in some reductive fashion understanding the climate system,” Princeton Atmospheric and Oceanic Sciences Professor Isaac Held, senior research scientist at the National Oceanic and Atmospheric Administration’s (NOAA) Geophysical Fluid Dynamics Laboratory, declared from the IPCC climate consensus bench. Collins made a similar point toward the end of the session. “My sense, to be honest with you, is that, and I think this all makes us a little bit nervous,” he said; “climate is not a problem that is amenable necessarily to reductionist treatment.”

Yet the IPCC’s top-line judgment in its Fifth Assessment Report—that it is “extremely likely” that the human emissions of greenhouse gases are the dominant cause of the warming since the mid-20th century—was described by Dr. Ben Santer of the Lawrence Livermore National Laboratory as likely to be conservative. The basis for this claim? General circulation models.

Santer’s claim would have sounded impressive if earlier in the day Collins had not presented charts showing GCMs performing poorly in reproducing temperature trends in the first half of the 20th century. Lindzen asked, what in the models causes the 1919-1940 warming? “Well, they miss the peak of the warming,” Held replied. While the IPCC is extremely certain that the late 20th century warming is mostly man-made, to this day it cannot collectively decide whether the earlier warming, which is of similar magnitude to the one that started in the mid-1970s, is predominantly man-made or natural. “It actually turns out to be very hard to use the past as prologue,” Collins conceded before explaining: “We do not have a first principles theory that tells us what we have to get right in order to have an accurate projection.” And, as Held noted, over the satellite era from 1979, GCMs overestimated warming in the tropics and the Arctic.

Steven Koonin, chairing the APS workshop, read an extract from chapter 10 of the IPCC’s Fifth Assessment Report. Model-simulated responses to forcings—including greenhouse gas forcings—“can be scaled up or down.” To match observations, some of the forcings in some of the models had to be scaled down. But when it came to making the centennial projections, the scaling factors were removed, probably resulting in a 25 to 30 percent over-projection of the 2100 warming, Koonin said. Only the transcript does full justice to the exchange that followed.

Dr. Koonin: But if the model tells you that you got the response to the forcing wrong by 30 percent, you should use that same 30 percent factor when you project out a century.
Dr. Collins: Yes. And one of the reasons we are not doing that is we are not using the models as [a] statistical projection tool.

Dr. Koonin: What are you using them as?
Dr. Collins: Well, we took exactly the same models that got the forcing wrong and which got sort of the projections wrong up to 2100.
Dr. Koonin: So, why do we even show centennial-scale projections?
Dr. Collins: Well, I mean, it is part of the [IPCC] assessment process.

“It is part of the assessment process” is not a scientific justification for using assumptions that are known to be empirically wrong to produce projections that help drive the political narrative of a planet spinning toward a climate catastrophe.
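To make Koonin’s arithmetic concrete, here is a toy calculation with invented numbers (nothing here comes from the IPCC or the workshop transcript) showing how dropping a scaling factor that was needed to match the historical record inflates a centennial projection by roughly the quoted 25 to 30 percent:

```python
# Toy illustration of Koonin's point, with hypothetical numbers only.
# Suppose a model's simulated response to forcing must be scaled by s (< 1)
# to reproduce the observed historical warming.
s = 0.78                      # hypothetical scaling factor
raw_projection_2100 = 3.0     # hypothetical unscaled warming projection, deg C
scaled_projection_2100 = s * raw_projection_2100

over_projection = raw_projection_2100 / scaled_projection_2100 - 1
print(f"Over-projection if the scaling is dropped: {over_projection:.0%}")  # ~28%
```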

Climate Science and Falsifiability. A lively exchange developed between Christy and Santer. Georgia Tech’s Dr. Judith Curry, the third member of the critics’ bench, had crossed swords with Santer on whether the IPCC’s statement that more than half the observed warming was anthropogenic was more than expert judgment. In subsequent testimony to the House Science, Space, and Technology Committee, Curry explained:

Science is often mischaracterized as the assembly and organization of data and as a collection of facts on which scientists agree. Science is correctly characterized as a process in which we keep exploring new ideas and changing our understanding of the world, to find new representations of the world that better explain what is observed. … Science is driven by uncertainty, disagreement, and ignorance—the best scientists cultivate doubt.

Curry’s approach to science stands firmly on the methods and philosophical standards of the scientific revolution—mankind’s single greatest intellectual achievement.

Politicized Science vs. Red/Blue Team Appraisals. The APS workshop provides the strongest corrective to date to the politicized IPCC process. It revealed the IPCC’s unscientific practice of using different assumptions for projecting future temperature increases from those used to get models to reproduce past temperature. One need not be a climate expert to see that something is seriously amiss with the near certainties promulgated by the IPCC. “I have got to say,” Koonin remarked to climate modeler William Collins, “that this business is even more uncertain than I thought, uncertainties in the forcing, uncertainties in the modelling, uncertainties in historical data. Boy, this is a tough business to navigate.”

Koonin came away championing Christy’s idea of a red/blue team appraisal, a term drawn from war-gaming assessments performed by the military rather than from politics, which EPA Administrator Scott Pruitt has since adopted.

A revealing indicator of its potential value is the response to it. A June 2017 Washington Post op-ed condemned calls for red/blue team appraisals as “dangerous attempts to elevate the status of minority opinions.”

Conclusion: Climate Policy’s Democratic Deficit. Open debate is as crucial in science as it is in a democracy. It would be contrary to democratic principles to dispense with debate and rely on the consensus of experts. The latter mode of inquiry inevitably produces prepackaged answers. But, as we have seen, relying on “consensus” buttresses erroneous science rather than allowing it to be falsified.

The IPCC was created to persuade, not to provide objectivity or to air disagreement. By contrast, the APS workshop gave both sides an impartial forum in which they could ask questions and probe the other side’s case. In doing so, it did more to expose the uncertainty, disagreement, and ignorance—to borrow Judith Curry’s words—around climate science than thousands of pages of IPCC assessment reports.

EPA Administrator Scott Pruitt’s proposal for red/blue team assessment is a logical progression from the workshop. The hostile reaction it elicited from leading consensus advocates strongly suggests that they fear debate. Climate scientists whose mission is to advance scientific understanding have nothing to fear and much to gain. Those who seek to use climate science as a policy battering ram have good reason to feel uncomfortable at the prospect. The biggest winner from a red/blue team assessment will be the public. If people are to buy into policies that will drastically alter their way of life, they should be fully informed of the consequences and justifications. To do otherwise would represent a subversion of democracy.

JC reflections

I’m very pleased to have this opportunity to revisit the APS Workshop on Climate Change, and am delighted to see a journalist of Darwall’s caliber interpreting it. Darwall’s essay provides an eloquent argument in support of a climate red team, perhaps more so than what I, Steve Koonin or John Christy have provided.

The thing that really clicked in my brain was this statement by Bill Collins:

We understand a lot of the physics in its basic form. We don’t understand the emergent behavior that results from it.

Trying to leverage our understanding of the infrared emission spectra from CO2 and other ‘greenhouse’ gases into a ‘consensus’ on what has caused the recent warming in a complex chaotic climate system is totally unjustified — this is eloquently stated by Bill Collins.

My key takeaway from the APS Workshop is that the scientists on both sides are considering the same datasets and generally agree on their utility (the exception being the two-decade debate between Santer and Christy on the uncertainties/utility of the satellite-derived tropospheric temperature data). There is some disagreement regarding climate models, although I can’t say that I disagreed with much if anything said by Held and Collins in this regard.

The real issue is the logic used in linking the varied and sundry lines of evidence, drawing conclusions, and assessing the uncertainties and areas of ambiguity and ignorance. I don’t think that any of the six scientists were using the same chain of reasoning about all this, even among scientists on the same ‘sides’.

Darwall’s essay deserves to be widely read. Here’s hoping that it will reignite the discussion surrounding the climate red team.

November 29, 2017 Posted by | Science and Pseudo-Science, Timeless or most popular | , | 2 Comments

Thoughts on the Public Discourse over Climate Change

By Richard Lindzen | Merion West | April 25, 2017

MIT atmospheric science professor Richard Lindzen suggests that many claims regarding climate change are exaggerated and unnecessarily alarmist.

Introduction:

For over 30 years, I have been giving talks on the science of climate change. When, however, I speak to a non-expert audience, and attempt to explain such matters as climate sensitivity, the relation of global mean temperature anomaly to extreme weather, that warming has decreased profoundly for the past 18 years, etc., it is obvious that the audience’s eyes are glazing over. Although I have presented evidence as to why the issue is not a catastrophe and may likely be beneficial, the response is puzzlement. I am typically asked how this is possible. After all, 97% of scientists agree, several of the hottest years on record have occurred during the past 18 years, all sorts of extremes have become more common, polar bears are disappearing, as is arctic ice, etc. In brief, there is overwhelming evidence of warming, etc. I tended to be surprised that anyone could get away with such sophistry or even downright dishonesty, but it is, unfortunately, the case that this was not evident to many of my listeners. I will try in this brief article to explain why such claims are, in fact, evidence of the dishonesty of the alarmist position.

The 97% meme:

This claim is actually a come-down from the 1988 claim on the cover of Newsweek that all scientists agree. In either case, the claim is meant to satisfy the non-expert that he or she has no need to understand the science. Mere agreement with the 97% will indicate that one is a supporter of science and superior to anyone denying disaster. This actually satisfies a psychological need for many people. The claim is made by a number of individuals and there are a number of ways in which the claim is presented. A thorough debunking has been given in the Wall Street Journal by Bast and Spencer. One of the dodges is to poll scientists as to whether they agree that CO2 levels in the atmosphere have increased, that the Earth has been warming (albeit only a little) and that man has played some part. This is, indeed, something almost all of us can agree on, but which has no obvious implication of danger. Nonetheless this is portrayed as support for catastrophism. Other dodges involve looking at a large number of abstracts where only a few actually deal with danger. If among these few, 97% support catastrophism, the 97% is presented as pertaining to the much larger totality of abstracts. One of my favorites is the recent claim in the Christian Science Monitor (a once respected and influential newspaper): “For the record, of the nearly 70,000 peer-reviewed articles on global warming published in 2013 and 2014, four authors rejected the idea that humans are the main drivers of climate change.” I don’t think that it takes an expert to recognize that this claim is a bizarre fantasy for many obvious reasons. Even the United Nations Intergovernmental Panel on Climate Change (this body, generally referred to as the IPCC, is the body created by the UN to provide ‘authoritative’ assessments of manmade climate change) doesn’t agree with the claim.

Despite the above, I am somewhat surprised that it was thought necessary to resort to such shenanigans. Since this issue fully emerged in public almost 30 years ago (and was instantly incorporated into the catechism of political correctness), there has been a huge increase in government funding of the area, and the funding has been predicated on the premise of climate catastrophism. By now, most of the people working in this area have entered in response to this funding. Note that governments have essentially a monopoly over the funding in this area. I would expect that the recipients of this funding would feel obligated to support the seriousness of the problem. Certainly, opposition to this would be a suicidal career move for a young academic. Perhaps the studies simply needed to properly phrase their questions so as to achieve levels of agreement for alarm that would be large, though perhaps not as large as was required for the 97% meme, especially if the respondents are allowed anonymity.

The ‘warmest years on record’ meme:

 
Figures 1a, 1b and 1c: station temperature anomalies (1a), their global average (1b), and the same average with the temperature scale stretched (1c).

This simple claim covers a myriad of misconceptions. Under these circumstances, it is sometimes difficult to know where to begin. As in any demonization project, it begins with the ridiculous presumption that any warming whatsoever (and, for that matter, any increase in CO2) is bad, and proof of worse to come. We know that neither of these presumptions is true. People retire to the Sun Belt rather than to the arctic. CO2 is pumped into greenhouses to enhance plant growth. The emphasis on ‘warmest years on record’ appears to have been a response to the observation that the warming episode from about 1978 to 1998 appeared to have ceased and temperatures have remained almost constant since 1998. Of course, if 1998 was the hottest year on record, all the subsequent years will also be among the hottest years on record. None of this contradicts the fact that the warming (ie, the increase of temperature) has ceased. Yet, somehow, many people have been led to believe that both statements cannot be simultaneously true. At best, this assumes a very substantial level of public gullibility. The potential importance of the so-called pause (for all we know, this might not be a pause, and the temperature might even cool), is never mentioned and rarely understood. Its existence means that there is something that is at least comparable to anthropogenic forcing. However, the IPCC attribution of most of the recent (and only the recent) warming episode to man depends on the assumption in models that there is no such competitive process.

The focus on the temperature record, itself, is worth delving into a bit. What exactly is this temperature that is being looked at? It certainly can’t be the average surface temperature. Averaging temperatures from places as disparate as Death Valley and Mount Everest is hardly more meaningful than averaging phone numbers in a telephone book (for those of you who still remember phone books). What is done, instead, is to average what are called temperature anomalies. Here, one takes thirty-year averages at each station and records the deviations from this average. These are referred to as anomalies and it is the anomalies that are averaged over the globe. The only attempt I know of to illustrate the steps in this process was by the late Stan Grotch at the Lawrence Livermore Laboratory. Figure 1a shows the scatter plot of the station anomalies. Figure 1b then shows the result of averaging these anomalies. Most scientists would conclude that the cancellation was remarkable and almost complete. Instead, one stretches the temperature scale by almost a factor of 10 so as to make the minuscule changes in Figure 1b look more significant. The result is shown in Figure 1c. There is quite a lot of random noise in Figure 1c, and this noise is a pretty good indication of the uncertainty of the analysis (roughly +/- 0.2C). The usual presentations show something considerably smoother. Sometimes this is the result of smoothing the record with something called running means. It is also the case that Grotch used data from the UK Meteorological Office, which was from land based stations. Including data from the ocean leads to smoother looking series, but the absolute accuracy of the data is worse, given that the ocean data mixes very different measurement techniques (buckets in old ship data, ship intakes after WW1, satellite measurements of skin temperature (which is quite different from surface temperature), and buoy data).
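As a concrete illustration of the procedure just described, here is a minimal sketch in Python using synthetic station data and an assumed 1951-1980 baseline. It is not the code of any actual temperature group, and real analyses grid and area-weight the stations rather than averaging them directly:

```python
import numpy as np

def station_anomalies(temps, years, base=(1951, 1980)):
    """Subtract each station's thirty-year baseline mean from its own record."""
    mask = (years >= base[0]) & (years <= base[1])
    out = {}
    for sid, series in temps.items():
        series = np.asarray(series, dtype=float)
        out[sid] = series - series[mask].mean()
    return out

def global_mean_anomaly(anoms):
    """Average the station anomalies year by year (unweighted here)."""
    return np.vstack(list(anoms.values())).mean(axis=0)

# Synthetic data: 50 stations with different base temperatures, a slow trend,
# and year-to-year noise -- purely illustrative numbers.
rng = np.random.default_rng(1)
years = np.arange(1900, 2017)
temps = {f"stn{i}": 10 + 5 * rng.random() + 0.01 * (years - 1900)
         + rng.normal(0, 1, years.size) for i in range(50)}

print(global_mean_anomaly(station_anomalies(temps, years))[-5:])
```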

Figure 2

These issues are summarized in Figure 2, which presents an idealized schematic of the temperature record and its uncertainty. We see very clearly that, because the rise ceases in 1998, 18 of the 18 warmest years on record (in the schematic presentation) have occurred during the last 18 years. We also see that the uncertainty, together with the smallness of the changes, offers ample scope for adjustments that dramatically alter the appearance of the record (note that uncertainty is rarely indicated on such graphs).
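The logic of the schematic is easy to verify with a toy series: any record that rises and then flattens will place its “warmest years” at the end, whether or not warming is continuing. A minimal sketch with invented numbers (nothing here is observational data):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2017)
# Warm by 0.015 C/yr until 1998, then hold flat; add small measurement noise.
trend = 0.015 * (np.minimum(years, 1998) - 1950)
anom = trend + rng.normal(0.0, 0.05, years.size)

top18 = np.sort(years[np.argsort(anom)[-18:]])
print("18 warmest years in the toy record:", top18)
# Nearly all fall after 1998, even though the underlying trend is flat there.
```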

At this point, one is likely to run into arguments over the minutiae of the temperature record, but this would simply amount to muddying the waters, so to speak. Nothing can alter the fact that the changes one is speaking about are small. Of course, ‘small’ is relative. Consider three measures of smallness.

Figure 3

Figure 3 shows the variations in temperature in Boston over a one-month period. The dark blue bars show the actual range of temperatures for each day. The dark gray bars show the climatological range of temperatures for that date, and the light gray bars show the range between the record-breaking low and record-breaking high for that date. In the middle is a red line. The width of that line corresponds to the range of temperature in the global mean temperature anomaly record for the past 175 years. This shows that the temperature change that we are discussing is small compared to our routine sensory experience. Keep this in mind when someone claims to ‘feel’ global warming.

The next measure is how the observed change compares with what we might expect from greenhouse warming. Now, CO2 is not the only anthropogenic greenhouse gas.

Figure 4. Red bar represents observations. Gray bars show model predictions.

When all of them are included, the UN IPCC finds that we are just about at the greenhouse forcing of climate that one expects from a doubling of CO2, and the temperature increase has been about 0.8C. If man’s emissions are responsible for all of the temperature change over the past 60 years, this still points to a lower sensitivity (sensitivity, by convention, generally refers to the temperature increase produced by a doubling of CO2 when the system reaches equilibrium) than produced by even the least sensitive models (model sensitivities range from 1.5C to 4.5C for a doubling of CO2). And the lower sensitivities are understood to be unproblematic. However, the IPCC only claims man is responsible for most of the warming. The sensitivity might then be much lower. Of course, the situation is not quite so simple, but calculations do show that for higher sensitivities one has to cancel some (and often quite a lot) of the greenhouse forcing with what was assumed to be unknown aerosol cooling in order for the models to remain consistent with past observations (a recent article in the Bulletin of the American Meteorological Society points out that there are, in fact, quite a number of arbitrary adjustments made to models in order to get some agreement with the past record). As the aerosol forcing becomes less uncertain, we see that high sensitivities have become untenable. This is entirely consistent with the fact that virtually all models used to predict ‘dangerous’ warming over-predict observed warming after the ‘calibration’ periods. That is to say, observed warming is small compared to what the models upon which concerns are based are predicting. This is illustrated in Figure 4. As I have mentioned, uncertainties allow for substantial adjustments in the temperature record. One rather infamous case involved NOAA’s adjustments in a paper by Karl et al that replaced the pause with continued warming. But it was easy to show that even with this adjustment, models continued to show more warming than even the ‘adjusted’ time series did. Moreover, most papers since have rejected the Karl et al adjustment (which, just coincidentally, came out with much publicity just before the Paris climate conference).
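A back-of-the-envelope version of this argument, using round numbers: the 3.7 W/m2 forcing for a CO2 doubling is a standard textbook value rather than a figure from the article, and the simple ratio below ignores ocean heat uptake, so it is illustrative only and understates the equilibrium sensitivity:

```python
F_2x = 3.7         # W/m^2, forcing from a CO2 doubling (assumed textbook value)
F_now = 3.7        # W/m^2, total anthropogenic forcing, taken here as roughly F_2x
dT_observed = 0.8  # deg C, warming attributed (at most) to that forcing

implied_response = dT_observed * F_2x / F_now
print(f"Implied warming per CO2 doubling: {implied_response:.1f} C")
print("Model range quoted in the text: 1.5-4.5 C")
```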

The third approach is somewhat different. Instead of arguing that the change is not small, it argues that the change is ‘unprecedented.’ This is Michael Mann’s infamous ‘hockey stick.’ Here, Mann used tree rings from bristle cone pines to estimate Northern Hemisphere temperatures back hundreds of years. This was done by calibrating the tree ring data with surface observations for a thirty year period, and using this calibration to estimate temperatures in the distant past in order to eliminate the medieval warm period. Indeed, this reconstruction showed flat temperatures for the past thousand years. The usual test for such a procedure would be to see how the calibration worked for observations after the calibration period. Unfortunately, the results failed to show the warming found in the surface data. The solution was starkly simple and stupid. The tree ring record was cut off at the end of the calibration period and replaced by the actual surface record. In the Climategate emails (Climategate refers to a huge release of emails from various scientists supporting alarm where the suppression of opposing views, the intimidation of editors, the manipulation of data, etc. were all discussed), this was referred to as Mann’s trick.

The whole point of the above was to make clear that we are not concerned with warming per se, but with how much warming. It is essential to avoid the environmental tendency to regard anything that may be bad in large quantities as something to be avoided at any level, however small. In point of fact, small warming is likely to be beneficial on many counts. If you have assimilated the above, you should be able to analyze media presentations like this one to see that, amidst all the rhetoric, the author is pretty much saying nothing while even misrepresenting what the IPCC says.

The extreme weather meme:

Every line weather forecaster knows that extreme events occur someplace virtually every day. The present temptation to attribute these normally occurring events to climate change is patently dishonest. Roger Pielke, Jr. actually wrote a book detailing the fact that there is no trend in virtually any extreme event (including tornados, hurricanes, droughts, floods, etc.) with some actually decreasing. Even the UN’s IPCC acknowledges that there is no basis for attributing such events to anthropogenic climate change.

Figure 5. Temperature map for North America.

The situation with respect to extreme temperatures actually contradicts not just observations but basic meteorological theory. Figure 5 shows a map of temperatures for North America on February 27, 2008. Extreme temperatures at any location occur when air motions carry air from the coldest or warmest points on the map. Now, in a warmer climate, it is expected that the temperature difference between the tropics and the high latitudes will decrease. Thus the range of possible extremes will be reduced. More important is the fact that the motions that carry these temperatures arise from a process called baroclinic instability, and this instability derives from the magnitude of the aforementioned temperature difference. Thus, in a warmer world, these winds will be weaker and less capable of carrying extreme temperatures to remote locations. Claims of greater extremes in temperature simply ignore the basic physics, and rely, for their acceptance, on the ignorance of the audience.

The claims of extreme weather transcend the usual use of misleading claims. They often amount to claims for the exact opposite of what is actually occurring. The object of the claims is simply to be as scary as possible, and if that requires claiming the opposite of the true situation, so be it.

Sea level rise:

Globally averaged sea level appears to have been rising at the rate of about 6 inches a century for thousands of years. Until the advent of satellites, sea level was essentially measured with tide gauges, which measure the sea level relative to the land level. Unfortunately, the land level is also changing, and as Emery and Aubrey note, tectonics are the major source of change at many locations. In 1979 we began to use satellites to measure actual sea level. The results were surprisingly close to the previous tide gauge estimates, though slightly higher, but one sees from Wunsch et al (DOI: 10.1175/2007JCLI1840.1) that one is in no position to argue that small differences from changing methodologies represent acceleration. Regardless, the changes are small compared to the claims that suggest disastrous changes. However, even in the early 1980s, advocates of warming alarm like S. Schneider argued that sea level would be an easily appreciated scare tactic. The fact that people like Al Gore and Susan Solomon (former head of the IPCC’s Scientific Assessment) have invested heavily in ocean front property supports the notion that the issue is propagandistic rather than scientific.

Arctic sea ice:

Satellites have been observing arctic (and Antarctic) sea ice since 1979. Every year there is a pronounced annual cycle where the almost complete winter coverage is much reduced each summer. During this period there has been a noticeable downtrend in summer ice in the arctic (with the opposite behavior in the Antarctic), though in recent years, the coverage appears to have stabilized. In terms of climate change, 40 years is, of course, a rather short interval. Still, there have been the inevitable attempts to extrapolate short period trends, leading to claims that the arctic should have already reached ice-free conditions. Extrapolating short term trends is obviously inappropriate. Extrapolating surface temperature changes from dawn to dusk would lead to a boiling climate in days. This would be silly. The extrapolation of arctic summer ice coverage looks like it might be comparably silly. Moreover, although the satellite coverage is immensely better than what was previously available, the data is far from perfect. The satellites can confuse ice topped with melt water with ice-free regions. In addition, temperature might not be the main cause of reduced sea ice coverage. Summer ice tends to be fragile, and changing winds play an important role in blowing ice out of the arctic sea. Associating changing summer sea ice coverage with climate change is, itself, dubious. Existing climate models hardly unambiguously predict the observed behavior. Predictions for 2100 range from no change to complete disappearance. Thus, it cannot be said that the sea ice behavior confirms any plausible prediction.

It is sometimes noted that concerns for disappearing arctic sea ice were issued in 1922, suggesting that such behavior is not unique to the present. The data used, at that time, came from the neighborhood of Spitzbergen. A marine biologist and climate campaigner has argued that what was described was a local phenomenon, but, despite the claim, the evidence presented by the author is far from conclusive. Among other things, the author was selective in his choice of ‘evidence.’

All one can say, at this point, is that the behavior of arctic sea ice represents one of the numerous interesting phenomena that the earth presents us with, and for which neither the understanding nor the needed records exist. It probably pays to note that melting sea ice does not contribute to sea level rise. Moreover, man has long dreamt of the opening of this Northwest Passage. It is curious that it is now viewed with alarm. Of course, as Mencken noted, “The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by an endless series of hobgoblins, most of them imaginary.” The environmental movement has elevated this aim well beyond what Mencken noted.

Polar bear meme:

I suspect that Al Gore undertook considerable focus-group research to determine the remarkable effectiveness of the notion that climate change would endanger polar bears. His use of an obviously photoshopped picture of a pathetic polar bear on an ice floe suggests this. As Susan Crockford, a specialist in polar bear evolution, points out, there had indeed been a significant decrease in the polar bear population in the past due to hunting and, earlier, due to commercial exploitation of polar bear fur. This has led to successful protective measures and a sufficient recovery of the polar bear population that hunting has again been permitted. There is no evidence that changes in summer sea ice have had any adverse impact on the polar bear population, and, given that polar bears can swim for over a hundred miles, there seems to be little reason to suppose that it would. Nonetheless, for the small community of polar bear experts, the climate related concerns have presented an obvious attraction.

Ocean acidification:

This is again one of those obscure claims that sounds scary but doesn’t stand up to scrutiny. Ever since the acid rain scare, it has been realized that the public responds with alarm to anything with the word ‘acid’ in it. In point of fact, the ocean is basic rather than acidic (ie, its pH is always appreciably higher than 7, and there is no possibility of increasing levels of atmospheric CO2 bringing it down to 7; note that pH is a measure of acidity or basicity: values greater than 7 are basic and less than 7 acidic), and the purported changes simply refer to making the ocean a bit less basic. However, such a more correct description would lack the scare component. As usual, there is so much wrong with this claim that it takes a fairly long article to go over it all. I recommend the following source.
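For readers who want the arithmetic behind “a bit less basic”, here is a small sketch using the standard definition of pH as the negative base-10 logarithm of the hydrogen-ion concentration; the 8.2 and 8.1 figures are commonly cited estimates of the change since pre-industrial times, not numbers taken from the article:

```python
def hydrogen_ion(pH):            # mol/L
    return 10.0 ** (-pH)

# Commonly cited estimate (assumed, not from the article): surface-ocean pH
# of ~8.2 falling to ~8.1.
change = hydrogen_ion(8.1) / hydrogen_ion(8.2) - 1
print(f"[H+] increase for a 0.1 pH drop: {change:.0%} -- still basic (pH > 7)")
```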

Death of coral reefs:

The alleged death of coral reefs is partly linked to the acidification issue above, and as we see, the linkage is almost opposite to what is claimed. There is also the matter of warming per se leading to coral bleaching. A typical alarmist presentation can be found here.

The article is behind a paywall, but most universities provide access to Nature. The reasoned response to this paper is provided here.

As Steele, the author of the above, points out, bleaching has common causes other than warming and is far from a death sentence for corals whose capacity to recover is substantial. This article is a bit polemical, but essentially correct.

Global warming as the cause of everything:

As we see from the above, there is a tendency to blame everything unpleasant on global warming. The absurd extent of this tendency is illustrated here. That hasn’t stopped the EPA from using such stuff to claim large health benefits for its climate change policies. Moreover, I fear that with so many claims, there is always the question ‘what about ….?’ Hardly anyone has the time and energy to deal with the huge number of claims. Fortunately, most are self-evidently absurd. Nation magazine recently came up with what is a bit of a champion in this regard. CO2, it should be noted, is hardly poisonous. On the contrary, it is essential for life on our planet and levels as high as 5000 ppm are considered safe on our submarines and on the space station (current atmospheric levels are around 400 ppm, while, due to our breathing, indoor levels can be much higher). The Nation article is typical in that it makes many bizarre claims in a brief space. It argues that a runaway greenhouse effect on Venus led to temperatures hot enough to melt lead. Of course, no one can claim that the earth is subject to such a runaway, but even on Venus, the hot surface depends primarily on the closeness of Venus to the sun and the existence of a dense sulfuric acid cloud covering the planet. Relatedly, Mars, which also has much more CO2 than the earth, is much further from the sun and very cold. As we have seen many times already, such matters are mere details when one is in the business of scaring the public.

Concluding remarks:

The accumulation of false and/or misleading claims is often referred to as the ‘overwhelming evidence’ for forthcoming catastrophe. Without these claims, one might legitimately ask whether there is any evidence at all.

Despite this, climate change has been the alleged motivation for numerous policies, which, for the most part, seem to have done more harm than the purported climate change, and have the obvious capacity to do much more. Perhaps the best that can be said for these efforts is that they are acknowledged to have little impact on either CO2 levels or temperatures despite their immense cost. This is relatively good news, since there is ample evidence that both changes are likely to be beneficial, although the immense waste of money is not.

I haven’t spent much time on the details of the science, but there is one thing that should spark skepticism in any intelligent reader. The system we are looking at consists of two turbulent fluids interacting with each other. They are on a rotating planet that is differentially heated by the sun. A vital constituent of the atmospheric component is water in the liquid, solid and vapor phases, and the changes in phase have vast energetic ramifications. The energy budget of this system involves the absorption and reemission of about 200 watts per square meter. Doubling CO2 involves a 2% perturbation to this budget. So do minor changes in clouds and other features, and such changes are common. In this complex multifactor system, what is the likelihood that the climate (which, itself, consists of many variables and not just the globally averaged temperature anomaly) is controlled by this 2% perturbation in a single variable? Believing this is pretty close to believing in magic. Instead, you are told that it is believing in ‘science.’ Such a claim should be a tip-off that something is amiss. After all, science is a mode of inquiry rather than a belief structure.
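The “2%” figure follows from simple division, taking the standard ~3.7 W/m2 forcing for a CO2 doubling (a textbook value, not given in the text) against the ~200 W/m2 budget quoted above:

```python
budget = 200.0  # W/m^2 absorbed and re-emitted, as quoted in the text
F_2x = 3.7      # W/m^2 forcing from doubling CO2 (assumed standard value)
print(f"Perturbation from doubling CO2: {F_2x / budget:.1%}")  # about 1.9%, roughly 2%
```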

Richard Lindzen is the Alfred P. Sloan Professor of Atmospheric Sciences, Emeritus at Massachusetts Institute of Technology.

May 2, 2017 Posted by | Deception, Science and Pseudo-Science, Timeless or most popular | | Leave a comment

What The IPCC Said About Glaciers In 1990

By Paul Homewood | Not A Lot Of People Know That | April 25, 2017

When we talk about glaciers retreating, it is worth recalling what the first IPCC Report in 1990 had to say about the matter:

 

[image: excerpt from the IPCC First Assessment Report (1990) on glacier retreat]

http://www.ipcc.ch/ipccreports/far/wg_I/ipcc_far_wg_I_full_report.pdf

 

In other words, glaciers began receding in the second half of the 19thC, and the fastest rate of retreat was 1920-60, before CO2 emissions could have had any significant impact.

 

The IPCC also added the following chart showing how, on a range of glaciers, rapid retreat began in the 19thC.

[image: chart showing rapid glacier retreat on a range of glaciers beginning in the 19thC]

 

They also added:

[image: further excerpt from the 1990 IPCC report]

 

And:

[image: further excerpt from the 1990 IPCC report]

April 25, 2017 Posted by | Science and Pseudo-Science, Timeless or most popular | | Leave a comment