Erroneous climate change study reported far and wide, corrections few and far between
RT | November 15, 2018
As the world grapples with extreme weather and wildfires, the issue of climate change is at the forefront of policy decisions, scientific research and media coverage – but a bias towards alarmism is proving irresistible.
A study recently published in the journal Nature suggested that “ocean warming is at the high end of previous estimates,” based on atmospheric data taken between 1991 and 2016. The oceans absorbed 60 percent more heat per year than estimated by the United Nations’ Intergovernmental Panel on Climate Change in 2014, the authors claimed.
The research was co-authored by an expert – a Princeton geoscientist no less – so the disturbing news spread like… well, wildfire across the news media, with each headline more breathless than the last. The only problem was, the numbers used to generate the conclusions in the research were off; way off.
One climate change researcher and statistician wasn’t so convinced by the study: Nicholas Lewis took a closer look at the numbers and spotted a few glaring errors in the researchers’ calculations.
“Unfortunately their work involves many assumptions where there is scope for subjective choices by the authors, so it is difficult to validate those assumptions,” Lewis told Reason.com.
In fact, the warming of the world’s oceans was overstated by approximately 30 percent, a substantial margin of error by most standards.
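Lewis suggested that the paper’s central trend may effectively have been computed from the endpoints of the series rather than by a least-squares fit of all the data. The toy Python sketch below (entirely illustrative synthetic data, not the paper’s) shows how the two choices can disagree on the same noisy series, since the endpoint method lets noise in just two values drive the result:

```python
import random

random.seed(0)
years = list(range(1991, 2017))            # 26 annual values, matching the study period
true_slope = 1.0                           # hypothetical units per year
series = [true_slope * (y - years[0]) + random.gauss(0, 3) for y in years]

# Least-squares trend: uses every data point
n = len(years)
xm = sum(years) / n
ym = sum(series) / n
ls_slope = (sum((x - xm) * (y - ym) for x, y in zip(years, series))
            / sum((x - xm) ** 2 for x in years))

# Endpoint method: ignores all interior points, so noise in the two
# endpoint values propagates directly into the computed trend
ep_slope = (series[-1] - series[0]) / (years[-1] - years[0])

print(f"least-squares trend: {ls_slope:.2f}/yr, endpoint trend: {ep_slope:.2f}/yr")
```

With realistic noise levels, the endpoint estimate is roughly twice as uncertain as the least-squares one, which is why the choice of method can move a headline trend by tens of percent.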
Lewis also questioned the “failure of the original peer review and editorial process to pick up the fairly obvious statistical problems in the original paper.”
In response, the study’s co-author and Scripps Institution of Oceanography climate scientist Ralph Keeling has acknowledged that there may be issues with the numbers, but insists that once they are rectified, it won’t affect the overall conclusion.
The issues “do not invalidate the study’s methodology or the new insights into ocean biogeochemistry on which it is based,” Keeling said in an addendum to the original news release.
While there may be less cause for immediate alarm about ocean warming, the episode does raise concern over how much scrutiny is applied to research that supports the scientific consensus compared with research that challenges it.
However, at the time of writing, only a handful of outlets have published news of the correction, compared with the multitude that shared the original, erroneous findings. Correction coverage just doesn’t generate clicks quite like alarmism does, after all.
How the CDC Uses Fear to Increase Demand for Flu Vaccines
Collective Evolution | November 9, 2018
The CDC claims that its recommendation that everyone aged six months and up should get an annual flu shot is firmly grounded in science. The mainstream media reinforce this characterization by misinforming the public about what the science says.
A New York Times article from earlier this year, for example, in order to persuade readers to follow the CDC’s recommendation, cited scientific literature reviews of the prestigious Cochrane Collaboration to support its characterization of the influenza vaccine as both effective and safe. The Times claimed that the science showed that the vaccine represented “a big payoff in public health” and that harms from the vaccine were “almost nonexistent”.
What the Cochrane researchers actually concluded, however, was that their findings “seem to discourage the utilization of vaccination against influenza in healthy adults as a routine public health measure” (emphasis added). Furthermore, given the known serious harms associated with specific flu vaccines and the CDC’s recommendation that infants as young as six months get a flu shot despite an alarming lack of safety studies for children under two, “large-scale studies assessing important outcomes, and directly comparing vaccine types are urgently required.”
The CDC also recommends the vaccine for pregnant women despite the total absence of randomized controlled trials assessing the safety of this practice for both expectant mother and unborn child. (This is all the more concerning given that multi-dose vials of the inactivated influenza vaccine contain mercury, a known neurotoxin that can cross both the placental and blood-brain barriers and accumulate in the brain.)
The Cochrane researchers also found “no evidence” to support the CDC’s assumptions that the vaccine reduces transmission of the virus or the risk of potentially deadly complications—the two primary justifications claimed by the CDC to support its recommendation.
The CDC nevertheless pushes the influenza vaccine by claiming that it prevents large numbers of hospitalizations and deaths from flu. To reinforce its message that everyone should get an annual flu shot, the CDC claims that hundreds of thousands of people are hospitalized and tens of thousands die each year from influenza. These numbers are generally relayed by the mainstream media as though representative of known cases of flu. The aforementioned New York Times article, for example, stated matter-of-factly that, of the 9 million to 36 million people whom the CDC estimates get the flu each year, “Somewhere between 140,000 and 710,000 of them require hospitalization, and 12,000 to 56,000 die each year.”
On September 27, the CDC claimed at a press conference that 80,000 people died from the flu during the 2017–2018 flu season, and the media parroted the number as though it were fact.
What is not being communicated to the public is that the CDC’s numbers do not represent known cases of influenza. They do not come directly from surveillance data, but are rather estimates based on controversial mathematical models that may greatly overestimate the numbers.
To put the matter into perspective, the average number of deaths each year for which the cause is actually attributed on death certificates to the influenza virus is little more than 1,000.
The consequence of the media parroting the CDC’s numbers as though uncontroversial is that the public is routinely misinformed about the impact of influenza on society and the ostensible benefits of the vaccine. Evidently, that’s just the way the CDC wants it, since the agency has also outlined a public relations strategy of using fear marketing to increase demand for flu shots.
The CDC’s “Problem” of “Growing Health Literacy”
Before looking at some of the problems with the CDC’s estimates, it’s useful to examine the mindset at the agency with respect to how CDC officials view their role in society. An instructive snapshot of this mindset was provided in a presentation by the CDC’s director of media relations on June 17, 2004, at a workshop for the Institute of Medicine (IOM).
In its presentation, the CDC outlined a “‘Recipe’ for Fostering Public Interest and High Vaccine Demand”. It called for encouraging medical experts and public health authorities to “state concern and alarm” about “and predict dire outcomes” from the flu season. To inspire the necessary fear, the CDC encouraged describing each season as “very severe”, “more severe than last or past years”, and “deadly”.
One problem for the CDC is the accurate view among healthy adults that they are not at high risk of serious complications from the flu. As the presentation noted, “achieving consensus by ‘fiat’ is difficult”—meaning that just because the CDC makes the recommendation doesn’t mean that people will actually follow it. Therefore it was necessary to cause “concern, anxiety, and worry” among young, healthy adults who regard the flu as an inconvenience rather than something to be terribly afraid of.
The larger conundrum for the CDC is the proliferation of information available to the public on the internet. As the CDC bluntly stated it, “Health literacy is a growing problem”.
In other words, the CDC considers it to be a problem that people are increasingly doing their own research and becoming more adept at educating themselves about health-related issues. And, as we have already seen, the CDC has very good reason to be concerned about people doing their own research into what the science actually tells us about vaccines.
One prominent way the CDC inspires the necessary fear, of course, is with its estimates of the numbers of people who are hospitalized or die each year from the flu.
The Problems with the CDC’s Estimates of Annual Flu Deaths
Among the relevant facts that are routinely not relayed to the public by the media when the CDC’s numbers are cited is that only about 7% to 15% of what are called “influenza-like illnesses” are actually caused by influenza viruses. In fact, there are over 200 known viruses that cause influenza-like illnesses, and to determine whether an illness was actually caused by the influenza virus requires laboratory testing—which isn’t usually done.
Furthermore, as the authors of a 2010 Cochrane review stated, “At best, vaccines may only be effective against influenza A and B, which represent about 10% of all circulating viruses” that are known to cause influenza-like symptoms. (That’s the same review, by the way, that the Times mischaracterized as having found the vaccine to be “a big payoff in public health”.)
While the CDC now uses a range of numbers to describe annual deaths attributed to influenza, it used to claim that on average “about 36,000 people per year in the United States die from influenza”. The CDC switched to using a range in response to criticism that the average was misleading because there is great variability from year to year and decade to decade. And while switching to the range did address that criticism, other serious problems remain.
One major problem with “the much publicized figure of 36,000”, as Peter Doshi observed in a 2005 BMJ article, was that it “is not an estimate of yearly flu deaths, as widely reported in both the lay and scientific press, but an estimate—generated by a model—of flu-associated death.”
Of course, as the media routinely remind us when it comes to the subject of vaccines and autism (but seem to forget when it comes to the CDC’s flu numbers), temporal association does not necessarily mean causation. Just because someone dies after an influenza infection does not mean that it was the flu that killed him. And, furthermore, many if not most people diagnosed with “the flu” may not have actually been infected with the influenza virus at all, given the large number of other viruses that cause the same symptoms and the general lack of lab confirmation.
The “36,000” number came from a 2003 CDC study published in JAMA that acknowledged the difficulty of estimating deaths attributable to influenza, given that most cases are not lab-confirmed. Yet, rather than acknowledging the likelihood that a substantial percentage of reported cases actually had nothing to do with the influenza virus, the CDC researchers treated this uncertainty as though it meant only that flu-related deaths must be significantly higher than the reported numbers.
The study authors pointed out that seasonal influenza is “associated with increased hospitalizations and mortality for many diagnoses”, including pneumonia, and they assumed that many cases attributed to other illnesses were actually caused by influenza. They therefore developed a mathematical model to estimate the number by instead using as their starting point all “respiratory and circulatory” deaths, which include all “pneumonia and influenza” deaths.
Of course, not all respiratory and circulatory deaths are caused by the influenza virus. Yet the CDC treats this number as “an upper bound”—as though it were possible that 100% of all respiratory and circulatory deaths occurring in a given flu season were caused by influenza. The CDC also treats the total number of pneumonia and influenza deaths as “a lower bound for deaths associated with influenza”. The CDC states on its website that reported pneumonia and influenza deaths “represent only a fraction of the total number of deaths from influenza”—as though all pneumonia deaths were caused by influenza!
The CDC certainly knows better. In fact, at the same time, the CDC contradictorily acknowledges that not all pneumonia and influenza deaths are flu-related; it has estimated that in an average year 2.1% of all respiratory and circulatory deaths and 8.5% of all pneumonia and influenza deaths are influenza-associated.
So how can the CDC maintain both (a) that 8.5% of pneumonia and influenza deaths are flu-related, and (b) that the combined total of all pneumonia and influenza deaths represents only a fraction of flu-caused deaths? How can both be true?
The answer is that the CDC simply assumes that influenza-associated deaths are so greatly underreported within the broader category of deaths coded under “respiratory and circulatory” that they dwarf all those coded under “pneumonia and influenza”.
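That assumption can be made concrete with rough, illustrative annual totals. The figures below are ballpark stand-ins for a typical US year (not official CDC data); only the two percentages come from the CDC estimates quoted above:

```python
# Illustrative annual US totals (assumptions, not official figures)
resp_circ_deaths = 1_700_000   # all "respiratory and circulatory" deaths
pi_deaths        = 62_000      # "pneumonia and influenza" subset of those

flu_assoc_total = 0.021 * resp_circ_deaths   # CDC: 2.1% of R&C deaths flu-associated
flu_assoc_pi    = 0.085 * pi_deaths          # CDC: 8.5% of P&I deaths flu-associated

outside_pi = flu_assoc_total - flu_assoc_pi
share_outside = outside_pi / flu_assoc_total

print(f"flu-associated deaths (model): {flu_assoc_total:,.0f}")
print(f"...of which coded as P&I:      {flu_assoc_pi:,.0f}")
print(f"share coded elsewhere:         {share_outside:.0%}")
```

On these assumed totals, roughly 85 percent of the model’s flu-associated deaths would carry an underlying cause of death other than pneumonia or influenza—precisely the kind of attribution Doshi questioned.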
In his aforementioned BMJ article, Peter Doshi reasonably asked, “Are US flu death figures more PR than science?” As he put it, “US data on influenza deaths are a mess.” The CDC “acknowledges a difference between flu death and flu associated death yet uses the terms interchangeably. Additionally, there are significant statistical incompatibilities between official estimates and national vital statistics data. Compounding these problems is a marketing of fear—a CDC communications strategy in which medical experts ‘predict dire outcomes’ during flu seasons.”
Illustrating the problem, Doshi observed that for the year 2001, the total number of reported pneumonia and influenza deaths was 62,034. Yet, of those, less than one half of one percent were attributed to influenza. Furthermore, of the mere 257 cases blamed on the flu, only 7% were laboratory confirmed. That’s only 18 cases of lab confirmed influenza out of 62,034 pneumonia and influenza deaths—or just 0.03%, according to the CDC’s own National Center for Health Statistics (NCHS).
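Doshi’s arithmetic for 2001 is easy to verify (the 7% lab-confirmation rate is as reported; the confirmed count is rounded to a whole case):

```python
pi_deaths = 62_034                     # reported pneumonia and influenza deaths, 2001
flu_coded = 257                        # of those, attributed to influenza
confirmed = round(flu_coded * 0.07)    # only 7% were laboratory confirmed

print(flu_coded / pi_deaths)   # ~0.004, i.e. under half of one percent
print(confirmed)               # 18 lab-confirmed influenza deaths
print(confirmed / pi_deaths)   # ~0.0003, i.e. about 0.03%
```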
Setting aside pneumonia and looking just at influenza-associated deaths from 1979 to 2002, the annual average according to the NCHS data was only 1,348.
The CDC’s mortality estimates would be compatible with the NCHS data, Doshi argued, “if about half of the deaths classed by the NCHS as pneumonia were actually flu initiated secondary pneumonias.” But the NCHS criteria themselves strongly indicated otherwise, stating that “Cause-of-death statistics are based solely on the underlying cause of death … defined by WHO as ‘the disease or injury which initiated the train of events leading directly to death.’”
The CDC researchers who authored the 2003 study acknowledged that underlying cause-of-death coding “represents the disease or injury that initiated the chain of morbid events that led directly to the death”—yet they fallaciously coupled pneumonia deaths with influenza deaths in their model anyway.
At the time Doshi was writing, the CDC was publicly claiming that each year “about 36,000 [Americans] die from flu”, and as seen with the example from the New York Times, the range of numbers is likewise presented as though representative of known cases of flu-caused deaths. Yet the lead author of that very CDC study, William Thompson of the CDC’s National Immunization Program, acknowledged that the number rather represented “a statistical association” that does not necessarily mean causation. In Thompson’s own words, “Based on modelling, we think it’s associated. I don’t know that we would say that it’s the underlying cause of death.” (Emphasis added.)
Of course, the CDC does say it’s the underlying cause of death in its disingenuous public relations messaging. As Doshi noted, Thompson’s acknowledgment is “incompatible” with the CDC’s “misrepresentation” of its flu deaths estimates. The CDC, Doshi further observed, was “working in manufacturers’ interest by conducting campaigns to increase flu vaccination” based on estimates that are “statistically biased”, including by “arbitrarily linking flu with pneumonia”.
More “Limitations” of the CDC’s Models
While the media present the CDC’s numbers as though uncontroversial, there is in fact “substantial controversy” surrounding flu death estimates, as a 2005 study published in the American Journal of Epidemiology noted. One problem is that the CDC’s models use virus surveillance data that “have not been made available in the public domain”, which means that its results are not reproducible. (As the journal Cell reminds us, “the reproducibility of science” is “a lynch pin of credibility”.) And there are otherwise “significant limitations” of the CDC’s models that potentially result in “spurious attribution of deaths to influenza.”
To illustrate, when Peter Doshi requested access to virus circulation data, the CDC refused to allow it unless he granted the CDC co-authorship of the study he was undertaking—which Doshi appropriately refused.
In the New York Review of Books, Helen Epstein has pointed out how the CDC’s dire warnings about the 2009 H1N1 “swine flu” never came to pass, as well as how “some experts maintain” that the CDC’s studies “overestimate influenza mortality, particularly among children.” While the number of confirmed H1N1-related child deaths was 371, the CDC’s claimed number was 1,271 or more. To arrive at its number, the CDC used a multiplier based on certain assumptions. One assumption is that some cases are missed either because lab confirmation wasn’t sought or because the children weren’t in a hospital when they died and so weren’t tested. Another is that a certain percentage of test results will be false negatives.
However, Epstein pointed out, “according to CDC guidelines at the time”, any child hospitalized with severe influenza symptoms should have been tested for H1N1. Furthermore, “deaths in children from infectious diseases are rare in the US, and even those who didn’t die in hospitals would almost certainly have been autopsied (and tested for H1N1)…. Also, the test is accurate and would have missed few cases. Because it’s unlikely that large numbers of actual cases of US child deaths from H1N1 were missed, the lab-confirmed count (371) is probably much closer to the modeled numbers … which are in any case impossible to verify.”
As already indicated, another assumption the CDC makes is that excess mortality in winter is mostly attributable to influenza. A 2009 Slate article described this as among a number of “potential glitches” that make the CDC’s reported flu deaths the “‘least bad’ estimate”. Referring to earlier methods that associated flu deaths with wintertime deaths from all causes, the article observed that this risked blaming influenza for deaths from car accidents caused by icy roads. And while the updated method presented in the 2003 CDC study excluded such causes of death implausibly linked to flu, related problems remain.
As the aforementioned American Journal of Epidemiology study noted, the updated method “reduces, but does not eliminate, the potential for spurious correlation and spurious attribution of deaths to influenza.” Furthermore, “Methods based on seasonal pattern begin from the assumption that influenza is the major source of excess winter death.” The CDC’s models therefore still “are in danger of being confounded by other seasonal factors.” The authors also stated that they could not conclude from their own study “that influenza is a more important cause of winter mortality on an annual timescale than is cold weather.”
As a 2002 BMJ study stated, “Cold weather alone causes striking short term increases in mortality, mainly from thrombotic and respiratory disease. Non-thermal seasonal factors such as diet may also affect mortality.” (Emphasis added.) The study estimated that of annual excess winter deaths, only “2.4% were due to influenza either directly or indirectly.” It concluded that, “With influenza causing such a small proportion of excess winter deaths, measures to reduce cold stress offer the greatest opportunities to reduce current levels of winter mortality.”
CDC researchers themselves acknowledge that their models are “subject to some limitations.” In a 2009 study published in the American Journal of Public Health, CDC researchers admitted that “simply counting deaths for which influenza has been coded as the underlying cause on death certificates can lead to both over- and underestimates of the magnitude of influenza-associated mortality.” (Emphasis added.) Yet they offered no comment on how, then, their models account for the likelihood that many reported cases of “flu” had nothing whatsoever to do with the influenza virus. Evidently, this is because they don’t, as indicated by the CDC’s treatment of all influenza deaths plus pneumonia deaths as a “lower bound”.
For another illustration, since it takes two or three years before the data is available to be able to estimate flu hospitalizations and deaths by the usual means, the CDC has also developed a method to make preliminary estimates for a given year by “adjusting” the numbers of reported lab-confirmed cases from selected surveillance areas around the country. The “80,000” figure claimed for last season’s flu deaths is just such an estimate. The way the CDC “adjusts” the numbers is by multiplying the number of lab-confirmed cases by a certain amount, ostensibly “to correct for underreporting”. To determine the multiplier, the CDC makes a number of assumptions to estimate (a) the likelihood that a person hospitalized for any respiratory illness would be tested for influenza and (b) the likelihood that a person with influenza would test positive.
Once the CDC has its estimated hospitalization rate, it then multiplies that number by the ratio of deaths to hospitalizations to arrive at its estimated mortality rate. Thus, any overestimation of the hospitalization rate is also compounded into its estimated death rate.
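The adjustment pipeline described above can be sketched as follows. Every number here is a hypothetical stand-in (the CDC’s actual per-season parameters vary and are not reproduced here); the point is how the assumed probabilities become a multiplier, and how any inflation in the multiplier flows straight through to the death estimate:

```python
# Hypothetical inputs (illustrative values, not actual CDC parameters)
confirmed_hosp  = 20_000   # lab-confirmed flu hospitalizations in surveillance areas
p_tested        = 0.40     # assumed chance a respiratory inpatient is tested for flu
sensitivity     = 0.85     # assumed chance an infected patient tests positive
deaths_per_hosp = 0.07     # assumed ratio of deaths to hospitalizations

# "Correcting for underreporting": divide by the probability that a true
# case is detected at all, i.e. multiply by 1 / (p_tested * sensitivity)
multiplier = 1 / (p_tested * sensitivity)
est_hospitalizations = confirmed_hosp * multiplier

# Any overestimation in the hospitalization figure is compounded here
est_deaths = est_hospitalizations * deaths_per_hosp

print(f"multiplier: {multiplier:.2f}")
print(f"estimated hospitalizations: {est_hospitalizations:,.0f}")
print(f"estimated deaths: {est_deaths:,.0f}")
```

If doctors preferentially test patients they already suspect of having influenza, the true detection probability is higher than `p_tested * sensitivity` assumes, the multiplier is too large, and both the hospitalization and death estimates inflate together.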
One obvious problem with this is the underlying assumption that the percentage of people hospitalized for respiratory illness who have the flu is the same as the percentage of those who are hospitalized for respiratory illness, are actually tested, and test positive. This implies that doctors are no more likely to seek lab confirmation for people who actually have influenza than for people whose respiratory symptoms are due to some other cause.
Assuming that doctors can do better than a pair of rolled dice at picking out patients with influenza, it further implies that doctors are no more likely to order a lab test for patients whom they suspect of having the flu than they are to order a lab test for patients whose respiratory symptoms they think are caused by something else.
The CDC’s assumption thus introduces a selection bias into its model that further calls into question the plausibility of its conclusions, as it is bound to result in overestimation. In a 2015 study published in PLoS One that detailed this method, CDC researchers acknowledged that, “If physicians were more likely to recognize influenza patients clinically and select those patients for testing, we may have over-estimated the magnitude of under-detection.” And that, of course, would result in an overestimation of both hospitalizations and deaths associated with influenza.
Caveats such as that, however, are not communicated to the general public by the CDC in its press releases or by the mainstream media so that people can make a truly informed choice about whether it’s worth the risk to get a flu shot.
Conclusion
In summary, to avoid underestimating influenza-associated hospitalizations and deaths, the CDC relies on models that instead appear to greatly overestimate the numbers due to the fallacious assumptions built into them. These numbers are then misrepresented to the public by both public health officials and the mainstream media as though uncontroversial and representative of known cases of influenza-caused illnesses and deaths from surveillance data. Consequently, the public is grossly misinformed about the societal disease burden from influenza and the ostensible benefit of the vaccine.
It is clear that the CDC does not see its mission as being to educate the public in order to be able to make an informed choice about vaccination. After all, that would be incompatible with its view that growing health literacy is a threat to its mission and an obstacle to be overcome. On the other hand, a misinformed populace aligns perfectly with the CDC’s stated goal of using fear marketing to generate more demand for the pharmaceutical industry’s influenza vaccine products.
This article is an adapted and expanded excerpt from part two of the author’s multi-part exposé on the influenza vaccine.
News Media Gave Blanket Coverage To Flawed Climate Paper
Global Warming Policy Forum – 07/11/18
A week ago, we were told that climate change was worse than we thought. But the underlying science contains a major error.
Independent climate scientist Nicholas Lewis has uncovered a major error in a recent scientific paper that was given blanket coverage in the English-speaking media. The paper, written by a team led by Princeton oceanographer Laure Resplandy, claimed that the oceans have been warming faster than previously thought. It was announced, in news outlets including the BBC, the New York Times, the Washington Post and Scientific American that this meant that the Earth may warm even faster than currently estimated.
However Lewis, who has authored several peer-reviewed papers on the question of climate sensitivity and has worked with some of the world’s leading climate scientists, has found that the warming trend in the Resplandy paper differs from that calculated from the underlying data included with the paper.
“If you calculate the trend correctly, the warming rate is not worse than we thought – it’s very much in line with previous estimates,” says Lewis.
In fact, says Lewis, some of the other claims made in the paper and reported by the media, are wrong too.
“Their claims about the effect of faster ocean warming on estimates of climate sensitivity (and hence future global warming) and carbon budgets are just incorrect anyway, but that’s a moot point now we know that about their calculation error”.
And now that the errors have been uncovered, Lewis points out that it is important that the record is corrected.
“The original findings of the Resplandy paper were given blanket coverage by the media, who rarely question hyped-up findings of this kind. Let’s hope some of them are willing to correct the record”.
Cut Emissions? Who, Me?
By Paul Homewood | Not A Lot Of People Know That | October 22, 2018
The IPCC says we have got to start cutting emissions radically immediately, but the rest of the world is not listening!
1) Australia rejects UN call to phase out coal
Australia has rejected a call by scientists to phase out coal use by 2050 to prevent the world overshooting targets in the Paris Climate Change agreement with potentially disastrous consequences.
The world’s biggest coal exporter on Tuesday said it would be “irresponsible” to comply with the recommendation by the UN’s Intergovernmental Panel on Climate Change (IPCC) to stop using coal to generate electricity.
Canberra also reiterated its priority is to cut domestic electricity prices rather than curb greenhouse gas emissions, which have risen for four consecutive years.
“To say that it [coal] has to be phased out by 2050 is drawing a very long bow,” said Melissa Price, Australia’s environment minister, who previously worked in the mining industry.
“I just don’t know how you could say by 2050 that you’re not going to have the technology that’s going to enable good, clean technology when it comes to coal. That would be irresponsible of us.” … https://www.ft.com/content/326d7228-cb83-11e8-b276-b9069bde0956
2) Japan Will Defy Calls By The IPCC To Phase Out Coal By Mid Century
Japan’s ambassador to Australia has confirmed Tokyo will defy calls by the Intergovernmental Panel on Climate Change to phase out coal by mid-century as part of a scientific appeal to limit global temperature increases to 1.5C.
Sumio Kusaka told The Australian that Japan would consider “all practical ways to further advance decarbonisation” but would need to bolster coal supply in the immediate future. He said Japanese plans to reduce reliance on fossil fuels in line with its international commitments would see a greater focus on nuclear energy, a form of power prohibited in Australia since 1998.
In recent weeks, Tony Abbott and Ziggy Switkowski, former chair of the Australian Nuclear Science and Technology Organisation, have called for the prohibition on nuclear power to be lifted to provide for the arrival of small modular reactors that can power towns of 100,000 people.
“I am aware the recent IPCC report contains some firm recommendations in relation to coal,” Mr Kusaka told The Australian.
“However, Japan is a country with very limited resources of its own, and bearing in mind our energy security requirements, it would be difficult for us to eliminate coal-fired power altogether.
“With a view to 2050, we are also considering all practical ways to further advance decarbonisation. In relation to this, some of the technologies we are looking at include renewable energy, nuclear energy and carbon capture and storage.”
Mr Kusaka said Tokyo would continue to buy coal from Australia to secure its energy needs into the future. Japan was the largest importer of Australian thermal coal last year. – https://www.thegwpf.com/japan-will-defy-calls-by-the-ipcc-to-phase-out-coal-by-mid-century/
3) China To Speed Up End Of Green Energy Subsidies
SHANGHAI (Reuters) – China will speed up efforts to ensure its wind and solar power sectors can compete without subsidies and achieve “grid price parity” with traditional energy sources like coal, according to new draft guidelines issued by the energy regulator.
As it tries to ease its dependence on polluting fossil fuels, China has encouraged renewable manufacturers and developers to drive down costs through technological innovations and economies of scale.
The country aims to phase out power generation subsidies, which have become an increasing burden on the state.
The guidelines said some regions with cost and market advantages had already “basically achieved price parity” with clean coal-fired power and no longer required subsidies, and others should learn from their experiences.
They also urged local transmission grid companies to provide more support for subsidy-free projects and ensure they have the capacity to distribute all the power generated by wind and solar plants…
China’s solar sector is still reeling from a decision to cut subsidies and cap new capacity at 30 gigawatts (GW) this year, down from a record 53 GW in 2017, with the government concerned about overcapacity and a growing subsidy backlog.
According to the NEA, the government owed around 120 billion yuan ($17.46 billion) in subsidies to solar plants by the middle of this year.
4) Germany’s Merkel Promises New Law To Ward Off Diesel Driving Bans (And To Save Her Floundering Government)
BERLIN (Reuters) – German Chancellor Angela Merkel, campaigning for her Christian Democrats (CDU) to retain control of the crucial state of Hesse in next Sunday’s election, promised legislation to ward off the threat of air pollution leading to driving bans.
Speaking at a news conference on Sunday evening, Merkel said it would be disproportionate to ban dirty diesel cars from the road in places like Frankfurt, Hesse’s largest city, where nitrogen oxide emissions limits were only marginally exceeded.
Following her allies’ disastrous showing in Bavaria’s regional elections last week, Merkel faces murmurs of dissent within her party. Defeat in the state to the resurgent Greens could prove fatal to her chancellorship.
Mysterious IPCC Expertise
The IPCC publishes the citizenship and gender of its authors – but says nothing about their scientific expertise
By Donna Laframboise | Big Picture News | October 17, 2018
The Intergovernmental Panel on Climate Change (IPCC) claims to be a scientific organization. But it’s really a political one.
An obvious tell is how it describes its personnel. In the old days, IPCC reports listed people according to their role and their country. Matters have improved since then.
Today, the IPCC gives us six data points about its personnel rather than three. A webpage associated with its latest report tells us each individual’s:
- name
- IPCC role (coordinating lead author, lead author, review editor)
- gender
- country of residence
- citizenship
- institutional affiliation
But this only looks like progress. In the real world, the additional info is irrelevant. Science doesn’t care where someone lives or what citizenship they hold. Science doesn’t care if they’re a man or a woman.
If the IPCC is a panel of experts, the critical issue is: What is each of these people an expert in? More than 30 years after its founding, the IPCC still thinks it doesn’t need to talk about this.
For the UN bureaucrats who run the show, some things are important. Some are not. The nature of an author’s scientific expertise clearly isn’t a burning issue. But lots of attention is being paid to checking diversity boxes.
The Dark Story Behind Global Warming aka Climate Change
By F. William Engdahl – New Eastern Outlook – 16.10.2018
The recent UN global warming conference under the auspices of the deceptively-named Intergovernmental Panel on Climate Change (IPCC) concluded its meeting in South Korea discussing how to drastically limit global temperature rise. Mainstream media are predictably retailing various panic scenarios “predicting” catastrophic climate change because of man-made emissions of Greenhouse Gases, especially CO2, unless drastic changes in our lifestyle are urgently undertaken. There is only one thing wrong with all that. It’s based on fake science and corrupted climate modelers who have by now reaped [many] billions in government research grants to buttress the arguments for radical change in our standard of living. We might casually ask, “What’s the point?” The answer is not positive.
The South Korea meeting of the UN IPCC discussed measures needed, according to its computer models, to limit global temperature rise to below 1.5 degrees Centigrade above pre-industrial levels. One of the panel members and authors of the latest IPCC Special Report on Global Warming, Drew Shindell of Duke University, told the press that meeting the arbitrary 1.5 degree target will require world CO2 emissions to drop by a staggering 40% in the next 12 years. The IPCC calls for a draconian “zero net emissions” of CO2 by 2050. That would mean a complete ban on gas or diesel engines for cars and trucks, no coal power plants, and a transformation of world agriculture toward burning food crops as biofuels. As Shindell modestly put it, “These are huge, huge shifts.”
The new IPCC report, SR15, declares that global warming of 1.5°C will “probably” bring species extinction, weather extremes and risks to food supply, health and economic growth. To avoid this the IPCC estimates required energy investment alone will be $2.4 trillion per year. Could this explain the interest of major global banks, especially in the City of London, in pushing the Global Warming card?
This scenario assumes an even more incredible dimension as it is generated by fake science and doctored data from a tight-knit group of climate scientists internationally who have so polarized scientific discourse that fellow scientists who try to argue are labelled not as mere global warming skeptics but as “Climate Change deniers.” What does that bit of neuro-linguistic programming suggest? Holocaust deniers? Talk about killing legitimate scientific debate, the essence of true science. Recently the head of the UN IPCC proclaimed, “The debate over the science of climate change is well and truly over.”
What the UN panel chose to ignore was the fact the debate was anything but “over.” The Global Warming Petition Project, signed by over 31,000 American scientists states, “There is no convincing scientific evidence that human release of carbon dioxide, methane, or other greenhouse gasses is causing or will, in the foreseeable future, cause catastrophic heating of the Earth’s atmosphere and disruption of the Earth’s climate. Moreover, there is substantial scientific evidence that increases in atmospheric carbon dioxide produce many beneficial effects upon the natural plant and animal environments of the Earth.”
‘Chicken Little’
Most interesting about the dire warnings of global catastrophe, issued unless dramatic changes to our living standards are urgently undertaken, is that they are always attempts to frighten based on predictions of the future. When the “tipping point” of so-called irreversibility passes with no evident catastrophe, a new future point is invented.
In 1982 Mostafa Tolba, executive director of the UN Environment Program (UNEP), warned that the “world faces an ecological disaster as final as nuclear war within a couple of decades unless governments act now.” He predicted lack of action would bring “by the turn of the century, an environmental catastrophe which will witness devastation as complete, as irreversible as any nuclear holocaust.” In 1989 Noel Brown, also of UNEP, said entire nations could be wiped off the face of the earth by rising sea levels if the global warming trend was not reversed by the year 2000. James Hansen, a key figure in the doomsday scenarios, declared at that time that 350 ppm of CO2 was the upper limit “to preserve a planet similar to that on which civilization developed and to which life on Earth is adapted.” Rajendra Pachauri, then the chief of the UN IPCC, declared that 2012 was the climate deadline by which it was imperative to act: “If there’s no action before 2012, that’s too late.” Today the measured level is about 414 ppm.
As UK scientist Philip Stott notes, “In essence, the Earth has been given a 10-year survival warning regularly for the last fifty or so years. …Our post-modern period of climate change angst can probably be traced back to the late-1960s… By 1973, and the ‘global cooling’ scare, it was in full swing, with predictions of the imminent collapse of the world within ten to twenty years…Environmentalists were warning that, by the year 2000, the population of the US would have fallen to only 22 million. In 1987, the scare abruptly changed to ‘global warming’, and the IPCC (the Intergovernmental Panel on Climate Change) was established (1988)…”
Flawed Data
A central flaw of the computer models cited by the IPCC is that they are purely theoretical constructs, not observations. The hypothesis depends entirely on computer models generating scenarios of the future, with no empirical record that can verify either these models or their flawed predictions. As one scientific study concluded, “The computer climate models upon which ‘human-caused global warming’ is based have substantial uncertainties and are markedly unreliable. This is not surprising, since the climate is a coupled, non-linear dynamical system. It is very complex.” “Coupled” refers to the phenomenon that the oceans cause changes in the atmosphere and the atmosphere in turn affects the oceans. Both are complexly related to solar cycles. No model predicting global warming or 2030 “tipping points” is able, or even tries, to integrate the most profound influence on Earth’s climate and weather: the activity of the sun and solar eruption cycles, which shape ocean currents, jet stream activity, El Niños and our daily weather.
An Australian IT expert and independent researcher, John McLean, recently did a detailed analysis of the IPCC climate report. He notes that HadCRUT4 is the primary dataset used by the Intergovernmental Panel on Climate Change (IPCC) to make its dramatic claims about “man-made global warming” and to justify its demands for trillions of dollars to be spent on “combating climate change.” But McLean points to egregious errors in the HadCRUT4 data used by the IPCC. He notes, “It’s very careless and amateur. About the standard of a first-year university student.” Among the errors, he cites places where temperature “averages were calculated from next to no information. For two years, the temperatures over land in the Southern Hemisphere were estimated from just one site in Indonesia.” In another place he found that for the Caribbean island of St Kitts, the temperature was recorded as 0 degrees C for a whole month, on two occasions. The HadCRUT4 dataset is a joint production of the UK Met Office’s Hadley Centre and the Climatic Research Unit at the University of East Anglia. This was the group at East Anglia exposed several years ago in the notorious Climategate scandal for faking data and deleting embarrassing emails to hide it. Mainstream media promptly buried the story, turning attention instead to “who illegally hacked East Anglia emails.”
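The distortions McLean describes are simple averaging effects, and they are easy to see with toy numbers. The sketch below uses entirely hypothetical readings (not actual HadCRUT4 records) to show how a single bad station record, or a hemisphere averaged from one site, skews a computed mean.

```python
# Illustrative sketch with hypothetical numbers, not actual HadCRUT4 data:
# how a bad station record or sparse coverage distorts a computed mean.

def monthly_mean(readings):
    """Simple arithmetic mean of daily temperature readings in degrees C."""
    return sum(readings) / len(readings)

# A tropical island normally averaging around 27 C for a 30-day month.
normal_month = [27.0] * 30
print(monthly_mean(normal_month))  # 27.0

# If an instrument or transcription error logs 0 C every day, the computed
# monthly mean is 0 C, a 27-degree error that then propagates into any
# regional average built on top of it.
bad_month = [0.0] * 30
print(monthly_mean(bad_month))  # 0.0

# Sparse coverage: averaging "the Southern Hemisphere" from one station
# just reproduces that single station's value, however unrepresentative.
one_site = [26.5]  # hypothetically, a single Indonesian station
print(monthly_mean(one_site))  # 26.5
```

The point is not that averaging is wrong, but that an average is only as representative as the records feeding it; a single month of zeros or a one-station hemisphere passes silently through the arithmetic.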
Astonishingly enough, when we do a little basic research we find that the IPCC never carried out a true scientific inquiry into the possible causes of change in Earth’s climate. Man-made sources of change were arbitrarily asserted, and the game was on.
Malthusian Maurice Strong
Few are aware, however, of the political and even geopolitical origins of Global Warming theories. How did this come about? So-called Climate Change, aka Global Warming, is a neo-Malthusian deindustrialization agenda originally developed by circles around the Rockefeller family in the early 1970s to prevent the rise of independent industrial rivals, much as Trump’s trade wars do today. In my book, Myths, Lies and Oil Wars, I detail how the highly influential Rockefeller group also backed the creation of the Club of Rome, the Aspen Institute, the Worldwatch Institute and the MIT Limits to Growth report. A key early organizer of the Rockefeller ‘zero growth’ agenda in the early 1970s was David Rockefeller’s longtime friend, a Canadian oilman named Maurice Strong. Strong was one of the early propagators of the scientifically unfounded theory that man-made emissions from transportation vehicles, coal plants and agriculture caused a dramatic and accelerating global temperature rise which threatens civilization, so-called Global Warming.
As chairman of the 1972 UN Stockholm Conference on the Human Environment, Strong promoted an agenda of population reduction and lowering of living standards around the world to “save the environment.” Some years later Strong restated his radical ecologist stance: “Isn’t the only hope for the planet that the industrialized civilizations collapse? Isn’t it our responsibility to bring that about?” The co-founder of the Rockefeller-tied Club of Rome, Dr Alexander King, admitted the fraud in his book, The First Global Revolution. He stated, “In searching for a new enemy to unite us, we came up with the idea that pollution, the threat of global warming, water shortages, famine and the like would fit the bill… All these dangers are caused by human intervention… The real enemy, then, is humanity itself.”
Please reread that, and let it sink in. Humanity, and not the 147 global banks and multinationals that de facto determine today’s environment, bears the responsibility.
Following the Earth Summit, Strong was named Assistant Secretary General of the United Nations and chief policy advisor to Kofi Annan. He was the key architect of the 1997-2005 Kyoto Protocol, which declared that man-made Global Warming, according to “consensus,” was real and that it was “extremely likely” that man-made CO2 emissions had predominantly caused it. In 1988 Strong was key in the creation of the UN IPCC, and later of the UN Framework Convention on Climate Change at the Rio Earth Summit, which he chaired and which approved his globalist UN Agenda 21.
The UN IPCC and its Global Warming agenda constitute a political, not a scientific, project. Its latest report is, like the previous ones, based on fake science and outright fraud. MIT Professor Richard S. Lindzen, in a recent speech, criticized politicians and activists who claim “the science is settled” and demand “unprecedented changes in all aspects of society.” He noted that it is totally implausible for such a complex “multifactor system” as the climate to be summarized by just one variable, global mean temperature change, and primarily controlled by just a 1-2 percent variance in the energy budget due to CO2. Lindzen described how “an implausible conjecture backed by false evidence, repeated incessantly, has become ‘knowledge,’ used to promote the overturn of industrial civilization.” Our world indeed needs a “staggering transformation,” but one that promotes the health and stability of the human species instead.