Aletho News

ΑΛΗΘΩΣ

Gospel science: We found only one-third of published psychology research is reliable – now what?

What does it mean if the majority of what’s published in journals can’t be reproduced?

By Maggie Villiger | The Conversation | August 27, 2015

The ability to repeat a study and find the same results twice is a prerequisite for building scientific knowledge. Replication allows us to ensure empirical findings are reliable and refines our understanding of when a finding occurs. It may surprise you to learn, then, that scientists do not often conduct – much less publish – attempted replications of existing studies.

Journals prefer to publish novel, cutting-edge research. And professional advancement is determined by making new discoveries, not painstakingly confirming claims that are already on the books. As one of our colleagues recently put it, “Running replications is fine for other people, but I have better ways to spend my precious time.”

Once a paper appears in a peer-reviewed journal, it acquires a kind of magical, unassailable authority. News outlets, and sometimes even scientists themselves, will cite these findings without a trace of skepticism. Such unquestioning confidence in new studies is likely undeserved, or at least premature.

A small but vocal contingent of researchers – addressing fields ranging from physics to medicine to economics – has maintained that many, perhaps most, published studies are wrong. But how bad is this problem, exactly? And what features make a study more or less likely to turn out to be true?

We are two of the 270 researchers who together have just published in the journal Science the first-ever large-scale effort trying to answer these questions by attempting to reproduce 100 previously published psychological science findings.

Attempting to re-find psychology findings

Publishing together as the Open Science Collaboration and coordinated by social psychologist Brian Nosek from the Center for Open Science, research teams from around the world each ran a replication of a study published in three top psychology journals – Psychological Science; Journal of Personality and Social Psychology; and Journal of Experimental Psychology: Learning, Memory, and Cognition. To ensure the replication was as exact as possible, research teams obtained study materials from the original authors, and worked closely with these authors whenever they could.

Almost all of the original published studies (97%) had statistically significant results. This is as you’d expect – while many experiments fail to uncover meaningful results, scientists tend only to publish the ones that do.

When these 100 studies were run by other researchers, however, only 36% reached statistical significance. This number is alarmingly low. Put another way, only around one-third of the rerun studies came out with the same results that were found the first time around. That rate is especially low when you consider that, once published, findings tend to be held as gospel.

The bad news doesn’t end there. Even when the new study found evidence for the existence of the original finding, the magnitude of the effect was much smaller — half the size of the original, on average.
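This shrinkage is what statisticians call the "winner's curse": when only significant results get published, the published effect estimates are biased upward, and faithful replications regress toward the true, smaller effect. A minimal simulation with made-up numbers (not the project's actual data) illustrates the mechanism:

```python
import random
import statistics

random.seed(0)

# Many labs estimate the same modest true effect from noisy samples;
# journals "publish" only the statistically significant estimates.
TRUE_EFFECT, N, STUDIES = 0.2, 30, 2000

published = []
for _ in range(STUDIES):
    sample = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    est = statistics.mean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    if est / se > 1.96:          # crude one-sided significance filter
        published.append(est)

# The average *published* effect is noticeably larger than the true 0.2,
# so an honest replication will, on average, find a smaller effect.
print(round(statistics.mean(published), 2))
```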

One caveat: just because something fails to replicate doesn’t mean it isn’t true. Some of these failures could be due to luck, or poor execution, or an incomplete understanding of the circumstances needed to show the effect (scientists call these “moderators” or “boundary conditions”). For example, having someone practice a task repeatedly might improve their memory, but only if they didn’t know the task well to begin with. In a way, what these replications (and failed replications) serve to do is highlight the inherent uncertainty of any single study – original or new.

More robust findings more replicable

Given how low these numbers are, is there anything we can do to predict the studies that will replicate and those that won’t? The results from this Reproducibility Project offer some clues.

There are two major ways that researchers quantify the nature of their results. The first is a p-value, which estimates the probability that the result was arrived at purely by chance and is a false positive. (Technically, the p-value is the chance that the result, or a stronger result, would have occurred even when there was no real effect.) Generally, if a statistical test shows that the p-value is lower than 5%, the study’s results are considered “significant” – most likely due to actual effects.
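To make that definition concrete, here is a self-contained sketch (a hypothetical coin-flip example, not one of the studies in the project) of computing an exact two-sided binomial p-value:

```python
from math import comb

def binomial_p_two_sided(n, k):
    """Exact two-sided p-value for k successes in n trials under a
    fair-coin null hypothesis (doubles the upper tail; assumes k >= n/2)."""
    upper_tail = sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * upper_tail)

# 16 heads in 20 flips of a supposedly fair coin:
p = binomial_p_two_sided(20, 16)
print(round(p, 4))  # → 0.0118, below the conventional 0.05 threshold
```

Since 0.0118 < 0.05, this result would be labeled "significant" – yet, as the replication project shows, a single significant result is far from a guarantee.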

Another way to quantify a result is with an effect size – not how reliable the difference is, but how big it is. Let’s say you find that people spend more money in a sad mood. Well, how much more money do they spend? This is the effect size.
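A common effect-size measure is the standardized mean difference (Cohen's d). A minimal sketch using made-up spending data for the mood example above:

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Standardized mean difference: the gap between group means,
    measured in units of the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 +
                  (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

sad = [12.0, 15.0, 14.0, 16.0, 13.0]      # dollars spent in a sad mood
neutral = [10.0, 11.0, 12.0, 10.0, 12.0]  # dollars spent in a neutral mood
print(round(cohens_d(sad, neutral), 2))   # → 2.27
```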

We found that the smaller the original study’s p-value and the larger its effect size, the more likely it was to replicate. Strong initial statistical evidence was a good marker of whether a finding was reproducible.

Studies that were rated as more challenging to conduct were less likely to replicate, as were findings that were considered surprising. For instance, if a study shows that reading lowers IQs, or if it uses a very obscure and unfamiliar methodology, we would do well to be skeptical of such data. Scientists are often rewarded for delivering results that dazzle and defy expectation, but extraordinary claims require extraordinary evidence.

Although our replication effort is novel in its scope and level of transparency – the methods and data for all replicated studies are available online – our results are consistent with previous work from other fields. Cancer biologists, for instance, have reported replication rates as low as 11%.

We have a problem. What’s the solution?

Some conclusions seem warranted here.

We must stop treating single studies as unassailable authorities of the truth. Until a discovery has been thoroughly vetted and repeatedly observed, we should treat it with the measure of skepticism that scientific thinking requires. After all, the truly scientific mindset is critical, not credulous. There is a place for breakthrough findings and cutting-edge theories, but there is also merit in the slow, systematic checking and refining of those findings and theories.

Of course, adopting a skeptical attitude will take us only so far. We also need to provide incentives for reproducible science by rewarding those who conduct replications and who conduct replicable work. For instance, at least one top journal has begun to give special “badges” to articles that make their data and materials available, and the Berkeley Initiative for Transparency in the Social Sciences has established a prize for practicing more transparent social science.

Better research practices are also likely to ensure higher replication rates. There is already evidence that taking certain concrete steps – such as making hypotheses clear prior to data analysis, openly sharing materials and data, and following transparent reporting standards – decreases false positive rates in published studies. Some funding organizations are already demanding hypothesis registration and data sharing.

Although perfect replicability in published papers is an unrealistic goal, current replication rates are unacceptably low. The first step, as they say, is admitting you have a problem. What scientists and the public now choose to do with this information remains to be seen, but our collective response will guide the course of future scientific progress.

August 29, 2015 | Corruption, Deception, Science and Pseudo-Science

The conceits of consensus

By Judith Curry | Climate Etc. | August 27, 2015

Critiques, the 3%, and is 47 the new 97?

For background, see my previous post The 97% feud.

Cook et al. critiques

At the heart of the consensus controversy is the paper by Cook et al. (2013), which inferred a 97% consensus by classifying abstracts from published papers. The study was based on a search of broad academic literature using casual English terms like “global warming”, which missed many climate science papers but included lots of non-climate-science papers that mentioned climate change – social science papers, surveys of the general public, surveys of cooking stove use, the economics of a carbon tax, and scientific papers from non-climate science fields that studied impacts and mitigation.

The Cook et al. paper has been refuted in the published literature in an article by Richard Tol:  Quantifying the consensus on anthropogenic global warming in the literature: A re-analysis (behind paywall).  Summary points from the abstract:

  • A trend in composition is mistaken for a trend in endorsement.
  • Reported results are inconsistent and biased.
  • The sample is not representative and contains many irrelevant papers.
  • Overall, data quality is low.
  • Cook's validation test shows that the data are invalid.
  • Data disclosure is incomplete, so that key results cannot be reproduced or tested.

Social psychologist Jose Duarte has a series of blog posts that document the ludicrousness of the selection and categorization of papers by Cook et al., including citations of specific articles that they categorized as supporting the climate change consensus.

From this analysis, Duarte concludes: ignore climate consensus studies based on random people rating journal article abstracts.  I find it difficult to disagree with him on this.

The 3%

So, does all this leave you wondering what the 3% of papers not included in the consensus had to say?  Well, wonder no more. There is a new paper out, published by Cook and colleagues:

Learning from mistakes

Rasmus Benestad, Dana Nuccitelli, Stephan Lewandowsky, Katharine Hayhoe, Hans Olav Hygen, Rob van Dorland, John Cook

Abstract.  Among papers stating a position on anthropogenic global warming (AGW), 97 % endorse AGW. What is happening with the 2 % of papers that reject AGW? We examine a selection of papers rejecting AGW. An analytical tool has been developed to replicate and test the results and methods used in these studies; our replication reveals a number of methodological flaws, and a pattern of common mistakes emerges that is not visible when looking at single isolated cases. Thus, real-life scientific disputes in some cases can be resolved, and we can learn from mistakes. A common denominator seems to be missing contextual information or ignoring information that does not fit the conclusions, be it other relevant work or related geophysical data. In many cases, shortcomings are due to insufficient model evaluation, leading to results that are not universally valid but rather are an artifact of a particular experimental setup. Other typical weaknesses include false dichotomies, inappropriate statistical methods, or basing conclusions on misconceived or incomplete physics. We also argue that science is never settled and that both mainstream and contrarian papers must be subject to sustained scrutiny. The merit of replication is highlighted and we discuss how the quality of the scientific literature may benefit from replication.

Published in Theoretical and Applied Climatology [link to full paper].

A look at the Supplementary Material shows that they considered credible skeptical papers (38 in total) – by Humlum, Scafetta, Solheim and others.

The gist of their analysis is that the authors were ‘outsiders’, not fully steeped in consensus lore and not referencing their preferred papers.

RealClimate has an entertaining post on the paper, Let’s learn from mistakes, where we learn that this paper was rejected by five journals before being published by Theoretical and Applied Climatology. I guess the real lesson from this paper is that you can get any kind of twaddle published, if you keep trying and submit it to different journals.

A consensus on what, exactly?

The consensus inferred from the Cook et al. analysis is a vague one indeed; exactly what are these scientists agreeing on? The ‘97% of the world’s climate scientists agree that humans are causing climate change’ is a fairly meaningless statement unless the relative amount (%) of human caused climate change is specified. Roy Spencer’s 2013 Senate testimony included the following statement:

“It should also be noted that the fact that I believe at least some of recent warming is human-caused places me in the 97% of researchers recently claimed to support the global warming consensus (actually, it’s 97% of the published papers, Cook et al., 2013). The 97% statement is therefore rather innocuous, since it probably includes all of the global warming “skeptics” I know of who are actively working in the field. Skeptics generally are skeptical of the view that recent warming is all human-caused, and/or that it is of a sufficient magnitude to warrant immediate action given the cost of energy policies to the poor. They do not claim humans have no impact on climate whatsoever.”

The only credible way to ascertain whether scientists support the consensus on climate change is through surveys of climate scientists. This point is eloquently made in another post by Joe Duarte: The climate science consensus is 78-84%. Now I don’t agree with Duarte’s conclusion on that, but he makes some very salient points:

Tips for being a good science consumer and science writer. When you see an estimate of the climate science consensus:

  • Make sure it’s a direct survey of climate scientists. Climate scientists have full speech faculties and reading comprehension. Anyone wishing to know their views can fruitfully ask them. Also, be alert to the inclusion of people outside of climate science.
  • Make sure that the researchers are actual, qualified professionals. You would think you could take this for granted in a study published in a peer-reviewed journal, but sadly this is simply not the case when it comes to climate consensus research. They’ll publish anything with high estimates.
  • Be wary of researchers who are political activists. Their conflicts of interest will be at least as strong as that of an oil company that had produced a consensus study – moral and ideological identity is incredibly powerful, and is often a larger concern than money.
  • In general, do not trust methods that rest on intermediaries or interpreters, like people reviewing the climate science literature. Thus far, such work has been dominated by untrained amateurs motivated by political agendas.
  • Be mindful of the exact questions asked. The wording of a survey is everything.
  • Be cautious about papers published in climate science journals, or really in any journal that is not a survey research journal. Our experience with the ERL fraud illustrated that climate science journals may not be able to properly review consensus studies, since the methods (surveys or subjective coding of text) are outside their domains of expertise. The risk of junk science is even greater if the journal is run by political interests and is motivated to publish inflated estimates. For example, I would advise strong skepticism of anything published by Environmental Research Letters on the consensus – they’re run by political people like Kammen.

Is 47 the new 97?

The key question is to what extent climate scientists agree with the key consensus statement of the IPCC:

“It is extremely likely {95%+ certainty} that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.”

Several surveys of climate scientists have addressed using survey questions that more or less address the issue of whether humans are the dominant cause of recent warming (discussed in the previous post by Duarte and summarized in my post The 97% feud).

The survey that I like the best is:

Verheggen et al. (2014) Scientists’ views about attribution of global warming. Environmental Science & Technology [link]

Recently, a more detailed report on the survey was made available [link]. Fabius Maximus has a fascinating post New study undercuts key IPCC finding (the text below draws liberally from this post). This survey examines agreement with the IPCC AR5 keynote statement quoted above.

The survey examines both facets of the attribution statement – how much warming is caused by humans, and what is the confidence in that assessment.

In response to the question “What fraction of global warming since the mid 20th century can be attributed to human induced increases in atmospheric greenhouse gas concentrations?”, a total of 1,222 of 1,868 respondents (64%) agreed with AR5 that the answer was over 50%. Excluding the 164 (8.8%) “I don’t know” respondents yields 72% agreement with the IPCC.

 


The second question is: “What confidence level would you ascribe to your estimate that the anthropogenic greenhouse gas warming is more than 50%?” Of the 1,222 respondents who said that the anthropogenic contribution was over 50%, 797 (65%) said it was 95%+ certain (which the IPCC defines as “virtually certain” or “extremely likely”).

The 797 respondents (who are highly confident that more than 50% of the warming is human caused) are 43% of all 1,868 respondents (47% excluding the “don’t know” group). Hence this survey finds that slightly less than half of climate scientists surveyed agree with the AR5 keynote statement in terms of confidence in the attribution statement.
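The percentages in this passage follow from simple arithmetic on the reported counts; a quick check using only the figures quoted here:

```python
# Reported counts from the survey write-up quoted above
respondents = 1868
dont_know = 164
over_half = 1222   # attribute more than 50% of warming to anthropogenic GHGs
confident = 797    # of those, 95%+ confident in that attribution

print(round(100 * confident / over_half))                  # → 65% of the 1,222
print(round(100 * confident / respondents))                # → 43% of all respondents
print(round(100 * confident / (respondents - dont_know)))  # → 47% excluding "don't know"
```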

Whose opinion ‘counts’?

Surveys of actual climate scientists are a much better way to elicit the actual opinions of scientists on this issue. But surveys raise the question of exactly who the experts are on the issue of attribution of climate change. The Verheggen et al. study was criticized in a published comment by Duarte in terms of the basis for selecting participants to respond to the survey:

“There is a deeper problem. Inclusion of mitigation and impacts papers – even from physical sciences or engineering – creates a structural bias that will inflate estimates of consensus, because these categories have no symmetric disconfirming counterparts. These researchers have simply imported a consensus in global warming. They then proceed to their area of expertise. [These papers] do not carry any data or epistemic information about climate change or its causes, and the authors are unlikely to be experts on the subject, since it is not their field.

Increased public interest in any topic will reliably draw scholars from various fields. However, their endorsement (or rejection) of human-caused warming does not represent knowledge or independent assessments. Their votes are not quanta of consensus, but simply artifacts of career choices, and the changing political climate. Their inclusion will artificially inflate sample sizes, and will likely bias the results.”

Roy Spencer also addresses this issue in his Senate testimony (cited above):

“(R)elatively few researchers in the world – probably not much more than a dozen – have researched how sensitive today’s climate system is based upon actual measurements. This is why popular surveys of climate scientists and their beliefs regarding global warming have little meaning: very few of them have actually worked on the details involved in determining exactly how much warming might result from anthropogenic greenhouse gas emissions.”

The number of real experts on the detection and attribution of climate change is small; only a fraction of the respondents to these surveys. I raised this same issue in the pre-Climate Etc. days in response to the Anderegg et al. paper, in a comment at Collide-a-Scape (referenced by Columbia Journalism Review):

The scientific litmus test for the paper is the AR4 statement: “anthropogenic greenhouse gases have been responsible for “most” of the “unequivocal” warming of the Earth’s average global temperature over the second half of the 20th century”.

The climate experts with credibility in evaluating this statement are those scientists who are active in the area of detection and attribution. “Climate” scientists whose research area is ecosystems, carbon cycle, economics, etc. speak with no more authority on this subject than, say, Freeman Dyson.

I define the 20th century detection and attribution field to include those that create datasets, climate dynamicists that interpret the variability, radiative forcing, climate modeling, sensitivity analysis, feedback analysis. With this definition, 75% of the names on the list disappear. If you further eliminate people that create datasets but don’t interpret the datasets, you have less than 20% of the original list.

Apart from Anderegg’s classification of the likes of Freeman Dyson as not a ‘climate expert’ (since he didn’t have 20 peer reviewed publications that they classed as ‘climate papers’), they also did not include solar – climate experts such as Syun Akasofu (since apparently Akasofu’s solar papers do not count as ‘climate’).

But perhaps the most important point is that of the scientists who are skeptical of the IPCC consensus, a disproportionately large number of these skeptical scientists are experts on climate change detection/attribution. Think Spencer, Christy, Lindzen, etc. etc.

Bottom line: inflating the numbers of ‘climate scientists’ in such surveys attempts to hide that there is a serious scientific debate about the detection and attribution of recent warming, and that scientists who are skeptical of the IPCC consensus conclusion are disproportionately expert in the area of climate change detection and attribution.

Conceits of consensus

And finally, a fascinating article The conceits of ‘consensus’ in Halakhic rhetoric.  Read the whole thing, it is superb.  A few choice excerpts:

The distinguishing characteristic of these appeals to consensus is that the legitimacy or rejection of an opinion is not determined by intrinsic, objective, qualifiable criteria or its merits, but by its adoption by certain people. The primary premise of such arguments is that unanimity or a plurality of agreement among a given collective is halakhically binding on the Jewish population  and cannot be further contested or subject to review.

Just as the appeal to consensus stresses people over logic, subsequent debate will also focus on the merits of individuals and their worthiness to be included or excluded from the conversation. This situation runs the risk of the ‘No True Scotsman’ fallacy whereby one excludes a contradictory opinion on the grounds that no one who could possibly hold such an opinion is worth consideration.

Debates over inclusion and exclusion for consensus are susceptible to social manipulations as well. Since these determinations imply a hierarchy or rank of some sort, attempts which disturb an existing order may be met with various forms of bullying or intimidation – either in terms of giving too much credit to one opinion or individual or not enough deference to another. Thus any consensus reached on this basis would not be based on genuine agreement, but on fear of reprisals. The consensus of the collective may be similarly manipulated through implicit or overt marketing as a way to artificially besmirch or enhance someone’s reputation.

The next premise to consider is the correlation between consensus and correctness, such that if most (or all) people believe something to be true, then by virtue of its widespread acceptance and popularity, it must be correct. This is a well-known logical fallacy called argumentum ad populum, sometimes called the ‘bandwagon fallacy’. This should be familiar to anyone who has ever been admonished, “if all your friends would jump off a bridge, would you follow?” It should also be obvious at face value that Jews, especially Orthodox Jews, ought to reject this idea as a matter of principle.

Appeals to consensus are common and relatively simple to assert, but those who rely on consensus rarely if ever acknowledge, address, or defend the assumptions inherent in invoking consensus as a source – if not the determinant – of practical Jewish law. As I will demonstrate, appeals to consensus are laden with problematic logical and halakhic assumptions, such that while “consensus” may constitute one factor in determining a specific psak, it is not nearly the definitive halakhic criterion its proponents would like to believe.

August 27, 2015 | Deception, Science and Pseudo-Science

Climatologist Dr. Tim Ball On 97% Consensus: “Completely False And Was Deliberately Manufactured”!

By P Gosselin | No Tricks Zone | August 24, 2015

Canadian climate scientist Dr. Tim Ball recently published a new book on climate science: The Deliberate Corruption of Climate Science. Below is a short interview with Dr. Ball.

“Government propaganda” … “corrupt science”

In the book Ball writes that the failed predictions of the Intergovernmental Panel on Climate Change (IPCC), coupled with failed alarmist stories such as the complete loss of Arctic sea ice by 2013, are making the public increasingly skeptical of government propaganda about global warming. People were already skeptical because they knew weather forecasts, especially beyond forty-eight hours, were invariably wrong, and so today more people understand there is no substance to global warming claims and that it is based on corrupt science. Now they are asking: Who perpetrated the deception and could a small group of people deceive the world?

In his book The Deliberate Corruption of Climate Science Dr. Ball explains who did it and why.

Ball was among the earlier dissidents and as a result he became the target of media articles and false information promoted by a scurrilous website funded by a chairman of a large environmental foundation. He was a real threat because they couldn’t say he wasn’t qualified.

Dr. Ball has been the subject of three lawsuits from a lawyer operating in British Columbia. For the first one, he decided to avoid the expense of a challenge and so he withdrew what he had written. Then, within nine days, he received two more from the same lawyer suing for defamation because of harsh criticism he made of a climate scientist. At that point, he and his family decided they had to fight back.

As Ball carries on his legal battle, he maintains that the climate deception continues and that the public is paying a high price for completely unnecessary energy and economic policies based on the pseudoscience of the IPCC, not to mention the social devastation of communities hit by job losses.

“Their last effective chance”

Dr. Ball says the rhetoric and stream of misinformation increase as the perpetrators, now including the Pope, build up to their last effective chance to influence an increasingly skeptical world. When the Global Warming theme failed, they tried Climate Change. The Climate Change theme has failed, so now they are trying Climate Disruption, as defined by President Obama’s science czar, John Holdren, all to justify expensive government programs. The impetus for a global carbon tax and global governance represents the central theme of a climate conference scheduled for Paris in December 2015, the United Nations Climate Change Conference or COP21.

INTERVIEW

What follows are some questions that Dr. Ball kindly answered:

By what scientific reason do you think CO2’s role is far less?

Water vapor is 95% of the total greenhouse gases by volume, while CO2 is approximately 4%. The human portion is only 3.4% of the total CO2. They try to claim CO2 is more effective, but it’s a false claim called “climate sensitivity”. The number the IPCC use for sensitivity has constantly declined and will reach zero.

What factor has been the most responsible for the warming over the past 25 years?

The same factor as it has always been, changes in the sun. The IPCC dismiss the sun because they only look at variation in radiative output, but that is only one of three ways the Sun affects global climate.

What do you think the global temperature will do over the next few decades?

Decline. The major short-term control of global temperature is variation in the strength of the Sun’s magnetic field. As it varies, it determines the amount of cosmic radiation reaching the Earth. The cosmic radiation creates clouds in the lower atmosphere, and, like a shutter on a greenhouse, those clouds determine the amount of sunlight reaching the surface and therefore the temperature.

What do you think of the claimed “97% consensus”?

It is completely false and was deliberately manufactured by John Cook at the University of Queensland. There are more detailed analyses of the corruption but this is the best layman’s account. www.forbes.com/sites/alexepstein/.

On a scale of 1 to 10, how honest have the major climate institutes been with the public?

-10. If they knew what was wrong, it is deliberate and criminal. If they didn’t know, they are grossly incompetent.

Other comments by Dr. Ball:

The biggest problem for the public is that they can’t believe an apparent majority of scientists seem to support the IPCC science. The simple answer is that very few are familiar with the science. They, like most of the public, assume other scientists would not distort, manipulate, or do anything other than proper science. When scientists find out, they are shocked, as exemplified by German meteorologist Klaus-Eckart Puls’s comment:

“Ten years ago I simply parroted what the IPCC told us. One day I started checking the facts and data—first I started with a sense of doubt but then I became outraged when I discovered that much of what the IPCC and the media were telling us was sheer nonsense and was not even supported by any scientific facts and measurements. To this day I still feel shame that as a scientist I made presentations of their science without first checking it.”

August 27, 2015 | Book Review, Science and Pseudo-Science

Hawaii Sees 10 Fold Increase in Birth Defects After Becoming GM Corn Testing Grounds

By Jay Syrmopoulos | The Free Thought Project | August 24, 2015

Waimea, HI – Doctors are sounding the alarm after noticing a disturbing trend happening in Waimea, on the island of Kauai, Hawaii. Over the past five years, the number of severe heart malformations has risen to more than ten times the national rate, according to an analysis by local physicians.

Pediatrician Carla Nelson, after seeing four of these defects in three years, is extremely concerned with the severe health anomalies manifesting in the local population.

Nelson, as well as a number of other local doctors, is at the center of a growing controversy about whether the substantial increase in severe illness and birth defects in Waimea stems from the main cash crop on four of the six islands: genetically modified corn, which has been altered to resist pesticides.

Hawaii has historically been used as a testing ground for almost all GMO corn grown in the United States. Over 90% of GMO corn grown in the mainland U.S. was first developed in Hawaii, with the island of Kauai having the largest area used.

According to a report in The Guardian :

In Kauai, chemical companies Dow, BASF, Syngenta and DuPont spray 17 times more pesticide per acre (mostly herbicides, along with insecticides and fungicides) than on ordinary cornfields in the US mainland, according to the most detailed study of the sector, by the Center for Food Safety.

That’s because they are precisely testing the strain’s resistance to herbicides that kill other plants. About a fourth of the total are called Restricted Use Pesticides because of their harmfulness. Just in Kauai, 18 tons – mostly atrazine, paraquat (both banned in Europe) and chlorpyrifos – were applied in 2012. The World Health Organization this year announced that glyphosate, sold as Roundup, the most common of the non-restricted herbicides, is “probably carcinogenic in humans”.

Waimea is a small town that lies directly downhill from the 12,000 acres of GMO test fields leased mainly from the state. Spraying takes place often, sometimes every couple of days. Residents have complained that when the wind blows downhill from the fields, the chemicals have caused headaches, vomiting, and stinging eyes.

“Your eyes and lungs hurt, you feel dizzy and nauseous. It’s awful,” local middle school special education teacher Howard Hurst told the Guardian. “Here, 10% of the students get special-ed services, but the state average is 6.3%,” he says. “It’s hard to think the pesticides don’t play a role.”

To add insult to injury, Dow AgroSciences’ main lobbyist in Honolulu until recently ran the main hospital in town. Although the hospital is only 1,700ft from a Syngenta field, it has never done any research into the effects of pesticides on its patients.

Hawaiians have attempted to rein in the industrial chemical/farming machine on four separate occasions over the past two years. On August 9, an estimated 10,000 people marched through Honolulu’s main tourist district to protest the collusion of big business and state in putting profits over citizens’ health.

“The turnout and the number of groups marching showed how many people are very frustrated with the situation,” native Hawaiian activist Walter Ritte said.

Hawaiians have also attempted to use a ballot initiative to force a moratorium on the planting of GMO crops, according to The Guardian:

In Maui County, which includes the islands of Maui and Molokai, both with large GMO corn fields, a group of residents calling themselves the Shaka Movement sidestepped the company-friendly council and launched a ballot initiative that called for a moratorium on all GMO farming until a full environmental impact statement is completed there.

The companies, primarily Monsanto, spent $7.2m on the campaign ($327.95 per “no” vote, reported to be the most expensive political campaign in Hawaii history) and still lost.

Again, the companies sued in federal court, and a judge found that the Maui County initiative was preempted by federal law. Those rulings are also being appealed.
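The campaign figures quoted above can be sanity-checked with simple arithmetic. A minimal sketch, using only the two dollar amounts reported:

```python
# Figures as quoted above for the Maui County ballot campaign.
total_spend = 7_200_000       # dollars spent by the companies, primarily Monsanto
cost_per_no_vote = 327.95     # reported dollars per "no" vote

# The implied number of "no" votes follows directly from the two figures.
implied_no_votes = total_spend / cost_per_no_vote
print(f"Implied 'no' votes: {implied_no_votes:,.0f}")  # roughly 22,000
```

In other words, the reported cost-per-vote implies roughly 22,000 “no” votes were cast against the moratorium.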

Even amidst strong public pressure, the chemical companies that grow the GMO corn have continued to refuse to disclose the chemicals they are using, as well as the specific amounts of each chemical being used. The industry and its political cronies have continually insisted that pesticides are safe.

“We have not seen any credible source of statistical health information to support the claims,” said Bennette Misalucha, executive director of Hawaii Crop Improvement Association in a written statement distributed by a publicist.

Nelson pointed out that the American Academy of Pediatrics’ report, Pesticide Exposure in Children, found “an association between pesticides and adverse birth outcomes, including physical birth defects.” She noted that local schools have twice been evacuated, with children sent to the hospital, due to pesticide drift. “It’s hard to treat a child when you don’t know which chemical he’s been exposed to.”

Sidney Johnson, a pediatric surgeon at the Kapiolani Medical Center for Women and Children who oversees all children born in Hawaii with major birth defects, says the number of babies born with their abdominal organs outside the body – a rare condition known as gastroschisis – has grown from three a year in the 1980s to about a dozen now, according to The Guardian.

Johnson and a team of medical students have been combing hospital records to determine whether any of the parents of infants with gastroschisis lived near fields that were being sprayed during conception and early pregnancy.

“We have cleanest water and air in the world,” Johnson said. “You kind of wonder why this wasn’t done before,” he says. “Data from other states show there might be a link, and Hawaii might be the best place to prove it.”

It was recently revealed that these chemical companies, unlike farmers, are allowed to operate under a decades-old Environmental Protection Agency permit. The permit was grandfathered in from the days of sugar plantations, when the amounts and toxicities of chemicals used were significantly lower, and it allowed toxic chemicals to be discharged into water. Tellingly, the state of Hawaii has asked for a federal exemption to allow these companies to continue not complying with modern standards.

The ominous reality of collusion between these mega-corporations and the political class in Hawaii has seemingly left the citizens of the state with virtually no ability to safeguard their children’s health. We tread dangerously close to corporate fascism when profits are put above the health of the people.

August 26, 2015 Posted in Civil Liberties, Corruption, Science and Pseudo-Science

After Decades of Denial, National Cancer Institute Finally Admits that “Cannabis Kills Cancer”

By Jay Syrmopoulos | The Free Thought Project | August 21, 2015

After decades of claiming that cannabis has no medicinal value, the U.S. government is finally admitting that cannabis can kill cancer cells.

Although the agency still claims “there is not enough evidence to recommend that patients inhale or ingest cannabis as a treatment for cancer-related symptoms or side effects of cancer therapy,” the admission that “cannabis has been shown to kill cancer cells in the laboratory” highlights a rapidly changing perspective on medicinal cannabis treatments.

The most recent update to the National Cancer Institute’s (NCI) website includes a list of studies indicating anti-tumor effects of cannabis treatment.

Preclinical studies of cannabinoids have investigated the following activities:

Antitumor activity
• Studies in mice and rats have shown that cannabinoids may inhibit tumor growth by causing cell death, blocking cell growth, and blocking the development of blood vessels needed by tumors to grow. Laboratory and animal studies have shown that cannabinoids may be able to kill cancer cells while protecting normal cells.
• A study in mice showed that cannabinoids may protect against inflammation of the colon and may have potential in reducing the risk of colon cancer, and possibly in its treatment.
• A laboratory study of delta-9-THC in hepatocellular carcinoma (liver cancer) cells showed that it damaged or killed the cancer cells. The same study of delta-9-THC in mouse models of liver cancer showed that it had antitumor effects. Delta-9-THC has been shown to cause these effects by acting on molecules that may also be found in non-small cell lung cancer cells and breast cancer cells.
• A laboratory study of cannabidiol (CBD) in estrogen receptor positive and estrogen receptor negative breast cancer cells showed that it caused cancer cell death while having little effect on normal breast cells. Studies in mouse models of metastatic breast cancer showed that cannabinoids may lessen the growth, number, and spread of tumors.
• A laboratory study of cannabidiol (CBD) in human glioma cells showed that when given along with chemotherapy, CBD may make chemotherapy more effective and increase cancer cell death without harming normal cells. Studies in mouse models of cancer showed that CBD together with delta-9-THC may make chemotherapy such as temozolomide more effective.

The NCI, part of the U.S. Department of Health and Human Services, advises that cannabinoids ‘may be useful in treating the side effects of cancer and cancer treatment’ when smoked, eaten in baked products, drunk in herbal teas, or even sprayed under the tongue.

The site goes on to list other beneficial uses, which include: anti-inflammatory activity, blocking cell growth, preventing the growth of blood vessels that supply tumors, antiviral activity and relieving muscle spasms caused by multiple sclerosis.

Several scientific studies have indicated these beneficial properties in the past, and this past April the US government’s National Institute on Drug Abuse (NIDA) revised its publications to suggest cannabis could shrink brain tumors by killing off cancer cells, stating that “marijuana can kill certain cancer cells and reduce the size of others.”

“Evidence from one animal study suggests that extracts from whole-plant marijuana can shrink one of the most serious types of brain tumors,” the NIDA said. “Research in mice showed that these extracts, when used with radiation, increased the cancer-killing effects of the radiation.”

Research on marijuana’s potential as a medicine has been stifled for decades by federal restrictions, even though nearly half of the states and the District of Columbia have legalized medical marijuana in some form.

Although cannabis has been increasingly legalized by states, the federal government still classifies marijuana as a Schedule 1 drug — along with heroin and ecstasy — defining it as having no medical benefits and a potential for abuse.

Absurdly, the vast majority of the $1.4 billion spent on marijuana research by the National Institutes of Health goes to the study of abuse and addiction, with only $297 million spent researching potential medical benefits.
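The spending split is easy to check. A quick sketch using only the two figures cited above:

```python
# NIH marijuana research spending figures as cited above.
total_budget = 1_400_000_000     # total spent on marijuana research
medical_benefits = 297_000_000   # portion spent on potential medical benefits

share_medical = medical_benefits / total_budget
print(f"Medical-benefits share: {share_medical:.1%}")     # about 21%
print(f"Abuse/addiction share: {1 - share_medical:.1%}")  # about 79%
```

Roughly four out of every five research dollars, by these figures, go to studying abuse and addiction rather than potential benefits.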

Judging by the spending levels, it seems the feds have a vested interest in keeping public opinion of cannabis negative. Perhaps “Big Pharma” is utilizing their financial influence over politicians in an effort to maintain a stranglehold on the medical treatment market.

August 22, 2015 Posted in Corruption, Economics, Science and Pseudo-Science

Unspoken Death Toll of Fukushima: Nuclear Disaster Killing Japanese Slowly

Sputnik – August 20, 2015

According to Dr. Ian Fairlie, a London-based independent consultant on radioactivity in the environment, the health toll from the Fukushima nuclear catastrophe is horrific: about 12,000 workers have been exposed to high levels of radiation (some up to 250 mSv); between 2011 and 2015, about 2,000 people died from the effects of evacuations, ill-health and suicide related to the disaster; and an estimated 5,000 will most likely face lethal cancer in the future. And that is just the tip of the iceberg.

To make matters worse, the nuclear disaster and subsequent radiation exposure lie at the root of longer-term health effects, such as cancers, strokes, cardiovascular (CVS) diseases, hereditary effects and many more.

Embarrassingly, “[t]he Japanese Government, its advisors, and most radiation scientists in Japan (with some honorable exceptions) minimize the risks of radiation. The official widely-observed policy is that small amounts of radiation are harmless: scientifically speaking this is untenable,” Dr. Fairlie pointed out.

The Japanese government has even gone so far as to raise the public limit for radiation in Japan from 1 mSv to 20 mSv per year, while its scientists are working to convince the International Commission on Radiological Protection (ICRP) to accept this enormous increase.

“This is not only unscientific, it is also unconscionable,” Dr. Fairlie stressed, adding that “there is never a safe dose, except zero dose.”

However, while the Japanese government is turning a blind eye to radiogenic late effects, the evidence “is solid”: the RERF Foundation, based in Hiroshima and Nagasaki, is still observing the Japanese atomic bomb survivors and registering nuclear radiation’s long-term effects.

“From the UNSCEAR estimate of 48,000 person Sv [the collective dose to the Japanese population from Fukushima], it can be reliably estimated (using a fatal cancer risk factor of 10% per Sv) that about 5,000 fatal cancers will occur in Japan in the future from Fukushima’s fallout,” he noted.
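Fairlie’s figure is straightforward collective-dose arithmetic under the linear no-threshold assumption, using only the two numbers in the quote:

```python
# Collective dose and risk factor as quoted, under the linear no-threshold assumption.
collective_dose = 48_000   # person-Sv, UNSCEAR estimate for the Japanese population
risk_per_sv = 0.10         # fatal cancer risk factor of 10% per Sv

projected_fatal_cancers = collective_dose * risk_per_sv
print(f"Projected fatal cancers: {projected_fatal_cancers:,.0f}")  # 4,800, i.e. about 5,000
```

The product is 4,800, which Fairlie rounds to “about 5,000 fatal cancers.”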

Dr. Fairlie added that in addition to radiation-related problems, former inhabitants of Fukushima Prefecture suffer from post-traumatic stress disorder (PTSD), depression and anxiety disorders, which appear to be driving increased suicide rates.

The expert also pointed to the 15 percent drop in the number of live births in the prefecture in 2011, as well as higher rates of early spontaneous abortions and a 20 percent rise in the infant mortality rate in 2012.

“It is impossible not to be moved by the scale of Fukushima’s toll in terms of deaths, suicides, mental ill-health and human suffering,” the expert said.

August 21, 2015 Posted in Deception, Nuclear Power, Science and Pseudo-Science

US Jails People for Cannabis While Govt Promotes It as Cancer Treatment

Sputnik – August 21, 2015

Cannabis is, in fact, extremely effective in fighting cancer, the US government admitted last week. The drug, illegal throughout most of the United States, is now recommended by the government’s official cancer advice website.

Criminalized by the US federal government since 1937, cannabis is being advertised by the US Department of Health as “useful in treating the side effects of cancer and cancer treatment” on the agency’s official cancer advice website.

The National Cancer Institute claims cannabinoids, the active chemicals in cannabis, can be smoked, inhaled, eaten in baked products, drunk in herbal teas, or even sprayed under the tongue as treatment.

The drug can do even more than treat side effects. Cannabis can also act as an anti-inflammatory agent, prevent the growth of cancer cells, block the growth of blood vessels that supply tumours, and help relieve muscle spasms caused by multiple sclerosis.

The results were based partially on lab tests which showed the decline of cancer cells in mice after exposure to cannabis.

Some activists in the mass media, as well as Hollywood stars, have long touted the medical benefits of the drug.

In response to the multiple scientific studies which have proven marijuana’s efficacy, the US Food and Drug Administration recently approved two cancer treatment drugs which contain cannabinoids.

Several states, including California, New York, and Maine, have already legalized marijuana for medical purposes. Four states, as well as the District of Columbia, have legalized the drug for recreational use, although it remains prohibited by federal law.

Despite these studies, as well as a general push for decriminalization across the country, the US penal system imprisons a shocking number of individuals for nonviolent crimes related to marijuana.

In 2013 alone, 609,423 individuals were arrested for possession of a substance which is now recommended by the US Department of Health.

Background:

US Study Concludes Marijuana Can Kill Cancer Cells

August 20, 2015 Posted in Civil Liberties, Science and Pseudo-Science, Timeless or most popular

Monsanto Wants to Know Why People Doubt Science

By Colin Todhunter | CounterPunch | February 27, 2015

On Twitter recently, someone asked the question “Why do people doubt science?” Accompanying the tweet was a link to an article in National Geographic implying that people who are suspicious of vaccines, genetically modified organisms (GMOs), climate change, fluoridated water and various other phenomena are confused, adhere to conspiracy theories, are motivated by ideology or are misinformed by the ‘University of Google.’ The remedy, according to the article, is for us all to rely on scientific evidence pertaining to these issues, adopt a ‘scientific method’ of thought and analysis, and put irrational thought processes to one side.

Who tweeted the question and posted the link? None other than Robert T Fraley, Monsanto’s Vice President and Chief Technology Officer.

Before addressing that question, it is worth mentioning that science is not the giver of ‘absolute truth’. That in itself should allow us to develop a healthy scepticism towards the discipline. The ‘truth’ is a tricky thing to pin down. Scientific knowledge is built on shaky stilts that rest on shifting foundations. Science historian Thomas Kuhn wrote about the revolutionary paradigm shifts in scientific thought, whereby established theoretical perspectives can play the role of secular theology and serve as a barrier to the advancement of knowledge, until the weight of evidence and pressure from proponents of a new theoretical paradigm become overwhelming. Then, at least according to Kuhn, the old faith gives way and a new ‘truth’ takes its place.

Philosopher Paul Feyerabend argued that science is not an ‘exact science’. The manufacture of scientific knowledge involves a process driven by various sociological, methodological and epistemological conflicts and compromises, both inside the laboratory and beyond. Writers in the field of the sociology of science have written much on this.

But the answer to the question “Why do people doubt science” is not because they have read Kuhn, Feyerabend or some sociology journal. Neither is it because a bunch of ‘irrational’ activists have scared them witless about GM crops or some other issue. It is because they can see how science is used, corrupted and manipulated by powerful corporations to serve their own ends. It is because they regard these large corporations as largely unaccountable and their activities and products not properly regulated by governments.

That’s why so many doubt science – or more precisely the science corporations fund and promote to support their interests.

US sociologist Robert Merton highlighted the underlying norms of science as involving research that is not warped by vested interests, adheres to the common ownership of scientific discoveries (intellectual property) to promote collective collaboration and subjects findings to organised, rigorous critical scrutiny within the scientific community. The concept of originality was added by later writers in order to fully encapsulate the ethos of science: scientific claims must contribute something new to existing discourse. Based on this brief analysis, secrecy, dogma and vested interest have no place.

This is of course a highly idealised version of what science is or should be because in reality careers, reputations, commercial interests and funding issues all serve to undermine these norms.

But if we really want to look at the role of secrecy, dogma and vested interest in full flow, we could take a look at the sector to which Robert T Fraley belongs.

Last year, US Agriculture Secretary Tom Vilsack called for “sound science” to underpin food trade between the US and the EU. However, he seems very selective in applying “sound science” to certain issues. Consumer rights groups in the US are pushing for the labelling of GMO foods, but Vilsack said that putting a label on a foodstuff containing a GM product “risks sending a wrong impression that this was a safety issue.”

Despite what Vilsack would have us believe, many scientific studies show that GMOs are indeed a big safety issue and what’s more are also having grave environmental, social and economic consequences (for example, see this and this).

By refusing to respond to widespread consumer demands to know what they are eating, lest it risk “sending a wrong impression,” Vilsack is trying to prevent proper debate about issues that his corporate backers would find unpalatable: profits would collapse if consumers had the choice to reject the GMOs being fed to them. And ‘corporate backers’ must not be taken as a throwaway term here. Big agritech concerns have captured or at the very least seriously compromised key policy and regulatory bodies in the US (see this), Europe (see this), India (see this) and in fact on a global level (see here regarding control of the WTO).

If Robert T Fraley wants to understand why people doubt science, he should consider what Andy Stirling, Professor of Science and Technology Policy at Sussex University, says:

“The main reason some multinationals prefer GM technologies over the many alternatives is that GM offers more lucrative ways to control intellectual property and global supply chains. To sideline open discussion of these issues, related interests are now trying to deny the many uncertainties and suppress scientific diversity. This undermines democratic debate – and science itself.” (see here)

Coming from the GMO biotech industry, or its political mouthpieces, the term “sound science” rings extremely hollow. The industry carries out inadequate, short-term studies and conceals the data produced by its research under the guise of ‘commercial confidentiality’ (see this), while independent research highlights the very serious dangers of its products (see this and this). It has in the past also engaged in fakery in India (see this), bribery in Indonesia (see this) and smears and intimidation against those who challenge its interests (see this), as well as the distortion and censorship of science (see this and this).

With its aim to modify organisms to create patents that will secure ever greater control over seeds, markets and the food supply, the widely held suspicion is that the GMO agritech sector is only concerned with a certain type of science: that which supports these aims. Because if science is held in such high regard by these corporations, why isn’t Monsanto proud of its products? Why not label foods in the US that contain GMOs and throw open [Monsanto’s] science to public scrutiny, instead of veiling it with secrecy, restricting independent research on its products or resorting to unsavoury tactics?

If science is held in such high regard by the GMO agritech sector, why did US policy makers release GM food onto the commercial market without proper long-term tests? The argument used to justify this is that GM food is ‘substantially equivalent’ to ordinary food. But this is not based on scientific reason: studies show that inserting foreign genes into organisms makes them substantially non-equivalent (see this). Substantial equivalence is a trade strategy on behalf of the GM sector that neatly removes its GMOs from the type of scrutiny usually applied to potentially toxic or harmful substances. The attempt to replace process-based regulation of GMOs in Europe with product-based regulation would serve a similar purpose (see this).

The reason why no labelling or testing has taken place in the US is not that ‘sound science’ has been applied; it comes down to the power and political influence of the GMO biotech sector.

The sector cannot win the scientific debate (although its PR likes to tell the world it has), so it resorts to co-opting key public bodies or individuals to propagate various falsehoods and deceptions (see this). Part of the deception is based on emotional blackmail: the world needs GMOs to feed the hungry, both now and in the future. This myth has been blown apart (see this, this and this). In fact, in the second of those three links, the organisation GRAIN highlights that the GM crops planted thus far have actually contributed to food insecurity.

This is a harsh truth that the industry does not like to face.

People’s faith in science is being shaken on many levels, not least because big corporations have secured access to policy makers and governments and are increasingly funding research and setting research agendas.

“As Andrew Neighbour, former administrator at Washington University in St. Louis, who managed the university’s multiyear and multimillion dollar relationship with Monsanto, admits, “There’s no question that industry money comes with strings. It limits what you can do, when you can do it, who it has to be approved by”…  This raises the question: if Agribusiness giant Monsanto [in India] is funding the research, will Indian agricultural researchers pursue such lines of scientific inquiry as “How will this new rice or wheat variety impact the Indian farmer, or health of Indian public?” The reality is, Monsanto is funding the research not for the benefit of either Indian farmer or public, but for its profit. It is paying researchers to ask questions that it is most interested in having answered.” – ‘Monsanto, a Contemporary East India Company, and Corporate Knowledge in India‘.

Ultimately, it is not science itself that people have doubts about, but science that is pressed into the service of immensely powerful private corporations, and regulatory bodies that are effectively co-opted and adopt a ‘don’t look, don’t find’ approach to studies and products (see this, this and this).

Or, in the case of releasing GMOs onto the commercial market in the US, bypassing proper scientific procedures, engaging in doublespeak about ‘substantial equivalence’, and then hypocritically calling for ‘sound science’ to inform debates.

The same corporate interests are moreover undermining the peer-review process itself and the ability of certain scientists to get published in journals – the benchmark of scientific credibility. In effect, powerful interests increasingly hold sway over funding, career progression as a scientist, journals and peer review (see this and this, which question the reliability of peer review in the area of GMOs).

Going back to the start of the piece, the question that should have been tweeted is: “Why do people doubt corporate-controlled or influenced science?” After that question, it would have been more revealing to have posted a link to this article here about the unscrupulous history of a certain company from St Louis. That history provides very good reason why so many doubt and challenge powerful corporations and the type of science they fund and promote (or attempt to suppress) and the type of world they seek to create (see this).

“Corporations as the dominant institution shaped by capitalist patriarchy thrive on eco-apartheid. They thrive on the Cartesian legacy of dualism which puts nature against humans. It defines nature as female and passively subjugated. Corporatocentrism is thus also androcentric – a patriarchal construction. The false universalism of man as conqueror and owner of the Earth has led to the technological hubris of geo-engineering, genetic engineering, and nuclear energy. It has led to the ethical outrage of owning life forms through patents, water through privatization, the air through carbon trading. It is leading to appropriation of the biodiversity that serves the poor.” –Vandana Shiva

August 16, 2015 Posted in Deception, Science and Pseudo-Science, Timeless or most popular

Mark Steyn’s new book on Michael Mann

A Disgrace to the Profession: The World’s Scientists – in their own words – on Michael E Mann, his Hockey Stick and their Damage to Science – Volume One

Review by Judith Curry | Climate Etc. | August 13, 2015

The backstory on Mann vs Steyn is described in previous posts [link] and links therein. The short story is this. Mann is suing Steyn (and others) for defamation regarding a statement about ‘the fraudulent hockeystick’. Steyn is countersuing. The lawsuits have been tied up in DC courts for years. The new book compiles what is presumably evidence obtained by Steyn’s lawyers regarding whether ‘fraudulent’ is defamatory here. And this is only Volume 1; apparently there is a Volume 2 in the works.

Mark Steyn has three blog posts (so far) on the book.

Two bloggers have already written about the book.

Anthony Watts reminds me of this statement I made in a previous post: “Mark Steyn is a formidable opponent. I suspect that this is not going to turn out well for you.” This book certainly supports my statement.

The book is organized around quotes from more than 100 Ph.D. scientists who have made remarks about Mann, either publicly in interviews or on blogs, or in private emails revealed through FOIA or unauthorized releases (e.g. Climategate, SkS). This is not just a compilation of quotes from the ‘usual suspects’; I was unfamiliar with many of these individuals, and impressed by their credentials. Each chapter begins with an overview and context for its particular theme; each subsection is then devoted to a particular scientist, beginning with a brief biosketch and including backstory and context.

There is much wit, and there are plenty of zingers, in Steyn’s narrative (I’m not sure whether anyone helped him with the technical aspects; they seem pretty solid). However, for my post on this book, I decided to focus on snippets from climate scientists who generally support the consensus (explicitly, or lacking any evidence of the opposite), including Mann’s collaborators. It was not simple to cull this down to ~1200 words (so as not to steal thunder from potential buyers of the book), but I think the quotes below give a pretty good representation of the climate scientists quoted. Note that I focus particularly on the Hockey Stick (and its subsequent incarnations), rather than on the broader issues about Mann raised in some of the quotes.

From climate scientists, all of whom support the general consensus on climate change:

Wallace Broecker: “The goddam guy is a slick talker and super-confident. He won’t listen to anyone else,” one of climate science’s most senior figures, Wally Broecker of the Lamont-Doherty Earth Observatory at Columbia University in New York, told me. “I don’t trust people like that. A lot of the data sets he uses are shitty, you know. They are just not up to what he is trying to do…. If anyone deserves to get hit it is goddam Mann.”

Eduardo Zorita: Why I Think That Michael Mann, Phil Jones and Stefan Rahmstorf Should be Barred from the IPCC Process. Short answer: because the scientific assessments in which they may take part are not credible anymore. These words do not mean that I think anthropogenic climate change is a hoax. On the contrary, it is a question which we have to be very well aware of. But I am also aware that editors, reviewers and authors of alternative studies, analysis, interpretations, even based on the same data we have at our disposal, have been bullied and subtly blackmailed.

Atte Korhola: Another example is a study recently published in the prestigious journal Science. Proxies have been included selectively, they have been digested, manipulated, filtered, and combined – for example, data collected from Finland in the past by my own colleagues has even been turned upside down such that the warm periods become cold and vice versa. Normally, this would be considered as a scientific forgery, which has serious consequences.

Hans von Storch: A conclusion could be that the principle, according to which data must be made public, so that also adversaries may check the analysis, must be really enforced. Another conclusion could be that scientists like Mike Mann, Phil Jones and others should no longer participate in the peer-review process or in assessment activities like IPCC.

Bo Christiansen: The hockey-stick curve does not stand. It does not mean that we cancel the manmade greenhouse effect, but the causes have become more nuanced… Popularly, it can be said that the flat piece on the hockey stick is too flat. In addition, their method contains a large element of randomness. It is almost impossible to conclude from reconstruction studies that the present period is warmer than any period in the reconstructed period.

David Rind: Concerning the hockey stick: what Mike Mann continually fails to understand, and no amount of references will solve, is that there is practically no reliable tropical data for most of the time period, and without knowing the tropical sensitivity, we have no way of knowing how cold (or warm) the globe actually got. I’ve made the comment to Mike several times, but it doesn’t seem to get across.

Tom Wigley: I have just read the M&M stuff criticizing MBH. A lot of it seems valid to me. At the very least MBH is a very sloppy piece of work – an opinion I have held for some time. Can you give me a brief heads up? Mike is too deep into this to be helpful.

From Mann’s collaborators and coauthors:

Phil Jones: Keith [Briffa] didn’t mention in his Science piece but both of us think that you’re on very dodgy ground with this long-term decline in temperatures on the thousand-year timescale. It is better we put the caveats in ourselves than let others put them in for us.

Keith Briffa: I have just read this letter – and I think it is crap. I am sick to death of Mann stating his reconstruction represents the tropical area just because it contains a few tropical series. He is just as capable of regressing these data again any other “target” series, such as the increasing trend of self-opinionated verbiage he has produced over the last few years

Edward Cook: I will be sure not to bring this up to Mike. As you know, he thinks that CRU is out to get him in some sense. I am afraid that Mike is defending something that increasingly cannot be defended. He is investing too much personal stuff in this and not letting the science move ahead.

Raymond Bradley: I would like to disassociate myself from Mike Mann’s view. As for thinking that it is “Better that nothing appear, than something unnacceptable to us” …as though we are the gatekeepers of all that is acceptable in the world of paleoclimatology seems amazingly arrogant. Science moves forward whether we agree with individual articles or not.

Matti Saarnisto: In that article [Science], my group’s research material from Korttajärvi, near Jyväskylä, was used in such a way that the Medieval Warm Period was shown as a mirror image. The graph was flipped upside-down. In this email I received yesterday from one of the authors of the article, my good friend Professor Ray Bradley …says there was a large group of researchers who had been handling an extremely large amount of research material, and at some point it happened that this graph was turned upside-down. But then this happened yet another time in Science, and now I doubt if it can be a mistake anymore. But how it is possible that this type of material is repeatedly published in these top science journals? There is a small circle going round and around, relatively few people are reviewing each other’s papers, and that is in my opinion the worrying aspect.

Rob Wilson: I want to clarify that my 2 hour lecture was, I hope, a critical look at all of the northern hemispheric reconstructions of past temperature to date. It was not focused entirely on Michael Mann’s work. The “crock of xxxx” statement was focused entirely on recent work by Michael Mann w.r.t. hypothesized missing rings in tree-ring records. Although a rather flippant statement, I stand by it and Mann is well aware of my criticisms (privately and through the peer reviewed literature) of his recent work.

Some of the harshest criticisms come from physicists; I’ve selected this one from Jonathan Jones, whom I had the pleasure of meeting last June while in the UK:

Jonathan Jones: My whole involvement has always been driven by concerns about the corruption of science. Like many people I was dragged into this by the Hockey Stick. The Hockey Stick is an extraordinary claim which requires extraordinary evidence, so I started reading round the subject. And it soon became clear that the first extraordinary thing about the evidence for the Hockey Stick was how extraordinarily weak it was, and the second extraordinary thing was how desperate its defenders were to hide this fact. The Hockey Stick is obviously wrong. Climategate 2011 shows that even many of its most outspoken public defenders know it is obviously wrong. And yet it goes on being published and defended year after year. Do I expect you to publicly denounce the Hockey Stick as obvious drivel? Well yes, that’s what you should do. It is the job of scientists of integrity to expose pathological science. It is a litmus test of whether climate scientists are prepared to stand up against the bullying defenders of pathology in their midst.

Two of the most surprising statements (to me) are from two young scientists associated with Skeptical Science:

Neal King: My impression is that Mann and buddies have sometimes gone out on a limb when that was unnecessary and ill-advised. Mann, for all his technical ability, is sometimes his own worst enemy. Similarly, with regard to “hiding the decline” in Climategate, I am left with the impression that the real question is, Why would you believe the tree-ring proxies at earlier times when you KNOW that they didn’t work properly in the 1990s? Mann et al spent too much time defending what was incorrect, and allowed the totality of the argument to become “infected” by the fight.

Robert Way: I don’t mean to be the pessimist of the group here but Mc2 brought up some very good points about the original hockey stick. I’ve personally seen work that is unpublished that challenges every single one of his reconstructions because they all either understate or overstate low-frequency variations. Mann et al stood by after their original HS and let others treat it with the confidence that they themselves couldn’t assign to it. The original hockey stick still used the wrong methods and these methods were defended over and over despite being wrong. He fought like a dog to discredit and argue with those on the other side that his method was not flawed. And in the end he never admitted that the entire method was a mistake. They then let this HS be used in every way possible despite knowing the stats behind it weren’t rock solid.

This selection of quotes does not include the strongest ‘zingers’, which come from scientists that are somewhat further afield or have made public statements that are critical of the AGW consensus.

JC reflections

So, back to the topic of the lawsuits. In light of these quotes from Ph.D. scientists, does Mark Steyn have a strong defense against the charge of defamation for stating ‘fraudulent hockey stick’? This certainly looks to me like the basis of a strong defense. With regard to Steyn’s countersuit, if he makes a lot of money off this book, that would rather argue against large damages in his countersuit.

I have written many posts about Michael Mann – apart from my own concerns about the hockey stick (Hiding the Decline), I am greatly concerned about Mann’s bullying behavior inserting itself into the scientific process (collaboration, peer review, public communication). My concerns go beyond the general strategies of adversarial science, to what I regard as unethical behavior.

It is a sad state of affairs for climate science that this book had to be written (it was brought on by Michael Mann’s lawsuit – without the lawsuit, Steyn obviously wouldn’t have bothered). At a time when the U.S. and the world’s nations are trying to put together an agreement to tackle climate change (for better or for worse), Steyn’s book reminds everyone of Climategate, why the public doesn’t trust climate scientists, and why it isn’t buying their ‘consensus.’

How will all this play out? Hard to predict, but I hope that everyone will learn that adversarial science as practiced in its pathological form by Michael Mann doesn’t ‘pay’ in the long run.

The book can be purchased at the SteynStore, or it will be available at amazon.com (kindle edition available Sept 1).

August 14, 2015 | Book Review, Deception, Science and Pseudo-Science

Will Obama’s Clean Power Plan save consumers money?

By Dave Rutledge | Climate Etc. | August 10, 2015

On August 3, President Obama declared that “under the Clean Power Plan, by 2030, renewables will account for 28% of our capacity,” and “will save the average American family nearly $85 on their annual energy bill in 2030.”

In the accompanying EPA rule, the word renewables is not used consistently. Sometimes it includes hydroelectric power, sometimes not. Sometimes the focus is on wind and solar power, sometimes it is broader. As the readers are aware, capacity is not the same thing as generation, and for generation, prices vary widely during the day. This makes it unclear how we get from a 28% capacity to $85 in annual savings. It is common for energy analysts to use levelized costs to compare different sources, but a residential consumer is paying for 24/7 access to a working grid, not for electricity from individual sources.

Without any enabling legislation, President Obama plans to force the United States to make an enormous capital investment, on the order of a trillion dollars, in wind and solar power and the associated grid infrastructure. Politicians often talk about investments when they mean forced transfers, but this really would be an investment, and the goal of this post is to estimate the return for the consumer. The post was inspired by a post by Willis Eschenbach at Watts Up With That. I will not consider the health and climate impacts of the plan; Judy Curry started the discussion of these in her August 3 post.

If residential electricity bills actually do go down by $85 a year, as President Obama promised, then that $85 is the return on our investment. To evaluate an investment, we divide it by the annual return to get a payback time. The situation is different if electricity bills go up: the return is negative, we are never paid back, and we have lost our investment besides. One can still calculate a payback time using the same formula, but it comes out negative, which is worse than any investment with a positive payback time. Readers who are scientists or engineers may appreciate the analogy to negative-temperature systems, which are hotter than any system with a positive temperature. Among these awful investments with negative payback times, the smaller the magnitude of the payback time, the worse the investment.
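As a minimal illustration of this payback arithmetic (my own sketch; the function name and the dollar figures are hypothetical):

```python
def payback_years(investment, annual_return):
    """Years needed to recoup an investment at a fixed annual return.

    A negative annual return gives a negative payback time: the money
    is never recouped, and (like a negative temperature) a negative
    value is worse than any positive one.
    """
    return investment / annual_return

print(payback_years(1000.0, 85.0))    # positive return: repaid in ~11.8 years
print(payback_years(1000.0, -85.0))   # negative return: never repaid (~ -11.8)
```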

One complication in assessing a return on wind and solar investments is that the primary subsidies for renewables in the United States are the 30% federal investment tax credit and the 2.2¢/kWh production tax credit for wind. These subsidies are effectively paid for by the people who pay income taxes; the burden falls heavily on the top 1% of earners, who pay 46% of net US income taxes. Another problem in assessing a possible return is that the US has not gotten very far in wind and solar power: they accounted for only 4% of electricity generation in 2013.

Europe is a better place to evaluate an investment in wind and solar power. The primary subsidy in Europe is a feed-in tariff, so who ultimately pays differs from the US: the people well off enough to buy solar arrays are effectively paid by the people who are not. I will leave the question of whether this is good social policy to the Europeans, but for this post it is useful because it means that residential electricity bills reflect the wind and solar installation costs. It also helps that Europe has installed more than twice as much wind and solar capacity as the US.

Our starting point is Figure 1, which shows a plot of residential electricity prices compared with the residential component of wind and solar capacity for OECD-Europe countries. The data and the figures for this post are available as an Excel file. Willis Eschenbach and Jonathan Drake also made price plots for EU countries. Our emphasis will be on the higher-income European countries that are members of the OECD. Some countries, like Norway and Switzerland, are in OECD Europe but not the EU, while Romania is in the EU, but not the OECD. BP deems that Estonia, Iceland, Luxembourg, and Slovenia are not significant enough to include in their electricity spreadsheets, and I omitted them also.

The residential component of the wind and solar capacity is calculated from the residential share of the final consumption reported by the IEA. At 15¢/kWh, Norway is an outlier, well below the other countries. It has a very large per-person residential consumption of electricity generated by hydroelectric power. Norway also provides profitable balancing services to the continent, consuming wind and solar electricity when the price is low and providing hydroelectric power when the price is high. Roger Andrews has an excellent post on this balancing. The trend line is calculated without Norway. Incidentally, the US residential price is 12¢/kWh, even lower than Norway. The US has low-cost natural gas and coal and the US emphasizes tax credits rather than feed-in-tariffs to subsidize wind and solar power. As Willis noted, higher wind and solar capacities are associated with higher prices. For European consumers the return on their wind and solar investment is negative.


Figure 1. Residential electricity prices vs the residential component of the per-person wind and solar capacity for OECD Europe Countries. The electricity prices are taken from the IEA, the capacities from BP, and the populations from the UN. Data are for 2013, except for the Spanish price, where I filled from 2011. The IEA prices are converted at the market exchange rates.

How negative is the return? I propose that we interpret the y-intercept of the trend line, 18.8¢/kWh, as the price of electricity without any wind or solar capacity. As a check, in Germany in 2000, when the wind and solar capacity was negligible, the price was 16.3¢/kWh, expressed in 2013 dollars with BP’s deflator. The difference between the actual price and this zero-wind-and-solar price becomes a per-kWh surcharge for the wind and solar capacity.

If we multiply this surcharge by the annual residential consumption, we get an annual per-person wind and solar surcharge, shown in Figure 2. Again there is a clear trend: more capacity is associated with a greater surcharge. The slope of the trend line in the figure is $1.14/y/W. If we divide this by the average cost of the cumulative wind and solar capacity, which I will take to be $4/W, we get the return on the investment, which is negative. Expressed as a payback time, this is −3.5 years. Expressed as a return, it is −29% per year.
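The arithmetic behind these numbers can be sketched as follows (figures as quoted above; the $4/W average installed cost is the post’s assumption):

```python
# Return on Europe's residential wind and solar investment, per the post.
surcharge_slope = 1.14   # $/year of surcharge per W of per-person capacity (Figure 2)
unit_cost = 4.0          # assumed average installed cost of capacity, $/W

annual_return = -surcharge_slope / unit_cost   # fraction returned per year (negative)
payback_time = unit_cost / -surcharge_slope    # years (negative: never repaid)

print(f"return:  {annual_return:.1%} per year")   # -28.5%, rounded to -29% in the post
print(f"payback: {payback_time:.1f} years")       # -3.5
```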


Figure 2. Calculated annual per-person wind and solar surcharge vs the residential component of per-person wind and solar capacity for OECD Europe Countries. Hungary (11W/p, –$7/p/y) is omitted from the graph, but included in the trend calculation. The trend is constrained to go through the origin.

As investments, these are inconceivably bad and we would expect large opportunity costs at the national level. It is interesting that if we start on the right in our graphs and move left past Denmark and Germany, the big spenders are the PIIGS (Portugal, Italy, Ireland, Greece, and Spain) that have been in the financial doghouse in recent years.

For consumers, the high electricity prices discourage the use of electricity for increasing safety. During the great European Heat Wave of 2003, 70,000 people died, most of them indoors. This is a horrible way to die. The people who were indoors could have been saved by a $140 Frigidaire window unit, but only if they could afford to pay for the electricity.

Dave Rutledge is the Tomayasu Professor of Electrical Engineering at the California Institute of Technology.

August 11, 2015 | Deception, Economics, Malthusian Ideology, Phony Scarcity, Progressive Hypocrite, Science and Pseudo-Science

The $ Amount It Took Big Pharma To Strip California Parental Rights

The Edgy Truth | July 2, 2015

SB277 passed. And some political campaigns got richer and more powerful in the meantime. But let’s start with the spin factory. Via the Sac Bee:

“We aren’t pushing this bill behind the scenes,” said Priscilla VanderVeer, the senior director for communications for the Pharmaceutical Research and Manufacturers of America, known as PhRMA, the industry’s main trade group. The group has taken no position on SB 277, although it has long backed vaccinations as sound public health policy, she said.

This statement from VanderVeer is absolutely absurd and panders to the lowest common denominator. Who would believe this stuff? This can’t be real life.

Sen. Richard Pan, a Sacramento Democrat, himself nabbed $95,000 during the 2013–14 cycle. That’s a serious amount of cash from pharmaceutical companies who just don’t seem to care about mandatory vaccinations, no? I wonder what policies he supports that they enjoy? It couldn’t be a more transparent situation.

The overall spend from Big Pharma was $3 million to lobby the Legislature, the governor and the state pharmacists’ board. Again, a lot of cash for a group that isn’t “pushing the bill behind the scenes.”

State records show that pharmaceutical companies and trade groups donated more than $2 million to current lawmakers in 2013-2014.

The tables below, courtesy of the Sac Bee, tell the story – a total joke. I hope everyone who supported this bill understands what these numbers mean. And when Big Pharma comes calling for more mandatory drugs, like forced SSRI treatment for depressed kids, please understand where it all started.

Pharmaceutical company or group | Campaign donations to current state legislators | Direct lobbying payments
Johnson & Johnson Inc. | $86,300 | $583,926
GlaxoSmithKline | $32,250 | $561,479
Eli Lilly & Company | $193,100 | $280,863
Gilead Sciences Inc. | $77,600 | $196,732
Biocom PAC | $30,000 | $223,224
Sanofi | $48,000 | $172,500
Abbott Laboratories | $173,600 | $42,500
Astellas Pharma US Inc. | $47,900 | $161,440
AstraZeneca Pharmaceuticals LLP | $157,300 | $49,583
Merck & Co. Inc. | $91,600 | $108,204
California Pharmacists Association | $53,389 | $134,176
Pharmaceutical Research & Manufacturers Assn. | $137,950 | $45,455
Eisai Inc. | $92,000 | $88,000
Bristol-Myers Squibb Company | $32,300 | $144,101
Pfizer | $150,600 | $21,250
AbbVie | $138,425 | $25,530
Amgen | $105,600 | $45,455
Allergan USA Inc. | $120,100 | $22,757
Takeda Pharmaceuticals USA Inc. | $40,000 | $83,348
Pharmacy Professionals of California | $32,000 | $0

TOP DRUG MAKER RECIPIENTS

Lawmaker | Party/District | Amount
Sen. Richard Pan* | D-Sacramento | $95,150
Assembly Speaker Toni Atkins | D-San Diego | $90,250
Sen. Ed Hernandez* | D-Azusa | $67,750
Sen. Holly Mitchell* | D-Los Angeles | $60,107
Assemblyman Brian Maienschein* | R-San Diego | $59,879
Senate President Pro Tem Kevin de León | D-Los Angeles | $56,648
Sen. Isadore Hall | D-Compton | $52,400
Sen. Jerry Hill | D-San Mateo | $50,209
Assemblyman Henry Perea | D-Fresno | $49,550
Assemblywoman Shirley Weber | D-San Diego | $47,000
Assemblyman Mike Gatto | D-Los Angeles | $46,491
Assemblywoman Susan A. Bonilla* | D-Concord | $45,600
Sen. Andy Vidak | R-Hanford | $42,800
Assemblyman Tom Daly | D-Anaheim | $40,300
Assemblyman Kevin Mullin | D-South San Francisco | $38,400
Assemblyman Adam Gray | D-Merced | $37,000
Assemblyman Rob Bonta* | D-Alameda | $36,750
Assemblyman Anthony Rendon | D-Lakewood | $36,200
Assemblyman Jimmy Gomez* | D-Los Angeles | $33,850
Assemblyman Richard Gordon | D-Menlo Park | $33,100

*Member of the Assembly or Senate health committees

Source: Bee analysis of secretary of state campaign finance and lobbying reports

August 9, 2015 | Corruption, Deception, Science and Pseudo-Science

Obama May Finally Succeed!

By Willis Eschenbach | Watts Up With That? | August 3, 2015

For this post I’ve taken as my departure point a couple of very interesting graphs from over at Not A Lot Of People Know That. I’ll repeat them here:


Interesting, no? But I’m a numbers guy, so I wanted to actually analyze the results. Using the data from those posts and adding the US information, I graphed the relationship. Figure 1 shows the result:

Figure 1. Electricity costs as a function of per capita installed renewable capacity. Wind and solar only, excludes hydropower.

That is a most interesting result. Per capita installed renewable capacity by itself explains 84% of the variation in electricity costs. Not a big surprise given the crazy-high costs of renewables, but it is very useful for another calculation.

Today, President Obama said that he wanted 28% of America’s electricity to come from renewable energy by 2030. He has not detailed his plan, so I will assume that like California and other states with renewable targets, and like the EU graph above, hydropower is not included in counting the renewables, and thus the energy will have to come from wind and solar. (Why? In California, they admitted that hydropower was excluded because it would make it too easy to meet the renewable goals … seriously, that was their explanation.)

Currently, we get about 4% of our electricity from wind and solar. He wants to jack it to 28%, meaning we need seven times the installed capacity. Currently we have about 231 W/capita of installed wind and solar (see Figure 1). So Obama’s plan will require a little less than seven times that, 1,537 W/capita. And assuming that we can extend the relationship we see in Figure 1, the average price of electricity in the US will perforce go up to no less than 43 cents per kilowatt-hour. (This includes the hidden 1.4 cents/kWh cost due to the five cents per kilowatt-hour subsidy paid to the solar/wind producers.)

Since the current average US price of electricity is about 12 cents per kilowatt-hour … that means the true price of electricity is likely to almost quadruple in the next 15 years.
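The scaling arithmetic behind these figures can be sketched as follows (numbers as quoted in the post; the 43-cent projection is read off the Figure 1 trend line, not recomputed here):

```python
# Capacity scaling implied by the 28% target (figures as quoted in the post).
current_share = 0.04        # wind + solar share of US generation today
target_share = 0.28         # the 2030 target
print(round(target_share / current_share))   # 7 -- need seven times the capacity

current_price = 12.0        # cents/kWh, current US average
projected_price = 43.0      # cents/kWh, from the Figure 1 trend at the projected capacity
print(round(projected_price / current_price, 1))   # 3.6 -- "almost quadruple"
```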

And given that President Obama famously predicted that under his energy plan electricity prices would necessarily “skyrocket” … it looks like he finally might actually succeed at something.

Since this is being done illegally, or at least highly improperly, by means of Obama’s Imperial Presidential Fiat, there seems to be little we can do about it except let our friends and neighbors know that thanks to Obama and the Democratic Party, their electric bill is indeed about to skyrocket …

August 3, 2015 | Economics, Malthusian Ideology, Progressive Hypocrite, Science and Pseudo-Science