Report finds high rate of thyroid cancer in eastern Pa.; blames nuclear power plants
By REGINA MEDINA | Philadelphia Daily News | January 24, 2010
Residents of eastern Pennsylvania might not know it, but they’re living in the middle of a thyroid-cancer hot spot, according to a public-health advocate.
The eastern side of the state lays claim to six of the nation’s top 18 counties with the highest thyroid-cancer rates, according to figures from the Centers for Disease Control and Prevention.
Pennsylvania ranked as the No. 1 state for thyroid cancer between 2001 and 2005, with 12.8 cases per 100,000 residents. (New Jersey comes in at No. 5, with 11.8 cases per 100,000.)
Joseph Mangano, the executive director of the Radiation and Public Health Project research group, said yesterday that he believes the spike in cancer is due to the high number of nuclear plants in the area.
At a news conference at City Hall where thyroid-cancer survivors and physicians also spoke, Mangano said that within 100 miles of eastern Pennsylvania, 16 nuclear reactors are operating at seven nuclear plants, the highest concentration in the country.
The emissions from the Limerick and Three Mile Island plants don't come close to those from the 1945 bombing of Hiroshima or the 1986 Chernobyl accident, but "that doesn't necessarily mean [it's] safer," Mangano said.
“Not only have we documented an epidemic of thyroid cancer in the area, but we have raised a red flag for more and more detailed study of the relationship between the reactor emissions and thyroid cancer,” Mangano said.
Mangano, who published his findings in the International Journal of Health Services, said that the only known cause of thyroid cancer is exposure to radiation, specifically radioactive iodine, “one of the 100 man-made chemicals” produced by nuclear energy.
One University of Pennsylvania doctor who has researched thyroid cancer called the findings “provocative” and “intriguing,” but added that the author needed to delve more into the subject.
“We do know nuclear plants give off radioactive iodine [and] radioactive iodine can be associated with thyroid cancer,” said Susan J. Mandel, a professor of medicine and radiology. “Does it mean it causes it? It requires further investigation to see if it’s causing it.”
Lehigh County had the highest thyroid-cancer rate; others in eastern Pennsylvania were: Northampton (3rd), Luzerne (6th), York (7th), Bucks (14th) and Lancaster (18th). In New Jersey, Camden was ranked No. 16 and Burlington was 17th.
Climate science: models vs. observations
By Richard K. Moore | Aletho News | January 16, 2010
This document continues to evolve, based on continuing research. The latest version is always maintained at this URL:
http://rkmdocs.blogspot.com/2010/01/climate-science-observations-vs-models.html
If a man is offered a fact which goes against his instincts, he will scrutinize it closely, and unless the evidence is overwhelming, he will refuse to believe it. If, on the other hand, he is offered something which affords a reason for acting in accordance to his instincts, he will accept it even on the slightest evidence.
— Bertrand Russell, Roads to Freedom, 1918
Science and models
True science begins with observations. When patterns are recognized in these observations, that leads to theories and models, which then lead to predictions. The predictions can then be tested by further observations, which can validate or invalidate the theories and models, or be used to refine them.
This is the paradigm accepted by all scientists. But scientists being people, typically in an academic research community, within a political society, there can be many a slip between cup and lip in the practice of science. There are the problems of getting funding, of peer pressure and career considerations, of dominant political dogmas, etc.
In the case of models there is a special problem that frequently arises. Researchers tend to become attached to their models, both psychologically and professionally. When new observations contradict the model, there is a tendency for the researchers to distort their model to fit the new data, rather than abandoning their model and looking for a better one. Or they may even ignore the new observations, and simply declare that their model is right, and the observations must be in error. This problem is even worse with complex computer models, where it is difficult for reviewers to figure out how the model really works, and whether ’fudging’ might be going on.
A classic example of the ’attached to model’ problem can be found in models of the universe. The Ptolemaic model assumed that the Earth is the center of the universe, and that the universe revolves around that center. Intuitively, this model makes a lot of sense. On the Earth, it feels like we are stationary. And we see the Sun and stars moving across the sky. “Obviously” the universe revolves around the Earth.
However, in order for this model to work in the case of the planets, it was necessary to introduce the arbitrary mechanism of epicycles. When Copernicus and Galileo came along, a much cleaner model was presented, one that explained all the motions with no need for arbitrary assumptions. But no longer would the Earth be the center.
In this case it was not so much scientists that were attached to the old model, but the Church, which liked the model because it fit their interpretation of scripture. We’ve all heard the story of the Bishop who refused to look through the telescope, so he could ignore the new observations and hold on to the old model. Galileo was forced to recant. Thus can political interference hold back the progress of science, and ruin careers.
Climate models and global warming
Over the past century there has been a strong correlation between rising temperatures and rising CO2 levels in the atmosphere, caused by the ever-increasing burning of fossil fuels. And it is well known that CO2 is a greenhouse gas. Other things being equal, higher CO2 levels must cause an increase in temperature, by trapping more of the heat radiated from the Earth's surface. Many scientists, quite reasonably, began to explore the theory that continually rising CO2 emissions would lead to continually rising temperatures.
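To put rough numbers on that greenhouse argument, a commonly used approximation gives the extra radiative forcing from CO2 as about 5.35 times the natural logarithm of the concentration ratio, in watts per square metre. Here is a minimal sketch of that arithmetic in Python; the 0.8°C-per-W/m2 sensitivity used to translate forcing into warming is purely an illustrative assumption, since that number is exactly what the rest of this article disputes.

import math

def co2_forcing(c_ppm, c_ref_ppm=280.0):
    # Commonly used logarithmic approximation for CO2 radiative forcing, in W/m^2.
    return 5.35 * math.log(c_ppm / c_ref_ppm)

# Purely illustrative sensitivity (degrees C of warming per W/m^2 of forcing);
# the true value of this number is what the climate debate is about.
SENSITIVITY_C_PER_WM2 = 0.8

for ppm in (280, 380, 560):
    forcing = co2_forcing(ppm)
    print(f"{ppm} ppm: forcing {forcing:.2f} W/m^2, "
          f"implied warming {SENSITIVITY_C_PER_WM2 * forcing:.2f} C")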
Intuitively, it seems that the theory is “obviously” true. Temperatures have been rising along with CO2 levels; CO2 is a greenhouse gas; what is there to prove? And if the theory is true, and we keep increasing our emissions, then temperatures will eventually reach dangerous levels, melting the Antarctic ice sheet, raising sea levels, and all the other disasters presented by Al Gore in his famous documentary. “Obviously” we are facing a human-generated crisis – and something has got to be done!
But for many years, before Gore’s film, governments didn’t seem to be listening. Environmentalists, however, were listening. Public concern began to grow about CO2 emissions, and the climate scientists investigating the theory shared these concerns. They had a strong motivation to present the scientific case convincingly, in order to force governments to pay attention and take effective action — the future of humanity was at stake!
The climate scientists began building computer models, based on the observed correlation between temperature and CO2 levels. The models looked solid, not only for the past century, but extending back in time. Research with ice-core data revealed a general correlation between temperature and CO2 levels, extending back for a million years and more. What had been “obvious” to begin with, now looked even more obvious, confirmed by seemingly solid science.
These are the very conditions that typically cause scientists to become attached to their models. The early success of the model confirms what the scientists suspected all along: the theory must be true. A subtle shift happens in the minds of the scientists involved. What began as a theory starts to become an assumption. If new data seem to contradict the theory, the response is not to discard the theory, but rather to figure out what the model is lacking.
In the case of the Ptolemaic model, they figured out that epicycles must be lacking, and so epicycles were added. They were certain the universe revolved around the Earth, and so epicycles had to exist. Similarly, the climate scientists have run into problems with their models, and they’ve needed to add more and more machinery to their models in order to overcome those problems. They are certain of their theory, and so their machinery must be valid.
Perhaps they are right. Or perhaps they’ve strayed into epicycle territory, where the theory needs to be abandoned and a better model needs to be identified. This is the conclusion that quite a few scientists have reached. Experts do differ on this question, despite the fact that Gore says emphatically that the “science is settled”. Which group of scientists is right? This is the issue we will be exploring in this article.
Question 1
Compared to the historical record, are we facing a threat of dangerous global warming?
Let’s look at the historical temperature record, beginning with the long-term view. For long-term temperatures, ice-cores provide the most reliable data. Let’s look first at the very-long-term record, using ice cores from Vostok, in the Antarctic.
Data source:
ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/vostok/deutnat.txt
Vostok Temperatures: 450,000 BC — Present
Here we see a very regular pattern of long-term temperature cycles. Most of the time the Earth is in an ice age, and about every 125,000 years there is a brief period of warm temperatures, called an inter-glacial period. Our current inter-glacial period has lasted a bit longer than most, indicating that the next ice age is somewhat overdue. These long-term cycles are probably related to changes in the eccentricity of the Earth's orbit, which follows a cycle of about 100,000 years.
We also see other cycles of more closely spaced peaks, and these are probably related to other cycles in the Earth's orbit. There is an obliquity cycle of about 41,000 years and a precession cycle of about 20,000 years, and all of these cycles interfere with one another in complex ways. Here's a tutorial from NASA that discusses the Earth's orbital variations:
http://www-istp.gsfc.nasa.gov/stargaze/Sprecess.htm
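As a rough illustration of how such orbital cycles can interfere with one another, here is a small sketch that superimposes three idealized sinusoids with approximately the Milankovitch periods; the amplitudes are arbitrary and the curves are not real orbital data.

import numpy as np

# Illustrative only: three idealized sinusoids with roughly the Milankovitch
# periods, in arbitrary amplitude units (not real orbital or temperature data).
years = np.arange(0, 500_000, 100)                         # 0 to 500,000 years
eccentricity = 1.0 * np.sin(2 * np.pi * years / 100_000)   # ~100,000-year cycle
obliquity    = 0.6 * np.sin(2 * np.pi * years / 41_000)    # ~41,000-year cycle
precession   = 0.4 * np.sin(2 * np.pi * years / 20_000)    # ~20,000-year cycle

combined = eccentricity + obliquity + precession
print("peak-to-peak of the combined signal:", round(combined.max() - combined.min(), 2))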
Next let's zoom in on the current inter-glacial period, as seen in Vostok and Greenland, again using ice-core data. Temperatures here are relative to the value for 1900, which is shown as zero:
Vostok Temperatures: 12,000 BC — 1900
Data source:
http://www.ncdc.noaa.gov/paleo/metadata/noaa-icecore-2475.html
Greenland Temperatures: 9,500 BC — 1900
Here we see that the Southern Hemisphere emerged from the last ice age about 1,000 years earlier than did the Northern Hemisphere. As of 1900, in comparison to the whole inter-glacial period, the temperature was 3°C below the maximum in Vostok, and 3°C below the maximum in Greenland. Thus, as of 1900, temperatures were rather cool for the period in both hemispheres, and in Greenland, temperatures were close to a minimum.
During this recent inter-glacial period, temperatures in both Vostok and Greenland have oscillated through a range of about 4°C, although the patterns of oscillation are quite different in each case. In order to see just how different the patterns are, let's look at Greenland and Vostok together for the period 500 BC–1900. Vostok is shown with a faint, dotted line.
The patterns are very different indeed. In many cases we see an extreme high in Greenland while at the same time Vostok is experiencing an extreme low. And in the period 1500–1900, while Greenland temperatures were relatively stable, within a range of 0.5°C, Vostok went through a radical oscillation of 3°C, from an extreme high to an extreme low. These differences between the two hemispheres might be related to the Earth's orbit (see the NASA tutorial), or they might be related to the fact that the Southern Hemisphere is dominated by oceans, while most of the land mass is in the Northern Hemisphere. Whatever the reason, the difference is striking.
There may be some value in trying to average these different records, to obtain a 'global average', but it is important to understand that a global average is not the same as a global temperature. For example, consider temperatures 2,000 years ago. Greenland was experiencing a very warm period, 2°C above the baseline, while Vostok was experiencing a cold spell, nearly 1°C below the baseline. While the average for that year might be near the baseline, that average does not represent the real temperature in either location.
This distinction between a global average, and real temperatures, is very important to keep in mind. Consider for example the concern that warming might lead to melting of the tundra in the Arctic, leading to the runaway release of methane. If that happens, it must happen in the Arctic. So it is the temperature in the Arctic that is relevant, not any kind of global average. In Greenland, temperatures 2,000 years ago were a full 2°C higher than 1900 temperatures, and there was no runaway release of methane.
The fact that the global average 2,000 years ago was dragged down by Antarctic cooling is completely irrelevant to the issue of melting tundra. Temperatures in the Arctic must rise by more than 2°C above 1900 levels before tundra-melting might be a problem, and this fact is obscured when we look at the global-average-derived hockey stick put out by the IPCC:
This graph gives the impression that temperatures 2,000 years ago were relatively low, and that in 1900 temperatures were higher than that. This may have some kind of abstract meaning, but it has nothing to do with what's been going on in the Arctic, and it is very misleading as regards the likelihood of tundra-melting, or Arctic melting in general. The graph is a gross misrepresentation of what's been happening in the real world. It obscures the actual temperature record in both hemispheres by presenting an artificial average that has existed nowhere.
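To see in miniature how an average of two hemispheric records can be 'characteristic of nowhere', here is a small sketch using made-up anomaly values rather than the actual ice-core numbers.

# Hypothetical anomalies relative to the 1900 baseline, in degrees C.
greenland_anomaly = +2.0   # a warm period in the north
vostok_anomaly    = -1.0   # a cold spell in the south at the same time

global_average = (greenland_anomaly + vostok_anomaly) / 2
print("Greenland:", greenland_anomaly, "Vostok:", vostok_anomaly, "average:", global_average)
# The average (+0.5) sits near the baseline even though neither location
# was anywhere near the baseline at the time.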
Let’s now look at some other records from the Northern Hemisphere, to find out how typical the Greenland record is of its hemisphere. This first record is from Spain, based on the mercury content in a peat bog, as published in Science, 1999, vol. 284, for the most recent 4,000 years. Note that this graph is backwards, with present day on the left:
This next record is from the Central Alps, based on stalagmite isotopes, as published in Earth and Planetary Science Letters, 2005, vol. 235, for the most recent 2,000 years:
And finally, let’s include our Greenland record for the most recent 4,000 years:
While the three records are clearly different, they do share certain important characteristics. In each case we see a staggered rise, followed by a staggered decline — a long-term up-and-down cycle over the period. In each case we see that during the past few thousand years, temperatures have been 3°C higher than 1900 temperatures. And in each case we see a gradual descent towards the overdue next ice age. The Antarctic, on the other hand, shares none of these characteristics.
If we want to understand warming-related issues, such as tundra-melting and glacier-melting, we must consider the two hemispheres separately. If glaciers melt, they do so either because of high northern temperatures or high southern temperatures. Whether or not glaciers are likely to melt cannot be determined by global averages. In this article we will concern ourselves with the Northern Hemisphere.
In the Northern Hemisphere, based on the shared characteristics we have observed, temperatures would need to rise at least 3°C above 1900 levels before we would need to worry about things like the extinction of polar bears, the melting of the Greenland ice sheet, or runaway methane release. We know this because none of these things have happened in the past 4,000 years, and temperatures have been 3°C higher during that period.
However, such a 3°C rise seems very unlikely to happen, given that all three of our Northern Hemisphere samples show a gradual but definite decline toward the overdue next ice age. Let's now zoom in on the temperature record since 1900, and see what kind of rise has actually occurred. Let's turn to Jim Hansen's latest article on realclimate.org, "2009 temperatures by Jim Hansen." The article includes the following two graphs.
Jim Hansen is of course one of the primary proponents of the CO2-dangerous-warming theory, and there is considerable reason to believe these graphs show an exaggerated picture as regards warming. Here is one article relevant to that point, and it is typical of other reports I've seen:
Son of Climategate! Scientist says feds manipulated data
Nonetheless, let’s accept these graphs as a valid representation of recent temperature changes, so as to be as fair as possible to the warming alarmists. We’ll be using the red line, which is from GISS, and which does not use the various extrapolations that are included in the green line. We’ll return to this topic later, but for now suffice it to say that these extrapolations make little sense from a scientific perspective.
The red line shows a temperature rise of 0.7°C from 1900 to the 1998 maximum, a leveling off beginning in 2001, and then a brief but sharp decline starting in 2005. Let's enter that data into our charting program, using values for each 5-year period that represent the center of the oscillations for that period. Here's what we get for 1900-2008:
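For readers who want to reproduce that reduction step, here is a minimal sketch that takes the centre of the oscillation for each 5-year period; the yearly values are placeholders, not the GISS series.

# Reduce a yearly anomaly series to one value per 5-year period, taken as the
# centre of the oscillation (midpoint of the minimum and maximum) in that period.
yearly = {1990: 0.35, 1991: 0.41, 1992: 0.22, 1993: 0.24, 1994: 0.31,
          1995: 0.45, 1996: 0.33, 1997: 0.46, 1998: 0.61, 1999: 0.40}

def centre_per_period(series, width=5):
    out = {}
    years = sorted(series)
    for start in range(years[0], years[-1] + 1, width):
        vals = [series[y] for y in range(start, start + width) if y in series]
        if vals:
            out[start] = (max(vals) + min(vals)) / 2
    return out

print(centre_per_period(yearly))   # {1990: 0.315, 1995: 0.47}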
Consider the downward trend at the right end of the graph. Hansen tells us this is very temporary, and that temperatures will soon start rising again. Perhaps he is right. However, as we shall see, his arguments for this prediction are seriously flawed. What we know for sure is that a downward trend has begun. How far that trend will continue is not yet known.
Next, let’s append that latest graph to the Greenland data, to get a reasonable characterization of Northern Hemisphere temperatures from 2000 BC to 2008:
This graph shows us that the temperature rise in the Northern Hemisphere from 1800 to 2005 was not at all unnatural. That rise follows precisely the long-term pattern, where such rises have been occurring approximately every 1,000 years, with no help from human-caused CO2. Based on the long-term pattern of diminishing peaks, we would expect the recent down-trend to continue, and not turn upward again as Hansen predicts. If the natural pattern continues, then the recent warming has reached its maximum, and we will soon experience about two centuries of rapid cooling, as we continue our descent to the overdue next ice age.
So everything depends on the next decade or so. If temperatures turn upwards again, then the IPCC may be right, and human-caused CO2 emissions may have taken control of climate. However, if temperatures continue downward, then climate has been following natural patterns all along in the Northern Hemisphere. In this case there has been no evidence of any noticeable influence on climate from human-caused CO2, and we are now facing an era of rapid cooling. Within two centuries we could expect temperatures in the Northern Hemisphere to be considerably lower than they were in the recent Little Ice Age.
We don’t know for sure which way temperatures will go, rapidly up or rapidly down. But I can make this statement:
As of this moment, based on the long-term temperature patterns in the Northern Hemisphere, there is no evidence that human-caused CO2 has had any effect on climate. The rise since 1800, as well as the downward dip starting in 2005, are entirely in line with the natural long-term pattern. If temperatures turn sharply upwards in the next decade or so, that will be the first-ever evidence for human-caused warming in the Northern Hemisphere.
As regards the recent downturn, here are two other records, both of which show an even more dramatic downturn than the one shown in the GISS data:
University of Alabama, Huntsville (UAH)
Dr. John Christy
UAH Monthly Means of Lower Troposphere LT5-2
2004 – 2008
Remote Sensing Systems of Santa Rosa, CA (RSS)
RSS MSU Monthly Anomaly – 70S to 82.5N (essentially Global)
2004 – 2008
Based on the data we have looked at, all from mainstream scientific sources, we are now in a position to answer our first question with a reasonable level of confidence:
Answer 1
Temperatures, at least in the Northern Hemisphere, have been continuing to follow natural, long-term patterns — despite the unusually high levels of CO2 caused by the burning of fossil fuels. There have indeed been two centuries of global warming, and that is exactly what we would expect based on the natural pattern. Temperatures now are more than 2°C cooler than they were only 2,000 years ago, which means we have not been experiencing dangerously high temperatures in the Northern Hemisphere.
The illusion of global warming arises from a failure to recognize that global averages are a very poor indicator of actual conditions in either hemisphere.
Within the next decade, or perhaps sooner, we are likely to learn which way the climate is going. If it turns again sharply upwards, as Hansen predicts, that will be counter to the long-term pattern, and evidence for human-caused warming. If it levels off, and continues downwards, that is consistent with long-term patterns, and we are likely to experience about two centuries of rapid cooling in the Northern Hemisphere, as we continue our descent toward the overdue next ice age.
Question 2
Why haven't unusually high levels of CO2 significantly affected temperatures in the Northern Hemisphere?
One place to look for answers to this question is in the long-term patterns that we see in the temperature record of the past few thousand years, such as the peaks separated by about 1,000 years in the Greenland data, and other more closely spaced patterns that are also visible. Some forces are causing those patterns, and whatever those forces are, they have nothing to do with human-caused CO2 emissions. Perhaps the forces have to do with cycles in solar radiation and solar magnetism, or perhaps they have something to do with cosmic radiation on a galactic scale, or something we haven't yet identified. Until we understand what those forces are, how they interfere with one another, and how they affect climate, we can't really build useful climate models, except on very short time scales.
We can also look for answers in the regulatory mechanisms that exist within the Earth’s own climate system. If an increment of warming happens on the surface, for example, then there is more evaporation from the oceans and more precipitation. While an increment of warming may melt glaciers, it may also cause increased snowfall in the arctic regions. Do these balance each other or not? Increased warming of the ocean’s surface may gradually heat the ocean, but the increased evaporation acts to cool the ocean. Do these balance each other?
Vegetation also acts as a regulatory system. Plants and trees gobble up CO2; that is where their substance comes from. Greater CO2 concentration leads to faster growth, taking more CO2 out of the atmosphere. Until we understand quantitatively how these various regulatory systems function and interact, we can't even build useful models on a short time scale.
In fact a lot of research is going on, investigating both lines of inquiry. However, in the current public-opinion and media climate, any research not related to CO2 causation is dismissed as the activity of contrarians, deniers, and oil-company hacks. Just as the Bishop refused to look through Galileo’s telescope, so today we have a whole society that refuses to look at many of the climate studies that are available.
I’d like to draw attention to one example of a scientist who has been looking at one aspect of the Earth’s regulatory system. Roy Spencer has been conducting research using the satellite systems that are in place for climate studies. Here are his relevant qualifications:
http://en.wikipedia.org/wiki/Roy_Spencer_(scientist)
Roy W. Spencer is a principal research scientist for the University of Alabama in Huntsville and the U.S. Science Team Leader for the Advanced Microwave Scanning Radiometer (AMSR-E) on NASA’s Aqua satellite. He has served as senior scientist for climate studies at NASA’s Marshall Space Flight Center in Huntsville, Alabama.
He describes his research in a presentation available on YouTube:
http://www.youtube.com/watch?v=xos49g1sdzo&feature=channel
In the talk he gives a lot of details, which are quite interesting, but one does need to concentrate and listen carefully to keep up with the pace and depth of the presentation. He certainly sounds like someone who knows what he’s talking about. Permit me to summarize the main points of his research:
When greenhouse gases cause surface warming, a response occurs, a ‘feedback response’, in the form of changes in cloud and precipitation patterns. The CRU-related climate models all assume the feedback response is a positive one: any increment of greenhouse warming will be amplified by knock-on effects in the weather system. This assumption then leads to the predictions of ‘runaway global warming’.
Spencer set out to see what the feedback response actually is, by observing what happens in the cloud-precipitation system when surface warming is occurring. What he found, by targeting satellite sensors appropriately, is that the feedback response is negative rather than positive. In particular, he found that the formation of storm-related cirrus clouds is inhibited when surface temperatures are high. Cirrus clouds themselves have a powerful greenhouse effect, and this reduction in cirrus cloud formation compensates for the increase in the CO2 greenhouse effect.
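For readers who want the arithmetic behind 'positive' versus 'negative' feedback, here is a minimal sketch of the textbook feedback relation, in which the total warming equals the no-feedback warming divided by (1 - f); the numbers are illustrative and are not Spencer's measured values.

def warming_with_feedback(no_feedback_warming, f):
    # Textbook feedback relation: total warming = initial warming / (1 - f).
    # f > 0 amplifies the initial warming, f < 0 damps it (f must be < 1).
    return no_feedback_warming / (1.0 - f)

baseline = 1.1   # illustrative no-feedback warming for doubled CO2, in degrees C

print("positive feedback (f = +0.6):", round(warming_with_feedback(baseline, 0.6), 2))
print("no feedback       (f =  0.0):", round(warming_with_feedback(baseline, 0.0), 2))
print("negative feedback (f = -0.5):", round(warming_with_feedback(baseline, -0.5), 2))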
This is the kind of research we need to look at if we want to build useful climate models. Certainly Spencer’s results need to be confirmed by other researchers before we accept them as fact, but to simply dismiss his work out of hand is very bad for the progress of climate science. Consider what the popular website SourceWatch says about Spencer.
We don't find there any reference to rebuttals to his research, but we are told that Spencer writes columns for a free-market website funded by Exxon. They also mention that he spoke at a conference organized by the Heartland Institute, which promotes lots of reactionary, free-market principles. They are trying to discredit Spencer's work on irrelevant grounds, what logicians call an ad hominem argument. Sort of like, "If he beats his wife, his science must be faulty."
And it’s true about ‘beating his wife’ — Spencer does seem to have a pro-industry philosophy that shows little concern for sustainability. That might even be part of his motivation for undertaking his recent research, hoping to give ammunition to pro-industry lobbyists. But that doesn’t prove his research is flawed or that his conclusions are invalid. His work should be challenged scientifically, by carrying out independent studies of the feedback process. If the challenges are restricted to irrelevant attacks, that becomes almost an admission that his results, which are threatening to the climate establishment, cannot be refuted. He does not hide his data, or his code, or his sentiments. The same cannot be said for the warming-alarmist camp.
Question 3
What are we to make of Jim Hansen’s prediction that rapid warming will soon resume?
Once again, I refer you to Dr. Hansen’s recent article, 2009 temperatures by Jim Hansen.
Jim begins with the following paragraph:
The past year, 2009, tied as the second warmest year in the 130 years of global instrumental temperature records, in the surface temperature analysis of the NASA Goddard Institute for Space Studies (GISS). The Southern Hemisphere set a record as the warmest year for that half of the world. Global mean temperature, as shown in Figure 1a, was 0.57°C (1.0°F) warmer than climatology (the 1951-1980 base period). Southern Hemisphere mean temperature, as shown in Figure 1b, was 0.49°C (0.88°F) warmer than in the period of climatology.
The Southern Hemisphere may be experiencing warming, but it has 2°C to go before that might become a problem there, and it has nothing to do with the Northern Hemisphere, where temperatures have been declining recently, not setting records for warming. This mathematical abstraction, the global average, is characteristic of nowhere. It creates the illusion of a warming crisis, when in fact no evidence for such a crisis exists. In the context of IPCC warnings about glaciers melting, runaway warming, etc., this global-average argument serves as deceptive and effective propaganda, but not as science.
Jim continues with this paragraph, emphasis added:
The global record warm year, in the period of near-global instrumental measurements (since the late 1800s), was 2005. Sometimes it is asserted that 1998 was the warmest year. The origin of this confusion is discussed below. There is a high degree of interannual (year-to-year) and decadal variability in both global and hemispheric temperatures. Underlying this variability, however, is a long-term warming trend that has become strong and persistent over the past three decades. The long-term trends are more apparent when temperature is averaged over several years. The 60-month (5-year) and 132-month (11-year) running mean temperatures are shown in Figure 2 for the globe and the hemispheres. The 5-year mean is sufficient to reduce the effect of the El Niño – La Niña cycles of tropical climate. The 11-year mean minimizes the effect of solar variability – the brightness of the sun varies by a measurable amount over the sunspot cycle, which is typically of 10-12 year duration.
As I've emphasized in bold, Jim is assuming that there is a strong and persistent warming trend, which he of course attributes to human-caused CO2 emissions. And then that assumption becomes the justification for the 5- and 11-year running averages. Those running averages then give us phantom 'temperatures' that don't match actual observations. In particular, if a downward decline is beginning, the running averages will tend to 'hide the decline'.
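Here is a small sketch of why a long running mean lags behind a recent downturn; the anomaly series is invented to make the point and is not real data.

# An invented anomaly series that rises for years and then declines at the end.
anomalies = [0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.55, 0.45, 0.35]

def running_mean(values, window):
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

print("last raw values:  ", anomalies[-3:])                      # clearly declining
print("last 5-year means:", [round(m, 2) for m in running_mean(anomalies, 5)[-3:]])
# The smoothed values are still at or near their maximum even though the raw
# series has been falling for three years: the running mean lags the turn.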
It seems we are looking at a classic case of over-attachment to model. What began as a theory has now become an assumption, and actual observations are being dismissed as “confusion” because they don’t agree with the model. The climate models have definitely strayed into the land of imaginary epicycles. The assumption of CO2 causation, plus the preoccupation with an abstract global average, creates a warming illusion that has no connection with reality in either hemisphere, as we see in these two graphs from Jim’s article:
As with the Ptolemaic model, there is a much simpler explanation for our recent era of warming, at least in the Northern Hemisphere: long-term patterns are continuing, for whatever reasons, and human-caused CO2 has so far had no noticeable effect. This simpler explanation is based on actual observations, and requires no abstract mathematical epicycles or averages, but it removes CO2 from the center of the climate debate. And just as powerful forces in Galileo's day wanted the Earth to remain the center of the universe, powerful forces today want CO2 to remain at the center of the climate debate, and global warming to be seen as a threat.
Question 4
What is the real agenda of the politically powerful factions who are promoting global-warming alarmism?
One thing we always need to keep in mind is that the people at the top of the power pyramid in our society have access to the very best scientific information. They control dozens, probably hundreds, of high-level think tanks, able to hire the best minds, and carrying out all kinds of research we don’t hear about. They have access to all the secret military and CIA research, and a great deal of influence over what research is carried out in think tanks, the military, and in universities.
Just because they might be promoting fake science for its propaganda value, that doesn’t mean they believe it themselves. They undoubtedly know that global cooling is the real problem, and the actions they are promoting are completely in line with such an understanding.
Cap-and-trade, for example, won’t reduce carbon emissions. Rather it is a mechanism that allows emissions to continue, while pretending they are declining — by means of a phony market model. You know what a phony market model looks like. It looks like Reagan and Thatcher telling us that lower taxes will lead to higher government revenues due to increased business activity. It looks like globalization, telling us that opening up free markets will “raise all boats” and make us all prosperous. It looks like Wall Street, telling us that mortgage derivatives are a good deal, and we should buy them. And it looks like Wall Street telling us the bailouts will restore the economy, and that the recession is over. In short, it’s a con. It’s a fake theory about what the consequences of a policy will be, when the real consequences are known from the beginning.
Cap-and-trade has nothing to do with climate. It is part of a scheme to micromanage the allocation of global resources, and to maximize profits from the use of those resources. Think about it. Our ‘powerful factions’ decide who gets the initial free cap-and-trade credits. They run the exchange market itself, and can manipulate the market, create derivative products, sell futures, etc. They can cause deflation or inflation of carbon credits, just as they can cause deflation or inflation of currencies. They decide which corporations get advance insider tips, so they can maximize their emissions while minimizing their offset costs. They decide who gets loans to buy offsets, and at what interest rate. They decide what fraction of petroleum will go to the global North and the global South. They have ‘their man’ in the regulation agencies that certify the validity of offset projects. And they make money every which way as they carry out this micromanagement.
In the face of global cooling, this profiteering and micromanagement of energy resources becomes particularly significant. Just when more energy is needed to heat our homes, we'll find that the price has gone way up. Oil companies are actually strong supporters of the global-warming bandwagon, which is very ironic, given that they are funding some of the useful contrary research that is going on. Perhaps the oil barons are counting on the fact that we are suspicious of them, and assume we will discount the research they are funding, as most people are in fact doing. And the recent onset of global cooling explains all the urgency to implement the carbon-management regime: they need to get it in place before everyone realizes that warming alarmism is a scam.
And then there are the carbon taxes. Just as with income taxes, you and I will pay our full share for our daily commute and for heating our homes, while the big corporate CO2 emitters will have all kinds of loopholes, and offshore havens, set up for them. Just as Federal Reserve theory hasn't left us with a prosperous Main Street, despite its promises, so theories of carbon trading and taxation won't give us a happy transition to a sustainable world.
Instead of building the energy-efficient transport systems we need, for example, they’ll sell us biofuels and electric cars, while most of society’s overall energy will continue to come from fossil fuels, and the economy continues to deteriorate. The North will continue to operate unsustainably, and the South will pay the price in the form of mass die-offs, which are already ticking along at the rate of six million children a year from malnutrition and disease.
While collapse, suffering, and die-offs of ‘marginal’ populations will be unpleasant for us, it will give our ‘powerful factions’ a blank canvas on which to construct their new world order, whatever that might be. And we’ll be desperate to go along with any scheme that looks like it might put food back on our tables and warm up our houses.
Author contact – rkm@quaylargo.com
Up in Smoke
Why Biomass Wood Energy is Not the Answer
By George Wuerthner | January 12, 2010
After the Smurfit-Stone Container Corp.'s linerboard plant in Missoula, Montana, announced that it was closing permanently, many people, including Montana Governor Schweitzer, Missoula's mayor and Senator Jon Tester, among others, have advocated turning the mill into a biomass energy plant. Northwestern Energy, a company which has expressed interest in using the plant for energy production, has already indicated that it would expect more wood from national forests to make the plant economically viable.
The Smurfit-Stone conversion to biomass is not alone. There has been a spate of proposals for new wood-burning biomass energy plants sprouting across the country like mushrooms after a rain. Currently there are plans and/or proposals for new biomass power plants in Maine, Vermont, Pennsylvania, Florida, California, Idaho, Oregon and elsewhere. In every instance, these plants are being promoted as "green" technology.
Part of the reason for this “boom” is that taxpayers are providing substantial financial incentives, including tax breaks, government grants, and loan guarantees. The rationale for these taxpayer subsidies is the presumption that biomass is “green” energy. But like other “quick fixes” there has been very little serious scrutiny of real costs and environmental impacts of biomass. Whether commercial biomass is a viable alternative to traditional fossil fuels can be questioned.
Before I get into this discussion, I want to state right up front that coal and other fossil fuels that now provide much of our electrical energy need to be reduced and effectively replaced. But biomass energy is not the way to accomplish this end goal.
BIOMASS BURNING IS POLLUTION
First and foremost, biomass burning isn't green. Burning wood produces huge amounts of pollution. Especially in valleys like Missoula's, where temperature inversions are common, pollution from a biomass burner will be the source of numerous health ailments. Because of the air pollution and human health concerns, the Oregon Chapter of the American Lung Association, the Massachusetts Medical Society and the Florida Medical Association have all established policies opposing large-scale biomass plants.
The reason for this medical concern is that even with the best pollution-control devices, biomass energy is extremely dirty. For instance, one of the biggest biomass burners now in operation, the McNeil biomass plant in Burlington, Vermont, is the number one pollution source in the state, emitting 79 classified pollutants. Biomass releases dioxins and as much particulate matter as coal burning, plus carbon monoxide, nitrogen oxide and sulfur dioxide, and contributes to ozone formation. […]
BIOMASS ENERGY IS INEFFICIENT
Wood is not nearly as concentrated a heat source as coal, gas, oil, or any other fossil fuel. Most biomass energy operations are only able to capture 20-25% of the energy in the wood they burn. That means one needs to gather and burn more wood to get the same energy value as a more concentrated fuel like coal. That is not to suggest that coal is a good alternative; rather, wood is a worse alternative, especially when you consider the energy used to gather this rather dispersed source of wood and the energy costs of trucking it to a central energy plant. If the entire carbon footprint of wood is considered, biomass creates far more CO2 with far less energy output than other energy sources.
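A rough back-of-the-envelope comparison makes the point; the heating values and plant efficiencies below are ballpark figures chosen for illustration, not measurements from any particular facility.

# Ballpark figures for illustration only, not measurements from any real plant.
WOOD_GJ_PER_TONNE = 10.0   # green wood chips, roughly half water by weight
COAL_GJ_PER_TONNE = 24.0   # typical bituminous coal
WOOD_PLANT_EFFICIENCY = 0.22
COAL_PLANT_EFFICIENCY = 0.35

target_mwh = 1000.0                 # electricity wanted, in megawatt-hours
target_gj = target_mwh * 3.6        # 1 MWh = 3.6 GJ

wood_tonnes = target_gj / (WOOD_GJ_PER_TONNE * WOOD_PLANT_EFFICIENCY)
coal_tonnes = target_gj / (COAL_GJ_PER_TONNE * COAL_PLANT_EFFICIENCY)
print(f"wood needed: {wood_tonnes:.0f} t, coal needed: {coal_tonnes:.0f} t")
# Under these assumptions, roughly 3-4 times more wood than coal has to be
# gathered, hauled and burned for the same electrical output.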
The McNeil biomass plant in Burlington, Vermont, seldom runs full time because, even with all the subsidies (and Vermonters have made huge and repeated subsidies to the plant, not counting the "hidden subsidies" like air pollution), wood energy can't compete with other energy sources, even in the Northeast, where energy costs are among the highest in the nation. Even though the plant was also retrofitted so it could burn natural gas to increase its competitiveness with other energy sources, the plant still does not operate competitively. It generally is only used to offset peak energy loads.
One could argue, of course, that other energy sources like coal are greatly subsidized as well, especially if all environmental costs were considered. But at the very least, all energy sources must be “standardized” so that consumers can make informed decisions about energy—and biomass energy appears to be no more green than other energy sources.
BIOMASS SANITIZES AND MINES OUR FORESTS
The dispersed nature of wood as a fuel source combined with its low energy value means any sizable energy plant must burn a lot of wood. For instance, the McNeil 50 megawatt biomass plant in Burlington, Vermont would require roughly 32,500 acres of forest each year if running at near full capacity and entirely on wood. Wood for the McNeil Plant is trucked and even shipped on trains from as far away as Massachusetts, New Hampshire, Quebec and Maine.
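That acreage figure can be sanity-checked with a back-of-the-envelope calculation; every input below (capacity factor, heating value, harvest yield per acre) is an assumption made for illustration, not data from the McNeil plant itself.

# Every number here is an assumption chosen for illustration.
PLANT_MW = 50.0            # electrical output
CAPACITY_FACTOR = 0.9      # near-full-time operation
EFFICIENCY = 0.25          # fraction of the wood's energy converted to electricity
WOOD_GJ_PER_TONNE = 10.0   # green wood, rough figure
TONNES_PER_ACRE = 20.0     # assumed harvest yield per acre

hours_per_year = 365 * 24
electricity_gj = PLANT_MW * CAPACITY_FACTOR * hours_per_year * 3.6   # MWh -> GJ
wood_tonnes = (electricity_gj / EFFICIENCY) / WOOD_GJ_PER_TONNE
acres = wood_tonnes / TONNES_PER_ACRE
print(f"roughly {acres:,.0f} acres per year")   # lands in the tens of thousands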
Biomass proponents often suggest that wood [gathered] as a consequence of forest thinning to improve "forest health" (logging a forest to improve the health of a forest ecosystem is an oxymoron) will provide the fuel for plant operations. For instance, one of the assumptions of Senator Tester's Montana Forest Jobs bill is that thinned forests will provide a ready source of biomass for energy production. But in many cases, there are limits on the economic viability of trucking wood any distance to a central energy plant. Again, without huge subsidies, this simply does not make economic sense. Biomass forest harvesting is even worse for forest ecosystems than clear-cutting. Biomass energy tends to utilize the entire tree, including the bole, crown, and branches. This robs a forest of nutrients and disrupts energy cycles.
Worse yet, such biomass removal ignores the important role of dead trees in sustaining forest ecosystems. Dead trees are not a "wasted" resource. They provide homes and food for thousands of species, including 45% of all bird species in the nation. Dead trees that fall to the ground are used by insects, small mammals, amphibians and reptiles for shelter and even potentially food. Dead trees that fall into streams are important physical components of aquatic ecosystems and provide critical habitat for many fish and other aquatic species. Removal of dead wood is mining the forest.
Keep in mind that logging activities are not benign. Logging typically requires some kind of access, often roads, which are a major source of sedimentation in streams and disrupt natural subsurface water flow. Logging can disturb sensitive wildlife like grizzly bears, and even elk are known to abandon locations with active logging. Logging can spread weeds. And finally, since large amounts of forest carbon are actually tied up in the soils, soil disturbance from logging is especially damaging, often releasing substantial additional amounts of carbon over and above what is released up a smokestack.
BIOMASS ENERGY USES LARGE AMOUNTS OF WATER
A large-scale biomass plant (50 MW) uses close to a million gallons of water a day for cooling. Most of that water is lost from the watershed since approximately 85% is lost as steam. Water channeled back into a river or stream typically has a pollution cost as well, including higher water temperatures that negatively impact fisheries, especially trout. Since cooling need is greatest in warm weather, removal of water from rivers occurs just when flows are lowest, and fish are most susceptible to temperature stress.
BIOMASS ENERGY SAPS FUNDS FROM OTHER TRULY GREEN ENERGY SOURCES LIKE SOLAR
Since biomass energy is eligible for state renewable portfolio standards (RPS), it has captured the bulk of funding intended to move the country away from fossil fuels. For example, in Vermont, 90% of the RPS is from “smokestack” sources—mostly biomass incineration. This pattern holds throughout many other parts of the country. Biomass energy is thus burning up funds that could and should be going into other energy programs like energy conservation, solar and insulation of buildings.
PUBLIC FORESTS WILL BE LOGGED FOR BIOMASS ENERGY
Many of the climate bills now circulating in Congress, as well as Montana Senator Jon Tester's Montana Jobs and Wilderness bill, target public forests. Some of these proposals even include roadless lands and proposed wilderness as a source for wood biomass. One federal study suggests that 368 million tons of wood could be removed from our national forests every year—of course, this study did not consider the ecological costs that physical removal of this much wood would have on forest ecosystems.
The Biomass Crop Assistance Program, or BCAP, which was quietly put into the 2008 farm bill has so far given away more than a half billion dollars in a matching payment program for businesses that cut and collect biomass from national forests and Bureau of Land Management lands. And according to a recent Washington Post story, the Obama administration has already sent $23 million to biomass energy companies, and is poised to send another half billion.
And it is not only federal forests that are in jeopardy. Many states are eyeing their own state forests for biomass energy. For instance, Maine recently unveiled a new plan, known as the Great Maine Forest Initiative, which will pay timber companies to grow trees for biomass energy.
JOB LOSSES
Ironically, one of the main justifications for biomass energy is the creation of jobs, yet the wood-biomass rush is having unintended consequences for other forest-products industries. Companies that rely upon surplus wood chips to produce fiberboard, cabinets, and furniture are scrambling to find wood fiber for their products. Considering that these industries are secondary producers of products, the biomass rush could threaten more jobs than it may create.
BOTTOM LINE
Large scale wood biomass energy is neither green, nor truly economical. It is also not ecologically sustainable and jeopardizes our forest ecosystems. It is a distraction that funnels funds and attention away from other more truly worthwhile energy options, in particular, the need for a massive energy conservation program, and changes in our lifestyles that will in the end provide truly green alternatives to coal and other fossil fuels.
George Wuerthner is a wildlife biologist and a former Montana hunting guide. His latest book is Plundering Appalachia.
DOE grants moratorium on safety inspections for nuclear weapons labs
Project on Government Oversight | January 7, 2010
If your kid accidentally blew apart a building, would you give them less supervision? This hands-off approach is exactly what the Department of Energy’s National Nuclear Security Administration (NNSA) is doing by giving the contractors who manage the nation’s eight nuclear weapons sites (Los Alamos National Laboratory, Lawrence Livermore National Laboratory, Nevada Test Site, Sandia National Laboratory, Savannah River Site, Pantex, Y-12, and the Kansas City Plant) a six-month break from many regularly scheduled oversight reviews.
On December 18, 2009 – two days after researchers at the Los Alamos National Laboratory (LANL) accidentally blew apart a building, causing an initial estimate of $3 million in damage – NNSA Administrator Tom D’Agostino signed a directive “placing a six-month moratorium on NNSA-initiated functional assessments, reviews, evaluations and inspections.” The Project on Government Oversight (POGO) saw this directive coming, as DOE and NNSA have initiated reforms, under the banner “Reforming the Nuclear Security Enterprise,” to put contractors in charge of their own oversight. POGO is not convinced that this moratorium is so temporary, and is interested to know what NNSA is going to do with all of the federal full-time employees at the site offices and headquarters it no longer needs as a result of this directive.
Getting a hiatus from regular reviews are many of the areas that have had recent serious problems—security, nuclear safety, cyber security, Material Control and Accountability (MC&A), contractor assurance systems that relate to contract oversight, property accountability, and nuclear weapons quality. For example, the weaknesses in Los Alamos’s MC&A program were so significant that it took NNSA more than a year and a half to resolve them. Additionally, it was NNSA, not the contractor, that found that Los Alamos treated its loss of more than 67 computers merely as a property management issue, and not as a potential lapse in cyber security. Over the last few years, POGO has also discovered countless security and safety incidents at Los Alamos, Livermore National Laboratory, Pantex, and Y-12 for which the contractors had not provided oversight sufficient to prevent and resolve the problems. In the past, a senior manager at Los Alamos and his sidekick went to jail when the procurement system got out of control. Now, the directive exempts Los Alamos from a procurement management review.
“It seems foolish for NNSA to abdicate its management, given the last few years of debacles at the labs,” says POGO Senior Investigator Peter Stockton. “NNSA needs to recognize its role in overseeing the labs, as that was one of the major reasons it was created.”
NNSA’s new approach to federal safety and security oversight is irresponsible: stopping it in its entirety for six months. POGO would instead like to see NNSA make a New Year’s resolution to conduct smarter, more rigorous oversight of its labs. Such a move could prevent some of the costly contractor errors that occurred in 2009, such as Los Alamos’s Plutonium Facility (PF-4) needing to stop its main operations for more than one month, once again, because of the contractor’s and NNSA’s inadequate oversight of its fire suppression system.
Meltdown, USA: Nuclear Drive Trumps Safety Risks and High Cost
By Art Levine, t r u t h o u t | News Analysis | January 6, 2010

(Photo: Matthew Strmiska; Edited: Jared Rodriguez)
The pro-nuclear Department of Energy is set this month to offer the first of nearly $20 billion in loan guarantees to a nuclear industry that hasn’t built a plant since the 1970s or raised any money to do so in years. But although the industry is seeking to cash in on global warming concerns with $100 billion in proposed loan guarantees, environmentalists, scientists and federal investigators are warning that lax oversight by the Nuclear Regulatory Commission (NRC) of the nation’s aging 104 nuclear plants has led to near-meltdowns along with other health and safety failings since Three Mile Island – including what some critics say is a flawed federal health study apparently designed to conceal cancer risks near nuclear plants.
Also See: Part II: Energy Department, NRC Back Nuclear, Ignore Industry’s Dirty Little Secrets
All that is joined by the dangers and risks posed by at least 30 tons yearly of radioactive, cancer-causing nuclear waste produced at each 1,000-megawatt plant, and by projected costs of $12 billion to $25 billion for any new plants (built largely through taxpayer support).
For instance, a meltdown of the two reactors at Indian Point, dubbed “Chernobyl on the Hudson,” could quickly kill nearly 50,000 people with radiation poisoning in a 50-mile radius and cause over 500,000 cancer deaths within six years, according to research by the Union of Concerned Scientists and other experts.
“Nothing’s changed,” said Paul Gunter, director of Reactor Oversight for the Beyond Nuclear reform group, about nuclear plants. “They’re still dirty, dangerous and expensive.”
But such concerns stand in sharp contrast to a wave of positive PR about the nuclear industry as the “clean air energy” solution to global warming, driven by ads, campaign donations and lobbying – and abetted by media outlets too often willing to accept industry and Nuclear Regulatory Commission spin at face value.
Even so, there’s little reason to have confidence in the NRC’s ability to protect the public or successfully monitor the current nuclear plants, let alone any new ones. In fact, with the bulk of its funding coming from nuclear utility industry fees, the agency appears to be literally asleep at the wheel, allowing everything from near meltdowns at a Toledo plant to ignoring internal reports of rent-a-cops at vulnerable nuclear plants sleeping on the job – until the negative publicity became too overwhelming. Ultimately, the agency gave the Exelon company a mild $65,000 fine last year. Meanwhile, researchers for the Project on Government Oversight and Union of Concerned Scientists found that the utility, the Wackenhut security firm and the NRC all knew well before the scandal broke publicly that guards were sleeping on the job at the Peach Bottom facility in Pennsylvania.
As one researcher pointed out in 2008 testimony, “Neither Wackenhut nor Exelon nor NRC acted upon the security allegations to correct the problem.”
The NRC’s coziness with industry extends to some of its own commissioners. As its own inspector general reported, before a Bush-appointed commissioner left in mid-2007, he made decisions that could benefit financially three firms he was negotiating with for jobs – including a ruling that apparently helped loosen regulatory requirements for an emergency cooling system in a Westinghouse plant.
Obama’s latest proposed appointee to the agency isn’t necessarily any less pro-industry. As Mother Jones reported about William Magwood: “Both before and after his time in government, he has worked as an enthusiastic advocate for nuclear interests in the private sector – including for at least one company likely to have business before the NRC in the near future.”
Indeed, there are few limits, no matter how absurd, to how far the NRC is willing to go to cut the industry plenty of slack, no matter how dangerous to the public. Take the case of the noncombustible foam that the agency ordered nuclear plants to buy in the late 1990s as a sealant to help prevent the spread of fire from room to room in a plant. It turned out that there was a small problem with this well-meaning plan: the brand of silicone foam bought by most of the nuclear power companies turned out to be, well, combustible. So, did the NRC then promptly order the dangerous, potentially life-threatening foam removed? No, of course not: it just revised its regulations to drop the phrase and requirement of “noncombustibility” for the foam.
Paul Gunter, then with the Nuclear Information and Resource Service, found himself in the Kafkaesque position of having to argue in regulatory comments against the logical insanity of dropping the word “noncombustible” in requirements for fire-preventing foam. In bold letters, he wrote, “NRC PROPOSED ACTION INCREASES THE RISK OF A NUCLEAR ACCIDENT RESULTING FROM THE REDUCTION OF DEFENSE-IN-DEPTH OF FIRE PROTECTION SYSTEMS….” He then attempted to reason with the NRC, noting, “the material in question is designated as a fire-barrier seal.” He and other critics did not prevail, and the NRC continues to allow nuclear companies to buy combustible foam as fire prevention sealants. “The shit burns, it’s combustible and it leaves charring,” Gunter now pointed out, asking, reasonably, how it could possibly meet fire protection standards.
The NRC also uses technicalities in other ways to advance industry interests. As Beyond Nuclear and other critics point out, there’s an important reason that so little is known about the dangers of radiation for those living near nuclear plants in America: there’s very little well-designed research that has been done on the issue.
There are some exceptions: a Massachusetts Department of Public Health study in the late 1980s found a 400 percent increase in leukemia for those living downwind from the Pilgrim plant, and a recent German government study found that children under five living less than five kilometers from a nuclear plant had twice the risk of contracting leukemia of those living more than five kilometers away.
Yet one of the most influential American studies on the topic was released in 1990 by the National Cancer Institute at the behest of the NRC – and it found, by studying the overall cancer incidence of those living in surrounding counties, that nuclear power plants posed no apparent radiation risk for those living in the area. While the nuclear industry and the NRC hailed the study, scientific and medical critics of nuclear power had strong doubts about its design and its failure to measure the impact on those living nearby.
As The New York Times reported:
But Daryl Kimball, associate director for policy of Physicians for Social Responsibility, a national organization of medical professionals concerned with nuclear war and other dangers from nuclear power, said the study ”raises more questions than it answers.”
Mr. Kimball said the study diluted the risks of exposure to radiation from nuclear plants by examining entire counties instead of areas where people were directly exposed to radiation. He cited the Fernald weapons plant near Cincinnati, where over 500,000 pounds of uranium were released into the atmosphere. This uranium may have fallen on only a small area, he said, but the study includes all the people in the surrounding counties.
Because of questions about conflict of interest and research integrity, Beyond Nuclear, among others, is asking the NRC to take a hands-off position in commissioning a new academic study. “The NRC receives about 90 percent of its funding from nuclear power reactor licensing fees,” said Cindy Folkers, radiation and health specialist with Beyond Nuclear. “As such, NRC clearly stands to gain from more reactor construction. Therefore, it should not be doing cancer studies or directly hiring people to conduct such studies. This is a flagrant conflict-of-interest and puts a scientifically rigorous, non-biased study at great risk.”
In response, a spokesperson for the NRC said the agency is using a peer-review panel of experts drawn from the National Cancer Institute and other agencies to oversee the research. “The panel will provide comments on the proposed methodology before the study is done, and it will review the study’s results, ensuring a scientifically sound project that uses the latest available data,” spokesman Scott Burnell said in an emailed response.
Yet, despite all these problems, a seemingly benign “solution for global warming” – nuclear energy – has boundless, if simplistic, appeal, even though new reactors take years to build, threaten public health and safety, and undermine genuine renewable energy by diverting billions to nuclear bailouts.
Still, the pro-nuclear pitch is especially welcomed by media outlets when it advances the seemingly fresh story line of environmentalists embracing nuclear power, as delivered by the likes of ex-Greenpeace activist Patrick Moore, whose financial ties to a Nuclear Energy Institute front group are rarely disclosed.
For instance, as the Center for Media and Democracy has noted, Moore has recently placed op-eds extolling nuclear power in such reputable publications as The Philadelphia Inquirer, while being paid by the front organization The Clean and Safe Energy Industry Coalition (CASE). This flack outfit, nominally headed by former Gov. and EPA Administrator Christine Todd Whitman, was actually established by the PR firm Hill and Knowlton at the behest of the Nuclear Energy Institute.
Moore recently outlined the selling points that the nuclear industry – and its allies in Congress – are promoting to sprinkle eco-friendly fairy dust around the grim nuclear industry that Wall Street and private investors won’t touch:
Old Foes Welcome Clean Fuel
Rising demand for emission-free energy is spurring a nuclear rebirth.
By Patrick Moore
Nuclear energy, a prime source of electricity for Pennsylvania, is finally getting the respect it deserves.
It’s not hard to see why: America’s power needs continue to grow, and meeting them without harming the environment calls for every available nonpolluting energy source.
Nuclear energy is the most dependable and cost-effective such option.
It isn’t the only solution, of course. Wind, geothermal, and other renewable energy sources will likely become a bigger part of Pennsylvania’s energy portfolio, and America’s. But nuclear energy will be expected to shoulder the biggest load.
Because nuclear energy is virtually emissions-free, America’s 104 nuclear reactors already account for nearly 75 percent of the country’s clean energy, and 93 percent of Pennsylvania’s.
Nuclear energy has maintained a strong record of safety, reliability, and efficiency for decades, and Americans increasingly appreciate its environmental and economic benefits. A recent Gallup poll showed that 59 percent of Americans support using nuclear energy to meet the country’s energy needs. Support is even higher in Pennsylvania, reaching 82 percent of residents polled last year for the Pennsylvania Energy Alliance.
Unfortunately for Moore and fellow spinmeisters, nuclear energy isn’t the clean, harmless, renewable resource portrayed here and in the industry’s propaganda. The “clean air energy” meme comes complete with lovely images of the nuclear icon surrounded by leaves and flowers or, as on the Nuclear Energy Institute’s website, a happy family cavorting in a flowery green field. In fact, as Greenpeace, among others, has pointed out:
Let’s be blunt here. This isn’t just misleading. This isn’t just misinformation. This is a lie.
Nuclear energy is not clean energy. One need only look at the environmental destruction caused by uranium mining. In his book ‘Wollaston: People Resisting Genocide’, Miles Goldstick details the damage brought to the lives of the people living around the uranium mines in Canada’s Saskatchewan province. The accumulation of radioactive isotopes in edible plants. The lead, arsenic, uranium and radium found downstream from the mines. The spills that J.A. Keily, then Vice President of Production and Engineering for Gulf Minerals Rabbit Lake, described in 1980 as “probably too numerous to count.”
These are stories found wherever uranium mining takes place. The ruined lives, the contamination, the cover-ups, and the deception. And that’s before we even consider what happens to the waste produced by generating nuclear energy.
[…] Most critically, nuclear power-generated electricity is so much more expensive for consumers and businesses to use than renewables and conservation combined. That means that a new 1,000 megawatt nuclear plant would rob electricity users of $256 million they could have used for everything from making individual purchases to hiring more workers, according to John A. “Skip” Laitner, the director of economic and social analysis for the American Council for an Energy-Efficient Economy (ACEEE). “Energy-related sectors don’t support anywhere near the jobs that other sectors of the economy do,” he pointed out. “So going the nuclear route is a net loss to the economy” – except, of course, for the extra spending on hospitals and doctors to treat those residents near nuclear plants and mining facilities who develop cancers or birth defects.
Moreover, as Dr. Helen Caldicott and other experts have noted, “Large amounts of the now-banned chlorofluorocarbon gas (CFC) are emitted during the enrichment of uranium. CFC gas is not only 10,000 to 20,000 times more efficient as an atmospheric heat trapper (‘greenhouse gas’) than CO2, but it is a classic ‘pollutant’ and a potent destroyer of the ozone layer.”
In fact, the mining of uranium and its subsequent “enrichment” – the carbon-polluting, complex ultracentrifuge or gaseous diffusion processes that concentrate the fissionable U-235 isotope – are the dark truths about nuclear power hidden among the greenery of the industry’s propaganda. As Greenpeace pointed out:
Nuclear fuel production – the mining, milling and enriching of uranium – is one of the nuclear industry’s dirty secrets. Very little attention is paid to it by industry propagandists and pro-nuclear politicians and for very good reason. It’s dirty, dangerous, incredibly damaging to the environment and endangers the health of those people unfortunate enough to live close to uranium mines.
To hear some supporters of nuclear energy talk, you’d think the whole process of generating electricity begins with the throwing of a reactor’s “on” switch. But there’s a long story before we even get that far. It’s also a long, sad story that often goes untold in the wider media.
Pick any uranium mine around the world and it will invariably be surrounded by stories of pollution, contamination and the exploitation of local communities. Niger, Namibia, Brazil, Canada, Kazakhstan.
And Australia. The country’s “Environment Minister Peter Garrett has formally approved the new Four Mile uranium mine in South Australia, saying it poses no environmental risks.”
The article goes on to chronicle ten major spills of radioactive materials in Australia in the last decade at that mine.
In fact, the true dangers of this uranium mining and enrichment are becoming tragically and increasingly apparent – and will doubtless spread if more plants are built worldwide. All this adds to the ongoing, unsolved problem of finding a safe repository in the United States for the radioactive waste still kept at plant sites, now that long-delayed plans to use Yucca Mountain in Nevada have finally fallen apart.
As Greenpeace asked, “Delays in the construction and opening of Yucca Mountain have been seen as a large obstacle to the expansion of nuclear power in the US. With no viable plan for the safe disposal of nuclear waste in the country how can the go ahead for further nuclear reactors be given?”
Moreover, whether on Native American reservations here or in Niger villages abroad, indigenous, impoverished people live near or work in the uranium mines that supply nuclear plants, and suffer the consequences in cancers, birth defects and leukemia.
It’s a cruel irony that the poisonous levels of radiation in the uranium waste found in Niger villages come from mining by the French nuclear company AREVA, whose trouble-plagued plants and behind-schedule production are somehow seen as a role model for America’s proposed next generation of nuclear plants – and are slated to be supported by US taxpayer-backed loan guarantees.
As Greenpeace asked recently, in awarding the 2009 “Blind Eye” Award:
For many of us, some of the electricity we use every day comes from nuclear power stations. Those reactors are fuelled with uranium. Do you know where that uranium comes from?
Does it come from Namibia, where uranium mining has made the traditional lifestyles of the Topnaar Nama people ‘impossible to maintain’? Does it come from Caetite in Brazil, where the drinking water has been contaminated with uranium? Does it come from Australia or Canada, where native peoples’ ways of life are threatened? Does it come from Niger, where the streets children play in are contaminated with radiation?
Uranium Weapons, Low-Level Radiation and Deformed Babies
by Paul Zimmerman | Global Research | January 1, 2010
A dramatic increase in the number of babies born with birth defects was recently reported by doctors working in Falluja, Iraq [1]. One of the proposed causes for this alarming situation is radiation exposure to the population produced by uranium weapons. The international radiation protection community dismisses this explanation as completely unreasonable because (1) the radiation dose to the population of Iraq was too low, and (2) no evidence of birth defects was reported among offspring born to survivors of the atomic bombings of Hiroshima and Nagasaki. This so-called scientific explanation is deeply disturbing, for it is out of touch with the current knowledge base. Abundant evidence exists which clearly demonstrates that birth defects are being induced by levels of radiation in the environment deemed safe by the radiation protection community. In light of this knowledge, uranium contamination cannot be summarily dismissed as a hazard to the unborn.
The destruction of the nuclear reactor at Chernobyl produced a different type of radiation exposure from that portrayed for the atomic bomb. In Japan, victims were exposed to an instantaneous flash of gamma radiation and neutrons delivered from outside their bodies. In contrast, the Chernobyl accident scattered microscopic radioactive particles from the reactor’s core throughout Europe, which were then inhaled and ingested by the populace. In this situation, those contaminated began receiving ongoing, low-dose exposure internally. According to the current theory of radiation effects embraced by the radiation protection community, there is no qualitative difference between the two types of exposure. What matters is the total amount of energy delivered to the body. Thus, the health effects experienced by the survivors of Hiroshima and Nagasaki can be considered representative of the health effects produced by any type of radiation exposure. In the case of birth defects, this assumption has been proven wrong. As a result of the external exposure in Japan, there was no increase in the incidence of birth defects among children whose parents were exposed to the bombings [2]. In contrast, radiation-induced birth defects have been documented in populations receiving low doses of internal contamination. In light of this contradiction, it is obvious that the accepted theory of radiation effects is in error and needs to be corrected. The information which follows will demonstrate the hazard to the unborn produced by radioactive material vented into the environment.
1. In the book Chernobyl: 20 Years On, a chapter is devoted to discussing the birth defects in children who, while gestating in the wombs of their mothers, were exposed to radioactivity released by the Chernobyl reactor [3]. The author provides an overview of dozens of studies which confirm that the low levels of radiation present in many areas of Europe after Chernobyl were responsible for a wide variety of birth defects. These birth defects occurred where radiation exposure was judged by the radiation protection agencies to be too low to warrant concern. Fifteen studies were cited which demonstrated an increase in the incidence of a wide variety of congenital malformations. Other studies cited confirmed increases in the rate of stillbirths, infant deaths, spontaneous abortions, and low-birthweight babies. An elevated incidence of Down’s syndrome was also documented. In addition, excesses of a variety of other health defects were detected, including mental retardation and other mental disorders, diseases of the respiratory and circulatory systems, and asthma.
In a separate chapter of the same book, Alexey Yablokov of the Russian Academy of Sciences provided a review of the extensive body of research conducted after Chernobyl. Regarding studies on birth defects, he cited an increased frequency of a number of congenital malformations which included cleft lip and/or palate (“hare lip”), doubling of the kidneys, polydactyly (extra fingers or toes), anomalies in the development of nervous and blood systems, amelia (limb reduction defects), anencephaly (defective development of the brain), spina bifida (incomplete closure of the spinal column), Down’s syndrome, abnormal openings in the esophagus and anus, and multiple malformations occurring simultaneously [4].
2. The wide range of birth defects produced by the Chernobyl accident cannot be accounted for by the data collected from the survivors of Hiroshima and Nagasaki. This is one compelling thread of evidence that something is amiss in the current field of radiation protection. But there is a further problem. The proposed threshold dose of radiation capable of interfering with the development of a fetus, again based on the research from Japan, is between fifty and one hundred times greater than what the radiation protection community insists was the typical exposure in the areas of Europe where the elevated frequency of birth defects was documented. How are we to make sense of these contradictions? Chromosome studies conducted in the contaminated regions provide the answer.
In individuals exposed to ionizing radiation, peripheral lymphocytes, those lymphocytes which circulate in the blood, have an elevated occurrence of certain types of misshapen chromosomes [3,5]. Of particular interest are dicentric chromosomes, which are produced when radiation breaks both strands of the DNA double helix in two neighboring chromosomes and the genetic material is then misrepaired. An increase in the relative frequency of these aberrantly shaped structures serves as a biological indicator of radiation exposure that is immune to lies and political propaganda. More specifically, the increased rate of these aberrations is proportional to the dose of radiation received. Thus, their frequency can be used to determine the true level of exposure in contaminated individuals. Studies of this type were conducted in Europe subsequent to the Chernobyl accident [3]. These studies demonstrated that the official dose estimates published by the radiation protection agencies were woefully in error, greatly underestimating the true level of exposure of people throughout Europe. This discrepancy casts further doubt on the scientific integrity of the organizations that are supposedly protecting the world from radioactive pollution. When the studies of chromosome aberrations are combined with the studies of birth defects, the science speaks for itself: the population in many areas of Europe received much higher doses from Chernobyl than claimed, and birth defects were induced by much smaller doses than current radiation protection science suggests.
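For readers unfamiliar with how such biological dosimetry works, the sketch below shows the general idea: the observed frequency of dicentric chromosomes is plugged back into a calibration curve to recover an estimated dose. The linear-quadratic form is the standard convention, but the coefficients here are placeholders chosen only for illustration; they are not taken from the studies cited in this article.

```python
# Illustrative sketch of dicentric-based dose reconstruction.
# The calibration coefficients below are placeholders, not values from the article;
# real laboratories fit Y = c + a*D + b*D^2 to their own irradiated-blood data.

def dicentric_yield(dose_gy, c=0.001, a=0.03, b=0.06):
    """Expected dicentrics per cell at a given dose (in gray), linear-quadratic model."""
    return c + a * dose_gy + b * dose_gy ** 2

def estimate_dose(observed_yield, c=0.001, a=0.03, b=0.06):
    """Invert the calibration curve: solve b*D^2 + a*D + (c - yield) = 0 for D >= 0."""
    disc = a ** 2 - 4 * b * (c - observed_yield)
    return (-a + disc ** 0.5) / (2 * b)

# A hypothetical population showing 0.010 dicentrics per cell, against a ~0.001
# background, would imply a reconstructed dose of roughly 0.2 Gy.
print(f"Estimated dose: {estimate_dose(0.010):.2f} Gy")
```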
3. As the clouds of fallout from Chernobyl wafted around the planet, governments broadcast reassurances to their anxious citizens that there was no cause for concern, that doses to the public would be too low to produce detrimental health effects. Politically motivated, this advice was medically ill-conceived. What became evident after the accident was that children who received exposure to Chernobyl fallout, while still in the wombs of their mothers, experienced an elevated risk of developing leukemia by the time of their first birthday [6,7]. Relevant to this discussion is the fact that a gene mutation occurring in utero is one cause of infant leukemia [8,9]. In countries where unimpeachable data was collected for levels of fallout deposited in the environment, doses to the population, and the incidence of childhood leukemia, an unmistakable, uniform trend emerged: children born during the 18-month period following the accident suffered increased rates of leukemia in their first year of life compared to children born prior to the accident or to those born afterward, once the level of possible maternal contamination had sufficiently diminished. This was confirmed in five separate studies conducted independently of one another: in Greece [9], Germany [10], Scotland [11], the United States [12], and Wales [13]. Here again is evidence of effects being induced in fetuses that the radiation protection community tells us are not possible. According to the European Committee on Radiation Risk (ECRR), these results provide unequivocal evidence that the risk model of the International Commission on Radiological Protection (ICRP) for infant leukemia is in error by a factor of between 100-fold and 2,000-fold, the latter figure allowing for a continued excess incidence of leukemia as the population of children studied continues to age [6].
4. Other types of chromosome studies have been performed which demonstrate that radiation in the environment is producing damage to DNA that is being passed on to offspring. Minisatellites are identical short segments of DNA that repeat over and over again in a long array along a chromosome. These stretches of DNA do not code for the formation of any protein. What distinguishes minisatellites is that they acquire new repeats through spontaneous mutation at a known rate, roughly 1,000 times higher than that of normal protein-coding genes. Dr. Yuri Dubrova, currently at the University of Leicester, first realized that these stretches of DNA could be used to detect radiation-induced genetic mutations by showing that their known rate of mutation had increased subsequent to exposure. Dubrova and his colleagues studied the rate of minisatellite mutations in families that had lived in the heavily polluted rural areas of the Mogilev district of Belarus after the Chernobyl meltdown [14]. They found that the frequency of mutations being passed on by males to their descendants was nearly twice as high in the exposed families as in the control-group families. Among those exposed, the mutation rate was significantly greater in families with a higher parental dose. This finding was consistent with the hypothesis that radiation had induced mutations in the reproductive germ cells of parents, which were then transmitted to their offspring. This was the first conclusive proof that radiation produces inheritable mutations in humans.
Minisatellite DNA testing has also been performed on the children of Chernobyl “liquidators,” i.e., those people who participated in post-accident cleanup operations. When the offspring of liquidators born after the accident were compared to their siblings born prior to the accident, a sevenfold increase in genetic damage was observed [15,16]. As reported by the ECRR, “for the loci measured, this finding defined an error of between 700-fold and 2,000-fold in the ICRP model for heritable genetic damage” [6]. The ECRR made this further observation: “It is remarkable that studies of the children of those exposed to external radiation at Hiroshima show little or no such effect, suggesting a fundamental difference in mechanism between the exposures [17]. The most likely difference is that it was the internal exposure to the Chernobyl liquidators that caused the effects.”
5. In November 2009, Joseph Mangano of the Radiation and Public Health Project published a study of newborn hypothyroidism near the Indian Point nuclear reactors in Buchanan, New York [18]. Hypothyroidism is a disease characterized by an insufficient production of the hormone thyroxine. One cause of the disease is exposure to radioactive iodine, which selectively destroys cells in the thyroid gland. Currently, the only environmental source of radioactive iodine is emissions from nuclear power plants. According to Mangano, four counties in New York state flank Indian Point, and nearly all the residents of these counties live within 20 miles of the reactor complex. During the period 1997 to 2007, the rate of newborn hypothyroidism in the combined four-county population was 92.4% greater than the U.S. rate – nearly double it. The rate in each of the four counties separately was above the U.S. rate, and in two of the counties, the rate was more than double the national rate. In the period 2005-2007, the four-county rate was 151.4% above the national rate. These findings are consistent with the fact that the local rate of thyroid cancer is 66% greater than the U.S. rate [19].
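To keep the percentages straight, the short sketch below simply converts the reported percent excesses into rate ratios relative to the national rate; no figures beyond those quoted above are used.

```python
# Converting the reported percent excesses into rate ratios against the U.S. rate.
def rate_ratio(percent_excess):
    """A rate '92.4% greater' than baseline corresponds to a ratio of 1.924."""
    return 1 + percent_excess / 100

print(rate_ratio(92.4))    # 1.924 -> "nearly double" the national rate (1997-2007)
print(rate_ratio(151.4))   # 2.514 -> roughly two and a half times the national rate (2005-2007)
print(rate_ratio(66.0))    # 1.66  -> the local thyroid-cancer excess
```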
Mangano’s study raises important questions regarding our common welfare. We live with assurances by government and industry that nuclear reactors are operating within guidelines sponsored by the radiation protection agencies. What radiation they emit is dismissed as too low to warrant concern. And yet, babies born to mothers living in proximity to Indian Point are suffering an increased rate of hypothyroidism. Either the reactor complex is emitting more radiation than publicly known, or, once again, there is an error in the safety standards published by the radiation protection community.
6. Are weapons containing depleted uranium a cause for concern for producing birth defects? Given that uranium inside the human body targets the reproductive system, the elevated rate of birth defects in Iraq strongly suggests that DU exposure is involved. In experimental animals exposed to uranium compounds, uranium has been found to accumulate in the testes [20]. Among Gulf War veterans wounded by DU shrapnel, elevated levels of uranium have been found in their semen [21]. In light of this discovery, the Royal Society cautions that this raises “the possibility of adverse effects on the sperm from either the alpha-particles emanating from DU, chemical effects of uranium on the genetic material or the chemical toxicity of uranium [21].” In experiments on female rats, uranium was found to cross the placenta and become concentrated in the tissues of the fetus [20,21,22]. When DU pellets were implanted into pregnant female rats, a direct relation was observed between the amount of contamination in the mother and the amount of contamination in the placenta and the fetus [23,24]. Most importantly, once dissolved within the body, uranium’s primary chemical form is the uranyl ion UO2++. This form of uranium has an affinity for DNA and binds strongly to it [25]. This fact alone should be sufficient to halt the scattering of DU aerosols amidst populations. Internalized uranium targets human genetic material! Needless to say, this fact is totally ignored by the International Commission on Radiological Protection and related organizations when determining safe levels of exposure to uranium and assessing the risk posed by uranium for inducing birth defects.
7. In infants, hydrocephalus is a condition characterized by increased head size and atrophy of the brain. The frequency of this birth defect has increased dramatically in Iraq since the first Gulf War [26]. A small and admittedly incomplete study conducted in the United States lends credence to the hypothesis that DU exposure is the causative agent [26]. Rural and sparsely populated Socorro County is located downwind of a DU-weapons testing site, the Terminal Effects Research and Analysis division of the New Mexico Institute of Mining and Technology. On average, 250 births occur yearly in the county. An investigation by a community activist revealed that between 1984 and 1986, five infants were born with hydrocephalus. (The normal rate of hydrocephalus is one case in every 500 live births.) According to the demonstrably incomplete State of New Mexico passive birth defects registry, between 1984 and 1988, 19 infants were born statewide with the condition, three of these within Socorro County. Regardless of which accounting is correct, the results are disturbing given that Socorro contains less than 1% of the state’s population.
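A rough back-of-the-envelope check, using only the figures quoted above (about 250 births per year, a background rate of one case per 500 live births, five observed cases) and assuming the 1984-1986 window spans roughly three years, suggests how unusual such a cluster would be under a simple Poisson model; this is an illustration, not a substitute for formal epidemiology.

```python
import math

# Figures quoted above: ~250 births per year, a background rate of 1 hydrocephalus
# case per 500 live births, 5 observed cases. The three-year window is an assumption.
births = 250 * 3
expected = births / 500          # background expectation: 1.5 cases
observed = 5

# Chance of seeing 5 or more cases under a simple Poisson model with that expectation.
p_at_least_5 = 1 - sum(math.exp(-expected) * expected ** k / math.factorial(k)
                       for k in range(observed))
print(f"Expected ~{expected:.1f} cases; observed {observed}; "
      f"P(>= {observed} by chance) ~ {p_at_least_5:.3f}")
```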
8. To conclude, the current dogma regarding radiation effects cannot account for the increase in genetic malformations in populations exposed internally to low levels of radiation. Something is deeply wrong with the current science of radiation safety. Given this, statements by the radiation protection community regarding the impossibility that low levels of uranium can cause birth defects are suspect. Numerous studies demonstrate that uranium produces a wide range of birth defects in experimental animals [20,26]. Further, numerous in vitro and in vivo studies conducted in the last twenty years have proven that uranium is genotoxic (capable of damaging DNA), cytotoxic (poisonous to cells), and mutagenic (capable of inducing mutations) [27]. These effects are produced either by uranium’s radioactivity or its chemistry or a synergistic interaction between the two. These findings lend plausibility to the idea that the observed increased incidence of deformed babies in Iraq is related to depleted uranium munitions [26].
Paul Zimmerman is the author of A Primer in the Art of Deception: The Cult of Nuclearists, Uranium Weapons and Fraudulent Science. A more technical, fully referenced presentation of the ideas presented in this article can be found within its pages. Excerpts, free to download, are available at www.du-deceptions.com.
Notes
[1] Chulov M. Huge Rise in Birth Defects in Falluja. guardian.co.uk. November 13, 2009.
http://www.guardian.co.uk/world/2009/nov/13/falluja-cancer-children-birth-defects#history-byline
[2] Nakamura N. Genetic Effects of Radiation in Atomic-bomb Survivors and Their Children: Past, Present and Future. Journal of Radiation Research. 2006; 47(Supplement):B67-B73.
[3] Schmitz-Feuerhake I. Radiation-Induced Effects in Humans After in utero Exposure: Conclusions from Findings After the Chernobyl Accident. In C.C. Busby, A.V. Yablokov (eds.): Chernobyl: 20 Years On. European Committee on Radiation Risk. Aberystwyth, United Kingdom: Green Audit Press; 2006.
[4] Yablokov A.V. The Chernobyl Catastrophe — 20 Years After (a meta-review). In C.C. Busby, A.V. Yablokov (eds.): Chernobyl: 20 Years On. European Committee on Radiation Risk. Aberystwyth, United Kingdom: Green Audit Press; 2006.
[5] Hoffmann W., Schmitz-Feuerhake I. How Radiation-specific is the Dicentric Assay? Journal of Exposure Analysis and Environmental Epidemiology. 1999; 2:113-133.
[6] European Committee on Radiation Risk (ECRR). Recommendations of the European Committee on Radiation Risk: the Health Effects of Ionising Radiation Exposure at Low Doses for Radiation Protection Purposes. Regulators’ Edition. Brussels; 2003. http://www.euradcom.org.
[7] Low Level Radiation Campaign (LLRC). Infant Leukemia After Chernobyl. Radioactive Times: The Journal of the Low Level Radiation Campaign. 2005; 6(1):13.
[8] Busby C.C. Very Low Dose Fetal Exposure to Chernobyl Contamination Resulted in Increases in Infant Leukemia in Europe and Raises Questions about Current Radiation Risk Models. International Journal of Environmental Research and Public Health. 2009; 6:3105-3114.
[9] Petridou E., Trichopoulos D., Dessypris N., Flytzani V., Haidas S., Kalmanti M.K., Koliouskas D., Kosmidis H., Piperolou F., Tzortzatou F. Infant Leukemia After In Utero Exposure to Radiation From Chernobyl. Nature. 1996; 382:352-353.
[10] Michaelis J., Kaletsch U., Burkart W., Grosche B. Infant Leukemia After the Chernobyl Accident. Nature. 1997; 387:246.
[11] Gibson B.E.S., Eden O.B., Barrett A., Stiller C.A., Draper G.J. Leukemia in Young Children in Scotland. Lancet. 1988; 2(8611):630.
[12] Mangano J.J. Childhood Leukemia in the US May Have Risen Due to Fallout From Chernobyl. British Medical Journal. 1997; 314:1200.
[13] Busby C, Scott Cato M. Increases in Leukemia in Infants in Wales and Scotland Following Chernobyl: Evidence for Errors in Statutory Risk Estimates. Energy and Environment. 2000; 11(2):127-139.
[14] Dubrova Y.E., Nesterov V.N., Jeffreys A.J., et al. Further Evidence for Elevated Human Minisatellite Mutation Rate in Belarus Eight Years After the Chernobyl Accident. Mutation Research. 1997; 381:267-278.
[15] Weinberg H.S., Korol A.B., Kiezhner V.M., Avavivi A., Fahima T., Nevo E., Shapiro S., Rennert G., Piatak O., Stepanova E.I., Skarskaja E. Very High Mutation Rate in Offspring of Chernobyl Accident Liquidators. Proceedings of the Royal Society. London. 2001; D, 266:1001-1005.
[16] Dubrova Y.E., et al. Human Minisatellite Mutation Rate after the Chernobyl Accident. Nature. 1996; 380:683-686.
[17] Satoh C., Kodaira M. Effects of Radiation on Children. Nature. 1996; 383:226.
[18] Mangano J. Newborn Hypothyroidism Near the Indian Point Nuclear Plant. Radiation and Public Health Project. November 25, 2009. http://www.radiation.org
[19] Mangano J. Geographic Variation in U.S. Thyroid Cancer Incidence and a Cluster Near Nuclear Reactors in New Jersey, New York, and Pennsylvania. International Journal of Health Services. 2009; 39(4):643-661.
[20] Agency for Toxic Substances and Disease Registry (ATSDR). Toxicological Profile for Uranium. U.S. Department of Health and Human Services; 1999.
http://www.atsdr.cdc.gov/toxprofiles/tp150.html
[21] Royal Society. Health Hazards of Depleted Uranium Munitions: Part II. London: Royal Society, March 2002.
[22] Albina L., Belles M., Gomez M., Sanchez D.J., Domingo J.L. Influence of Maternal Stress on Uranium-Induced Developmental Toxicity in Rats. Experimental Biology and Medicine. 2003; 228( 9):1072-1077.
[23] Arfsten D.P., Still K.R., Ritchie G.D. A Review of the Effects of Uranium and Depleted Uranium Exposure on Reproduction and Fetal Development. Toxicology and Industrial Health. 2001; 17:180-191.
[24] Domingo J. Reproductive and Developmental Toxicity of Natural and Depleted Uranium: A Review. Reproductive Toxicology. 2001; 15:603-609.
[25] Wu O., Cheng X., et al. Specific Metal Oligonucleotide Binding Studied By High Resolution Tandem Mass Spectrometry. Journal of Mass Spectrometry. 1996; 321(6) 669-675.
[26] Hindin R., Brugge D., Panikkar B. Teratogenicity of Depleted Uranium Aerosols: A Review from an Epidemiological Perspective. Environmental Health. 2005; 26(4):17.
[27] Zimmerman P. A Primer in the Art of Deception: The Cult of Nuclearists, Uranium Weapons and Fraudulent Science. 2009. http://www.du-deceptions.com
Chernobyl Exclusion Zone Radioactive Longer Than Expected
By Alexis Madrigal | December 15, 2009
SAN FRANCISCO — Chernobyl, the worst nuclear accident in history, created an inadvertent laboratory to study the impacts of radiation — and more than twenty years later, the site still holds surprises.
Reinhabiting the large dead zone around the accident site may have to wait longer than expected. Radioactive cesium isn’t disappearing from the environment as quickly as predicted, according to new research presented here Monday at the meeting of the American Geophysical Union. Cesium 137’s half-life — the time it takes for half of a given amount of material to decay — is 30 years, but the amount of cesium in soil near Chernobyl isn’t decreasing nearly that fast. And scientists don’t know why.
It stands to reason that at some point the Ukrainian government would like to be able to use that land again, but the scientists have calculated that what they call cesium’s “ecological half-life” — the time for half the cesium to disappear from the local environment — is between 180 and 320 years.
“Normally you’d say that every 30 years, it’s half as bad as it was. But it’s not,” said Tim Jannik, a nuclear scientist at Savannah River National Laboratory and a collaborator on the work. “It’s going to be longer before they repopulate the area.”
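The difference between the two half-lives is easy to see with the standard decay relation, fraction remaining = 0.5^(t / half-life). The sketch below simply compares the physical 30-year half-life with the reported 180-320-year ecological half-life; the 60-year horizon is an arbitrary choice for illustration.

```python
# Exponential decay: fraction remaining = 0.5 ** (t / half_life).
def fraction_remaining(years, half_life):
    return 0.5 ** (years / half_life)

# Physical half-life (30 yr) versus the reported ecological half-life (180-320 yr),
# evaluated at an arbitrary 60-year horizon.
for half_life in (30, 180, 320):
    left = fraction_remaining(60, half_life)
    print(f"half-life {half_life:>3} yr: {left:.0%} of the cesium left after 60 years")
```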
In 1986, after the Chernobyl accident, a series of test sites were established along paths that scientists expected the fallout to take. Soil samples were taken at different depths to gauge how the radioactive isotopes of strontium, cesium and plutonium migrated in the ground. They’ve been taking these measurements for more than 20 years, providing a unique experiment in the long-term environmental repercussions of a near worst-case nuclear accident.
In some ways, Chernobyl is easier to understand than DOE sites like Hanford, which have been contaminated by long-term processes. With Chernobyl, said Boris Faybishenko, a nuclear remediation expert at Lawrence Berkeley National Laboratory, we have a definite date at which the contamination began and a series of measurements carried out from that time to today.
“I have been involved in Chernobyl studies for many years and this particular study could be of great importance to many [Department of Energy] researchers,” said Faybishenko.
The results of this study came as a surprise. Scientists expected the ecological half-lives of radioactive isotopes to be shorter than their physical half-lives, as natural dispersion helps reduce the amount of material in any given soil sample. For strontium, that idea has held up. But for cesium, the opposite appears to be true.
The physical properties of cesium haven’t changed, so scientists think there must be an environmental explanation. It could be that new cesium is blowing over the soil sites from closer to the Chernobyl site. Or perhaps cesium is migrating up through the soil from deeper in the ground. Jannik hopes more research will uncover the truth.
“There are a lot of unknowns that are probably causing this phenomenon,” he said.
Beyond the societal impacts of the study, the work also emphasizes the uncertainties associated with radioactive contamination. Thankfully, Chernobyl-scale accidents have been rare, but that also means there is a paucity of places to study how radioactive contamination really behaves in the wild.
“The data from Chernobyl can be used for validating models,” said Faybishenko. “This is the most value that we can gain from it.”
Citation: “Long-Term Dynamics of Radionuclides Vertical Migration in Soils of the Chernobyl Nuclear Power Plant Exclusion Zone” by Yu.A. Ivanov, V.A. Kashparov, S.E. Levchuk, Yu.V. Khomutinin, M.D. Bondarkov, A.M. Maximenko, E.B. Farfan, G.T. Jannik, and J.C. Marra. AGU 2009 poster session.
Why are the oligarchic elites trying so hard to push their climate change policies through right now?
December 9, 2009 by Notsilvia Night
Why are the political and financial elites and their obedient servants in the faith – sorry, scientific – community pushing so madly for a final decision on global carbon-tax legislation at this very moment?
Why don’t they just wait until the scandal of “climate gate” has blown over?
Because those elites know they are wrong on the issue of human caused climate change.
They know that their lies are being revealed to the public piece by piece, faster and faster.
Most of all, they know that the planet is at the moment once again in a cooling phase as occurs every thirty or forty odd years.
Given the current lack of solar activity, this cooling phase might even be more severe than the one that ended 40 years ago – possibly as severe as the Maunder Minimum, a cooling phase lasting several decades during the 17th and 18th centuries.
In a couple of years their claims would no longer be tenable at all. The cooling trend would be obvious to even the most ideologically blinded environmentalist on earth.
The scheme of taxing the global population, creating new revenue streams for the world’s financial markets, establishing central control over the world economy, and preventing developing countries from rising out of poverty would lose out.
The political leaders of all the less powerful countries are being bullied at the moment into signing a treaty that gives away their countries’ national sovereignty to the leadership of the powerful ones, namely Britain and the United States – and, more to the point, to the shadow leadership behind them, the world’s financial elites of Goldman Sachs and Co.
So why are so many decent people on the left fighting tooth and nail for the carbon-trading profits of Goldman Sachs and Al Gore’s Generation Investment Management (GIM)?
It’s a psychological problem; most people, especially on the left, want to be on the side of the good and caring people.
For over 40 years now we have been told that being environmentally minded means being a good person. It means we care about nature, wild animal life, about future generations of human beings.
Being environmentally minded means we are opposed to polluting the air and the water;
we are opposed to deforestation (especially in the rain-forest regions);
we are opposed to dumping our own poisonous waste onto the developing world;
we are opposed to rampant consumerism, in which, driven by the advertising industry, we keep on buying and buying – things we don’t actually need, things which make us neither happier nor more comfortable, just more indebted.
All those nice middle-class people who want to feel good about themselves support these ideas as part of the program for the left. And yes, there are plenty of real environmental issues we should be concerned about. But while marginalizing these real environmental issues, the financial and right-wing ideological elites have – with the help of the media they control – succeeded in infiltrating their own agenda into the “green” movement with the bogus Anthropogenic Global Warming ideology.
The propaganda has been very successful indeed. People who want with all their heart to be “good” and decent are now supporting the agenda of the most selfish and anti-humanist forces on the planet.
The propaganda has created a belief system that is hard to break. In Europe this belief system is even more entrenched, since it has been cultivated for several years longer, and hence may be harder to break among Europeans than among Americans.
But after the “climate-gate” revelations, the chances aren’t so bad any more. A global storm is brewing against the liars (which include most of the mainline media) and their masters. No matter how bad it looks when we listen to the sound bites of the top-level political hacks, down at the bottom, among the population, minds are changing en masse.
In just a little while, those who honestly strive to be the “good” guys (and girls) will realize that being good and caring about future generations means not caring for the Goldman Sachs carbon credits scheme.
The truth will indeed set us free from global tyranny.
Watch also: Lord Monckton on Climategate at the 2nd International Climate Conference, on Vimeo.
Guru Of Science Czar Holdren Called For Doubling CO2 Emissions

By Paul Joseph Watson | December 8, 2009
The guru of John P. Holdren – President Barack Obama’s science advisor, who in his 1977 book Ecoscience called for draconian population control measures including sterilizing the water supply and introducing forced abortions – wrote that large amounts of carbon dioxide should be pumped into the atmosphere in order to aid plant growth and solve the food crisis.
Lamenting on page 140 of his 1954 book The Challenge of Man’s Future that “the earth’s atmosphere contains only a minute concentration – about 0.03 percent” of carbon dioxide, Harrison Brown – a geochemist who supervised the production of plutonium for the Manhattan Project – observed, “It has been demonstrated that a tripling of carbon-dioxide concentration in the air will approximately double the growth rates of tomatoes, alfalfa, and sugar beets.”
Brown was a guru to White House science czar John Holdren, who co-edited a 1986 book in his honor.
“There are between 18 and 20 tons of carbon dioxide over every acre of the earth’s surface,” Brown noted on page 142. “In order to double the amount in the atmosphere, at least 500 billion tons of coal would have to be burned – an amount six times greater than that which has been consumed during all of human history.”
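Brown’s figures can be roughly sanity-checked. The sketch below uses assumed reference values that are not in the text – the earth’s surface area (about 5.1 x 10^14 m²), the size of an acre, the carbon fraction of CO2, and a nominal 75 percent carbon content for coal – and ignores ocean and biosphere uptake, which is one reason his “at least 500 billion tons” reads as a lower bound.

```python
# Rough sanity check of Brown's figures. Assumed constants (not from the text):
# earth's surface area, the size of an acre, CO2's carbon fraction (12/44), and a
# nominal 75% carbon content for coal. Ocean and biosphere uptake are ignored.
EARTH_SURFACE_M2 = 5.1e14
ACRE_M2 = 4046.9

acres = EARTH_SURFACE_M2 / ACRE_M2                 # ~126 billion acres
co2_total_tons = 19 * acres                        # Brown: 18-20 tons of CO2 per acre
carbon_needed_tons = co2_total_tons * 12 / 44      # carbon in an equal added mass of CO2
coal_needed_tons = carbon_needed_tons / 0.75       # assumed 75%-carbon coal

print(f"Atmospheric CO2 implied by Brown's figure: {co2_total_tons:.2e} tons")
print(f"Coal needed to double it (no ocean uptake): {coal_needed_tons:.2e} tons")
```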
The fact that Holdren was a disciple of Brown again goes to show how dramatically Holdren’s convictions about climate change have flip-flopped in order to accommodate whatever scientific fad holds sway at the time.
In the 1970s, Holdren was busy talking up the drastic threat of global cooling, warning that it would produce giant tidal waves and environmental devastation.
In a 1971 essay entitled “Overpopulation and the Potential for Ecocide,” Holdren and his co-author Paul Ehrlich wrote that global cooling would ensue as a result of “a reduced transparency of the atmosphere to incoming light as a result of urban air pollutions (smoke, aerosols), agriculture air pollution (dust), and volcanic oil.”
Holdren and Ehrlich predicted, “a mere 1 percent increase in low cloud cover would decrease the surface temperature by 0.8°C” and that “a decrease of 4°C would probably be sufficient to cause another ice age.”
They continued: “Even more dramatic results are possible, however; for instance, a sudden outward slumping in the Antarctic ice cap, induced by added weight, could generate a tidal wave of proportions unprecedented in recorded history.”
Of course, Holdren and his ilk were spectacularly wrong with their doomsday predictions about global cooling, but almost entirely the same crowd are now telling us that global warming is a gargantuan threat and that only a carbon tax paid directly to the World Bank can stop it.
The Cellulosic Ethanol Delusion
By ROBERT BRYCE
March 30, 2009
For years, ethanol boosters have promised Americans that “cellulosic” ethanol lurks just ahead, right past the nearest service station. Once it becomes viable, this magic elixir — made from grass, wood chips, sawdust, or some other plant material — will deliver us from the evil clutches of foreign oil and make the U.S. “energy independent” while enriching farmers and strengthening small towns across the country.
Consider this claim: “From our cellulose waste products on the farm such as straw, corn-stalks, corn cobs and all similar sorts of material we throw away, we can get, by present known methods, enough alcohol to run our automotive equipment in the United States.”
That sounds like something you’ve heard recently, right? Well, fasten your seatbelt because that claim was made way back in 1921. That’s when American inventor Thomas Midgley proclaimed the wonders of cellulosic ethanol to the Society of Automotive Engineers in Indianapolis. And while Midgley was excited about the prospect of cellulosic ethanol, he admitted that there was a significant hurdle to his concept: producing the fuel would cost about $2 per gallon. That’s about $20 per gallon in current money.
Alas, what’s old is new again.
I wrote about the myriad problems of cellulosic ethanol in my book, Gusher of Lies. But the hype over the fuel continues unabated. And it continues even though two of the most prominent cellulosic ethanol companies in the U.S., Aventine Renewable Energy Holdings and Verenium Corporation, are teetering on the edge of bankruptcy. As noted last week by Robert Rapier on his R-Squared Energy blog, Verenium’s auditor, Ernst & Young, recently expressed concern about the company’s ability to continue as a going concern and Aventine was recently delisted from the New York Stock Exchange.
On March 16, the accounting firm Ernst & Young said Verenium may be forced to “curtail or cease operations” if it cannot raise additional capital. And in a filing with the Securities and Exchange Commission, the company’s management said “We continue to experience losses from operations, and we may not be able to fund our operations and continue as a going concern.” Last week, the company’s stock was trading at $0.36 per share. It has traded for as much as $4.13 over the past year.
Aventine’s stock isn’t doing much better. Earlier this month, the company announced that it may seek bankruptcy protection if it cannot raise additional cash. Last Friday, Aventine’s shares were selling for $0.09. Over the past year, those shares have sold for as much as $7.86.
The looming collapse of the cellulosic ethanol producers deserves more than passing notice for this reason: cellulosic ethanol – which has never been produced in commercial quantities — has been relentlessly hyped over the past few years by a panoply of politicians and promoters.
The list of politicos includes Iowa Senator Tom Harkin, President Barack Obama, former vice president Al Gore, former Republican presidential nominee and U.S. Senator John McCain, former president Bill Clinton, former president George W. Bush and former CIA director James Woolsey.
There are plenty of others who deserve to take a bow for their role in promoting the delusion of cellulosic ethanol. Prominent among them: billionaire investor/technologist Vinod Khosla. In 2006, Khosla claimed that making motor fuel out of cellulose was “brain dead simple to do.” He went on, telling NBC’s Stone Phillips that cellulosic ethanol was “just around the corner” and that it would be a much bigger source of fuel than corn ethanol. Khosla also proclaimed that by making ethanol from plants “in less than five years, we can irreversibly start a path that can get us independent of petroleum.”
In 2007, Khosla delivered a speech, “The Role of Venture Capital in Developing Cellulosic Ethanol,” during which he declared that cellulosic ethanol and other biofuels can be used to completely replace oil for transportation. More important, Khosla predicted that cellulosic ethanol would be cost-competitive with corn ethanol production by 2009.
Two other promoters who have declared that cellulosic ethanol is just on the cusp of viability: Mars exploration advocate Robert Zubrin, and media darling Amory Lovins.
Of all the people on that list, Lovins has been the longest – and the most consistently wrong – cheerleader for cellulosic fuels. His boosterism began with his 1976 article in Foreign Affairs, a piece which arguably made his career in the energy field. In that article, called “Energy Strategy: The Road Not Taken?” Lovins argued that American energy policy was all wrong. What America needed was “soft” energy resources to replace the “hard” ones (namely fossil fuels and nuclear power plants). Lovins argued that the U.S. should be working to replace those sources with other, “greener” energy sources that were decentralized, small, and renewable. Regarding biofuels, he wrote that “exciting developments in the conversion of agricultural, forestry and urban wastes to methanol and other liquid and gaseous fuels now offer practical, economically interesting technologies sufficient to run an efficient U.S. transport sector.”
Lovins went on: “Some bacterial and enzymatic routes under study look even more promising, but presently proved processes already offer sizable contributions without the inevitable climatic constraints of fossil-fuel combustion.” He even claimed that, given enough efficiency in automobiles and a large enough bank of cellulosic ethanol distilleries, “the whole of the transport needs could be met by organic conversion.”
In other words, Lovins was making the exact same claim that Midgley made 55 years earlier: Given enough money – that’s always the catch, isn’t it? – cellulosic ethanol would provide all of America’s transportation fuel needs.
The funny thing about Lovins is that between 1976 and 2004 — despite the fact that the U.S. still did not have a single commercial producer of cellulosic ethanol — he lost none of his enthusiasm. In his 2004 book Winning the Oil Endgame, Lovins again declared that advances in biotechnology will make cellulosic ethanol viable and that it “will strengthen rural America, boost net farm income by tens of billions of dollars a year, and create more than 750,000 new jobs.”
Lovins continued his unquestioning boosterism in 2006, when during testimony before the U.S. Senate, he claimed that “advanced biofuels (chiefly cellulosic ethanol)” could be produced for an average cost of just $18 per barrel.
Of course, Lovins isn’t the only one who keeps having visions of cellulosic grandeur. In his 2007 book, Winning Our Energy Independence, S. David Freeman, the former head of the Tennessee Valley Authority, declared that to get away from our use of oil, “we must count on biofuels.” And a key part of Freeman’s biofuel recipe: cellulosic ethanol. Freeman claims that “there is huge potential to generate ethanol from the cellulose in organic wastes of agriculture and forestry.” He went on, saying that using some 368 million tons of “forest wastes” could provide about 18.4 billion gallons of ethanol per year, yielding “the equivalent of about 14 billion gallons gasoline [sic], or about 10 percent of current gasoline consumption.” Alas, Freeman fails to provide a single example of a company that has made a commercial success of cellulosic ethanol.
Cellulosic ethanol gained even more acolytes during the 2008 presidential campaign.
In May 2008, the Speaker of the House, Nancy Pelosi, touted the passage of the subsidy-packed $307 billion farm bill, declaring that it was an “investment in energy independence” because it provided “support for the transition to cellulosic ethanol.”
About the same time that Pelosi was touting the new farm bill, a spokesman for the Renewable Fuels Association, an ethanol industry lobbying group in Washington, was claiming that corn ethanol was merely a starting point for other “advanced” biofuels. “The starch-based ethanol industry we have today, we’ll stick with it. It’s the foundation upon which we are building next-generation industries,” said Matt Hartwig, a spokesman for the lobby group.
In August 2008, Obama unveiled his “new” energy plan which called for “advances in biofuels, including cellulosic ethanol.”
After Obama’s election, the hype continued, particularly among Democrats on Capitol Hill. In January 2009, Tom Harkin, the Iowa senator who’s been a key promoter of the corn ethanol scam, told PBS: “ethanol doesn’t necessarily all have to come from corn. In the last farm bill, I put a lot of effort into supporting cellulose ethanol, and I think that’s what you’re going to see in the future…You’re going to see a lot of marginal land that’s not suitable for row crop production, because it’s hilly, or it’s not very productive for corn or soybeans, things like that, but it can be very productive for grasses, like miscanthus, or switchgrass, and you can use that to make the cellulose ethanol.”
Despite the hype, cellulosic ethanol is no closer to commercial viability than it was when Midgley first began talking about it back in 1921. Turning switchgrass, straw or corn cobs into sizable volumes of motor fuel is remarkably inefficient. It is devilishly difficult to break down cellulose into materials that can be fermented into alcohol. And even if that process were somehow made easier, its environmental effects have also been called into question. A September 2008 study on alternative automotive fuels done by Jan Kreider, a professor emeritus of engineering at the University of Colorado, and Peter S. Curtiss, a Boulder-based engineer, found that the production of cellulosic ethanol required about 42 times as much water and emitted about 50 percent more carbon dioxide than standard gasoline. Furthermore, Kreider and Curtiss found that, as with corn ethanol, the amount of energy that could be gained by producing cellulosic ethanol was negligible.
In a recent interview, Kreider told me that the key problem with turning cellulose into fuel is “that it’s such a dilute energy form. Coal and gasoline, dirty as they may be, are concentrated forms of energy. Hauling around biomass makes no sense.”
Indeed, the volumes of biomass needed to make any kind of dent in America’s overall energy needs are mind boggling. Let’s assume that the U.S. wants to replace 10 percent of its oil use with cellulosic ethanol. That’s a useful percentage as it’s approximately equal to the percentage of U.S. oil consumption that originates in the Persian Gulf. Let’s further assume that the U.S. decides that switchgrass is the most viable option for producing cellulosic ethanol.
Given those assumptions, here’s the math: The U.S. consumes about 21 million barrels of oil per day, or about 320 billion gallons of oil per year. New ethanol companies like Coskata and Syntec are claiming that they can produce about 100 gallons of ethanol per ton of biomass, which is also about the same yield that can be garnered by using grain as a feedstock.
At 100 gallons per ton, producing 32 billion gallons of cellulosic ethanol would require the annual harvest and transport of 320 million tons of biomass. Assuming an average semi-trailer holds 15 tons of biomass, that volume of biomass would fill 21.44 million semi-trailer loads. If each trailer is a standard 48 feet long, the column of trailers (not including any trucks attached to them) holding that amount of switchgrass would stretch almost 195,000 miles. That’s long enough to encircle the earth nearly eight times. Put another way, those trailers would stretch about 80 percent of the distance from the earth to the moon.
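The first-stage figures above can be reproduced in a few lines. The sketch below uses only quantities stated in the article (21 million barrels of oil per day, 100 gallons of ethanol per ton, 15-ton and 48-foot trailers); the 42-gallons-per-barrel conversion, the earth’s circumference and the earth-moon distance are standard reference values assumed for the comparison.

```python
# Reproducing the article's first-stage arithmetic for the 10-percent scenario.
GALLONS_PER_BARREL = 42                                 # standard conversion
EARTH_CIRCUMFERENCE_MI = 24_900                         # assumed reference value
EARTH_MOON_MI = 238_900                                 # assumed reference value

oil_gal_per_year = 21e6 * GALLONS_PER_BARREL * 365      # ~322 billion gallons of oil
ethanol_gal = 0.10 * oil_gal_per_year                   # ~32 billion gallons of ethanol
biomass_tons = ethanol_gal / 100                        # at 100 gallons per ton
trailers = biomass_tons / 15                            # 15-ton trailer loads
line_miles = trailers * 48 / 5280                       # 48-foot trailers, end to end

print(f"Biomass: {biomass_tons / 1e6:.0f} million tons")
print(f"Trailer loads: {trailers / 1e6:.2f} million")
print(f"Line of trailers: {line_miles:,.0f} miles "
      f"({line_miles / EARTH_CIRCUMFERENCE_MI:.1f} earth circumferences, "
      f"{line_miles / EARTH_MOON_MI:.0%} of the way to the moon)")
```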
But remember, ethanol’s energy density is only about two-thirds that of gasoline. So that 32 billion gallons of cellulosic ethanol only contains the energy equivalent of about 21 billion gallons of gasoline. Thus, the U.S. would actually need to produce about 42.5 billion gallons of cellulosic ethanol in order to supplant 10 percent of its oil needs. That would necessitate the production of 425 million tons of biomass, enough to fill about 28.3 million trailers. And that line of semi-trailer loads would stretch about 257,500 miles – plenty long enough to loop around the earth more than 10 times, or to stretch from the Earth to the Moon.
But let’s continue driving down this road for another mile or two. Sure, it’s possible to produce that much biomass, but how much land would be required to make it happen? Well, a report from Oak Ridge National Laboratory suggests that an acre of switchgrass can yield about 11.5 tons of biomass per year, and thus, in theory, 1,150 gallons of ethanol per year.
Therefore, to produce 425 million tons of biomass from switchgrass would require some 36.9 million acres to be planted in nothing but switchgrass. That’s equal to about 57,700 square miles, or an area just a little smaller than the state of Oklahoma. For comparison, that 36.9 million acres is equal to about 8 percent of all the cropland now under cultivation in the U.S. Thus, to get 10 percent of its oil needs, the U.S. would need to plant an area equal to 8 percent of its cropland.
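The acreage step works out the same way; the sketch below takes the article’s 425-million-ton requirement and the Oak Ridge yield figure of 11.5 tons per acre at face value, with 640 acres per square mile as the only assumed constant.

```python
# The acreage step, using the article's figures at face value.
biomass_tons = 425e6            # article's energy-adjusted requirement
yield_tons_per_acre = 11.5      # Oak Ridge switchgrass yield cited above
acres = biomass_tons / yield_tons_per_acre
square_miles = acres / 640      # 640 acres per square mile

print(f"{acres / 1e6:.2f} million acres (~{square_miles:,.0f} square miles)")
```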
And none of that considers the fact that there’s no infrastructure available to plant, harvest, and transport the switchgrass or other biomass source to the biorefinery.
So just to review: There are still no companies producing cellulosic ethanol on a commercial basis. The most prominent companies that have tried to do so are circling the drain. Even if a company finds an efficient method of turning cellulose into ethanol, the logistics of moving the required volumes of biomass are almost surely a deal-killer.
And yet, Congress has mandated that it be done. The Energy Independence and Security Act of 2007 mandates that a minimum of 16 billion gallons of cellulosic ethanol be blended into the U.S. auto fuel mix by 2022.
Don’t hold your breath.
Robert Bryce’s latest book is Gusher of Lies: The Dangerous Delusions of “Energy Independence,” now available in paperback.