Aletho News

ΑΛΗΘΩΣ

Canadian authorities ran war game drills depicting ISIS attack scenarios

By Brandon Martinez | Non-Aligned Media | October 23, 2014

Joshua Blakeney has pointed out that Adrienne Arsenault of CBC reported last night that, in the weeks leading up to the two so-called ‘terror’ incidents that took place this week in Quebec and Ottawa, Canadian authorities had been running war games exercises depicting such attacks.

The relevant commentary starts at 1:52 of the video below:

According to Arsenault,

They [Canadian authorities] may have been surprised by the actual incidents but not by the concepts of them. Within the last month we know that the CSIS, the RCMP and the National Security Task Force … ran a scenario that’s akin to a war games exercise if you will where they actually imagined literally an attack in Quebec, followed by an attack in another city, followed by a tip that that ‘hey some foreign fighters are coming back from Syria.’ So they were imagining a worst case scenario. We’re seeing elements of that happening right now. … [Canadian authorities] may talk today in terms of being surprised but we know that this precise scenario has been keeping them up at night for awhile.

What an amazing coincidence that Canadian intelligence ran a drill envisioning an attack first in Quebec, then another city. On Monday October 20 a man identified as Martin Rouleau supposedly ran over two Canadian soldiers with his car in a mall parking lot in the city of Saint-Jean-sur-Richelieu in Quebec. And yesterday, as we know, one soldier was gunned down in Ottawa followed by a siege on the parliament itself. Authorities and media are claiming that both suspects were converts to Islam who had become “radicalized.”

What are the chances that these mock terror drills are just a coincidence? In nearly every instance of a major terrorist occurrence in the West, it has been revealed that intelligence services were conducting war games exercises mimicking the very events that later came to pass. On the day of the London subway bombings in 2005, British authorities ran drills depicting the exact attack scenario that transpired later in the day. On 9/11, multiple US agencies were running drills simulating jet hijackings. And now we have confirmation that Canada’s intelligence services were doing the same thing.

It has also been revealed that the suspects in the two incidents this week had been monitored by both US and Canadian intelligence for some time prior to their alleged attacks.

October 24, 2014 Posted by | Civil Liberties, False Flag Terrorism, Full Spectrum Dominance

Root Cause Analysis of the Modern Warming

By Matt Skaggs | Climate Etc. | October 23, 2014

For years, climate scientists have followed reasoning that goes from climate model simulations to expert opinion, declaring that to be sufficient. But that is not how attribution works.

The concept of attribution is important in descriptive science, and is a key part of engineering. Engineers typically use the term “root cause analysis” rather than attribution. There is nothing particularly clever about root cause methodology, and once someone is introduced to the basics, it all seems fairly obvious. It is really just a system for keeping track of what you know and what you still need to figure out.

I have been performing root cause analysis throughout my entire, long career, generally in an engineering setting. The effort consists of applying well established tools to new problems. This means that in many cases, I am not providing subject matter expertise on the problem itself, although it is always useful to understand the basics. Earlier in my career I also performed laboratory forensic work, but these days I am usually merely a facilitator. I will refer to those that are most knowledgeable about a particular problem as the “subject matter experts” (SMEs).

This essay consists of three basic sections. First I will briefly touch on root cause methodology. Next I will step through how a fault tree would be conducted for a topic such as the recent warming, including showing what the top tiers of the tree might look like. I will conclude with some remarks about the current status of the attribution effort in global warming. As is typical for a technical blog post, I will be covering a lot of ground while barely touching on most topics, but I promise that I will do my best to explain the concepts as clearly and concisely as I can.

Part 1: Established Root Cause Methodology

Definitions and Scope

Formal root cause analysis requires very clear definitions and scope to avoid chaos. It is a tool specifically for situations in which we have detected an effect with no obvious cause, but discerning the cause is valuable in some way. This means that we can only apply our methodology to events that have already occurred, since predicting the future exploits different tools. We will define an effect subject to attribution as a significant excursion from stable output in an otherwise stable system. One reason this is important is that a significant excursion from stable behavior in an otherwise stable system can be assumed to have a single root cause. Full justification of this is beyond the scope of this essay, but consider that if your car suddenly stops progressing forward while you are driving, the failure has a single root cause. After having no trouble for a year, the wheel does not fall off at the exact same instant that the fuel pump seizes. I will define a “stable” system as one in which significant excursions are so rare in time that they can safely be assumed to have a single root cause.

Climate science is currently engaged in an attribution effort pertaining to a recent temperature excursion, which I will refer to as the “modern warming.” For purposes of defining the scope of our attribution effort, we will consider the term “modern warming” to represent the rise in global temperature since 1980. This is sufficiently precise to prevent confusion; we can always go back and tweak the date if the evidence warrants.

Choosing a Tool from the Toolbox 

There are two basic methods to conclusively attribute an effect to a cause. The short route to attribution is to recognize a unique signature in the evidence that can only be explained by a single root cause. This is familiar from daily life; the transformer in front of your house shorted and there is a dead black squirrel hanging up there. The need for a systematic approach such as a fault tree only arises when there is no black squirrel. We will return to the question of a unique signature later, after discussing what an exhaustive effort would look like.

Once we have determined that we cannot simply look at the outcome of an event and see the obvious cause, and we find no unique signature in the data, we must take a more systematic approach. The primary tools in engineering root cause analysis are the fault tree and the cause map. The fault tree is the tool of choice for when things fail (or more generally, execute an excursion), while the cause map is a better tool for when a process breaks down. The fault tree asks “how?,” while the cause map asks “why?” Both tools are forms of logic trees with all logical bifurcations mapped out. Fault trees can be quite complex with various types of logic gates. The key attributes of a fault tree are accuracy, clarity, and comprehensiveness. What does it mean to be comprehensive? The tree must address all plausible root causes, even ones considered highly unlikely, but there is a limit. The limit concept here is euphemistically referred to as “comet strike” by engineers. If you are trying to figure out why a boiler blew up, you are not obligated to put “comet strike” on your fault tree unless there is some evidence of an actual comet.

Since we are looking at an excursion in a data set, we choose the fault tree as our basic tool. The fault tree approach looks like this:

  1. Verify that a significant excursion has occurred.
  2. Collect sufficient data to characterize the excursion.
  3. Assemble the SMEs and brainstorm possible root causes for the excursion.
  4. Build a formal fault tree showing all the plausible causes. If there is any dispute about plausibility, put the prospective cause on the tree anyway.
  5. Apply documented evidence to each cause. This generally consists of direct observations and experimental results. Parse the data as either supporting or refuting a root cause, and modify the fault tree accordingly.
  6. Determine where evidence is lacking and develop a plan to generate the missing evidence. Consider synthetically modeling the behavior when no better evidence is available.
  7. Execute plan to fill all evidence blocks. Continue until all plausible root causes are refuted except one, and verify that the surviving root cause is supported by robust evidence.
  8. Produce a report showing all of the above, concluding that the root cause of the excursion was the surviving cause on the fault tree.

I will be discussing these steps in more detail below.
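
To make the bookkeeping in steps 4 through 7 concrete, here is a minimal sketch in Python of how a fault tree and its evidence might be tracked. The node names, finding numbers, and the example excursion are invented for illustration; this is not a substitute for dedicated root cause software.

```python
# Minimal illustration of fault-tree bookkeeping (steps 4-7 above).
# Node names and findings are invented for the example.

class FaultTreeNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []   # sub-causes ("how?" branches)
        self.supporting = []             # finding numbers that support this cause
        self.refuting = []               # finding numbers that refute this cause

    def add_finding(self, finding_id, supports):
        (self.supporting if supports else self.refuting).append(finding_id)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]

    def surviving_causes(self):
        # A leaf cause survives if nothing refutes it.
        return [leaf for leaf in self.leaves() if not leaf.refuting]


# Invented example: an excursion with three prospective root causes.
excursion = FaultTreeNode("boiler over-pressure", [
    FaultTreeNode("relief valve stuck"),
    FaultTreeNode("controller failure"),
    FaultTreeNode("operator error"),
])

excursion.children[0].add_finding(1, supports=True)    # valve found seized
excursion.children[1].add_finding(2, supports=False)   # controller passed bench test
excursion.children[2].add_finding(3, supports=False)   # logs show correct operation

survivors = excursion.surviving_causes()
if len(survivors) == 1:
    print("Root cause:", survivors[0].name)
else:
    print("Unresolved; surviving causes:", [n.name for n in survivors])
```

The point of the sketch is only that the tree resolves when exactly one unrefuted cause remains; all of the real work lives in gathering and parsing the findings, not in the data structure.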

The Epistemology of Attribution Evidence

As we work through a fault tree, we inevitably must weigh the value of various forms of evidence. Remaining objective here can be a challenge, but we do have some basic guidelines to help us.

The types of evidence used to support or refute a root cause are not all equal. The differences can be expressed in terms of “fidelity.” When we examine a failed part or an excursion in a data set, our direct observations are based upon evidence that has perfect fidelity. The physical evidence corresponds exactly to the effect of the true root cause upon the system of interest. We may misinterpret the evidence, but the evidence is nevertheless a direct result of the true root cause that we seek. That is not true when we devise experiments to simulate the excursion, nor is it true when we create synthetic models.

When we cannot obtain conclusive root cause evidence by direct observation of the characteristics of the excursion, or direct analysis of performance data, the next best approach is to simulate the excursion by performing input/output (I/O) experimentation on the same or an equivalent system. This requires that we make assumptions about the input parameters, and we cannot assume that our assumptions have perfect fidelity to the excursion we are trying to simulate. Once we can analyze the results of the experiment, we find that it either reproduced our excursion of interest, or it did not. Either way, the outcome of the experiment has high fidelity with respect to the input as long as the system used in test has high fidelity to the system of interest. If the experiment based upon our best guess of the pertinent input parameters does not reproduce the directly-observed characteristics of the excursion, we do not discard the direct observations in favor of the experiment results. We may need to go back and double check our interpretation, but if the experiment does not create the same outcome as the actual event, it means we chose the wrong input parameters. The experiment serves to refute our best guess. The outcomes from experimentation obviously sit lower on an evidence hierarchy than direct observations.

The fidelity of synthetic models is limited in exactly the same way with respect to the input parameters that we plug into the model. But models have other fidelity issues as well. When we perform our experiments on the same system that had the excursion (which is ideal if it is available), or on an equivalent system, we take great care to assure that our test system responds the same way to input as the original system that had the excursion of interest. We can sometimes verify this directly. In a synthetic model, however, an algorithm is substituted for the actual system, and there will always be assumptions that go into the algorithm. This adds up to a situation in which we are unsure of the fidelity of our input parameters, and unsure of the fidelity of our algorithm. The compounded effect of this uncertainty is that we do not apply the same level of confidence to model results that we do to observations or experiment results. So in summary, and with everything else being equal, direct observation will always trump experimental results, and experimental results will always trump model output. Of course, there is no way to conduct meaningful experiments on analogous climates, so one of the best tools is not of any use to us.

Similar objective value judgments can be made about the comparison of two data sets. When we look at two curves and they both seem to show an excursion that matches in onset, duration and amplitude, we consider that to be evidence of correlation. If the wiggles also closely match, that is stronger evidence. Two curves that obviously exhibit the same onset, magnitude, and duration prior to statistical analysis will always be considered better evidence than two curves that can be shown to be similar after sophisticated statistical analysis. The less explanation needed to correlate two curves, the stronger the evidence of correlation.

Sometimes we need to resolve plausible root causes but lack direct evidence and cannot simulate the excursion of interest by I/O testing. Under these circumstances, model output might be considered if it meets certain objective criteria. When attribution of a past event is the goal, engineers shun innovation. In order for model output to be considered in a fault tree effort, the model requires extensive validation, which means the algorithm must be well established. There must be a historical record of input parameters and how changes in those parameters affected the output. Ideally, the model will have already been used successfully to make predictions about system behavior under specific circumstances. Models can be both sophisticated and quite trustworthy, as we see with the model of planetary motion in the solar system. Also, some very clever methods have been developed to substitute for prior knowledge. An example is the Monte Carlo method, which can sometimes tightly constrain an estimation of output without robust data on input. Similarly, if we have good input and output data, we can sometimes develop a useful empirical relationship for the system behavior without really knowing much about how the system works. A simple way to think of this is to consider three types of information: input data, system behavior, and output data. If you know two of the three, you have some options for approximating the third. But if you have adequate information on only one of them, or none, your model approach is underspecified. Underspecified model simulations are on the frontier of knowledge, and we shun their use on fault trees. To be more precise, simulations from underspecified models are insufficiently trustworthy to adequately refute root causes that are otherwise plausible.
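
As a small illustration of the Monte Carlo idea mentioned above, the following Python sketch constrains an output estimate by repeatedly sampling uncertain inputs through a known system relationship. The response function and the input ranges are invented for the example; the technique only helps when the system behavior itself is trustworthy.

```python
import random

# Invented system relationship: output depends on two uncertain inputs.
def system_response(a, b):
    return 3.0 * a + 0.5 * b ** 2

# Inputs known only as assumed ranges/distributions, not as point values.
N = 100_000
outputs = []
for _ in range(N):
    a = random.uniform(0.8, 1.2)       # assumed range for input a
    b = random.gauss(2.0, 0.1)         # assumed distribution for input b
    outputs.append(system_response(a, b))

outputs.sort()
print("median output:", outputs[N // 2])
print("90% interval :", outputs[int(0.05 * N)], "to", outputs[int(0.95 * N)])
```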

Part 2: Established Attribution Methodology Applied to the Modern Warming

Now that we have briefly covered the basics of objective attribution and how we look at evidence, let’s apply the tools to the modern warming. Recall that attribution can only be applied to events in the past or present, so we are looking at only the modern warming, not the physics of AGW. A hockey stick shape in a data set provides a perfect opportunity, since the blade of the stick represents a significant excursion from the shaft of the stick, while the shaft represents the stable system that we need to start with.

I mentioned at the beginning that it is useful for an attribution facilitator to be familiar with the basics of the science. While I am not a climate scientist, I have put plenty of hours into keeping up with climate science, and I am capable of reading the primary literature as long as it is not theoretical physics or advanced statistics. I am familiar with the IPCC Assessment Report (AR) sections on attribution, and I have read all the posts at RealClimate.org for a number of years. I also keep up with some of the skeptical blogs, including Climate Etc., although I rarely enter the comment fray. I did a little extra reading for this essay, with some help from Dr. Curry. This is plenty of familiarity to act as a facilitator for attribution on a climate topic. Onward to the root cause analysis.

Step 1: Verify that a significant excursion has occurred.

Here we want to evaluate the evidence that the excursion of interest is truly beyond the bounds of the stability region for the system. When we look at mechanical failures, Step 1 is almost never a problem; there is typically indisputable visual evidence that something broke. In electronics, a part will sometimes seem to fail in a circuit but meet all of the manufacturer’s specifications after it is removed. When that happens, we shift our analysis to the circuit, and the component originally suspected of causing the failure becomes a refuted root cause.

In looking at the modern warming, we first ask whether there are similar multi-decadal excursions in the past millennium of unknown cause. We also need to consider the entire Holocene. While most of the available literature states that the modern excursion is indeed unprecedented, this part of the attribution analysis is not a democratic process. We find that there is at least one entirely plausible temperature reconstruction for the last millennium that shows comparable excursions. Holocene reconstructions suggest that the modern warming is not particularly significant. We find no consensus as to the cause of the Younger Dryas, the Minoan, Roman, and Medieval warmings, or the Little Ice Age, all of which may constitute excursions of at least similar magnitude. I am not comfortable with this because we need to understand the mechanisms that made the system stable in the first place before we can meaningfully attribute a single excursion.

When I am confronted with a situation like this in my role as facilitator, I have a discussion with my customer as to whether they want to expend the funds to continue the root cause effort, given the magnitude of uncertainty regarding whether we even have a legitimate attribution target. I have grave doubts that we have survived Step 1 in this process, but let’s assume that the customer wants us to continue.

Step 2. Collect sufficient data to characterize the excursion.

The methodology can get a little messy here. Before we can meaningfully construct a fault tree, we need to carefully define the excursion of interest, which usually means studying both the input and output data. However, we are not really sure of what input data we need since some may be pertinent to the excursion while other data might not. We tend to rely upon common sense and prior knowledge as to what we should gather at this stage, but any omissions will be caught during the brainstorming so we need not get too worried.

The excursion of interest is in temperature data. We find that there is a general consensus that a warming excursion has occurred. The broad general agreement about trends in surface temperature indices is sufficient for our purposes.

The modern warming temperature excursion exists in the output side of the complex process known as “climate.” A fully characterized excursion would also include robust empirical input data, which for climate change would be tracking data for the climate drivers. When we look for input data at this stage, we are looking for empirical records of the climate both prior to and during the modern warming. We do not have a full list yet, but we know that greenhouse gases, aerosols, volcanoes, water vapor, and clouds are all important. Rather than continue on this topic here, I will discuss it in more detail after we construct the fault tree below. That way we can be specific about what input data we need.

Looking for a Unique Signature

Now that we have chosen to consider the excursion as anomalous and sufficiently characterized, this is a good time to look for a unique signature. Has the modern warming created a signature that is so unique that it can only be associated with a single root cause? If so, we want to know now so that we can save our customer the expense of the full fault tree that we would build in Steps 3 and 4.

Do any SMEs interpret some aspect of the temperature data as a unique signature that could not possibly be associated with more than one root cause? It turns out that some interpret the specific spatio-temporal heterogeneity pattern as being evidence that the warming was driven by the radiation absorbed by increased greenhouse gas (GHG) content in the atmosphere. Based upon what I have read, I don’t think there is anyone arguing for a different root cause creating a unique signature in the modern warming. The skeptic arguments seem to all reside under a claim that the signature is not unique, not that it is unique to something other than GHG warming. So let’s see whether we can take our shortcut to a conclusion that an increase in GHG concentration is the sole plausible root cause due to a unique data signature.

Spatial heterogeneity would be occurring up to the present day, and so can be directly measured. I have seen two spatial pattern claims about GHG warming: 1) the troposphere should warm more quickly, and 2) the poles should warm more quickly. Because this is important, I have attempted to track these claims back through time. The references mostly go back to climate modeling papers from the 1970s and 1980s. In those papers, I was unable to find a single instance where any of the feedbacks thought to enhance warming in specific locations were associated solely with CO2. Instead, some are associated with any GHG, while others, such as Arctic sea ice decrease, occur due to any persistent warming. Nevertheless, the attribution chapter in IPCC AR5 contains a paragraph that seems to imply that enhanced tropospheric warming supports attribution of the modern warming to anthropogenic CO2. I cannot make the dots connect. But here is one point that cannot be overemphasized: the search for a unique signature in the modern warming is the best hope we have for resolving the attribution question.

Step 3. Assemble the SMEs and brainstorm plausible root causes for the excursion.

Without an overwhelmingly strong argument that we have a unique signature situation, we must do the heavy lifting involved with the exhaustive approach. Of course, I am not going to be offered the luxury of a room full of climate SMEs, so I will have to attempt this myself for the purposes of this essay.

Step 4. Build a Formal Fault Tree

An attribution analysis is a form of communication, and the effort is purpose-driven in that we plan to execute a corrective action if that is feasible. As a communication tool, we want our fault tree to be in a form that makes sense to those that will be the most difficult to convince, the SMEs themselves. And when we are done, we want the results to clearly point to actions we may take. With these thoughts in mind, I try to find a format that is consistent with what the SMEs already do. Also, we need to emphasize anthropogenic aspects of causality because those are the only ones we can change. So we will base our fault tree on an energy budget approach similar to a General Circulation Model (GCM), and we will take care to ensure that we separate anthropogenic effects from other effects.

GCMs universally, at least as far as I know, use what engineers call a “control volume” approach to track an energy budget. In a control volume, you can imagine an infinitely thin and weightless membrane surrounding the globe at the top of the atmosphere. Climate scientists even have an acronym for the location “top of the atmosphere,” TOA. Energy that migrates inside the membrane must equal energy that migrates outside the membrane over very long time intervals, otherwise the temperature would ramp until all the rocks melted or everything froze. In the rather unusual situation of a planet in space, the control volume is equivalent to a “control mass” equation in which we would track the energy budget based upon a fixed mass. Our imaginary membrane defines a volume but it also contains all of the earth/atmosphere mass. For simplicity, I will continue with the term “control volume.”

The control volume equation in GCMs is roughly equivalent to:

[heat gained] – [heat lost] = [temperature change]

This is just a conceptual equation because the terms on the left are in units of energy, while the units on the right are in degrees of temperature. The complex function between the two makes temperature an emergent property of the climate system, but we needn’t get too wrapped up in this. Regardless of the complexity hidden behind this simple equation, it is useful to keep in mind that each equation term (and later, each fault tree box) represents a single number that we would like to know.
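
For readers who want the units to balance, one common way to write the idea more explicitly is a zero-dimensional energy-balance form in which temperature responds to the net heat flux through an effective heat capacity. The symbols below are my own shorthand, not the author’s and not any particular GCM’s:

$$ C_{\mathrm{eff}}\,\frac{dT}{dt} \;=\; Q_{\mathrm{in}}(t) \;-\; Q_{\mathrm{out}}(t) $$

Here $Q_{\mathrm{in}}$ and $Q_{\mathrm{out}}$ are the rates of heat gained and lost across the imaginary membrane, $T$ is the global temperature, and $C_{\mathrm{eff}}$ is an effective heat capacity that hides all of the complexity inside the control volume. Each fault tree box discussed below can be thought of as one contribution to $Q_{\mathrm{in}}$ or $Q_{\mathrm{out}}$.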

There is a bit of housekeeping we need to do at this point. Recall that we are only considering the modern warming, but we can only be confident about the fidelity of our control volume equation when we consider very long time intervals. To account for the disparity in duration, we need to consider the concept of “capacitance.” A capacitor is a device that will store energy under certain conditions, but then discharge that energy under a different set of conditions. As an instructive example, the argument that the current hiatus in surface temperature rise is being caused by energy storage in the ocean is an invocation of capacitance. So to fit our approach to a discrete time interval, we need the following modification:

[heat gained] + [capacitance discharge] – [heat lost] – [capacitance recharge] = [modern warming]

Note that we are no longer considering the entire history of the earth; we are only considering the changes in magnitude during the modern warming interval. Our excursion direction is up, so we discard the terms for a downward excursion. Based upon the remaining terms in our control volume equation, the top tier of the tree is this:

[Slide 1: top tier of the fault tree]

From the control volume standpoint, we have covered heat that enters our imaginary membrane, heat that exits the membrane, and heat that may have been stashed inside the membrane and is only being released now. I should emphasize that this capacitance in the top tier refers to heat stored inside the membrane prior to the modern warming that is subsequently released to create the modern warming.

This top tier contains our first logical bifurcation. The two terms on the left, heat input and heat loss, are based upon a supposition that annual changes in forcing will manifest soon enough that the change in temperature can be considered a direct response. This can involve a lag, as long as the lag does not approach the duration of the excursion. The third term, capacitance, accounts for the possibility that the modern warming was not a direct response to a forcing with an onset near the onset of our excursion. An alternative fault tree can be envisioned here with something else in the top tier, but the question of lags must be dealt with near the top of the tree because it constitutes a basic division of what type of data we need.

The next tier could be based upon basic mechanisms rooted in physics, increasing the granularity:

[Slide 2: next tiers of the fault tree]

The heat input leg represents heat entering the control volume, plus the heat generated inside. We have a few oddball prospective causes here that rarely see the light of day. The heat generated by anthropogenic combustion and geothermal heat are a couple of them. In this case, it is my understanding that there is no dispute that any increases above prior natural background combustion (forest fires, etc.) and geothermal releases are trivial. We put these on the tree to show that we have considered them, but we need not waste time here. Under heat loss, we cover all the possibilities with the two basic mechanisms of heat transfer, radiation and conduction. Conduction is another oddball. The conduction of heat to the vacuum of space is relatively low and would be expected to change only slightly in rough accordance to the temperature at TOA. With conduction changes crossed off, a decrease in outward radiation would be due to a decreased albedo, where albedo represents reflection across the entire electromagnetic spectrum. A control volume approach allows us to lump convection in with conduction. The last branch in our third tier is the physical mechanism by which a temperature excursion occurs due to heat being released from a reservoir, which is a form of capacitance discharge.

I normally do not start crossing off boxes until the full tree is built. However, if we cross off the oddballs here, we see that the second tier of the tree decomposes to just three mechanisms, solar irradiance increase, albedo decrease, and heat reservoir release. This comes as no revelation to climate scientists.

This is as far as I am going in terms of building the full tree, because the next tier gets big and I probably would not get it right on my own. Finishing it is an exercise left to the reader! But I will continue down the “albedo decrease” leg until we reach anthropogenic CO2-induced warming, the topic du jour. A disclaimer: I suspect that this tier could be improved by the scrutiny of actual SMEs.

[Slide 3: the albedo-decrease leg expanded down to anthropogenic CO2-induced warming]

The only leg shown fully expanded is the one related to CO2; the reader is left to envision the entire tree if each leg were to be expanded in a similar manner. The bottom left corner of this tree fragment shows anthropogenic CO2-induced warming in proper context. Note that we could have separated anthropogenic effects at the first tier of the tree, but then we would have two almost identical trees.

Once every leg is completed in this manner, the next phase of adding evidence begins.

Step 5. Apply documented evidence to each cause.

Here we assess the available evidence and decide whether it supports or refutes a root cause. The actual method used is often dictated by how much evidence we are dealing with. One simple way is to make a numbered list of evidence findings. Then when a finding supports a root cause, we can add that number to the fault tree block in green. When the same finding refutes a different root cause, we can add the number to the block in red. All findings must be mapped across the entire tree.

The established approach to attribution looks at the evidence based upon the evidence hierarchy and exploits any reasonable manner of simplification. The entire purpose of a control volume approach is to avoid having to understand the complex relationship that exists between variables within the control volume. For example, if you treat an engine as a control volume, you can put flow meters on the fuel and air intakes, a pressure gauge on the exhaust, and an rpm measurement on the output shaft. With those parameters monitored, and a bit of historical data on them, you can make very good predictions about the trend in rpm of the engine based upon changes in inputs without knowing very much about how the engine translates fuel into motion. This approach does not involve any form of modeling and is, as I mentioned, the rationale for using control volume in the first place.
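
To make the engine analogy concrete, here is a minimal Python sketch of a purely empirical input/output relationship of the kind described above: a least-squares fit of shaft speed against the monitored inputs, with no model of what happens inside the engine. All numbers are fabricated for illustration.

```python
import numpy as np

# Fabricated historical monitoring data: each row is one observation of
# [fuel flow, air flow, exhaust pressure]; rpm is the measured output.
inputs = np.array([
    [1.0, 10.0, 1.2],
    [1.2, 11.5, 1.3],
    [1.5, 14.0, 1.5],
    [1.8, 17.0, 1.7],
    [2.0, 19.0, 1.8],
])
rpm = np.array([1500.0, 1750.0, 2100.0, 2500.0, 2750.0])

# Least-squares fit of rpm against the inputs plus a constant term.
design = np.column_stack([inputs, np.ones(len(inputs))])
coeffs, *_ = np.linalg.lstsq(design, rpm, rcond=None)

# Predict the rpm trend at a new operating point without opening the engine.
new_point = np.array([1.6, 15.0, 1.55, 1.0])
print("predicted rpm:", float(new_point @ coeffs))
```

The climate analogue would relate the energy terms to the temperature record without modeling the machinery inside the membrane.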

The first question the fault tree asks of us is captured in the first tier. Was the modern warming caused by a direct response to higher energy input, a direct response to lower energy loss, or the release of heat stored during an earlier interval? If we consider this question in light of our control volume approach (we don’t really care how energy gets converted to surface temperature), we see that we can answer the question with simple data in units of energy (joules) or power (watts). Envision data from, say, 1950 to 1980, in terms of energy. We might find that for the 30-year interval, heat input was x joules, heat loss was y joules, and capacitance release was z joules. Now we compare that to the same data for the modern warming interval. If any one of the latter numbers is substantially more than the corresponding earlier numbers x, y, or z, we have come a long way already in simplifying our fault tree. A big difference would mean that we can lop off the other legs. If we see big changes in more than one of our energy quantities, we might have to reconsider our assumption that the system is stable.

In order to resolve the lower tiers, we need to take our basic energy change data and break it down by year, so joules/year. If we had reasonably accurate delta joules/year data relating to the various forcings, we could wiggle match between the data and the global temperature curve. If we found a close match, we would have strong evidence that forcings have an important near-term effect, and that (presumably) only one root cause matches the trend. If no forcing has an energy curve that matches the modern warming, we must assume capacitance complicates the picture.

Let’s consider how this would work. Each group of SMEs would produce a simple empirical chart for their fault tree block estimating how much energy was added or lost during a specific year within the modern warming, ideally based upon direct measurement and historical observation. These graphs would then be the primary evidence blocks for the tree. Some curves would presumably vary around zero with no real trend, others might decline, while others might increase. The sums roll up the tree. If the difference between the “heat gained” and “heat lost” legs shows a net positive upward trend in energy gained, we consider that as direct evidence that the modern warming was driven by heat gained rather than capacitance discharge. If those two legs sum to near zero, we can assume that the warming was caused by capacitance discharge. If the capacitance SMEs (those that study El Nino, etc.) estimate that a large discharge likely occurred during the modern warming, we have robust evidence that the warming was a natural cycle.
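
A minimal Python sketch of the roll-up logic just described, with invented joules-per-year placeholders standing in for the SMEs’ estimates; the threshold is arbitrary and exists only to make the branching explicit.

```python
# Invented annual energy estimates (joules/year) for the warming interval,
# one series per first-tier leg of the fault tree.
heat_gained           = [3.0, 3.5, 4.0, 4.2, 4.5]   # placeholder values
heat_lost             = [3.1, 3.2, 3.3, 3.4, 3.5]
capacitance_discharge = [0.2, 0.1, 0.0, 0.1, 0.2]

net_direct   = sum(heat_gained) - sum(heat_lost)     # direct forcing response
net_released = sum(capacitance_discharge)            # stored heat released

THRESHOLD = 1.0   # arbitrary significance cut-off for the example

if net_direct > THRESHOLD and net_direct > net_released:
    print("Evidence favors the heat-gained / heat-lost legs (direct response).")
elif net_released > THRESHOLD and net_released > net_direct:
    print("Evidence favors capacitance discharge (natural cycle).")
else:
    print("Legs cannot be resolved with these numbers; evidence blocks remain empty.")
```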

Step 6. Determine where evidence is lacking…

Once all the known evidence has been mapped, we look for empty blocks. We then develop a plan to fill those blocks as our top priority.

I cannot find the numbers to fill in the blocks in the AR documents. I suspect that the data does not exist for the earlier interval, and perhaps cannot even be well estimated for the modern warming interval.

Step 7. Execute plan to fill all evidence blocks.

Here we collect evidence specifically intended to address the fault tree logic. That consists of energy quantities from both before and during the modern warming. Has every effort been made to collect empirical data about planetary albedo prior to the modern warming? I suspect that this is a hopeless situation, but clever SMEs continually surprise me.

In a typical root cause analysis, we continue until we hopefully have just one unrefuted cause left. The final step is to exhaustively document the entire process. In the case of the modern warming, the final report would carefully lay out the necessary data, the missing data, and the conclusion that until and unless we can obtain the missing data, the root cause analysis will remain unresolved.

Part 3: The AGW Fault Tree, Climate Scientists, and the IPCC: A Sober Assessment of Progress to Date

I will begin this section by stating that I am unable to assess how much progress has been made towards resolving the basic fault tree shown above. That is not for lack of trying; I have read all the pertinent material in the IPCC Assessment Reports (ARs) on a few occasions. When I read these reports, I am bombarded with information concerning the CO2 box buried deep in the middle of the fault tree. But even for that box, I am not seeing a number that I could plug into the equations above. For other legs of the tree, the ARs are even more bewildering. If climate scientists are making steady progress towards being able to estimate the numbers to go in the control volume equations, I cannot see it in the AR documents.

How much evidence is required to produce a robust conclusion about attribution when the answer is not obvious? For years, climate scientists have followed reasoning that goes from climate model simulations to expert opinion, declaring that to be sufficient. But that is not how attribution works. Decomposition of a fault tree requires either a unique signature, or sufficient data to support or refute every leg of the tree (not every box on the tree, but every leg). At one end of the spectrum, we would not claim resolution if we had zero information, while at the other end, we would be very comfortable with a conclusion if we knew everything about the variables. The fault tree provides guidance on the sufficiency of the evidence when we are somewhere in between. My customers pay me to reach a conclusion, not muck about with a logic tree. But when we lack the basic data to decompose the fault tree, maintaining my credibility (and that of the SMEs as well) demands that we tell the customer that the fault tree cannot be resolved because we lack sufficient information.

The curve showing CO2 rise and the curve showing the modern global temperature rise do not look the same, and signal processing won’t help with the correlation. Instead, there is hypothesized to be a complex function involving capacitance that explains the primary discrepancy, the recent hiatus. But we still have essentially no idea how much capacitance has contributed to historical excursions. We do not know whether there is a single mode of capacitance that swamps all others, or whether there are multiple capacitance modes that go in and out of phase. Ocean capacitance has recently been invoked as perhaps the most widely endorsed explanation for the recent hiatus in global warming, and there is empirical evidence of warming in the ocean. But invoking capacitance to explain a data wiggle down on the fifth tier of a fault tree, when the general topic of capacitance remains unresolved in the first tier, suggests that climate scientists have simply lost the thread of what they were trying to prove. The sword swung in favor of invoking capacitance to explain the hiatus turns out to have two edges. If the system is capable of exhibiting sufficient capacitance to produce the recent hiatus, there is no valid argument against why it could not also have produced the entire modern warming, unless that can be disproven with empirical data or I/O test results.

Closing Comments

Most of the time when corporations experience a catastrophe such as a chemical plant explosion resulting in fatalities, they look to outside entities to conduct the attribution analysis. This may come as a surprise given the large sums of money at stake and the desire to influence the outcome, but consider the value of a report produced internally by the corporation. If the report exonerates the corporation of all culpability, it will have zero credibility. Sure, they can blame themselves to preserve their credibility, but their only hope of a credible exoneration is if it comes from an independent entity. In the real world, the objectivity of an independent study may still leave something to be desired, given the fact that the contracted investigators get their paycheck from the corporation, but the principle still holds. I can only assume when I read the AR documents that this never occurred to climate scientists.

The science of AGW will not be settled until the fault tree is resolved to the point that we can at least estimate a number for each leg in our fault tree based upon objective evidence. The tools available have thus far not been up to the task. With so much effort put into modelling CO2 warming while other fault tree boxes are nearly devoid of evidence, it is not even clear that the available tools are being applied efficiently.

The terms of reference for the IPCC are murky, but it is clear that it was never set up to address attribution in any established manner. There was no valid reason to not use an established method, facilitated by an entity with expertise in the process, if attribution was the true goal. The AR documents are position papers, not attribution studies, as exemplified by the fact that supporting and refuting arguments cannot be followed in any logical manner and the arguments do not roll up into any logical framework. If AGW is really the most important issue that we face, and the science is so robust, why would climate scientists not seek the added credibility that could be gained from an independent and established attribution effort?

October 24, 2014 Posted by | Science and Pseudo-Science

The Long Battle Over Pesticides, Birth Defects and Mental Impairment

By Dr. JANETTE D. SHERMAN, MD | CounterPunch | October 24, 2014

A recent spate of articles in the popular press concerning loss of intellect among children exposed to chlorpyrifos is important in the case of this pesticide. Although in-home use of chlorpyrifos was restricted in the U.S. in 2000, it is widely used in agriculture, and it poses a serious risk to the health and intellect of people working and living in proximity to treated fields. Detectable levels of chlorpyrifos in New York City children raise the question of exposure via food.

Across the U.S. we learn that students are doing poorly in school, and teachers and their unions are often blamed. Are teachers no longer competent to teach, or have children been “dumbed-down” by exposure to this neurotoxin?

The State of California is considering restrictions on use, but is prepared for strong opposition from the pesticide and big agricultural industries.

Back in the “Dark Ages” – a mere 50 years ago – when I was a medical student and intern at Wayne State University, I rotated through Children’s Hospital in Detroit. It was staffed by some of the most thoughtful and kind physician/professors I have ever met. I attended a clinic named “FLK,” otherwise known as the Funny Looking Kid clinic. There we saw children who had abnormal-looking faces, abnormal body parts, and often impaired intelligence. Many of the children required complicated medical care, but I don’t recall much discussion as to why they had these abnormalities that had dramatically cut short their futures and altered the lives of their families.

Realizing you have given birth to a child with birth defects is devastating – not only for the child, but for the family, and for society in general. If the child survives infancy, it means being “different” and having to cope with disability, and with having to learn alternative ways to function. For many families, it means 24/7 care of a child who can never live independently. For society the costs can be enormous – surgery (often multiple), medications, social services, special education, special equipment, then alternative living arrangements, if and when family cannot care for their child, now grown to a non-functional adult.

Although the neurotoxicity of pesticides has been known for decades, several national magazines have recently named the pesticide chlorpyrifos (Dursban/Lorsban) as an agent causing loss of intelligence, as well as birth defects and structural brain damage.

Dr. James Hamblin’s article in the March 2014 issue of The Atlantic, titled “The Toxins that Threaten Our Brains,” listed 12 commonly used chemicals, including chlorpyrifos, which is marketed as Dursban and Lorsban. The exposures described in the Atlantic article were urban, so we do not know exactly how widespread this epidemic is, especially if we do not include agricultural areas such as California, Hawaii and the Midwest.

That same month, The Nation published articles by Susan Freinkel (“Poisoned Politics”) and Lee Fang (“Warning Signs”) reporting adverse effects from exposure to Dursban and Lorsban.

Dr. Hamblin’s article generously cites Drs. Philip Landrigan of Mt. Sinai in New York City and Philippe Grandjean of Harvard, who warn that a “silent pandemic” of toxins has been damaging the brains of unborn children.

Dr. Landrigan chaired a 1998 meeting of the Collegium Ramazzini International Scientific Conference, held in Carpi, Italy. In attendance was Dr. Grandjean, whose research found “Methylmercury as a hazard to brain development.” Dr. Richard Jackson, from the U.S. CDC, was also in attendance, as well as U.S. governmental and university members.

At that Collegium Ramazzini International Scientific Conference, on October 25, 1998, I presented definitive data in my paper: “Chlorpyrifos (Dursban) exposure and birth defects: report of 15 incidents, evaluation of 8 cases, theory of action, and medical and social aspects.” This presentation followed my earlier publications beginning in 1994 wherein I reported damage to the unborn from the same pesticide.

The Ramazzini organization sent my paper to the European Journal of Oncology for publication. Since my paper reported birth defects, not cancer, the paper has received little notice, but the attendees, including the EPA, have known of the findings for 16 years.

Currently a new battle is occurring in Hawaii over the use of pesticides, especially by Dow AgroSciences, DuPont Pioneer, BASF Plant Science, and Syngenta on the island of Kauai, where giant seed companies develop Genetically Modified Organisms (GMOs) and other specialized seeds. The pesticides used there include alachlor, atrazine, chlorpyrifos, methomyl, metolachlor, permethrin and paraquat. The author, Paul Koberstein of Cascadia Times, estimates that more than 2,000 pounds of chlorpyrifos are used per acre per year on Kauai, compared to an average of less than 0.025 pounds for the U.S. mainland.

In addition to Hawaii, those exposed include workers and families in the Imperial Valley and other intensive agricultural areas of California where pesticide use is extensive. Using the Koberstein data, annual use of chlorpyrifos in California is approximately 1,500 pounds per acre.

Neurological Damage: Before and After Birth

Birth defects arise as a result of two mechanisms – damage to a gene, prior to fertilization, or damage to the growing cells of the fetus after life in the womb has begun. Differing from genetic damage, such as occurs in Down syndrome or Trisomy-21, the latter damage results from exposure of the developing fetus to agents called teratogens. For many years Mongolism was the name applied to children with growth delays, similar facial and hand features and intellectual deficits.

Chlorpyrifos is a unique pesticide. It is a combination of an organophosphate and a trichlorinated pyridinol (TCP). TCP is not only the feedstock used in the manufacture of chlorpyrifos, but also a contaminant in the product, and a metabolic breakdown product that is known to cause central nervous system abnormalities (hydrocephaly and dilated brain ventricles) and other abnormalities (cleft palate, skull and vertebral abnormalities) in fetuses, as reported by Dow Chemical Co.

In March 1995, I was asked to fly to Arkansas to see a child whose mother had been exposed to the pesticide Dursban (chlorpyrifos) early in the pregnancy of her daughter.

Mrs. S had been working in a bank when, in mid-March 1991, she noticed a man spraying the baseboards behind the station where she worked as a teller. She said she asked the man if it was okay to be in the area since she was pregnant, and she said the man told her it was “perfectly safe.” She said the spraying had occurred around 4 PM, that she worked at the bank until 6:30 PM, and that when she went home that evening she had nausea and a “bit of headache.” She said she returned to work the next day, felt nausea, but worked most of the day. An electrical fire at the drive-in window followed the pesticide event, and a technician used a fogger that sprayed a “citrus-like” chemical intended to deodorize the smoke odor. Mrs. S. said she worked at the bank until about April of that year, and then worked at a credit union until her daughter was born in September.

When Mrs. S. was about five months pregnant she had an ultrasound, which showed that her baby had enlarged ventricles in her brain. Further examination revealed absence of the septum pellucidum, a central portion of her brain. Mrs. S. had additional follow-up at a university center, as well as with her own physician, which showed a normal amniocentesis and normal chromosomes.

Both Mr. & Mrs. S. said that caring for their daughter A. has been a severe financial and emotional drain, sometimes requiring them to be up for 72 hours to try to soothe A.’s crying. A. had surgery to repair her cleft lip when she was six months old, and repair of her cleft palate and left eyelid when she was a year old.

Both cleft lip and palate can now be repaired (in areas with skilled surgeons, and insurance or other funds), but until they are, the child has difficulty feeding and risks poor nutrition and upper respiratory and lung problems as a result of aspiration of food.

Additional diagnostic procedures indicated that A has a cleft left eye (failure of her eye to fuse during development), and she cannot blink her eye or move the left side of her face.

A. was unable to sit up on her own by the time she was a year old and had to have her food pureed until she was two. As A. neared her 4th birthday, her parents realized that she could not hear, and they began a program of sign language with the aid of a speech therapist.

A.’s brother B. was born two years later and is well; he was sleeping through the night by the time he was two weeks of age.

I was given a tour of the bank where Mrs. S worked by its Senior Vice-President, and, to minimize stress to A., I examined her in the office and presence of her pediatrician. I also accompanied her parents to their home, where I could observe A. in her home surroundings.

A. was a small-boned child who walked with a wide-based, unsteady gait and who made audible sounds, but with no language content. Her head was enlarged from hydrocephaly, and she had a small bruise from a recent fall, a common occurrence for her.

Her abnormalities included the following, which were characteristic of findings in other children: low-set, tapering ears, wide-spaced nipples, and frequent infections. This litany is not meant to horrify readers, but to bring to attention the burdens imposed upon this child, her parents, and society as a whole. I evaluated seven more children, including two families that each had two children with similar, but more severe, medical conditions.

With the exception of child #1, the seven children were profoundly retarded, were diapered, could not speak, and required feeding.

I first met C & D in 1996, along with their parents and handsome, healthy older brother, at their attractive home on the West Coast. Both D (a girl) and C (a boy) were lying flat, diapered, mouths open, fists clenched, staring into space, and being fed by bottle. Even today, looking at the photographs reminds me what an enormous burden was dealt to that family.

Ultimately I evaluated eight children, and identified seven more, reported by Dow Chemical Co., the manufacturer, to EPA on November 2, 1994, with reporting delays of as long as seven years from when the corporation first learned of them. I obtained the reports via a Freedom of Information request (FOI) from EPA. The reports were labeled with the revealing name: “DERBI” – or – “Dow Elanco Research Business Index.”

When I saw seven more children, all of whom looked like siblings (much as Trisomy-21 or Down syndrome children do), it became clear to me that the cause was linked to Dursban, the prenatal exposure common to each.

Among the Dursban-exposed children, all 8 had both brain and palate abnormalities, seven had wide-spaced nipples and growth retardation, six had low vision or blindness, six had genital abnormalities, five had brain atrophy and external ear abnormalities, and four had absence of the corpus callosum, the critical connection between the two hemispheres of the brain. Chromosomal studies were normal in all 8 families. All families reported stress and an enormous financial burden in caring for their children.

In addition to the children with birth defects, I also evaluated a number of families and a group of adults who had been exposed at their work site. Of the workers, all 12 complained of headache, and three of dizziness. Eight had findings of central nervous system damage, and six had peripheral nervous system damage. The patients reported upper respiratory and chest symptoms, as well as nausea, vomiting, diarrhea, and four had incontinence. The families also reported abnormalities and deaths in their household pets.

In February 1996, my deposition in the first case was taken by three groups of attorneys representing the defendants, two principally defending Dow Elanco. I was questioned for three 8-hour days. Ultimately a list of 565 exhibits was accumulated that included over 10,000 pages of materials that I supplied and relied upon for my opinion. These materials included Dow documents and correspondence, EPA documents, legal depositions, basic embryology, biochemistry and toxicology of chlorpyrifos, medical records of other exposed children, patents, books, articles, etc, etc.

Chlorpyrifos was designed to be neurotoxic in action. It is an interesting pesticide, in that it has not only an organophosphate portion but also three chlorine atoms attached to a pyridinol ring. This ring is trichloropyridinol (TCP), a significant hazard because it is fat-soluble and persistent, for up to 18 years as claimed by Dow Chemical Co. TCP also forms the body of trichlorophenoxyacetic acid, part of Agent Orange, also linked to birth defects and cancer. Although that war ended in 1975, Agent Orange continues to be a risk to the Vietnamese, and to military troops that were stationed there.

According to multiple Dow documents, TCP is the feedstock for production of chlorpyrifos, a contaminant in the product, and a metabolic breakdown product. TCP has been demonstrated to cause central nervous system anomalies (hydrocephaly and dilated brain ventricles) as well as cleft palate, skull and vertebral abnormalities in the fetus at doses nontoxic to the mother, similar to the defects seen in affected children.

That TCP caused birth defects was known by Dow in 1987, but not reported to EPA until five years later, in 1992. TCP is used to manufacture chlorpyrifos and, as such, comes under regulation of Section 8(e) of the Toxic Substances Control Act (TSCA), rather than the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). Though there is a regulatory difference, TSCA states very clearly that “any person who manufactures, processes or distributes in commerce a chemical substance or mixture, or who obtains information which reasonably supports the conclusion that such substance or mixture presents a substantial risk of injury to health or the environment, shall immediately inform the Administrator of such information.” From 1976 to 1982, I was a member of a 16-person Advisory Committee to the EPA for TSCA, Chairman of the Risk-Benefit Assessment Group from 1977 to 1979, and a member of the Carcinogen Policy Sub-group from 1977 to 1981. It was clear that risks and benefits do not accrue to the same party. In the case of chlorpyrifos, the risks are to the unaware public, and the benefits to the corporation.

The Legal System is Not the Same as the Justice System

Bernard P. Whetstone was a well-established attorney who handled the initial birth defects case in Little Rock, Arkansas, and was aware of another case in that state. Mr. Whetstone was a “Southern Gentleman” with a soft drawl who had earned both a bachelor’s degree and a doctorate of jurisprudence, and he started practice in 1934. From 1995 he worked with Davidson and Associates, until he retired in 1999 at age 86. Mr. Whetstone died in 2001.

I was required to appear in court in Little Rock, where Judge Eisley ruled that I was not qualified. It is hard to believe that 10,000 pages of documents are not adequate, but that opinion was softened by the fact that he ruled that all of the plaintiff’s experts were not qualified. Another physician/toxicology expert and I evaluated additional patients (adults) who had developed multiple adverse effects, including central nervous system damage, so Dow, employing the Eisley decision, argued successfully in other court jurisdictions that we were not qualified to give an opinion.

The main Dow law firm was Barnes and Thornburg of Indianapolis, where DowElanco’s co-manufacturer, Eli Lilly, is located. Eli Lilly is a manufacturer of both pharmaceuticals and pesticides. Barnes & Thornburg has over 500 attorneys in 12 cities and appeared to be very well staffed and funded.

A recent news release noted that William W. Wales, who spent more than 30 years in the legal department of The Dow Chemical Company and Dow AgroSciences LLC, had joined Barnes & Thornburg LLP’s Indianapolis office as a partner in the firm’s litigation and corporate departments. “Bill’s depth and breadth of experience in a variety of matters will be a tremendous asset to many of our clients who are dealing with similar issues,” said Joseph G. Eaton, Vice Chair of the firm’s Litigation Department and Co-Chair of the Toxic Tort Practice Group. Joseph Eaton is one of the attorneys who took my extensive deposition. They were the most aggressive law firm I had ever encountered, and I have testified in more than 700 depositions and/or court appearances.

In defense of their product, the Dow attorneys argued that there were no reports of the levels of pesticides used or of existing exposure levels – a questionable tactic, since the corporation had never suggested or requested that such records be obtained.

Although the EPA stopped home use of Dursban in 2000, Lorsban is widely used in agriculture, on ornamentals, and in places where women, the unborn and children are exposed. For many, this exposure occurs without their knowledge or consent. How is this allowed to happen?

Is it successful advertising, recommendations from county and state agricultural agents, or an inept or politically adept EPA? Recall that two days after September 11, 2001, the then administrator of the U.S. Environmental Protection Agency and former governor of New Jersey, Christie Whitman, said, "EPA is greatly relieved to have learned that there appears to be no significant levels of asbestos dust in the air in New York City."

A week later, Whitman said: "Given the scope of the tragedy from last week, I am glad to reassure the people of New York and Washington, DC that their air is safe to breathe and their water is safe to drink."

In 2008, the U. S. EPA named Dow as an Energy Star Partner of the Year for excellence in energy management and reductions in greenhouse gas emissions.

Dow’s fleet of skilled lawyers has managed to shield the company from liability, achieving the reversal of a $925 million judgment for the contamination of the area around Rocky Flats, the Colorado facility that produced plutonium triggers for hydrogen bombs. A lawsuit filed against Dow and Monsanto by Vietnamese plaintiffs damaged by Agent Orange was also dismissed.

Dow is a multinational corporation and the third largest chemical manufacturer in the world, with sales of more than $57 billion in 2013. In addition to manufacturing insecticides, herbicides, fungicides, and genetically modified seeds, Dow also manufactures multiple plastics, polystyrene, polyurethane, synthetic rubber and bisphenol-A, as well as many other chemicals.

What are the chances that the use of Lorsban will be curtailed in the agricultural areas of Hawaii, California and elsewhere? Given what we know of the financial strength of the Dow Corporation, the weakness of the EPA, and our paid-for Congress, it does not look promising.

The Burden of Brain Damage 

If the top corporate officials were required to care for one of these severely brain-damaged children for a week, would it change their minds about the ethics of manufacturing chlorpyrifos and corporate profits?

No teacher can teach severely brain-damaged children to read and do math, which raises the larger question: is children's failure to learn due to poor teachers, or to subtle brain damage? If children are being damaged to varying degrees, from the profound damage seen in the 15 children cited in my research to "mild" learning and/or behavioral problems ranging from decreased IQ to Asperger's, hyperactivity and autism, how much is attributable to exposure to pesticides such as Dursban/Lorsban? If we blame poor teaching and teachers' unions, but do not stop the use of brain-damaging pesticides, where does that leave U.S. society as a source of creativity and intellect in this world?

Note: All of my chlorpyrifos/ Dursban documents have been accepted and will be archived at the National Library of Medicine, along with my other scientific, medical and legal research.

Janette D. Sherman, M.D. is the author of Life’s Delicate Balance: Causes and Prevention of Breast Cancer and Chemical Exposure and Disease, and is a specialist in internal medicine and toxicology. She edited the book Chernobyl: Consequences of the Catastrophe for People and Nature, written by A. V. Yablokov, V. B. Nesterenko and A. V. Nesterenko, published by the New York Academy of Sciences in 2009. Her primary interest is the prevention of illness through public education. She can be reached at: toxdoc.js@verizon.net and www.janettesherman.com

October 24, 2014 Posted by | Deception, Economics, Environmentalism, Science and Pseudo-Science, Timeless or most popular | , , , , , , , | Leave a comment

MH-17: The Untold Story

RT | October 22, 2014

Three months after Malaysia Airlines Flight MH17 was violently brought down from the skies over Ukraine, there are still no definitive answers to what caused the tragedy.

Civil conflict in the area prevented international experts from conducting a full and thorough investigation.

The wreckage should have been collected and scrupulously reassembled to identify all the damage, but this standard investigative procedure was never carried out. Until that is done, evidence can only be gleaned from pictures of the debris, the flight recorders or black boxes, and eyewitnesses’ testimonies. This may be enough to help build a picture of what really happened to the aircraft, whether it was brought down by a rocket fired from the ground or by gunfire from a military jet.

October 23, 2014 Posted by | Deception, False Flag Terrorism, Mainstream Media, Warmongering, Timeless or most popular, Video, War Crimes | | Leave a comment

US Court Rejects Argentina’s Appeal in Vulture Funds Case

teleSUR | October 23, 2014

The ongoing saga between Argentina and the vulture funds continues after a U.S. court rejected Argentina’s appeal to be allowed to pay its creditors.

A United States appeals court has dismissed Argentina’s appeal of an order directing Bank of New York Mellon to hold on to the US$539 million that Argentina deposited to pay its bondholders.

The appeals court said that it lacked jurisdiction over the appeal because an earlier ruling by U.S. District Judge Thomas Griesa was a clarification, rather than a modification, of his previous rulings on the matter.

In his original ruling, Judge Griesa held that Argentina’s deposit with Bank of New York Mellon to pay bondholders who had renegotiated their debt with the country was “illegal,” and he ordered the bank to hold on to the funds.

No progress has been made in talks between the country and hedge-fund holdouts, led by Elliott Management and Aurelius Capital Management.

Griesa has also scheduled another hearing on December 2 to weigh arguments over whether Citigroup Inc (C.N) should be allowed to process an expected interest payment by Argentina on bonds issued under its local laws following its 2002 default.

The hearing comes less than a month before an interest payment by Argentina on the bonds is due on December 31.

The holdouts, commonly referred to as vulture funds, had previously rejected all of Argentina’s past restructuring offers on the country’s debt, most of which was incurred under Argentina’s military dictatorships and neoliberal governments. Ninety-two percent of creditors accepted the offer, and Argentina has been taking steps to continue paying them back in spite of Judge Griesa’s ruling.

October 23, 2014 Posted by | Economics | , , , , , | Leave a comment

Boeing reneges on Iran business pledge

Press TV – October 23, 2014

American aircraft-manufacturing giant Boeing has ended a 35-year break in business with Iran, supplying the country’s national flag carrier with a cargo of aircraft-related items.

But the sale did not include spare parts for Iranian aircraft as promised by Washington following last year’s nuclear deal between Iran and six world powers.

“During the third quarter of 2014, we sold aircraft manuals, drawings, and navigation charts and data to Iran Air,” Boeing said in its quarterly report on Wednesday.

This is the first time that the American company has sold safety items to Iran Air since the 1979 Islamic Revolution.

The business deal brought Boeing USD 120,000 in revenue, the report added.

The sales came after the US Treasury Department issued a license in April that allowed Boeing to provide “spare parts that are for safety purposes” to Iran for a “limited period of time.”

Boeing said the plane parts were purchased “consistent with guidance from the US government in connection with ongoing negotiations.”

Boeing, which is still banned from selling new aircraft to the Islamic Republic, said that it could sell more plane parts to Iran Air in the future.

“We may engage in additional sales pursuant to this license,” it added.

In February, two major US aerospace manufacturers, Boeing and General Electric, applied for export licenses in order to sell airliner parts to Iran following an interim nuclear agreement between Tehran and the P5+1 group of world powers in November 2013.

Under the deal dubbed the Geneva Joint Plan of Action, the six countries – the US, France, Britain, Russia, China and Germany – undertook to provide Iran with some sanctions relief in exchange for Tehran agreeing to limit certain aspects of its nuclear activities.

In the past decade, Iran has witnessed several major air accidents blamed on its aging aircraft, a consequence of the US sanctions that prevent Iran from buying aircraft spare parts.

October 23, 2014 Posted by | Deception | , , , | Leave a comment

Secret Project Created Weaponized Ebola in South Africa in the 1980s

By Daniel Taylor | Old-Thinker News | October 20, 2014

“No records are available to confirm that the biological agents were destroyed.”

Operating out of South Africa during the Apartheid era in the early 1980s, Dr. Wouter Basson launched a secret bioweapons project called Project Coast. The goal of the project was to develop biological and chemical agents that would either kill or sterilize the black population and assassinate political enemies. Among the agents developed were Marburg and Ebola viruses.

Basson is surrounded by cloak and dagger intrigue, as he told Pretoria High court in South Africa that “The local CIA agent in Pretoria threatened me with death on the sidewalk of the American Embassy in Schoeman Street.” According to a 2001 article in The New Yorker magazine, the American Embassy in Pretoria was “terribly concerned” that Basson would reveal deep connections between Project Coast and the United States.

In 2013, Basson was found guilty of “unprofessional conduct” by the South African health council.

Bioweapons expert Jeanne Guillemin writes in her book Biological Weapons: From the Invention of State-Sponsored Programs to Contemporary Bioterrorism, “The project’s growth years were from 1982 to 1987, when it developed a range of biological agents (such as those for anthrax, cholera, and the Marburg and Ebola viruses and for botulinum toxin)…”

Basson’s bioweapons program officially ended in 1994, but there has been no independent verification that the pathogens created were ever destroyed. The order to destroy them went directly to Dr. Basson. According to the Wall Street Journal, “The integrity of the process rested solely on Dr. Basson’s honesty.”

Basson claims to have had contact with western agencies that provided “ideological assistance” to Project Coast. Basson stated in an interview shot for the documentary Anthrax War that he met several times with Dr. David Kelly, the infamous UN weapons inspector in Iraq. Kelly was a top bioweapons expert in the United Kingdom. He was found dead near his home in Oxfordshire in 2003. While the official story claims he committed suicide, medical experts highly doubt that account.

In a 2007 article from the Mail Online, it was reported that a week prior to his death, Dr. Kelly was to be interviewed by MI5 about his ties to Dr. Basson.

Dr. Timothy Stamps, Minister of Health of Zimbabwe, suspected that his country was under biological attack during the time that Basson was operating. Stamps told PBS Frontline in 1998 that “The evidence is very clear that these were not natural events. Whether they were caused by some direct or deliberate inoculation or not, is the question we have to answer.”

Stamps specifically named the Ebola and Marburg viruses as suspect. Stamps thinks that his country was being used as a testing ground for weaponized Ebola.

“I’m talking about anthrax and cholera in particular, but also a couple of viruses that are not endemic to Zimbabwe [such as] the Ebola type virus and, we think also, the Marburg virus. We wonder whether in fact these are not associated with biological warfare against this country during the hostilities… Ebola was along the line of the Zambezi [River], and I suspect that this may have been an experiment to see if a new virus could be used to directly infect people.”

The Ghanaian Times reported in early September on the recent Ebola outbreak, noting connections between Basson and bioweapons research. The article points out that, “… there are two types of scientists in the world: those who are so concerned about the pain and death caused to humans by illness that they will even sacrifice their own lives to try and cure deadly diseases, and those who will use their scientific skill to kill humans on the orders of… government…”

Indeed, these ideas are not new. Plato wrote over 2,000 years ago in his work The Republic that a ruling elite should guide society, “… whose aim will be to preserve the average of population.” He further stated, “There are many other things which they will have to consider, such as the effects of wars and diseases and any similar agencies, in order as far as this is possible to prevent the State from becoming either too large or too small.”

As revealed by The Age, Nobel prize-winning Australian microbiologist Sir Macfarlane Burnet secretly urged the Australian government in 1947 to develop bioweapons for use against the “overpopulated countries of South-East Asia.” In a 1947 meeting with the New Weapons and Equipment Development Committee, the group recommended that “the possibilities of an attack on the food supplies of S-E Asia and Indonesia using B.W. agents should be considered by a small study group.”

This information gives us an interesting perspective on the recent unprecedented Ebola outbreak. Is it an organic natural phenomenon? Did this strain of Ebola accidentally escape from a bioweapons lab? Or, was it deliberately released?

October 23, 2014 Posted by | Ethnic Cleansing, Racism, Zionism, Timeless or most popular, Video | , , | 2 Comments

Ottawa shooting: a false flag designed to steal away our freedoms?

Brandon Martinez | Non-Aligned Media | October 22, 2014

I’m not one to hastily jump to conclusions about events like these, but the alleged shooting at the Canadian parliament and a nearby war memorial that took place today smells like a false-flag operation designed to expedite the Harper regime’s militarist agenda.

The mainstream media is in a furor over the incident. Non-stop wall-to-wall coverage has commenced. Even American and British outlets have picked up the story.

One very noticeable clue as to the fraudulent nature of this event is the immediate call from establishment propagandists for a crackdown on free speech (what they call “hate speech”) and for the bolstering of Orwellian “anti-terrorism” laws, which will in effect hand the state unlimited powers to spy on the citizenry of Canada and snuff out dissidents.

For example, the former CSIS Assistant Director Ray Boisvert said this on CBC:

“We need to get at those who are the purveyors of hate. So those who proselytize, those who are radicalizing, we need to find ways to go after them with respect to hate speech, or perhaps it’s time for new legislation under the anti-terrorism act, as we’re seeing in the UK.”

The former Canadian spy boss essentially echoed what British PM David Cameron said in a UN speech last month wherein he called for “non-violent extremists” to be criminalized. The traitorous British statesman specifically named 9/11 and 7/7 skeptics as falling within his dubious definition of “non-violent extremists.”

Another suspicious guest on the aforementioned CBC program used innuendo to try to link the Ottawa shooting to ISIS and Islamism, conveniently at a time when Stephen Harper is looking to justify his decision to whore out our military in the US-led bombing initiative in Iraq.

Shortly after the false-flag attacks of 9/11, the Canadian government mimicked its US counterpart by passing anti-terror laws, which included the infamous “Section 13” provision in the Human Rights Act that was subsequently used by Zionists and their agents to silence critics on the internet.

Look for more of the same from the Zionist regime in Ottawa in the coming days. The mainstream media’s job is to whip up hysteria in order to scare the populace into accepting draconian laws that will eliminate our freedoms. Unfortunately most of the population are lemmings who will believe anything the government or media tells them and willingly forfeit their freedoms to the deceptive miscreants who currently occupy our government.

In any case, one cannot discount the very real possibility that the Canadian state had a hand in this.

Click here to listen to Joshua Blakeney’s commentary on the matter.

October 23, 2014 Posted by | Civil Liberties, False Flag Terrorism, Mainstream Media, Warmongering | , | Leave a comment

The forgotten coup – how America and Britain crushed the government of their ‘ally’, Australia

By John Pilger | October 23, 2014

Across the political and media elite in Australia, a silence has descended on the memory of the great, reforming prime minister Gough Whitlam, who has died. His achievements are recognised, if grudgingly, his mistakes noted in false sorrow. But a critical reason for his extraordinary political demise will, they hope, be buried with him.

Australia briefly became an independent state during the Whitlam years, 1972-75. An American commentator wrote that no country had “reversed its posture in international affairs so totally without going through a domestic revolution”. Whitlam ended his nation’s colonial servility. He abolished Royal patronage, moved Australia towards the Non-Aligned Movement, supported “zones of peace” and opposed nuclear weapons testing.

Although not regarded as on the left of the Labor Party, Whitlam was a maverick social democrat of principle, pride and propriety. He believed that a foreign power should not control his country’s resources and dictate its economic and foreign policies. He proposed to “buy back the farm”. In drafting the first Aboriginal land rights legislation, his government raised the ghost of the greatest land grab in human history, Britain’s colonisation of Australia, and the question of who owned the island-continent’s vast natural wealth.

Latin Americans will recognise the audacity and danger of this “breaking free” in a country whose establishment was welded to great, external power. Australians had served every British imperial adventure since the Boxer rebellion was crushed in China. In the 1960s, Australia pleaded to join the US in its invasion of Vietnam, then provided “black teams” to be run by the CIA. US diplomatic cables published last year by WikiLeaks disclose the names of leading figures in both main parties, including a future prime minister and foreign minister, as Washington’s informants during the Whitlam years.

Whitlam knew the risk he was taking. The day after his election, he ordered that his staff should not be “vetted or harassed” by the Australian security organisation, ASIO – then, as now, tied to Anglo-American intelligence. When his ministers publicly condemned the US bombing of Vietnam as “corrupt and barbaric”, a CIA station officer in Saigon said: “We were told the Australians might as well be regarded as North Vietnamese collaborators.”

Whitlam demanded to know if and why the CIA was running a spy base at Pine Gap near Alice Springs, a giant vacuum cleaner which, as Edward Snowden revealed recently, allows the US to spy on everyone. “Try to screw us or bounce us,” the prime minister warned the US ambassador, “[and Pine Gap] will become a matter of contention”.

Victor Marchetti, the CIA officer who had helped set up Pine Gap, later told me, “This threat to close Pine Gap caused apoplexy in the White House. … a kind of Chile [coup] was set in motion.”

Pine Gap’s top-secret messages were de-coded by a CIA contractor, TRW. One of the de-coders was Christopher Boyce, a young man troubled by the “deception and betrayal of an ally”. Boyce revealed that the CIA had infiltrated the Australian political and trade union elite and referred to the Governor-General of Australia, Sir John Kerr, as “our man Kerr”.

Kerr was not only the Queen’s man; he also had long-standing ties to Anglo-American intelligence. He was an enthusiastic member of the Australian Association for Cultural Freedom, described by Jonathan Kwitny of the Wall Street Journal in his book ‘The Crimes of Patriots’ as “an elite, invitation-only group… exposed in Congress as being founded, funded and generally run by the CIA”. The CIA “paid for Kerr’s travel, built his prestige… Kerr continued to go to the CIA for money”.

When Whitlam was re-elected for a second term, in 1974, the White House sent Marshall Green to Canberra as ambassador. Green was an imperious, sinister figure who worked in the shadows of America’s “deep state”. Known as the “coupmaster”, he had played a central role in the 1965 coup against President Sukarno in Indonesia – which cost up to a million lives. One of his first speeches in Australia was to the Australian Institute of Directors – described by an alarmed member of the audience as “an incitement to the country’s business leaders to rise against the government”.

The Americans and British worked together. In 1975, Whitlam discovered that Britain’s MI6 was operating against his government. “The Brits were actually de-coding secret messages coming into my foreign affairs office,” he said later. One of his ministers, Clyde Cameron, told me, “We knew MI6 was bugging Cabinet meetings for the Americans.” In the 1980s, senior CIA officers revealed that the “Whitlam problem” had been discussed “with urgency” by the CIA’s director, William Colby, and the head of MI6, Sir Maurice Oldfield. A deputy director of the CIA said: “Kerr did what he was told to do.”

On 10 November, 1975, Whitlam was shown a top secret telex message sourced to Theodore Shackley, the notorious head of the CIA’s East Asia Division, who had helped run the coup against Salvador Allende in Chile two years earlier.

Shackley’s message was read to Whitlam. It said that the prime minister of Australia was a security risk in his own country. The day before, Kerr had visited the headquarters of the Defence Signals Directorate, Australia’s NSA, where he was briefed on the “security crisis”.

On 11 November – the day Whitlam was to inform Parliament about the secret CIA presence in Australia – he was summoned by Kerr. Invoking archaic vice-regal “reserve powers”, Kerr sacked the democratically elected prime minister. The “Whitlam problem” was solved, and Australian politics never recovered, nor the nation its true independence.


October 23, 2014 Posted by | Timeless or most popular | , , , , | Leave a comment

Free Speech for Some Means Free Speech for None

By Gabe Rottman | ACLU | October 22, 2014

In honor of Free Speech Week, let’s take a moment to acknowledge the obvious. Free speech is incredibly, almost unbelievably important, especially in a democracy.

It can also be unpleasant, uncomfortable and even downright offensive. Which can make defending it rather awkward at times.

Let’s take a trip back to Boston during this week in 1923:

Beantown’s Democratic machine boss and chief executive is the flamboyant Mayor James Michael Curley, a felon, rake, and hometown hero. As the Boston Globe put it, he “served four terms as mayor, four terms in Congress, one term as governor, and two terms in jail.”

Another popular political force in those days was the Ku Klux Klan. At its height in the 1920s, it effectively ran several states and would stage rallies seeking support in the rapidly urbanizing northern cities, including Boston, where racial and religious tensions were taut.

Mayor Curley—a hero among the city’s Irish-American working class—saw a campaign issue. On October 23, 1923, while calling himself a “stout stickler for freedom of meeting, speech and press,” he banned peaceful Klan meetings in Boston. In response to a letter from the local ACLU condemning the KKK but strongly defending the group’s right to speak and gather, Curley said, “The Klan cannot expect to shelter itself behind the rights it denies and the guaranties it repudiates.”

The argument has some appeal. Why should we tolerate intolerance, especially by a group as objectionable as the Klan? Consider, however, another move against unpopular speech by the good mayor. In 1925, Mayor Curley banned Margaret Sanger—the birth control activist and founder of Planned Parenthood—from speaking in Boston. In doing so, he lashed out against the ACLU and explicitly linked the Sanger ban to his moves against the KKK.

Having banned the Klan, silencing Sanger was just another step down that road. When you put some lawful speech outside the protection of the First Amendment because it is unpopular or even offensive, speech you like will invariably be lumped in as well. The KKK of the 1920s was a horrific thing. But Mayor Curley proved that progressive social reformers could be painted as equally horrific and their speech just as deserving of suppression.

Fortunately, despite the efforts of Curley and many like him, free speech protections grew muscle in the decades to follow. And support for contraception and similar social reforms started to win in the marketplace of ideas, while the Klan ate dust in the bin of history.

The ACLU continues to support free speech for all precisely because of these historical experiences. We understand that our position will allow some speech that is not just unpopular, but possibly deplorable. But our defense of speech regardless of speaker comes down to a simple truth: once you give the government the ability to silence unpopular speech, no one is safe. Once you start playing favorites with the protections of the First Amendment, you put yourself at the mercy of shifting political whims.

Free speech only for some translates directly into free speech for none.

October 23, 2014 Posted by | Civil Liberties, Full Spectrum Dominance, Timeless or most popular | , | 1 Comment

JSIL vs. ISIS


By Samer Jaber | Al-Akhbar | October 23, 2014

The Jewish state of Israel in the Levant (JSIL) and the Islamic State in Iraq and Syria (ISIS) are different in many ways. The most fundamental difference is that the former is a recognized state and a member of the United Nations, while the latter is not recognized as a legitimate polity and is considered a political/military terrorist organization. However, the two share core characteristics that define them and by recognising these similarities observers may be able to make predictions about their futures.

Divine right to exist

Both JSIL and ISIS display what might be termed “self-defined righteousness.” Although Israel is a modern state, its politics and treatment of others (Palestinians) are based on religious concepts and principles that can be traced back to the first century BC and the teachings of Rabbi Hillel, someone who would be considered a fundamentalist today. He instructed Jews to have a religious and social identity separate from those of other people (tribes). Israel introduces itself to the international community as a Jewish state and, based on this interpretation of Zionist Judaism, is a home to Jews wherever they are in the world. In other words, it is a state which includes all Jews but excludes the indigenous people of Palestine, the Palestinians. It uses its interpretation of Judaism to deny Palestinians equal rights and prevent them from accessing their lands.

ISIS believes that it is enacting God’s will and defines itself as the force to enforce the Islamic moral code, religious rituals and law (Sharia). ISIS’ interpretation of Islam goes back to Ibn Taymiyyah (1263-1328) who promoted the idea that Muslims are different from non-Muslims both in their way of life as well as in religious instruction. This notion of non-acceptance together with cultural differences led Ibn Taymiyyah’s followers to the practice of excluding others and in some cases putting them to death. This particular interpretation of Islam also means the rejection of other branches of Islam.

The form of righteousness practiced by ISIS leaders and Israeli politicians is also used to set apart the “good” people from the “bad” ones. The good are those who believe and support their respective political projects while the bad ones are those who stand against them. It is this stance that makes it permissible for Israel to inflict damage on the bad ones and reward the good. As such, the Palestinian people are depicted in the official Israeli narrative as the bad people who work hard to inflict damage on the good Israeli Jews.

Historically, the state of Israel was established on the self-proclaimed premise of the Zionist movement that anti-Semitism and murder might surge again in the world, and that the resurgence of another wave of anti-Semitism would inflict another Holocaust on the world’s Jews. The Zionist movement took anti-Semitism and the Holocaust out of their historical context. In other words, the concepts were given an absolute, ahistoric “religious” meaning. Consequently, the “Jews” started to become reified as an ethnic identity, and Israel as a refuge for the world’s Jewry from harm.

For ISIS, one of the underlying reasons for Muslims’ degeneration over the centuries is that too many people have strayed far from the fundamental principles of Islam. The role of ISIS is to establish an Islamic state ruled by the caliph. It considers itself to be the force that will revive true Islam and create a state in which all Muslims can live under its interpretation of Islam. Similarly to Israel, where non-Jews are discriminated against, there is no place for non-Muslims to live as equal citizens in the so-called Islamic State.

Both JSIL and ISIS use self-serving interpretations of religious texts to enact pragmatic politics. The Zionist narrative that gave birth to the state of Israel, and is now its official ideology, starts with the idea that the Jews are God’s chosen people and that God promised them the holy land. These two concepts of chosenness are ahistorical, unconditional, and self-limited. Thus, settler colonial expansion in Palestine beyond the 1948 borders is seen as the redemption of the biblically named Judea and Samaria for the Jewish people. Putting Palestinian communities under closure during Jewish holidays is usually disguised as a religious instruction and therefore not seen for what it is: a measure of control.

ISIS also claims it is justified in its actions; it considers itself the group fighting for God and enforcing the latter’s instructions on earth. The group’s interpretation of religious texts is based on its spiritual-political leaders’ rulings that place people into two main categories – believers in ISIS’ ideology are viewed as being on the right path for following the “correct” version of Islam while everyone else, including followers of other branches of Islam, is on the wrong path. Thus the expulsion or execution of Iraqi Christians and Yazidis in Mosul who refuse to convert or pay Jizya (a tax paid by non-Muslims) is introduced as a religious instruction that permits politically motivated discrimination.

Indiscriminate attacks on perceived “enemies”

International humanitarian law forbids parties in armed conflict from deliberately launching attacks against civilians but both Israel and ISIS carry out indiscriminate attacks against their enemies, and they cite similar justifications for such attacks, mainly operational reasons.

JSIL, like ISIS, says that engaging in conflict in residential areas makes it difficult to avoid harm to civilians. Israel, which deems itself “superior” to others, says it launches military operations to prevent harm to its own people, whose lives are worth much more than those of “others.” ISIS believes it is on the right path and views everyone else as living in a state of sinfulness; according to the group, sinners deserve to be put to death. Ultimately, ISIS and Israel attack civilians as part of their strategy to dispose of the natives and remove them from their lands. As such, in their quest for control of the land they both practice ethnic cleansing under a myriad of guises.

In areas controlled by ISIS, in both Syria and Iraq, the group has carried out mass executions of opposition militants captured by its forces, and any person who assists its enemies is liable to be sentenced to death. Israel carries out a similar strategy of collective punishment against Palestinian resistance. It used it in its latest war on the Gaza Strip and during the so-called Operation Cast Lead in 2008-09. Even when Israel states that its attacks are intended to kill only resistance fighters, its bombardment of residential areas always leads to the killing of civilians. These attacks are clearly designed to target and punish the combatants’ families and homes.

During Israel’s latest war on Gaza in the summer of 2014, the Israeli army intentionally converted 40 percent of the Gaza Strip into uninhabited land, displacing up to 500,000 Palestinians from their neighborhoods. This is the same tactic ISIS has been using in vast swaths of Syria and Iraq. The latest instance is the ongoing fighting in Kobane, the Kurdish city under Syrian jurisdiction, where ISIS’ shelling has forced the majority of the city’s residents to flee.

Displacement, collective punishment, terrorism and ethnic cleansing in the name of God are but a few similarities between the two entities. It is worth considering how the state of Israel has embraced the legend of the Maccabees, a sect of Judaism which fought other Jews and foreign powers in the name of piousness and righteousness, and how it has incorporated it within the contemporary ethos. The Maccabees were fundamentalists who used violence against their enemies, both Jewish and non-Jewish, and carried out forced conversions much as ISIS does today.

Samer Jaber is a political activist and researcher. He is the managing director for Dar el-Karma Inc. for Media, Researches and Publication. He tweets at @Jerusalem_sbj

October 23, 2014 Posted by | Ethnic Cleansing, Racism, Zionism, War Crimes | , , , , , , , , , | Leave a comment

France puts pro-Palestinian campaigner on trial

Press TV – October 23, 2014

The French government has prosecuted a pro-Palestinian activist for disregarding the official ban on anti-Israel rallies during Israel’s recent offensive on the Gaza Strip, Press TV reports.

France has put the spokesperson of the New Anti-Capitalist Party on trial for his attempts to organize an “illegal demonstration” against the Israeli regime.

Meanwhile, several demonstrators held a rally on Wednesday to protest against the government’s prosecution of the pro-Palestinian campaigner.

“To incriminate the spokesman of a political party who is also a strong supporter of unions… is totally unjustified and unacceptable. We would like to know why the government singled him out,” said Patrick Picard, a member of the General Confederation of Labor (CGT).

France was heavily criticized by rights groups after it officially banned demonstrations against the Israeli regime’s deadly attacks on the besieged Gaza Strip in the summer. Thousands of people defied the French government’s decision, saying it was a glaring breach of their basic constitutional right to demonstrate.

“This government made two decisions this summer: to support the extreme-right regime of Benjamin Netanyahu, which was in the process of massacring people in Gaza and then, … it tried to weaken the Palestinian solidarity movement here in France by claiming it was anti-Semitic and violent which we totally reject,” stated the national secretary of Left Front Party (PG), Eric Coquerel.

The French government has recently intensified the trend of prosecuting social activists who disagree with the unpopular policies of President Francois Hollande.

October 23, 2014 Posted by | Civil Liberties, Ethnic Cleansing, Racism, Zionism, Full Spectrum Dominance, War Crimes | , , , | Leave a comment