Aletho News

ΑΛΗΘΩΣ

The Dark Side of the 1991 Gulf War

Tales of the American Empire | November 22, 2019

The 1991 Gulf war is remembered as a great war. In reality, worldwide sanctions would have forced Iraq to peacefully withdraw. The Gulf war cost billions of dollars, killed or sickened a million people, left the region much worse off, assisted Iran, and caused a worldwide economic recession.

________________________________________

The creation of oil-rich Kuwait by Britain; Keesing’s; July 8, 1961; http://web.stanford.edu/group/tomzgro…

“Gulf War Documents: Meeting between Saddam Hussein and US Ambassador to Iraq April Glaspie”; Wikileaks/Global Research; March 5, 2012; https://www.globalresearch.ca/gulf-wa…

Stephen C. Pelletiere’s Jan 31, 2003 New York Times OpEd on the evidence that Iran was responsible for the gassing of 5000 Kurds at Halabja, not Iraq: https://www.nytimes.com/2003/01/31/op…

“Overwhelming Force – What happened in the final days of the Gulf War?”; Seymour Hersh; New Yorker; May 22, 2000; http://cryptome.org/mccaffrey-sh.htm

“Lucky War; Third Army in Desert Storm”; Richard M. Swain; US Army Command and General Staff College Press; 1994; https://history.army.mil/html/bookshe…

Video of tons of foreign munitions blown up at the Iraqi Khamisiyah storage area in 1991: https://www.youtube.com/watch?v=lzrqj…

September 27, 2020 Posted by | Militarism, Timeless or most popular, Video, War Crimes | , | Leave a comment

US Accuses Its Own Informant in Venezuela Case of Lying to Feds

teleSUR – September 26, 2020

United States federal law enforcement has accused a key informant in the trumped-up case targeting Venezuela’s Minister Tareck El Aissami of lying and of stealing US $140,000.

The Associated Press reports that Venezuelan-born businessman and pilot Alejandro Marin was arrested on September 19th in Miami on three counts of knowingly making false statements to U.S. federal agents, according to court filings.

Marin operates a charter flight business out of Miami’s Opa Locka executive airport and used that business in the plot against Vice President El Aissami.

The government of Venezuela has said that the years-long persecution of Minister Tareck El Aissami, like the recent charges against President Nicolas Maduro, is part of a permanent destabilization campaign against top officials of the Bolivarian government.

AP’s Joshua Goodman reports that in coordination with US authorities, Marin had transported millions of dollars on private jets, in violation of US-imposed unilateral coercive measures.

The US $140,000 is said to have gone missing during a US-directed operation in July 2018. Federal public defender Christian Dunham, who is representing Marin, says his client is expected to appear in court on September 30th for a pre-trial detention hearing.

According to the arrest order, the stolen funds were deposited to an account controlled by Marin over two years ago.

The US government has tried various avenues in the hope of generating evidence to forge a case and a narrative of criminality within the Venezuelan government.

With the mainstream media siding with Washington, the case against President Nicolas Maduro and other officials, in which the Justice Department placed a US $15 million bounty in March, remains dubious at best as new information about the corrupt and criminal nature of the Venezuelan opposition aligned with Juan Guaido comes to light.

September 27, 2020 Posted by | Corruption, Deception | | Leave a comment

US warns it is shutting down Baghdad embassy: WSJ

Press TV – September 27, 2020

The US has reportedly said it will close its embassy in Baghdad unless Iraq prevents rocket attacks on it.

US Secretary of State Mike Pompeo reportedly called Iraqi President Barham Salih and Iraqi Prime Minister Mustafa al-Kadhimi on Sunday.

“What we’re being told is that it is a gradual closure of the embassy over two to three months,” an Iraqi official was cited as saying in a Wall Street Journal report.

A State Department official also took the chance to point the finger at Iran.

“The Iran-backed groups launching rockets at our embassy are a danger not only to us but to the government of Iraq, neighboring diplomatic missions,” the official was cited as saying.

Iran’s Foreign Minister Mohammad Javad Zarif on Saturday condemned any assaults on diplomatic places, saying such attacks in Iraq must be stopped.

He touched upon attacks carried out against Iran’s diplomatic locations and highlighted the necessity of guaranteeing the dignity and security of Iranian diplomats in Iraq.

The heavily fortified Green Zone in Baghdad, which hosts foreign diplomatic sites and government buildings, has been frequently targeted by rockets and explosives in the past few years.

September 27, 2020 Posted by | Illegal Occupation | , | 2 Comments

MSM Promotes Yet Another CIA Press Release As News

By Caitlin Johnstone | September 23, 2020

The Washington Post, whose sole owner is a CIA contractor, has published yet another anonymously sourced CIA press release disguised as a news report which just so happens to facilitate longstanding CIA foreign policy.

In an article titled “Secret CIA assessment: Putin ‘probably directing’ influence operation to denigrate Biden”, WaPo’s virulent neoconservative war pig Josh Rogin describes what was told to him by unnamed sources about the contents of a “secret” CIA document which alleges that Vladimir Putin is “probably” overseeing an interference operation in America’s presidential election.

True to form, at no point does WaPo follow standard journalistic protocol and disclose its blatant financial conflict of interest with the CIA when promoting an unproven CIA narrative which happens to serve the consent-manufacturing agendas of the CIA for its new cold war with Russia.

And somehow in our crazy, propaganda-addled society, this is accepted as “news”.

The CIA has had a hard-on for the collapse of the Russian Federation for many years, and preventing the rise of another multi-polar world at all cost has been an open agenda of US imperialism since the fall of the Soviet Union. Indeed it is clear that the escalations we’ve been watching unfold against Russia were in fact planned well in advance of 2016, and it is only by propaganda narratives like this one that consent has been manufactured for a new cold war which imperils the life of every organism on this planet.

There is no excuse for a prominent news outlet publishing a CIA press release disguised as news in facilitation of these CIA agendas. It is still more inexcusable to merely publish anonymous assertions about the contents of that CIA press release. It is especially inexcusable to publish anonymous assertions about a CIA press release which merely says that something is “probably” happening, meaning those making the claim don’t even know.

None of this stopped The Washington Post from publishing this propaganda piece on behalf of the CIA. None of it stopped this story from being widely shared by prominent voices on social media and repeated by major news outlets like CNN, The New York Times, and NBC. And none of it stopped all the usual liberal influencers from taking the claims and exaggerating the certainty.

The CIA-to-pundit pipeline, wherein intelligence agencies “leak” information that is picked up by news agencies and then wildly exaggerated by popular influencers, has always been an important part of manufacturing establishment Russia hysteria. We saw it recently when the now completely debunked claim that Russia paid bounties on US troops to Taliban-linked fighters in Afghanistan first surfaced; unverified anonymous intelligence claims were published by mass media news outlets, then by the time it got to spinmeisters like Rachel Maddow it was being treated not as an unconfirmed analysis but as an established fact.

If you’ve ever wondered how rank-and-file members of the public can be so certain of completely unproven intelligence claims, the CIA-to-pundit pipeline is a big part of it. The most influential voices who political partisans actually hear things from are often a few clicks removed from the news report they’re talking about, and by the time it gets to them it’s being waved around like a rock-solid truth when at the beginning it was just presented as a tenuous speculation (the original aforementioned WaPo report appeared on the opinion page).

The CIA has a well-documented history of infiltrating and manipulating the mass media for propaganda purposes, and to this day the largest supplier of leaked information from the Central Intelligence Agency to the news media is the CIA itself. They have a whole process for leaking information to reporters they like (with an internal form that asks whether the information leaked is Accurate, Partially Accurate, or Inaccurate), as was highlighted in a recent court case which found that the CIA can even leak documents to select journalists while refusing to release them to others via Freedom of Information Act requests.

A lying, torturing, propagandizing, drug trafficking, assassinating, coup-staging, warmongering, psychopathic spook agency with an extensive history of deceit and depravity that selectively gives information to news reporters with whom it has a good relationship is never doing so for noble reasons. It is doing so for the same rapacious power-grabbing reasons it does all the other evil things it does.

The way mainstream media has become split along increasingly hostile ideological lines means that all the manipulators need to do to advance a given narrative is set it up to make one side look bad and then share it with a news outlet from the other side. The way media is set up to masturbate people’s confirmation bias instead of report objective facts will then cause the narrative to go viral throughout that partisan faction, regardless of how true or false it might be.

The coming US election and its aftermath is looking like it will be even more insane and hysterical than the last one, and the enmity and outrage it creates will give manipulators every opportunity to slide favorable narratives into the slipstream of people’s hot-headed abandonment of their own critical faculties.

And indeed they are clearly prepared to do exactly that. An ODNI press release last month which was uncritically passed along by the most prominent US media outlets reported that China and Iran are trying to help Biden win the November election while Russia is trying to help Trump. So no matter which way these things go the US intelligence cartel will be able to surf its own consent-manufacturing foreign policy agendas upon the tide of outrage which ensues.

The propaganda machine is only getting louder and more aggressive. We’re being prepped for something.

September 27, 2020 Posted by | Fake News, Mainstream Media, Warmongering, Russophobia | , , | 1 Comment

Canada Wildfires At Lowest Level In Years

By Paul Homewood | Not A Lot Of People Know That | September 27, 2020

According to the Met Office, global warming is leading to record breaking fires in North America.

Canada, of course, is a large part of North America, so surely fires should be getting worse there too.

In fact wildfires this year are running at just 8% of the 10-year average:


https://cwfis.cfs.nrcan.gc.ca/report

All provinces are well below average:

This suggests that meteorological conditions have been responsible for both the glut of fires in the US west and the dearth in Canada.

More significant though is the long term trend in Canada:

http://nfdp.ccfm.org/en/data/fires.php

1994, 1995 and 1998 recorded the biggest wildfire acreages. But over the full period, there is no obvious trend at all.

Which all rather makes a nonsense of the Met Office’s claim that hot, dry weather conditions promoting wildfires are becoming more severe and widespread due to climate change.

September 27, 2020 Posted by | Science and Pseudo-Science | | Leave a comment

‘Sort out your own internal affairs,’ Lukashenko tells Macron after French leader calls for his resignation

RT | September 27, 2020

If Emmanuel Macron believes heads of state must resign over street protests, he should have left office himself when Yellow Vest demonstrators took to the streets in France, Belarusian President Alexander Lukashenko has said.

Lukashenko, who has faced weeks of large-scale protests following the August 9 presidential election which the opposition insists was rigged, has labeled Macron an “immature politician.” His words were reported to the BelTA news agency by his press secretary.

As a “mature politician,” the Belarusian president advised his French counterpart “not to get distracted, but instead focus on the internal affairs of France. At least, begin solving the many problems that accumulated in the country,” Lukashenko added.

The statement came after Macron said in an interview with Le Journal du Dimanche that “it is clear that Lukashenko must go” because his “authoritarian administration” is unable to accept democracy.

“Judging by his own logic, the French president should’ve himself resigned two years ago when the Yellow Vests started going out to the streets in Paris,” Lukashenko pointed out.

The Belarusian president went on to note that, besides the Yellow Vests, France also needs to deal with the BLM movement and with “Muslim protests” – apparently referring to the 2019 anti-Islamophobia demonstrations.

Responding to Macron, Lukashenko quipped that Minsk is ready to provide a venue for negotiations on “a peaceful transition of power [from the French president] to any of the above-mentioned groups.”

The Yellow Vest demonstrations were provoked by fuel tax hikes in France in November 2018, but they quickly transitioned into a wider protest against Macron’s policies and economic injustice. Weekly rallies in Paris and other cities often turned violent, with French police facing accusations of using excessive force against the protesters.

On Sunday, thousands of people again marched in Minsk and other Belarusian cities, demanding Lukashenko’s immediate resignation and a new, fair election. The police said that around 200 people were arrested across the country.

September 27, 2020 Posted by | Progressive Hypocrite | , | 1 Comment

When False Flags Go Viral

False flag bioterrorism

The Corbett Report | September 26, 2020

… As it turns out, 9/11 may not prove to be the most long-lasting and world-changing false flag event to take place in the fall of 2001. The anthrax attacks that followed on the heels of “the day that changed everything” may in fact have more to say about the COVID-1984 world in which we find ourselves.

Viewers of my recent work on COVID-911 will already know about one of the remarkable “coincidences” linking the anthrax attacks of 2001 with the outbreak of SARS-CoV-2. Namely, that both events were preceded by a “simulation” that mirrored the real-life incident—Dark Winter in the case of the anthrax attacks and Event 201 in the case of the current scamdemic—complete with fake news segments dramatizing the real-life emergencies that would unfold on our tv screens months later. As you will also know, those events weren’t just co-hosted by the same organization (the Johns Hopkins Center for Health Security), but actually featured some of the same players who would go on to lay the groundwork for and participate in the US government’s COVID-19 response.

But those “coincidences” really only scratch the surface of the anthrax false flag. The real story of the anthrax attacks is much bigger than we can do justice to here, but it includes:

  • The revelation in the pages of the New York Times that the US government was running an illegal biological weapons program that was working to—among other things—genetically engineer weaponized anthrax (a revelation that was published on September 4, 2001, but quickly overshadowed by other events).
  • The death of Vladimir Pasechnik, a microbiologist who had worked on the Soviet germ warfare program weaponizing anthrax and other biological agents before defecting to Britain in 1989, who was hired by Britain to conduct his own research into anthrax antidotes at the UK’s secretive Porton Down bioweapon laboratory, and who died just weeks after the anthrax attacks took place.
  • The murder of Dr. David Kelly, who debriefed Pasechnik after his defection and offered him the job at Porton Down, and who had told his friend that he was going to write a book exposing what he knew about the bioweapons program before “killing himself” on Harrowdown Hill.

. . . and much, much else besides.

But for today, it serves merely to note that the anthrax attacks were indeed a false flag attack. In those first chaotic days of the attack, ABC’s Brian Ross began reporting from his “anonymous well-placed sources” that the anthrax spores contained traces of bentonite, a “troubling chemical additive” that just happened to be “a trademark of Iraqi leader Saddam Hussein’s biological weapons program.” Of course, this turned out to be a complete lie (a lie that Ross has never clarified or retracted to this day).

As was later confirmed, the spores in question were actually derived from the Ames strain, a strain of anthrax whose virulence makes it the “gold standard” for research into the bacterium by the biological warriors at the United States Army Medical Research Institute of Infectious Diseases. This made the attack almost certainly an inside job (although, it should be noted, the Ames strain is available to researchers in a number of laboratories around the world, including Porton Down).

Inevitably, the FBI “Amerithrax” investigation into the deadly anthrax letters—the largest investigation in the history of the Bureau—set its sights on a series of “lone wolves.” After failing to even bring charges against “person of interest” Steven Hatfill—a bioweapons expert who was awarded nearly $6 million in taxpayer money after years of harassment—the investigation ultimately landed on Bruce Ivins, a patsy who conveniently killed himself before ever even being charged for the monumental crime that was blamed on him.

The anthrax false flag killed multiple birds with one stone:

  • It associated the terror attack of 9/11 with a subsequent bioterror attack that was quickly connected to Saddam Hussein and Iraq. That association was still strong in the minds of many Americans (some of whom may still erroneously blame Iraq for the attack) during the build-up to the Iraq War in 2002 and 2003.
  • As Whitney Webb points out in her exhaustive report on the event, the anthrax attack also saved Bioport, the crony-connected DoD contractor that supplied the US military with the highly controversial anthrax vaccine. Amid growing concerns about the safety and efficacy of its vaccine, Bioport faced financial ruin . . . until the anthrax attacks happened and demand for its questionable product skyrocketed. Later rebranding as Emergent Biosolutions, the company benefited from the largesse of the Gates-backed Coalition for Epidemic Preparedness, and, as Webb notes, the company “is now set to profit from the Coronavirus (Covid-19) crisis.”
  • And, it also gave a gigantic shot in the arm to another major wing of the military-industrial complex: the “biodefense” sector. With the signing of the Biological Weapons Convention in 1972, biological weapons development was forced underground. Of course, it still went on, but now it was carried out under the mantle of “defense.” After all, one could never trust that those damn *Insert Bogeyman Here* would really get rid of their bioweapon stockpiles, and one needed to create bioweapons in order to understand how to protect against them. But such research was necessarily sidelined and shrouded in secrecy.

Before the anthrax attacks, bioweapons research had been sidelined and shrouded in secrecy. After the attacks, however, the US government—and indeed every government in the world—had a perfect excuse to vastly expand its biological weapons programs in the name of “biological security.” As Jonathan King, a professor of microbiology at MIT, explains:

“[The] response to the anthrax attacks and the bioterrorism initiative has been to launch a nationwide, billion-dollar campaign to ‘defend us’ from unknown terrorists. But the character of this program is roughly as follows: You say, ‘Well, what would the terrorists come up with? What’s the nastiest, most dangerous, most difficult-to-diagnose, difficult-to-treat microorganisms that we can think of. Well, let’s go bring that organism into existence so that we can figure out how to defend against it.’ The fact of the matter is, it’s indistinguishable from an offensive program in which you would do the same thing.”

Thus we get such innovations as the Armed Forces Institute of Pathology’s reconstruction of the 1918 Spanish flu from the tissue of a victim buried in the Alaska permafrost. Or the USAID-funded 2015 research at the Wuhan Institute of Virology that weaponized bat-derived coronavirus in experiments that even other molecular biologists warned were presenting the world with a “clear and present danger.” (Oh, and the USAID funding for the research was technically illegal at the time, but who’s keeping track, hey?)

The long story short is that we have indeed arrived at another, potentially even more dangerous era of false flag attack. At this point it isn’t the scary bearded Muslim suicide bombers who we are supposed to be afraid of, though. It’s scary bearded Muslim biologists. Or something like that. Maybe it’ll be the Russkies. Or the ChiComs. Or some shadowy terror group that arises from nowhere and starts claiming responsibility for Bill Gates’ threatened “Pandemic II.”

The point is that bioterrorism is now very much on the table and don’t think for a second that the globalists won’t resort to more spectacular bioterror attacks to keep the current biosecurity hysteria going.

The ridiculous Skripal affair and its even more absurd low-budget sequel (the Navalny hoax) are just a taste of what we are likely to see in the near future. We may scoff at the amateur theatrics of these false flag test runs, but it would be the same as someone in 1993 dismissing the first World Trade Center bombing as a ridiculous, bungled FBI op, instead of the first taste of much bigger attacks to come.

Conclusion

They say forewarned is forearmed, and I think that adage is especially apt when it comes to the subject of false flag attacks. The entire reason that these operations have been used by country after country for centuries is that they are so effective. And they are only effective because throughout those centuries the general public was unable to wrap their minds around a trick so devious and downright evil.

“But why would the government attack itself?” is not just the question of a brainwashed simpleton; it’s the question of an innocent and trusting soul who could never in a million years imagine doing something so underhanded.

But this is not 1800. It’s not even 2000. It’s 2020. The world has cottoned on to the trick.

Now we have to completely break the spell that governments have cast over the public. In the event of every spectacular terror attack (biological or otherwise), we have to take the history of false flag operations into account and put the government at the top of the list of suspects. When enough of the population has adjusted their thinking in this way, the trick will have lost its effectiveness and the globalists will have to abandon it altogether.

The only question is: Can we wake enough of the public up to these false flag tricks before Gates and his ilk get their “Pandemic II?”

This weekly editorial is part of The Corbett Report Subscriber newsletter.

To support The Corbett Report and to access the full newsletter, please sign up to become a member of the website.

September 27, 2020 Posted by | Deception, False Flag Terrorism, Timeless or most popular | | Leave a comment

How to understand scientific studies (in health and medicine)

By Sebastian Rushworth, M.D. | September 25, 2020

Considering how much misinformation is currently floating around in the area of health and medicine, I thought it might be useful to write an article about how to read and understand scientific studies, so that you can feel comfortable looking at first hand data yourselves and making your own minds up.

Ethical principles

Anyone can carry out a study. There is no legal or formal requirement that you have a specific degree or educational background in order to perform a study. All the earliest scientists were hobbyists, who engaged in science in their spare time. Nowadays most studies are carried out by people with some formal training in scientific method. In the area of health and medicine, most studies are carried out by people who are MD’s and/or PhD’s, or people who are in the process of getting these qualifications.

If you want to perform a study on patients, you generally have to get approval from an ethical review board. Additionally, there is an ethical code of conduct that researchers are expected to stick to, known as the Helsinki declaration, first adopted in 1964 and revised several times since, after it became clear that a lot of medical research that had been done up to that point was not very ethical (to put it mildly). The code isn’t legally binding, but if you don’t follow it, you will generally have trouble getting your research published in a serious medical journal.

The most important part of the Helsinki declaration is the requirement that participants be fully informed about the purpose of the study, and given an informed choice as to whether to take part or not. Additionally, participants have to be clearly informed that it is their right to drop out of a study at any point, without having to provide any reason for doing so.

Publication bias

The bigger and higher quality a scientific study is, the more expensive it is. This means that most big, high quality studies are carried out by pharmaceutical companies. Obviously, this is a problem, because the companies have a vested interest in making their products look good. And when companies carry out studies that don’t show their drugs in the best light, they will usually try to bury the data. When they carry out studies that show good results, however, they will try to maximize the attention paid to them.

This contributes to a problem known as publication bias. What publication bias means is that studies which show good effect are much more likely to get published than studies which show no effect. This is both because the people who did the study are more likely to push for it to be published, and because journals are more likely to accept studies that show benefit (because those studies get much more attention than studies that don’t show benefit).

So, one thing to be aware of before you start searching for scientific studies in a field is that the studies you can find on a topic often aren’t all the studies. You are most likely to find the studies that show the strongest effect. The effect of an intervention in the published literature is pretty much always bigger than the effect subsequently seen in the real world. This is one reason why I am skeptical of drugs, like statins, that show an extremely small benefit even in the studies produced by the drug companies themselves.
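To see why the published literature overstates effects, here is a minimal simulation (all numbers are invented purely for illustration): many identical trials of a modestly effective drug are run, but only the “statistically significant” ones make it into print.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.2          # the drug's real (modest) standardized effect
n_per_arm = 50             # patients per arm in each hypothetical trial
n_trials = 10_000

# Standard error of the difference in means between two arms of this size
se = np.sqrt(2 / n_per_arm)

# Each trial's estimated effect scatters around the true effect
estimates = rng.normal(true_effect, se, n_trials)

# A trial is "statistically significant" (and thus likely to be published)
# roughly when its z-value exceeds 1.96
published = np.abs(estimates / se) > 1.96

print(f"Average effect across ALL trials:       {estimates.mean():.2f}")             # ~0.20
print(f"Average effect across 'published' ones: {estimates[published].mean():.2f}")  # clearly larger
```

The trials that clear the significance bar are, by construction, the ones that happened to overestimate the effect, so the published average drifts upwards.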

There have been efforts in recent years to mitigate this problem. One such effort is the site clinicaltrials.gov. Researchers are expected to post details of their planned study on clinicaltrials.gov in advance of beginning recruitment of participants. This makes it harder to bury studies that subsequently don’t show the wanted results.

Most serious journals have now committed to only publish studies that have been listed on clinicaltrials.gov prior to starting recruitment of participants, which gives the pharmaceutical companies a strong incentive to post their studies there. This is a hugely positive development, since it makes it a little bit harder for the pharmaceutical companies to hide studies that didn’t go as planned.

Peer review

Once a study is finished, the researchers will usually try to get it published in a peer-reviewed journal. The first scientists, back when modern science was being invented in the 1600’s, mostly wrote books in which they described what they had done and what results they had achieved. Then, after a while, scientific societies started to pop up, and started to produce journals. Gradually science moved from books to journal articles. In the 1700’s the journals started to incorporate the concept of peer-review as a means to ensure quality.

As you can see, journals are an artifact of history. There is actually no technical reason why studies still need to be published in journals in a time when most reading is done on digital devices. It is possible that the journals will disappear with time, to be replaced by on-line science databases.

In recent years, there has been an explosion in the popularity of “pre-print servers”, where scientists can post their studies while waiting to get them into journals. When it comes to medicine, the most popular such server is medRxiv. The main problem with journals is that they charge money for access, and I think most people will agree that scientific knowledge should not be owned by the journals; it should be the public property of humankind.

Peer-review provides a sort of stamp of approval, although it is questionable how much that stamp is worth. Basically, peer-review means that someone who is considered an expert on the subject of the article (but who wasn’t personally involved with it in any way) reads through the article and determines if it is sensible and worth publishing.

Generally the position of peer-reviewer is an unpaid position, and the person engaging in peer-review does it in his or her spare time. He or she might spend an hour or so going through the article before deciding whether it deserves to be published or not. Clearly, this is not a very high bar. Even the most respected journals have published plenty of bad studies containing manipulated and fake data because they didn’t put much effort into making sure the data was correct. As an example, the early part of the covid pandemic saw a ton of bad studies which had to be retracted just a few weeks or months after publication because the data wasn’t properly fact-checked.

If the peer reviewer at one journal says no to a scientific study, the researchers will generally move on to another, less prestigious journal, and will keep going like that until they can get the study published. There are so many journals that everything gets published somewhere in the end, no matter how poor the quality.

The whole system of peer-review builds on trust. The guiding principle is the idea that bad studies will be caught out over the long term, because when other people try to replicate the results, they won’t be able to.

There are two big problems with this line of thinking. The first is that scientific studies are expensive, so they often don’t get replicated, especially if they are big studies of drugs. For the most part, no-one but the drug company itself has the cash resources to do a follow-up study to make sure that the results are reliable. And if the drug company has done one study which shows a good effect, it won’t want to risk doing a second study that might show a weaker effect.

The second problem is that follow-up studies aren’t exciting. Being first is cool, and generates lots of media attention. Being second is boring. No-one cares about the people who re-did a study and determined that the results actually held up to scrutiny.

Different types of evidence

In medical science, there are a number of “tiers” of data. The higher tier generally trumps the lower tier, because it is by its nature of higher quality. This means that one good quality randomized controlled trial trumps a hundred observational studies.

The lowest quality type of evidence is anecdote. In medicine this often takes the form of “case reports”, which detail a single interesting case, or “case series”, which detail a few interesting cases. An example could be a case report of someone who developed a rare complication, say baldness, after taking a certain drug.

Anecdotal evidence can generate hypotheses for further research, but it can never say anything about causation. If you take a drug and you lose all your hair a few days later, that could have been caused by the drug, but it could also have been caused by a number of other things. It might well just be coincidence.

After anecdote, we have observational studies. These are studies which take a population and follow it to see what happens to it over time. Usually, this type of study is referred to as a “cohort study”, and often, there will be two cohorts that differ in some significant way.

For example, an observational study might be carried out to figure out the long term effects of smoking. Ideally, you want a group that doesn’t smoke to compare with. So you find 5,000 smokers and 5,000 non-smokers. Since you want to know what the effect of smoking is specifically, you try to make sure that the two cohorts are as similar as possible in all other respects. You do this by making sure that both populations are around the same age, weigh as much, exercise as much, and have similar dietary habits. The purpose of this is to decrease confounding effects.

Confounding is when something that you’re not studying interferes with the thing that you are studying. So, for example, people who smoke might also be less likely to exercise. If you then find that smokers are more likely to develop lung cancer, is it because of the smoking or the lack of exercise? If the two groups vary in some way with regards to exercise, it’s impossible to say for certain. This is why observational studies can never answer the question of causation. They can only ever show a correlation.
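To make the smoking-and-exercise example concrete, here is a small simulated data set (entirely made up) in which exercise has no effect whatsoever on lung cancer, yet appears protective in the raw numbers simply because smokers exercise less:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

smoker = rng.random(n) < 0.3                        # 30% of people smoke
# Smokers are assumed to exercise less often (20% vs 60%)
exercises = rng.random(n) < np.where(smoker, 0.2, 0.6)
# Cancer risk depends ONLY on smoking (5% vs 0.5%); exercise plays no causal role
cancer = rng.random(n) < np.where(smoker, 0.05, 0.005)

def rate(mask):
    return cancer[mask].mean()

print(f"Cancer rate, non-exercisers: {rate(~exercises):.2%}")   # looks higher
print(f"Cancer rate, exercisers:     {rate(exercises):.2%}")

# Stratifying by the confounder (smoking) makes the apparent 'effect' vanish
print(f"Smokers only     - non-exercisers vs exercisers: "
      f"{rate(smoker & ~exercises):.2%} vs {rate(smoker & exercises):.2%}")
print(f"Non-smokers only - non-exercisers vs exercisers: "
      f"{rate(~smoker & ~exercises):.2%} vs {rate(~smoker & exercises):.2%}")
```

Stratifying by the confounder makes the spurious association disappear, which is exactly what a well-designed cohort study tries to approximate by matching the two groups on everything except the exposure of interest.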

This is extremely important to be aware of, because observational studies are constantly being touted in the media as showing that this causes that. For example a tabloid article might claim that a vegetarian diet causes you to live longer, based on an observational study. But observational studies can never answer questions of causation. Observational studies can and should do their best to minimize confounding effects, but they can never get rid of them completely.

The highest tier of evidence is the Randomized Controlled Trial (RCT). In a RCT, you take a group of people, and you randomly select who goes in the intervention group, and who goes in the control group.

The people in the control group should ideally get a placebo that is indistinguishable from the intervention. The reason this is important is that the placebo effect is strong. It isn’t uncommon for the placebo effect to contribute more to a drug’s perceived effect than the real effect caused by the drug. Without a control group that gets a placebo, it’s impossible to know how much of the perceived benefit from a drug actually comes from the drug itself.

In order for an RCT to get full marks for quality, it needs to be double-blind. This means that neither the participants nor the members of the research team who interact with the participants know who is in which group. This is as important as having a placebo, because if people know they are getting the real intervention, they will behave differently compared to if they know they are getting the placebo. Also, the researchers performing the study might act differently towards the intervention group and the control group in ways that influence the results, if they know who is in which group. If a study isn’t blinded, it is known as an “open label” study.

So, why does anyone bother with observational studies at all? Why not always just do RCT’s? For three reasons. Firstly, RCT’s take a lot of work to organize. Secondly, RCT’s are expensive to run. Thirdly, people aren’t willing to be randomized to a lot of interventions. For example, few people would be willing to be randomized to smoking or not smoking.

There are those who would say that there is another, higher quality form of evidence, above the randomized controlled trial, and that is the systematic review and meta-analysis. This statement is both true, and not true. The systematic review is a review of all studies that have been carried out on a topic. As the name suggests, the review is “systematic”, i.e. a clearly defined method is used to search for studies. This is important, because it allows others to replicate the search strategy, to see if the reviewers have consciously left out certain studies they didn’t like, in order to influence the results in some direction.

The meta-analysis is a systematic review that has gone a step further, and tried to combine the results of several studies into a single “meta”-study, in order to get greater statistical power.

The reason I say it’s both true and not true that this final tier is higher quality than the RCT is that the quality of systematic reviews and meta-analyses depends entirely on the quality of the studies that are included. I would rather take one large high quality RCT than a meta-analysis done of a hundred observational studies. An adage to remember when it comes to meta-analyses is “garbage in, garbage out” – a meta-analysis is only as good as the studies it includes.
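For the curious, the pooling step itself is nothing magical. One common approach is a fixed-effect, inverse-variance weighted average (this is only a sketch of one method, and the trial numbers below are invented):

```python
import math

# Hypothetical results from three small trials of the same drug:
# each tuple is (estimated effect, standard error of that estimate)
trials = [(0.30, 0.15), (0.10, 0.20), (0.22, 0.10)]

# Fixed-effect meta-analysis: weight each trial by 1 / SE^2
weights = [1 / se**2 for _, se in trials]
pooled_effect = sum(w * eff for w, (eff, _) in zip(weights, trials)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled_effect:.3f} (SE {pooled_se:.3f})")
# The pooled SE is smaller than any single trial's SE -- that is the extra
# statistical power -- but the pooled estimate inherits whatever bias the trials have.
```

The pooled standard error is smaller than that of any individual trial, which is where the extra power comes from, but the pooled estimate can only ever be as trustworthy as the studies fed into it.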

There is one thing I haven’t mentioned so far, and that is animal studies. Generally, animal studies will take the form of RCT’s. There are a few advantages to animal studies. You can do things to animals that you would never be allowed to do to humans, and an RCT with animals is much cheaper than an RCT with humans.

When it comes to drugs, there is in most countries a legal requirement that they be tested on animals before being tested on humans. The main problem with animal studies is several million years of evolution. Most animal studies are done in rats and mice, which are separated from us by over fifty million years of evolution, but even our closest relatives, chimps, are about six million years away from us evolutionarily. It is very common for studies to show one thing in animals, and something completely different when done in humans. For example, studies of fever lowering drugs done in animals find a seriously increased risk of dying of infection, but studies in humans don’t find any increased risk. Animal studies always need to be taken with a big grain of salt.

Statistical significance

One very important concept when analyzing studies is the idea of statistical significance. In medicine, a result is considered “statistically significant” if the “p-value” is less than 0,05 (p stands for probability).

This gets a little bit complicated, but please bear with me. To put it as simply as possible, the p-value is the probability of seeing a result at least as extreme as the one observed if the null hypothesis is true. (The null hypothesis is the alternative to the hypothesis that is being tested. In medicine the null hypothesis is usually the hypothesis that an intervention doesn’t work, for example that statins don’t decrease mortality).

So a p-value of 0,05 means that there is a 5% chance of seeing a result at least this extreme even though the null hypothesis is true.
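If you want to see what that 5% actually looks like, here is a minimal simulation (invented data, just a sketch): a completely useless drug is tested over and over, and roughly one study in twenty still comes out “significant”.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
n_per_arm, n_studies = 100, 10_000
false_positives = 0

for _ in range(n_studies):
    drug = rng.normal(0, 1, n_per_arm)      # the drug has no real effect
    placebo = rng.normal(0, 1, n_per_arm)
    z = (drug.mean() - placebo.mean()) / np.sqrt(2 / n_per_arm)
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided z-test
    if p < 0.05:
        false_positives += 1

print(f"Studies of a useless drug that reached p < 0.05: {false_positives / n_studies:.1%}")
# Prints roughly 5% -- the false positive rate the cut-off is designed to allow
```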

One thing to understand is that 5% is an entirely arbitrary cut-off. The number was chosen in the early twentieth century, and it has stuck. And it leads to a lot of crazy interpretations. If a p-value is 0,049 the researchers who have carried out a study will frequently rejoice, because the result is statistically significant. If the p-value is on the other hand 0,051, then the result will be considered a failure. Anyone can see that this is ridiculous, because there is actually only a 0,002 (0,2%) difference between the two results, and one is really no more statistically significant than the other.

Personally, I think a p-value of 0,05 is a bit too generous. I would much have preferred if the standard cut-off had been set at 0,01, and I am sceptical of results that show a p-value greater than 0,01. What gets me really excited is when I see a p-value of less than 0,001.

It is especially important to be sceptical of p-values that are higher than 0,01 considering the other things we know about medical science. Firstly, that there is a strong publication bias, which causes studies that don’t show statistical significance to “disappear” at a higher rate than studies that do show statistical significance. Secondly, that studies are often carried out by people with a vested interest in the result, who will do what they can to get the result they want. And thirdly, because the 0,05 cut-off is used inappropriately all the time, for a reason we will now discuss.

The 0,05 limit is only really supposed to apply when you’re looking at a single relationship. If you look at twenty different relationships at the same time, then just by pure chance you can expect one of those relationships to show statistical significance. Is that relationship real? Almost certainly not.

The more variables you look at, the more strictly you should set the limit for statistical significance. But very few studies in medicine do this. They happily report statistical significance with a p-value of 0,05, and act like they’ve shown some meaningful result, even when they look at a hundred different variables. That is bad science, but even big studies, published in prestigious journals, do this.
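The arithmetic behind the multiple-comparisons problem is simple enough to spell out (an illustrative calculation, not taken from any particular study):

```python
alpha = 0.05
n_endpoints = 20

# Chance of at least one false positive across 20 independent end-points
p_any_false_positive = 1 - (1 - alpha) ** n_endpoints
print(f"Probability of at least one spurious 'significant' result: {p_any_false_positive:.0%}")  # ~64%

# The crude fix (a Bonferroni correction): divide the threshold by the number of tests
print(f"Bonferroni-corrected threshold per end-point: {alpha / n_endpoints}")  # 0.0025
```

So a study that checks twenty end-points against an uncorrected 0,05 threshold is more likely than not to “find” something, even if the drug does nothing at all.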

That is why researchers are supposed to decide on a “primary end-point” and ideally post that primary end-point on clinicaltrials.gov before they start their study. The primary end-point is the question that the researchers are mainly trying to answer (for example, do statins decrease overall mortality?). Then they can use the 0,05 cut-off for the primary endpoint without cheating. They will usually report any other results as if the 0,05 cut-off applies to them too, but it doesn’t.

The reason researchers are supposed to post the primary endpoint at clinicaltrials.gov before starting a trial is that they can otherwise choose the endpoint that ends up being most statistically significant just by chance, after they have all the results, and make that the primary endpoint. That is of course a form of statistical cheating. But it has happened, many times. Which is why clinicaltrials.gov is so important.

One thing to be aware of is that a large share of studies can not be successfully replicated. Some studies have found that more than 50% of research cannot be replicated. That is in spite of a cut-off which is supposed to cause this to only happen 5% of the time. How can that be?

I think the three main reasons are publication bias, vested interests that do what they can to manipulate studies, and inappropriate use of the 5% p-value cut-off. That is why we should never put too much trust in a result that has not been replicated.

Absolute risk vs relative risk

We’ve discussed statistical significance a lot now, but that isn’t really what matters to patients. What patients care about is “clinical significance”, i.e. if they take a drug, will it have a meaningful impact for them. Clinical significance is closely tied to the concepts of absolute risk and relative risk.

Let’s say we have a drug that decreases your five year risk of having a heart attack from 0,2% to 0,1% . We’ll invent a random name for the drug, say, “spatin”. Now, the absolute risk reduction when you take a spatin is 0,1% over five years (0,2 – 0,1 = 0,1). Not very impressive, right? Would you think it was worth taking that drug? Probably not.

What if I told you that spatins actually decreased your risk of heart attack by 50%? Now you’d definitely want to take the drug, right?

How can a spatin only decrease risk by 0,1% and yet at the same time decrease risk by 50%? Because the risk reduction depends on if we are looking at absolute risk or relative risk. Although spatins only cause a 0,1% reduction in absolute risk, they cause a 50% reduction in relative risk (0,1 / 0,2 = 50%).

So, you get the absolute risk reduction by taking the risk without the drug and subtracting the risk with the drug. You get the relative risk reduction by dividing the absolute risk reduction by the risk without the drug. Drug companies will generally focus on relative risk, because it sounds much more impressive. But the clinical significance of a drug that decreases risk from 0,2% to 0,1% is, I would argue, so small that it’s not worth taking the drug, especially if the drug has side effects which might be more common than the probability of seeing a benefit.
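Written out as a few lines of arithmetic, using the invented “spatin” numbers from above (and adding the related “number needed to treat”, which is simply the reciprocal of the absolute risk reduction):

```python
risk_without_drug = 0.002   # 0.2% five-year risk of heart attack
risk_with_drug = 0.001      # 0.1% with the hypothetical "spatin"

absolute_risk_reduction = risk_without_drug - risk_with_drug            # 0.001 = 0.1%
relative_risk_reduction = absolute_risk_reduction / risk_without_drug   # 0.5   = 50%
number_needed_to_treat = 1 / absolute_risk_reduction                    # 1000 people

print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")
print(f"Relative risk reduction: {relative_risk_reduction:.0%}")
print(f"Number needed to treat for five years to prevent one heart attack: "
      f"{number_needed_to_treat:.0f}")
```

The same drug can honestly be described as preventing one heart attack per thousand people treated for five years, or as “cutting heart attack risk in half”. Only one of those framings ends up in the advertisement.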

When you look at an advertisement for a drug, always look at the fine print. Are they talking about absolute risk or relative risk?

How a journal article is organized

In the last few decades, a standardized format has developed for how scientific articles are supposed to be written. Articles are generally divided into four sections.

The first section is the “Introduction”. In this section, the researchers are supposed to discuss the wider literature around the topic of their study, and how their study fits in with that wider literature. This section is mostly fluff, and you can usually skip through it.

The second section is the “Method”. This is an important section and you should always read it carefully. It describes what the researchers did and how they did it. Pay careful attention to what the study groups were, what the intervention was, what the control was. Was the study blinded or not? And if it was, how did they ensure that the blinding was maintained? Generally, the higher quality a scientific study, the more specific the researchers will be about exactly what they’ve done and how. If they’re not being specific, what are they trying to hide? Try to see if they’ve done anything that doesn’t make sense, and ask yourself why. If any manipulation is happening to make you think you’re seeing one thing when you’re actually seeing something else, it usually happens in the method section.

There are a few methodological tricks that are very common in scientific studies. One is choosing surrogate end points and another is choosing combined end points. I will use statins to exemplify each, since there has been so much methodological trickery in the statin research.

Surrogate end points are alternate endpoints that “stand in” for the thing that actually matters to patients. An example of a surrogate end point is looking at whether a drug lowers LDL cholesterol instead of looking at the thing that actually matters, overall mortality. The use of the surrogate end point in this case is motivated by the cholesterol hypothesis, i.e. the idea that cholesterol lowering drugs lower LDL, which results in a decrease in cardiovascular disease, which results in increased longevity.

By using a surrogate end point, researchers can claim that the drug is successful when they have in fact shown no such thing. As we’ve discussed previously, the cholesterol hypothesis is nonsense, so showing that a drug lowers LDL cholesterol does not say anything about whether it does anything clinically useful.

Another example of a surrogate endpoint is looking at cardiovascular mortality instead of overall mortality. People don’t usually care about which cause of death is listed on their death certificate. What they care about is whether they are alive or dead. It is perfectly possible for a drug to decrease cardiovascular mortality while at the same time increasing overall mortality, so overall mortality is the only thing that matters (at least if the purpose of a drug is to make you live longer).

An example of a combined end point is looking at the combination of overall mortality and frequency of cardiac stenting. Basically, when you have a combined end point, you add two or more end points together to get a bigger total amount of events.

Now, cardiac stenting is a decision made by a doctor. It is not a hard patient oriented outcome. A study might show that there is a statistically significant decrease in the combined end point of overall mortality and cardiac stenting, which most people will interpret as a decrease in mortality, without ever looking more closely to see if the decrease was actually in mortality, or stenting, or a combination of both. In fact, it’s perfectly possible for overall mortality to increase and still have a combined endpoint that shows a decrease.
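A made-up numerical example shows how a combined end-point can mask an increase in deaths:

```python
# Invented numbers, 1000 patients per arm, purely to illustrate the point
deaths_control, stents_control = 30, 80
deaths_drug, stents_drug = 35, 50          # MORE deaths on the drug...

combined_control = deaths_control + stents_control   # 110 events
combined_drug = deaths_drug + stents_drug            # 85 events

print(f"Combined end-point events: control {combined_control}, drug {combined_drug}")
# The combined end-point improves (110 -> 85) even though mortality got worse,
# because the drop in stenting decisions swamps the extra deaths.
```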

Another trick is choosing which specific adverse events to follow, or not following any adverse events at all. Adverse events is just another word for side effects. Obviously, if you don’t look for side effects, you won’t find them.

Yet another trick is doing a “per-protocol analysis”. When you do a per-protocol analysis, you only include the results from the people who followed the study through to the end. This means that anyone who dropped out of the study because the treatment wasn’t having any effect or because they had side effects, doesn’t get included in the results. Obviously, this will make a treatment look better and safer than it really is.

The alternative to a per-protocol analysis is an “intention to treat” analysis. In this analysis, everyone who started the study is included in the final results, regardless of whether they dropped out or not. This gives a much more accurate understanding of what results can be expected when a patient starts a treatment, and should be standard for all scientific studies in health and medicine. Unfortunately per-protocol analyses are still common, so always be vigilant as to whether the results are being presented in a per-protocol or intention to treat manner.
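A small invented example shows how much the choice of analysis can matter:

```python
# Hypothetical trial: 1000 patients start the drug, 300 drop out
# (side effects, no perceived benefit), and the dropouts do badly.
started = 1000
completed = 700
events_in_completers = 70   # 10% event rate among those who stayed on the drug
events_in_dropouts = 60     # 20% event rate among the 300 who dropped out

per_protocol_rate = events_in_completers / completed                              # 10%
intention_to_treat_rate = (events_in_completers + events_in_dropouts) / started   # 13%

print(f"Per-protocol event rate:       {per_protocol_rate:.0%}")
print(f"Intention-to-treat event rate: {intention_to_treat_rate:.0%}")
# The per-protocol analysis quietly discards the 300 patients who did worst,
# making the drug look better than what a new patient can actually expect.
```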

The third section of a scientific article is the results section, and this is the section that everyone cares most about. This is just a pure tabulation of what results were achieved, and as such it is the least open to manipulation, assuming the researchers haven’t faked the numbers. Faking results has happened, and it’s something to be aware of and watch out for. But in general we have to assume that researchers are being honest. Otherwise the whole basis for evidence based medicine cracks and we might as well give up and go home.

To be fair, I think most researchers are honest. And I think even pharmaceutical companies will in general represent the results honestly (because it would be too destructive for their reputations if they were caught outright inventing data). Pharmaceutical companies engage in lots of trickery when it comes to the method and in the interpretation of the results, but I think it’s uncommon for them to engage in outright lying when it comes to the hard data presented in the results tables.

There is however one blatant manipulation of the results that happens frequently. I am talking about cherry picking of the time point at which a scientific study is ended. This can happen when researchers are allowed to check the results of their study while it is still ongoing. If the results are promising, they will often choose to stop the study at that point, and claim that the results were “so good that it would have been unethical to go on”. The problem is that the results become garbage from a statistical standpoint. Why?

Because of a statistical phenomenon known as “regression to the mean”. Basically, the longer a scientific study goes on and the more data points that are gathered, the closer the result of the study gets to the real result. Early on in a study, the results will often swing wildly just due to statistical chance. So studies will tend to show bigger effects early on, and smaller effects towards the end.

This problem is compounded by the fact that if a study at an early point shows a negative result, or a neutral result, or even a result that is positive but not “positive enough”, the researchers will usually continue the study in hopes of getting a better result. But the moment the result goes above a certain point, they stop the study and claim excellent benefit from their treatment.

That is how the time point at which a study is stopped ends up being cherry picked. Which is why the planned length of a study should always be posted in advance on clinicaltrials.gov, and why researchers should always stick to the planned length, and never look at the results until the study has gone on for the planned length. If a study is stopped early at a time point of the researchers’ choosing, the results are not statistically sound no matter what the p-values may show. Never trust the results of a study that stopped early.
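A quick simulation makes the point (a sketch with invented numbers): give the researchers a completely useless drug, let them peek at the data every 50 patients per arm, and allow them to stop the first time the result looks “significant”.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, max_n, look_every = 2000, 500, 50   # interim look every 50 patients per arm

stopped_early = 0
effects_at_stop = []

for _ in range(n_trials):
    drug = rng.normal(0, 1, max_n)     # the drug truly does nothing
    placebo = rng.normal(0, 1, max_n)
    for n in range(look_every, max_n + 1, look_every):
        diff = drug[:n].mean() - placebo[:n].mean()
        z = diff / np.sqrt(2 / n)
        if abs(z) > 1.96:              # "significant" at this interim look: stop and publish
            stopped_early += 1
            effects_at_stop.append(abs(diff))
            break

print(f"Trials of a useless drug stopped early 'for benefit': {stopped_early / n_trials:.0%}")
print(f"Average 'effect size' reported by those trials: {np.mean(effects_at_stop):.2f}")
```

In this sketch the false-positive rate climbs to roughly 20% rather than the advertised 5%, and the trials that do stop early report effect sizes well away from zero, for a drug that does nothing. That is what cherry-picking the stopping point does to the statistics.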

The fourth section of a scientific article is the discussion section, and like the introduction section it can mostly be skipped through. Considering how competitive the scientific research field is, and how much money is often at stake, researchers will use the discussion section to try to sell the importance of their research, and if they are selling a drug, to make the drug sound as good as possible.

At the bottom of an article, there will generally be a small section (in smaller print than the rest of the study) that details who funded the study, and what conflicts of interest there are. In my opinion, this information should be provided in large, bright orange text at the top of the article, because the rest of the article should always be read in light of who did the study and what motives they had for doing it.

In conclusion, focus on the method section and the results section. The introduction section and the discussion section can for the most part be ignored.

Final words

My main take-home is that you should always be skeptical. Never trust a result just because it comes from a scientific study. Most scientific studies are low quality and contribute nothing to the advancement of human knowledge. Always look at the method used. Always look at who funded the study and what conflicts of interest there were.

I hope this article is useful to you. Please let me know if there are more things in terms of scientific methodology that you have been wondering about. I will try to make this article a living document that grows over time.

You might also be interested in my article about whether statins save lives, or my article about whether the cholesterol hypothesis is dead.


Sebastian Rushworth, M.D. is a practicing physician in Stockholm, Sweden. He studied medicine at Karolinska Institutet (home of the Nobel Prize in medicine).

September 27, 2020 Posted by | Science and Pseudo-Science, Timeless or most popular | Leave a comment

Mexico finally orders arrest of soldiers in mysterious case of 43 missing students

Press TV – September 27, 2020

On the sixth anniversary of the mysterious disappearance of 43 Mexican college students, President Andres Manuel Lopez Obrador has announced dozens of arrest warrants for soldiers suspected of involvement in the still unresolved abduction of the students, who attended a teachers’ college in the state of Guerrero.

Lopez Obrador announced the arrest warrants at an event with parents of the missing students on Saturday.

“Orders have been issued for the arrest of the military personnel,” he said. “Zero impunity —those proven to have participated will be judged.”

Gomez Trejo, the prosecutor leading the investigation into the case, said in a separate statement that 25 arrest warrants had been issued for the “material and intellectual authors” of the crime, including military members, and federal and municipal police.

They are accused of carrying out or knowing about the students’ disappearance that had happened on September 26, 2014, near a large army base in the city of Iguala, Guerrero.

The highest-ranking official in the case, Tomas Zeron who at the time of the incident was the head of the federal investigation agency, is accused of torture and covering up forced disappearances.

The students, who had commandeered public buses to travel to a protest, disappeared in the state of Guerrero.

The former administration had concluded that authorities mistook the students for members of a rival gang and killed them before incinerating their bodies at a garbage dump and tossing the remains into a river.

The remains of only two of the students have been identified so far.

Current Attorney General Alejandro Gertz Manero, however, said he believed there had been a “generalized cover-up” that led to further arbitrary arrests and torture.

Relatives of the students as well as independent experts from the Inter-American Commission on Human Rights also rejected the report as faulty.

They have continued to demand answers as independent investigations have shown the military was aware of what happened to the victims.

The kidnapping of the students, who were training to be teachers, sent shockwaves across Mexico and became a symbol of the police violence and corruption that has plagued the North American country.

September 27, 2020 Posted by | Civil Liberties | , | Leave a comment

Major banks, food & cosmetics brands linked to massive abuses in palm oil industry – report

RT | September 27, 2020

Renowned food and cosmetics firms could have used palm oil produced by workers suffering from various abuses – from threats to rape – while global lenders finance the exploiting companies, AP reported, citing its investigation.

According to the report, based on accounts of over 130 current and former workers from two dozen palm oil companies in Malaysia and Indonesia, as well as rights activists’ claims and journalists’ first-hand experiences, millions of people may be exploited at the palm oil plantations. The long list of alleged mistreatment includes threats and being held against one’s will, while the most severe abuses include child labor, slavery and allegations of rape.

While palm oil is widely used in a long list of daily products, it is sometimes hard to trace as it can be found under various names on labels. However, the most recent data from producers, traders and buyers of palm oil, cited in the investigation, indicate that the tainted product made its way to the supply chains of such industry giants as Unilever, L’Oreal, Nestle and Procter & Gamble. It could be used by the producers of Oreo cookies, Lysol cleaners and Hershey’s chocolate treats, the report claims.

“We gave our sweat and blood for palm oil,” said Zin Ko Ko Htwe, who was enslaved at one of the plantations for several years, but eventually managed to escape. He added that when European and US consumers see palm oil on a label, they should understand that “it’s the same as consuming our sweat and blood.”

Some big-name banks and financial institutions across Asia and beyond were mentioned in the report as financiers of the palm oil industry, which mainly relies on supplies from Malaysia and Indonesia. Out of $12 billion worth of investment inflows over the last five years, around $3.5 billion reportedly came from the US’ BNY Mellon, Charles Schwab, Bank of America, JPMorgan Chase, and Citigroup, along with Europe’s HSBC, Standard Chartered, Deutsche Bank, Credit Suisse and Prudential. Some of the massive inflows could have come not directly, but through third parties like Malaysia-based Maybank.

When asked to comment on the report, some lenders noted that their investments were small or simply declined to answer, while others responded by pointing out their policies vowing to support sustainability practices in the palm oil industry. Meanwhile, some brands mentioned in the report said that they were aware of abuses in the industry, claiming that they are trying to work with ethically sourced palm oil.

September 27, 2020 Posted by | Aletho News | | Leave a comment