1 – On the surface, the sanctions imposed on Russia appear to be part of a new type of warfare – designed to punish innocent Russian people. Putin and his pals aren’t going to be hurt by sanctions but ordinary people will be. Politicians and journalists complain bitterly when civilians are bombed but don’t seem to care about civilians being impoverished or starved to death.
Nor do politicians or journalists care that the sanctions were also designed to trigger a global recession that will result in billions of deaths. The sanctions imposed by leaders around the world, such as Johnson and Biden, have caused massive price rises for fuel and food. The sanctions will do most damage to the very poor in Africa and Asia. Huge numbers will die in Africa and Asia as a direct result of these sanctions, which were designed by mad, bad, dangerous people. Why aren’t Biden, Johnson et al being treated as war criminals?
2 – Governments have created a perfect storm for travellers. Flights have been cancelled because of the millions of people unable to work because they have colds or think they have a disease called ‘long covid’ (which good research has shown is either malingering or hypochondria). The price of fuel has risen to its highest level ever known, and motorists are unable or unwilling to buy enough fuel to take them more than a few miles from home.
Even when motorists can afford to buy fuel there may not be any available because refineries have been shut by insane and woefully ignorant and selfish protestors who want to make people as miserable as they are and to bring about economic ruin. (Curiously, the police seem unable to move the protestors very efficiently. I don’t know whether this is because the protestors are too fat to be moved without lifting equipment or because the police have been instructed to move only those protestors who are concerned with telling the truth about the covid fraud.)
Finally, the weather is colder and more miserable than ever. Coincidentally, there have been a good many chemtrails around recently. Oh, and anyone thinking of trying to go abroad needs to have their passport already because the Passport Office is advising travellers to allow ten weeks to get a new passport.
3 – Investment in oil and gas has crashed because banks and governments are too frightened to lend money to oil companies. The result is that discoveries of oil and gas are at their lowest for 75 years. We will run out of oil and gas very quickly. The consequences are described in my book `A Bigger Problem than Climate Change: The End of Oil’.
4 – The UK and Europe are now importing liquefied natural gas from the United States. The imported gas was produced using fracking. This will doubtless delight the cultists who believe that we should all keep warm by shivering.
5 – Sunak, the UK’s Chancellor of the Exchequer, has been whingeing about criticisms of his wife’s financial affairs. With astonishing cheek, he’s been turning the story round to make himself and his family the victims! Most of the mainstream media supported his whingeing.
The Times noted that reporting his wife’s tax affairs is a potential criminal offence. With any luck Sunak will quickly disappear from public life. He has been a disastrous Chancellor and will not be missed. Even in the polluted waters of public life he is a disgrace.
6 – The French Government has paid private consultants 2.4 billion euros for advice since 2018. When the French were questioned about this, their defence was that the British Government spent around £100 billion on private consultants in the same period. If the army of highly paid civil servants did some of the work they’re paid to do, the British taxpayers would save £25 billion a year.
7 – Government officials who attended parties during the lockdown included Helen MacNamara, the former deputy cabinet secretary and Whitehall ethics chief (who provided a karaoke machine for a `gathering’) and Kate Josephs, who was the director general of the covid-19 task force and who wrote the regulations that made the gatherings illegal.
We don’t know if either of them had to pay a fine, but if they did then the fines would have been no more than £50 (less than a parking fine round our way). Once again we see that the privileged few are treated differently. `Ordinary’ people who attended gatherings during lockdowns, and some who had a snowball fight in a park, were fined the maximum amount of £10,000.
8 – Bitcoin mining (possibly the most useless of all human activities) uses around 0.5% of global energy consumption.
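That 0.5% figure can be sanity-checked with simple arithmetic. This is a back-of-envelope sketch, not a figure from the text: the global electricity consumption number used here (roughly 25,000 TWh per year) is an assumption.

```python
# Back-of-envelope check of the 0.5% claim.
# Assumption (not from the text): global electricity use ~25,000 TWh/year.
GLOBAL_ELECTRICITY_TWH = 25_000
BITCOIN_SHARE = 0.005  # 0.5%

bitcoin_twh = GLOBAL_ELECTRICITY_TWH * BITCOIN_SHARE
print(f"Implied Bitcoin consumption: {bitcoin_twh:.0f} TWh/year")
# Implied Bitcoin consumption: 125 TWh/year
```

A result in the low hundreds of TWh per year is consistent with widely cited independent estimates of Bitcoin's electricity use.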
9 – There is much talk among the loony lefties about free speech on social media – specifically Facebook, Twitter, YouTube and so on. The truth, of course, is that there is no free speech on any of these sites. They are all oppressive, faux communist platforms allowing only the fettered to speak. These sites belong to the enemy.
10 – The willingness and ability to break rules is what differentiates free men from slaves. And as many have said in the past, it is the duty of every free man and woman to speak out against bad laws and injustice. In the New World Order we won’t be told what we cannot do, but what we are allowed to do. There’s all the difference in the world.
11 – The global economy has been deliberately turned upside down, inside out and back to front. Investment companies and pension companies bought $18 trillion worth of sub-zero bonds. These are bonds with a negative interest rate – so the investors and pensioners who own them are paying governments and companies for the privilege of lending them money.
12 – A number of bankers at Goldman Sachs (frequently voted one of the world’s most evil companies by me) received bonuses of $30 million each this year.
There follows a post by Hector Drummond, a former academic who worked in risk, who says that when he came to research the effectiveness of masks against COVID-19 for his new book, The Face Mask Cult, he found the evidence threadbare.
In 2021 I decided to write an FAQ on all aspects of Covid, lockdowns and non-pharmaceutical interventions (NPIs). I started with face masks, as they seemed to be the easiest issue to deal with, thinking that the whole mask situation could be summed up in five to six pages. After a few days’ work I had twenty pages of text, and another twenty pages of reminder notes on further aspects of face masks that I needed to consider and research. Those notes ballooned out in the next few weeks, and I realised that the use of face masks to prevent the spread of COVID-19 was a far bigger topic than I had appreciated, and would require substantial amounts of writing, and months of research and literature-reading.
It took until the next year before I decided I’d written enough on the topic. I had read an enormous number of scientific papers and other articles on masks, and gone through some of them with a fine-tooth comb (see Part 3 of the book, for instance). I had spent considerable time analysing, synthesising and rewriting, and my short FAQ article had become a comprehensive 400-page book that tackled all aspects of the issue, as well as a unique resource with its extensive scientific literature review section.
In all my researches I failed to come across very much in the way of convincing evidence that masks work. The papers that were supposed to show that they did all turned out to be poor pieces of science. None were randomised controlled, peer-reviewed trials. Some were observational studies, with inadequate controls for dealing with the possibility of faulty or biased recollection. Some were ‘modelling’ studies, in which a computer program was used to ‘model’ the effect of face masks on disease spread. Modelling studies are generally hopeless at providing any confirming evidence for the effectiveness of face masks as they require the modellers to make assumptions about how effective the masks are when writing their programs. Some were mannequin studies, in which a dummy in a lab with artificial breathing functions, rather than a real person in the real world, was used. Some were simply tests of the porosity of various materials in regard to salt aerosols.
Most studies ignored the issue of face mask gaps, despite it being well-known in the field that gaps around the sides of masks will let such large amounts of virions in and out that any effect that the masks do have will be completely negated. (This is why medical institutions require ‘fit tests’ for masks – not that fit tests are very reliable, as I explain in the book.)
Even those dubious studies that claimed to show an effect for masks didn’t show much of one. The less wild ones would typically claim that cloth masks would stop 5% to 15% of virions, but they never presented any reason to believe the further claim, often made, that this would cause a 5% to 15% reduction in cases, or a 5% to 15% reduction in deaths. The closest such studies came to doing so was when an author would occasionally speculate, in an airy fashion, that if the disease in question’s R0 rate happened to be close to 1.0, then maybe widespread mask use (assuming masks had some small effect) would be enough to push the R0 rate below 1.0, in which case the disease would die out. But even if all their assumptions were true and masks did push the disease’s R0 rate below 1.0, it doesn’t follow that the disease would die out any time soon. It could well be that the R0 rate would quickly climb back over 1.0 as soon as we stopped masking, in which case, to stop the disease spreading again, we would have to wear masks for years on end, or even indefinitely.
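The R-number argument above can be sketched numerically. This is an illustrative toy projection, not data from any of the studies discussed: the starting case count, the assumed R values (0.95 while masking, 1.1 afterwards) and the generation counts are all made-up assumptions chosen to show the shape of the argument.

```python
# Toy illustration of the R-number argument: a disease with R slightly
# below 1.0 declines only slowly, and rebounds as soon as R exceeds 1.0.
# All numbers here are illustrative assumptions, not data from any study.

def project_cases(initial_cases, r_values):
    """Multiply cases by R once per generation interval."""
    cases = initial_cases
    history = [cases]
    for r in r_values:
        cases *= r
        history.append(cases)
    return history

# 20 generations of masking (assumed R = 0.95), then 20 without (R = 1.1)
masked = [0.95] * 20
unmasked = [1.1] * 20
history = project_cases(10_000, masked + unmasked)

print(f"After masking:  {history[20]:.0f} cases")   # ~3,585
print(f"After stopping: {history[40]:.0f} cases")   # ~24,117 - rebound
```

The point the toy model makes is the one in the text: pushing R just below 1.0 only suppresses the disease for as long as the suppression continues.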
But what about all those government reports written by distinguished scientists assuring us that there were now truckloads of research proving that masks work? This is perhaps the most shocking part of the whole face mask con. The 2020 DELVE report and its updates, the 2020 Royal Society report, and the 2022 Department for Education’s Evidence Summary were disgraceful pieces of misinformation, as I show in detail in the book. Even more shocking, perhaps, is the fact that there have been so many acts of wrongdoing in the last two years that the scientific butchery committed in these reports is completely unknown to the general public. The fact, for instance, that the Royal Society’s report relied heavily upon a low-grade Chinese study, written in Chinese only, and published in an obscure Chinese journal, which reported fantastically unrealistic results, is never even going to briefly flit through the mind of the average person, because the average person will never come across any reference to this shameful affair in the mainstream media.
I felt vindicated as I put the finishing touches to the book when several prominent advocates of masks, such as Trish Greenhalgh, Jeremy Howard and many others, started to admit that cloth masks were useless. Not that they wanted us to stop wearing masks – they now wanted us to move onto medical-grade respirator masks, like N95s and FFP2s, as Germany required. Needless to say, these mask fanatics didn’t bother to mention that Germany’s stringent mask policy has been a complete failure.
The book I finished up with is a serious corrective to the endless propaganda we have been fed about masks. It lays out the case against masks in detail, considers the harms done by mask-wearing (harms which are usually ignored by scientists and governments), closely examines many claims made about masks by both sides, and backs it all up with an enormous number of references to the scientific literature. Whenever anyone who wants you to wear a mask says, “Follow the Science”, just show them this book and say, “I already did”.
By the time I arrived at the end of July 2020, the administration had already developed a massive testing capacity from scratch. Nearly a million tests per day were being conducted. The effort was led by Admiral Giroir, who was assigned the thankless task of overseeing that project.
I understood why the VP was so excited when he had displayed that simplistic chart on my first visit. And over the next weeks the administration continued to successfully facilitate and distribute tens of millions of point-of-care PCR tests and, later, rapid antigen tests. This was a significant accomplishment, but it was clear from the beginning that the White House did not understand how or when to use testing. To my thinking, it was a response to political pressure more than anything else.
From my very first meeting in the Oval Office back in July and again over subsequent meetings, President Trump expressed great frustration about testing. It was easy to see why. You could not turn on the news, even the most superficial talk show, without the lead story admonishing the administration for “the lack of testing.” For months, the country had been inundated with that message—not just from public health types who had now become household names, but from every pundit, talk show host, and news anchor. It became pure groupthink. Celebrities who had no understanding or expertise at all were now stridently opining about the unquestionable urgency of massive, widespread, on-demand testing.
Reminiscent of stock market frenzies, esoteric technical terms that had formerly been unknown to the public like “contact tracing” now became common parlance. Testing for this virus had turned into a national, indeed, international obsession. And to me, that obsession was not just misguided, it was harmful, creating more fear, more frenzy, more irrational policies. Yes, testing was an essential tool in the pandemic. And yes, months before I was involved in any way in Washington, there had been a failure to develop and deliver enough tests when they were needed the most. But by the time I came to DC at the end of July, a massive capacity to test had been quickly developed. The problem now was that it was not being leveraged to save lives. Schools and businesses were closed; people were cowering in their homes. Meanwhile, older people kept dying by the thousands.
Criticizing the administration about testing was more than a natural extension of that obsessive mindset. It was low-hanging fruit for the president’s political opponents. There had been almost no preexisting testing capacity from the outset, so naturally it would take some time to meet the challenge. The obsessive demand for testing rapidly escalated into a hyperpartisan issue. I remembered Pelosi’s mantra—“test, test, test; trace, trace, trace!”—as if she, or any politician for that matter, had any understanding of the appropriate testing policy. She was not alone, though. That mantra was echoed on every news network, regardless of political leaning. No dissenting opinion was even visible to most Americans.
That political heat provoked the expected reaction in the White House. Long before my arrival, testing became Priority Number One. Beyond an important public health policy question, it was an election season, and a contentious one at that. This environment elevated testing into the priority of the president’s closest counselors, his political advisors at the highest levels, and operationally, therefore, the vice president’s Task Force. Presumably, like all politicians, the president was politically motivated, too.
The conflict, the misjudgment about issues like testing and other advice coming out of the Task Force, occurred when the president was swayed too much by his political advisors instead of believing in his own common sense. That advice matched the message of the Task Force, especially that coming from Redfield and Birx, whose decision-making background was tied almost exclusively to testing. That was one of the many problems stemming from the HIV backgrounds of Birx and Redfield. SARS2 had already spread to millions, and it spread by breathing in close proximity; the role and practical application of testing in a virus like HIV couldn’t have been more different. In the end, it was easy to see how the advice to the president was to focus on testing.
Understandable for everyone, that is, except the president. He never agreed, because to him it made no sense. He couldn’t understand why we would test people who were not sick. It was as simple as that. President Trump talked to me privately in the Oval Office about many different things, but almost always, our discussions came back to the subject of testing. The president spoke very bluntly and resorted to common sense rather than any data. He knew nothing specific about the medical rationale for testing. He went with his gut feeling and placed no filter on stating his opinions.
“Why are we testing healthy, younger people? Why don’t we just test sick people?” he would ask.
“And if we test more, we find more cases. But those people aren’t sick!” he would point out, exasperated, echoing what he said many times to the press.
And that seemed rather straightforward, on its face. His point was simple logic—test and you shall discover “cases,” especially with COVID, since a large number, maybe half or more, of infections were asymptomatic. He was also correct that in clinical medicine, the definition of a “case”—a patient—is not generally based on a test seeking out something in a healthy, asymptomatic person.
That is not how medicine is practiced, a point I tried to explain time and again to the Task Force troika of doctors. I had that perspective, because I am a doctor who has been an expert for decades on the significance of diagnostic tests showing abnormalities without symptoms. And wasn’t it also important to consider that the overwhelming majority of people did not have a serious illness, even when symptomatic? As for mildly ill patients with COVID, “standard of care” for them was strict isolation, with or without testing.
Testing, though, was the way—the only way—to find infected people who had no symptoms. In high-risk settings, contagious people with asymptomatic infections would be critical to find, no doubt. But the goal, the rationale for testing, became a key point of confusion and disagreement. We needed to protect high-risk people, absolutely. The question was how. We knew who was at risk, so there were two alternatives: 1) indirectly protecting the “vulnerable” by confining and locking down everyone else, or 2) doing everything to protect high-risk people directly.
By the time I set foot in the White House, the nation, with few exceptions, had already been using the Birx-Fauci lockdown restrictions—the indirect strategy—for months. Why was there no admission that the lockdown strategy did not work? It undeniably failed to protect the elderly. Nursing home deaths were piling up, comprising up to 80 percent of total deaths in some states—and in the meantime the lockdown policy was destroying everyone and everything else. Einstein may or may not have said it, but everyone knew it: “The definition of insanity is doing the same thing over and over and expecting different results.”
Yet the strategy was to continue doubling down on the failed lockdowns that were devastating to so many, especially those outside the “elite.” Reality was being denied, and that remains the case today. Regardless, the answer to the failure, the available tool for those all-in on stopping all cases, was more testing!
Unbeknownst to the White House, several top epidemiologists and infectious disease experts had opined that massive testing of healthy people in settings that were not high-risk was not appropriate at this stage of a pandemic. That was apparent to me from months of lengthy discussions with leading epidemiologists at Stanford and elsewhere. There were already tens of millions of Americans who had been infected; even the CDC estimated a tenfold larger number compared to the confirmed number, as verified by early studies on SARS2 antibodies.
Contact tracing was also “futile” at this point, as Dr. Bhattacharya later wrote in a paper I distributed at a Task Force meeting. Contact tracing was a tool for newly emerging pandemics, new outbreaks perhaps. Oxford’s Sunetra Gupta, a world-renowned epidemiologist, repeatedly stressed the lack of logic in mass testing at this stage and the irrationality of focusing on cases by positive tests. Moreover, PCR tests were detecting virus fragments or dead virus in people who were not even contagious. Yet no one in the Task Force would even entertain this discussion.
The question about the role of testing was fundamental. It wasn’t simply surveillance for the purpose of knowledge—testing was the key to a strategic policy. It was not enough to consider testing through the limited prism of an epidemiologist, the way Birx and Fauci did (even though they, like me, are not epidemiologists). In medical practice, if you referred a patient with low back pain to a neurosurgeon, the most likely outcome was surgery. That’s exactly why I always referred patients to neurologists first—they had more perspective. Some might think of the adage “to someone with a hammer, everything looks like a nail.” Testing was the main tool in the epidemiology toolbox, their only tool, really. That was very limiting in defining its role in overall policymaking.
At this juncture, the testing was not being done to yield statistically valid surveillance information—a legitimate use of testing in the midst of a pandemic. This was diagnostic testing, with broad-reaching policy aims. In this pandemic, a positive test was a major driver of the policy of quarantining and isolating healthy people with low-risk profiles—shuttering businesses, closing schools— in short, a key to locking down the country. That’s why health policy experts like myself with a broader scope of expertise than that of epidemiologists and basic scientists are needed. Because no one with a medical science background who also considered the impacts of the policies was advising the White House. That lack of perspective was the main source of the tunnel-vision focus on preventing the spread of infections to the exclusion of all other considerations.
It was baffling to me, an incomprehensible error by whoever assembled the Task Force, that there were zero public health policy experts and no experts with medical knowledge who also analyzed economic, social, and other broad public health impacts beyond the infection itself. Shockingly, the broad public health perspective was never part of the discussion among the Task Force health advisors other than when I brought it up. Even more bizarre was that no one seemed to notice.
The president clearly understood that testing healthy people for a disease that did not make them sick made little sense and would only lead to confining them. I agreed with that common sense view, although with important exceptions, and sitting in the Oval Office I explained the absurd extension of the logic of “test, test, test.” What was the “necessary” number, anyway? One million per day? Not even close. One hundred million per day? Nope. How about everyone in the country—330 million per day, every day?
Even if you could accomplish that goal, the tests themselves were only a snapshot in time. Seconds later, any given person could become infected. So 330 million per day, every fifteen minutes—maybe that would satisfy the testing mania! No matter how many tests were performed, there would never be enough.
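The reductio in the two paragraphs above is simple arithmetic; here it is sketched out. The one-million-per-day capacity figure comes from the text itself; the fifteen-minute interval is the author's rhetorical figure, taken literally for illustration.

```python
# Arithmetic behind the "never enough tests" reductio.
US_POPULATION = 330_000_000
TESTS_PER_DAY_ACTUAL = 1_000_000      # capacity reached by late July 2020
INTERVALS_PER_DAY = 24 * 60 // 15     # one test every 15 minutes = 96/day

everyone_daily = US_POPULATION
everyone_every_15_min = US_POPULATION * INTERVALS_PER_DAY

print(f"{everyone_daily / TESTS_PER_DAY_ACTUAL:.0f}x actual capacity")
# 330x actual capacity
print(f"{everyone_every_15_min:,} tests per day at 15-minute intervals")
# 31,680,000,000 tests per day at 15-minute intervals
```

Even testing every American once a day would have demanded 330 times the capacity that existed; the fifteen-minute version runs to tens of billions of tests a day.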
The need for increased testing, but in a smarter, more targeted way, still needed to be explained to the president. And I did just that, repeatedly, whenever I had a chance—in concise, short doses. As always, he listened intently. But he had no time or patience for a detailed presentation. That is one reason why we got along well. I was capable of speaking succinctly, articulating the bottom line. More importantly, he knew I spoke directly, no BS.
From day one, I always reminded myself—if, and whenever, the president of the United States asks for my opinion, I am going to give it.
No holds barred—otherwise, what was I there for? Even on my very first visit to the Oval Office, when he complained about widespread testing, I bluntly told him, “You are a hamster on a wheel,” knowing that others in the room would probably recoil at hearing that. But President Trump knew it, even repeating the phrase later himself.
There was, I explained, a more nuanced approach to the policy of testing. There were serious reasons to test, important reasons to actually increase testing, but in a strategic way. The question was how to leverage that testing capability to have the most impact—to save the most lives and to facilitate reopening the country, which was the right goal from both a health perspective and the president’s stated policy.
I thought my approach was obvious. This was simple logic, and it reiterated exactly what I had written months before: let’s focus testing on where it really mattered, and increase it. High-risk environments, where high-risk people lived and worked. Nursing homes, a tinderbox of risk for their elderly, frail residents, were an obvious target. Knowing that cases were brought in by the staff, they needed to be tested, and tested far more frequently, perhaps every day. I also pushed for more point-of-care tests in places independent-living seniors frequented, like senior centers; visiting nurses taking care of seniors at home; and historically Black colleges and universities (HBCUs), where high-risk faculty members were more concentrated.
While the president understood and fully supported this, he remained frustrated, as did I, because his most trusted advisors didn’t fully sign on to a strategic approach to testing. At one point he offhandedly remarked, “You’ll have to convince my son-in-law of that.” Naturally, Kushner and everyone else had been deferring to Fauci and Birx on all things medical. To make matters worse, the Fauci-Birx testing strategy was not merely unfocused; their strategy bizarrely prioritized more testing in the lowest-risk people and the lowest-risk environments—students and schools—while letting the deaths continue in nursing homes and assisted living facilities, where a once-per-week schedule was assumed to be effective.
Politics seemed to be the main driver of those in the inner circle advising the president—that was their job. But the politics were irrelevant to me. The frenzy about testing everyone, everywhere, at all times, including low-risk people in low-risk settings, was incorrect, illogical, and harmful.
The funny thing was that while almost everyone assumed the president was only making excuses, somehow covering up for an “inadequate” testing capacity, there were valid reasons to use testing very differently in order to maximize its benefits. Despite the clamor of the “experts” in the public sphere, and almost the entire media narrative pushing the opposite view, the president happened to be correct. Instead of massively testing everyone on demand, testing should be leveraged to do what everything should have been geared toward in the first place—protecting the high-risk, saving lives, and opening society up as soon as possible.
What was most remarkable to me from the inside was that even though the president expressed his points about testing very clearly, and many top epidemiology experts agreed, the COVID Huddles and other strategic operations were run in a different world. The messaging, the public events, the operational strategy, and the communications team pushed ahead with a focus on producing and delivering more testing to low-risk environments, schools, and communities. Reminiscent of Catch-22, when 150 million antigen tests became available weeks later, I was asked by several people in the COVID Huddle, “Well, now that we have these tests, what do we do with them?”
Scott W. Atlas, M.D., is the Robert Wesson Senior Fellow in health care policy at the Hoover Institution of Stanford University and a fellow at Hillsdale College’s Academy for Science and Freedom.
The addition of a fluoride, such as hexafluorosilicic acid or disodium hexafluorosilicate, to public water supplies has been recommended in a joint statement by the four Chief Medical Officers of the U.K. The Government’s Health and Care Bill, which has reached its final stages in Parliament, includes a small section to facilitate water fluoridation, which is now expected to be spread throughout the U.K.
Although water is already fluoridated in a few parts of the U.K. (mainly Birmingham), for nearly forty years no new schemes have been implemented since local opposition has managed to defeat them all. The Government is now determined to impose its wishes.
A recent press release said that “higher levels of fluoride are associated with improved dental health outcomes”, and that the “Health and Care Bill will cut bureaucracy and make it simpler to expand water fluoridation schemes”. The Bill’s explanatory notes state: “Research shows that water fluoridation is an effective public health intervention to improve oral health for both children and adults and reduces oral health inequalities.”
For about 70 years it has been claimed that fluoridation reduces dental decay, and that it is safe. Although there is abundant evidence showing that in fact it is neither effective nor safe, the proponents of fluoridation have long had the advantage of far greater funding than that available to sceptics.
Trials of fluoridation started in 1945 in the U.S. and Canada but, before any had been completed, and without any comprehensive health studies, fluoridation was endorsed as safe and effective by the U.S. Public Health Service. The American Dental and Medical Associations soon added their approval, as later did their equivalents in the U.K.
The original trials were studied by Dr. Philip Sutton, an Australian who graduated with honours in Dental Science. Asked to examine them, he found that they were of low quality and full of errors and omissions.
In Austria, Rudolf Ziegelbecker also studied the original fluoridation trials and found they did not show what had been claimed. Professor Erich Naumann, Director of the German Federal Health Office, said of him: “Your results have been accepted everywhere in Germany with the greatest interest and have increased the grave doubts against drinking water fluoridation.” Prof. Naumann added: “It is regrettable that the existing data on water fluoridation had not been examined earlier using mathematical-statistical methods. Otherwise the myth of drinking water fluoridation would have already dissolved into air long ago.”
In the U.K., pilot schemes started in the mid-1950s in four areas, all of which sooner or later abandoned the practice: Andover (1955-58), part of Anglesey (1955-92), Kilmarnock (1956-62), and Watford (1956-89). In 1957, Dr. Geoffrey Dobbs wrote in New Scientist that they “are now officially described as demonstrations of the benefits of fluoridation, not experiments, so the results are a foregone conclusion” and their purpose quite openly “promotional”. He added that the studies would gain enormously in value if those responsible were willing to submit them to impartial scientific assessment.
When the U.K. pilot studies started, it was officially stated that they should include “full medical and dental examinations at all ages”, but no medical examinations were done, and neither short-term nor long-term possible harms were explored. This lack of concern continues, with a general failure in fluoridated countries to monitor fluoride exposure or side effects.
In 2000, a major report by the Centre for Reviews and Dissemination at the University of York concluded that, despite many studies over 50 years, “We were unable to discover any reliable good-quality evidence in the fluoridation literature world-wide”. Even among the 26 better studies on fluoridation and tooth decay, not one was evaluated as “high quality, with bias unlikely”.
In 2015, a Cochrane review added: “There is very little contemporary evidence, meeting the review’s inclusion criteria, that has evaluated the effectiveness of water fluoridation for the prevention of caries.”
When Israel ended fluoridation in 2014-15, partly because of health concerns, its Ministry of Health pointed out that WHO data indicated no significant difference in the level of tooth decay between countries that fluoridate and those that do not fluoridate.
A trial in Hastings in New Zealand was apparently so successful that it was widely reported as a classic case of the benefit of fluoridation, with tooth decay reduced by at least half. However, when New Zealand passed freedom-of-information legislation, two university researchers were able to access the original records, which revealed that the published results were fraudulent. One of those involved in running the trials was asked for an explanation but he did not even try to justify the published results.
Not only is there a great absence of good quality evidence that fluoridation significantly reduces tooth decay, there has, especially in recent years, been growing evidence that it is harmful.
In 2006, a major report by the U.S. National Research Council said that fluoride exposure is plausibly associated with neurotoxicity, gastrointestinal problems, endocrine problems and other ailments. It was also unable to rule out an increased risk of cancer and of Down’s syndrome in children.
In 2017, a team of experts in Chile, supported by the Medical College of Chile, concluded that fluoridation is ineffectual and harmful.
Fluoride occurs naturally in a few water supplies, but so does arsenic. A recent study from Sweden shows an increased prevalence of hip fracture in post-menopausal women associated with long-term exposure to natural fluoride at levels in water in the same range as used in some parts of the U.K. for artificial fluoridation.
About half a century passed before the declassification of hundreds of U.S. Government documents provided clues to the real reason for fluoridation. Much meticulous research by an award-winning investigative journalist, Christopher Bryson, resulted in his thoroughly documented book, The Fluoride Deception, showing beyond doubt the extensive fraud involved.
Bryson’s research revealed the strong connection between fluoridation and the Manhattan Project to create the first atomic bombs. Huge amounts of fluorine were used to extract the isotope of uranium needed. Workers suffered hundreds of chemical injuries, mostly from the gas uranium hexafluoride.
In 1943 and 1944, farmers reported workers made ill, crops blighted and livestock injured, with some cows so crippled they could not stand. When the war was over, farmers in New Jersey sued DuPont and the Manhattan Project for fluoride damage. In response the Government mobilised officials and scientists to defeat the farmers.
In 1946, the United States had begun full-scale production of atomic bombs, and the New Jersey farmers’ legal action was seen as a threat, because of the potential for enormous damages and a public relations problem, with more trouble likely if they won. The farmers’ legal action was blocked by the Government’s refusal to reveal how much hydrogen fluoride DuPont had vented into the atmosphere.
Dr. Harold Hodge defended the nuclear programme against the legal threat from farmers. He had the idea of calming the public’s fears by talking about the usefulness of fluorine in tooth health. In January 1944, a secret conference on fluoride metabolism took place in New York. It was organised by President Roosevelt’s science adviser, James Conant, and documents from it are among the first that connect the atomic bomb programme to water fluoridation and to the Public Health Service.
Manhattan Project scientists were ordered to help the contractors. They also played a prominent role in the fluoridation of the public water supply in Newburgh, New York, an experiment that began in May 1945. In 1947 the U.S. Atomic Energy Commission took over from the Manhattan Project.
Dr. Harold Hodge, the Project’s senior wartime toxicologist, became the leading promoter of fluoridation. He announced it was so safe that it would take a massive dose of fluoride to cause harm. (Some 25 years later, in 1979, he quietly admitted in an obscure paper that he had been wrong.)
A Committee to Protect Our Children’s Teeth was formed, with powerful links to U.S. military-industrial interests and their determined effort to escape liability for fluoride pollution. The aim was to transform the public image of fluoride from that of a dangerous pollutant to a beneficial prophylactic medicine.
This aim was achieved with the help of Edward Bernays, an expert in the use of psychological techniques to achieve “manipulation of the organised habits and opinions of the masses” and “the engineering of consent”. Bernays advised the avoidance of debate: fluoridation was to be presented as indisputably beneficial; only the ignorant could object to it.
Reviews of Bryson’s book included one in the scientific journal Nature, noting that he “raises the stakes by reporting a great deal of relevant and often alarming research”, and describing the book as “thought-provoking and worthwhile”.
Publishers Weekly wrote: “Bryson marshals an impressive amount of research to demonstrate fluoride’s harmfulness, the ties between leading fluoride researchers and the corporations who funded and benefited from their research, and what he says is the duplicity with which fluoridation was sold to the people.”
Chemical & Engineering News stated: “We are left with compelling evidence that powerful interests with high financial stakes have colluded to prematurely close honest discussion and investigation into fluoride toxicity.”
Bryson found that, while the American Dental Association had previously opposed fluoridation, it changed its tune after receiving a large donation from an industrialist with a stake in the commercial use of fluoride.
A study of workers at a chemical company in Cleveland was used to promote the idea that fluoride reduces tooth decay. It said workers exposed to fluoride had fewer cavities than those not exposed to it. The report helped to shift public opinion. The secret version of the report, discovered decades later, stated that most of the men had few or no teeth, and that corrosion affected such teeth as they had.
As early as 1951 a confidential gathering of State Dental Directors in the U.S. was advised by Dr. Frank Bull, “We have told the public it works, so we can’t go back on that”. If it was difficult then, it must be very difficult now for prestigious dental and medical organisations to admit that the assurances of effectiveness and safety they have given for so long were at best mistaken and at worst fraudulent.
Among the various methods used to suppress adverse evidence and dissent have been mocking, silencing, sacking and denigration of scientists who threatened the official story. One of the earliest to suffer was Dr. George Waldbott, an eminent U.S. physician who was viciously maligned after reporting fifty cases of people made ill by fluoridated water, as established by double-blind tests.
Dr. John Colquhoun, a former supporter of fluoridation in New Zealand, was Chief Dental Officer for Auckland when he discovered and reported that fluoride was damaging children’s teeth. This was not what the authorities wanted to hear and he was sacked.
Dr. William Marcus was Senior Science Adviser in the Office of Drinking Water in the Environmental Protection Agency. He was sacked when he warned that research by the famous Battelle Institute showed that some forms of cancer could be caused by fluoride.
Dr. Phyllis Mullenix was the Chief Toxicologist at the prestigious Forsyth Dental Center, who discovered that fluoride is a neurotoxin that can adversely affect the brain. Following publication of her peer-reviewed study, U.S. Government pressure resulted in her being sacked and the institute’s toxicology department closed.
Often those whose research gave results unfavourable to fluoridation found that medical journals were hostile. Dr. Albert Schatz was a co-discoverer of streptomycin, the first effective drug for tuberculosis. When he found that infants in Chile had much higher death rates in fluoridated areas, he sent a report in 1965 to the editor of the Journal of the American Dental Association, who returned it unread.
The reluctance of many medical journals to publish adverse findings on fluoride resulted in the foundation of the International Society for Fluoride Research and its quarterly journal Fluoride. However, MEDLINE, the bibliographic database published by the U.S. National Library of Medicine, declined to index the peer-reviewed journal’s contents.
Dr. Richard Foulkes chaired a committee that recommended fluoridation in British Columbia. Later, a friend urged him to do his own research, after which he changed his mind and said: “My initial belief was based on information given to me by those in authority rather than on the basis of my examination of the facts.”
Dr. Hardy Limeback was Head of Preventive Dentistry at the University of Toronto when in 1999 he apologised for having promoted fluoridation. “I did not realise the toxicity of fluoride,” he said. “I had taken the word of the public health dentists, the public health physicians, the USPHS, the USCDC, the ADA, the CDA that fluoride was safe and effective without actually investigating it myself”.
It used to be claimed that fluoride works on the teeth from within and therefore that pregnant mothers should take fluoride for the sake of unborn children’s teeth. Now it is said that fluoride’s main effect is from the outside (topical, not systemic). Therefore, there is no need to imbibe it.
Water fluoridation is a blunderbuss that hits far more than the intended target. About a third to a half of fluoride that is ingested remains in the body where it accumulates, not only in the teeth and bones but also in the kidneys, pineal gland and the cardiovascular system. Kidney patients are particularly at risk from fluoridation.
The dose of fluoride a person gets in water is haphazard since people consume widely differing amounts. Bottle-fed babies get very much more fluoride than breast-fed ones, and the American Dental Association conceded in 2006, with little publicity, that “using water that has no or low levels of fluoride” should be considered when preparing formula milk for infants. However, neither an ordinary water filter nor boiling can remove fluoride.
Recent research also finds that fluoride damages children’s brains. For example, studies show a loss of IQ and increased symptoms of ADHD in offspring when pregnant women are exposed to fluoride at doses commonly experienced in fluoridated communities in Canada.
Leading scientists concerned about fluoride’s toxicity, and willing to speak out, include Dr. Philippe Grandjean (Harvard University: “Fluoride is causing a greater overall loss of IQ points today than lead, arsenic or mercury”); Dr. Kathleen Thiessen (“The principal hazard at issue from exposure to fluoridation chemicals is IQ loss”); Professor David Bellinger (Harvard Medical School: “It’s actually very similar to the effect size that’s seen with childhood exposure to lead”); Professor Bruce Lanphear (“Fluoride exposure during early brain development diminishes the intellectual abilities in young children”); and Dr. Howard Hu (“Fluoride is a developmental neurotoxicant at levels of exposure seen in the general population in water-fluoridated communities”).
No less important is the fact that fluoridation is treatment without consent. People without the resources needed to obtain alternative supplies of water for drinking and cooking are chemically treated, in effect compulsorily.
A reader sent me this opinion piece published in the British Medical Journal last week. The authors argue that evidence based medicine (EBM) has been corrupted by corporate interests, failed regulation and commercialisation of academia.
The article begins by discussing how EBM was meant to improve medicine but as pharmaceutical documents have been released we realise that this remains an illusion.
The advent of evidence based medicine was a paradigm shift intended to provide a solid scientific foundation for medicine. The validity of this new paradigm, however, depends on reliable data from clinical trials, most of which are conducted by the pharmaceutical industry and reported in the names of senior academics. The release into the public domain of previously confidential pharmaceutical industry documents has given the medical community valuable insight into the degree to which industry sponsored clinical trials are misrepresented. Until this problem is corrected, evidence based medicine will remain an illusion.
They then look at how large corporations have dominated the market and in doing so have slowed scientific progress by suppressing information and data and failing to report adverse events.
The philosophy of critical rationalism, advanced by the philosopher Karl Popper, famously advocated for the integrity of science and its role in an open, democratic society. A science of real integrity would be one in which practitioners are careful not to cling to cherished hypotheses and take seriously the outcome of the most stringent experiments.5 This ideal is, however, threatened by corporations, in which financial interests trump the common good. Medicine is largely dominated by a small number of very large pharmaceutical companies that compete for market share, but are effectively united in their efforts to expand that market. The short term stimulus to biomedical research because of privatisation has been celebrated by free market champions, but the unintended, long term consequences for medicine have been severe. Scientific progress is thwarted by the ownership of data and knowledge because industry suppresses negative trial results, fails to report adverse events, and does not share raw data with the academic research community. Patients die because of the adverse impact of commercial interests on the research agenda, universities, and regulators.
Universities were once respected institutions but by seeking funding from the pharmaceutical industry, they have become corrupted.
The pharmaceutical industry’s responsibility to its shareholders means that priority must be given to their hierarchical power structures, product loyalty, and public relations propaganda over scientific integrity. Although universities have always been elite institutions prone to influence through endowments, they have long laid claim to being guardians of truth and the moral conscience of society. But in the face of inadequate government funding, they have adopted a neo-liberal market approach, actively seeking pharmaceutical funding on commercial terms. As a result, university departments become instruments of industry: through company control of the research agenda and ghostwriting of medical journal articles and continuing medical education, academics become agents for the promotion of commercial products.6 When scandals involving industry-academe partnership are exposed in the mainstream media, trust in academic institutions is weakened and the vision of an open society is betrayed.
Academics no longer succeed because of their achievements but because of what they can offer to the pharmaceutical industry.
The corporate university also compromises the concept of academic leadership. Deans who reached their leadership positions by virtue of distinguished contributions to their disciplines have in places been replaced with fundraisers and academic managers, who are forced to demonstrate their profitability or show how they can attract corporate sponsors. In medicine, those who succeed in academia are likely to be key opinion leaders (KOLs in marketing parlance), whose careers can be advanced through the opportunities provided by industry. Potential KOLs are selected based on a complex array of profiling activities carried out by companies; for example, physicians are selected based on their influence on prescribing habits of other physicians. KOLs are sought out by industry for this influence and for the prestige that their university affiliation brings to the branding of the company’s products. As well paid members of pharmaceutical advisory boards and speakers’ bureaus, KOLs present results of industry trials at medical conferences and in continuing medical education. Instead of acting as independent, disinterested scientists and critically evaluating a drug’s performance, they become what marketing executives refer to as “product champions.”
Ironically, industry sponsored KOLs appear to enjoy many of the advantages of academic freedom, supported as they are by their universities, the industry, and journal editors for expressing their views, even when those views are incongruent with the real evidence. While universities fail to correct misrepresentations of the science from such collaborations, critics of industry face rejections from journals, legal threats, and the potential destruction of their careers. This uneven playing field is exactly what concerned Popper when he wrote about suppression and control of the means of science communication. The preservation of institutions designed to further scientific objectivity and impartiality (i.e., public laboratories, independent scientific periodicals and congresses) is entirely at the mercy of political and commercial power; vested interest will always override the rationality of evidence.
They discuss how the regulators have been captured without any questions raised by governments.
Regulators receive funding from industry and use industry funded and performed trials to approve drugs, without in most cases seeing the raw data. What confidence do we have in a system in which drug companies are permitted to “mark their own homework” rather than having their products tested by independent experts as part of a public regulatory system? Unconcerned governments and captured regulators are unlikely to initiate necessary change to remove research from industry altogether and clean up publishing models that depend on reprint revenue, advertising, and sponsorship revenue.
Their suggested reforms are probably what most naïve people assume already happens but unfortunately does not.
Our proposals for reforms include: liberation of regulators from drug company funding; taxation imposed on pharmaceutical companies to allow public funding of independent trials; and, perhaps most importantly, anonymised individual patient level trial data posted, along with study protocols, on suitably accessible websites so that third parties, self-nominated or commissioned by health technology agencies, could rigorously evaluate the methodology and trial results. With the necessary changes to trial consent forms, participants could require trialists to make the data freely available. The open and transparent publication of data are in keeping with our moral obligation to trial participants—real people who have been involved in risky treatment and have a right to expect that the results of their participation will be used in keeping with principles of scientific rigour. Industry concerns about privacy and intellectual property rights should not hold sway.
Overall, a scathing opinion piece which highlights some truths which many of us recognise but which the majority would call you crazy for suggesting. Whenever I have tried to discuss how the pharmaceutical companies “mark their own homework”, the common response I get is “rubbish, the regulators conduct their own trials to see how safe and effective the vaccines are”.
If more people understood how the system worked then we wouldn’t be in the situation we are today. However, that is easier said than done when governments and the media have also been captured along with the regulators and academia.
My new book An Encounter with Evil: The Abraham Zapruder Story has now gone live on Amazon. I am confident that you all will enjoy reading this book. I have been working on it since last summer. I consider it the best work I’ve done in the 32-year history of The Future of Freedom Foundation.
People sometimes ask me what relevance the Kennedy assassination has to our lives today. My new book answers that question completely. It shows how the assassination bears a direct relationship to the foreign-policy crises that confront our nation today and, equally important, what we need to do to extricate ourselves from these crises.
My book revolves around a book entitled Twenty-Six Seconds: A Personal History of the Zapruder Film, which was written in 2016 by Alexandra Zapruder, the granddaughter of Abraham Zapruder, the man who filmed the assassination of President Kennedy on his personal home movie camera.
As I state in the Introduction to my book, which you can read here, I figured that Alexandra’s book would be an interesting personal account of how Abraham Zapruder and his family dealt with the film. I quickly learned that her book was much more than that.
When I read that there was a 50-year-long taboo within the Zapruder family against discussing the film, I was hooked. That’s because I knew that almost always there are dark secrets behind family taboos. Violating such a taboo is not an easy thing to do, which is what Alexandra was doing by deciding to write her book. As I point out in my Introduction, in her book she herself acknowledged the danger that she might encounter things that she might not want to write about.
After embarking on her quest to discover the reasons for the family taboo, Alexandra came up with two explanations. The first one is that her grandfather was conflicted over having received so much money for his film, which in today’s dollars was about $1.3 million. The other one is that he was extremely grief-stricken over having witnessed and filmed the president’s assassination.
Neither of those two explanations involves a dark secret and, with all due respect, both are nonsensical justifications for a decades-long family taboo. After all, throughout the weekend of the assassination, Abraham Zapruder was doing everything he could to get top dollar for his film, something he would be unlikely to do if he were feeling so guilty about it. Moreover, if the guilt feelings arose after he struck the financial deal for the sale of his film, he could have waived the annual installments from that sale, but he did not.
Moreover, any trauma that Zapruder may have suffered from witnessing the assassination obviously did not interfere with his spending the entire weekend of the assassination doing everything he could to get as much money as he could for his film.
Zapruder died in 1970. If the two justifications for the family taboo (which Alexandra denies was a “taboo” but instead was what she calls a “code” or “culture” within the family) were valid, why would the family taboo against discussing the film extend for decades after Zapruder’s death?
After reading Alexandra Zapruder’s book, I decided to figure out the Zapruder film mystery. I spent last summer doing precisely that. Once I figured it out, I began writing my book. Since then, I’ve been working days, nights, and weekends to complete it. I even took a week-long vacation on a farm in southwestern Virginia over Labor Day to write the first eight chapters (the book ended up with 23 chapters).
Like I say, I believe you’re going to like this book and that you’re going to find that it is an important contribution toward understanding not just the Kennedy assassination but, more important, toward seeing where we are as a country today and what we need to do to get things back on the right track — toward restoring a society based on liberty, peace, prosperity, and harmony with the people of the world.
Again, the book is An Encounter with Evil: The Abraham Zapruder Story. It’s $9.95 for the Kindle version and $14.95 for the print version. You can buy it here.
The book The Year the World Went Mad, by SAGE member Mark Woolhouse, has now been published as an audiobook and will be available in hardcover on April 12th. This is an important book, for here the author, a key player in the pandemic response in the UK, admits that more or less everything he and his colleagues suggested and the government did was wrong.
In this interview with Spiked-online, Woolhouse admits that focused protection, as suggested by the proponents of the Great Barrington Declaration, would have been the right approach, and that he and his associates knew it. He even claims they suggested it, but that nobody listened. Even if that is true, why didn’t they speak up publicly? The scientists who wrote and published the Great Barrington Declaration were denounced as pseudo-scientists – and by whom? Among others, by the very people who knew they were right all along.
In the author’s own words:
“So how do you protect those people? First of all, since they have to have contact with certain people, you make it as Covid-safe as possible for them to have those interactions. Take all the precautions we know to take now, about wearing masks, ventilation and physical distancing. But that alone is not enough. You need to make sure that the contact themselves does not have an infection and is not going to pass it on to the vulnerable people they’re interacting with. We were talking about this in April and May 2020 to many people in government. But we never implemented it. It never took off. And yet it’s quite clear from our work that this would have had a very significant impact. It would not be enough by itself. You still need to suppress the virus to a degree, but you would not need lockdown.”
The lockdowns, travel bans, school closures and all the rest were useless and extremely harmful to society. But still the scientists in charge of the pandemic response, including Mark Woolhouse, promoted those methods and justified them. They derided those who criticised their methods, cancelled them, claimed they didn’t respect science. But it was the other way around. This, we must never forget.
This book is a good step. But I wonder if the author has apologised to those who were right all along, to Martin Kulldorff, Sunetra Gupta, Jay Bhattacharya and all the other honest, real scientists who had the courage and moral standard to tell the truth. If he hasn’t, I urge him to do so.
For years the eminent Russia scholar Stephen Cohen had ranked President Vladimir Putin of the Russian Federation as the most consequential world leader of the early twenty-first century. He praised the man’s enormous success in reviving his country after the chaos and destitution of the Yeltsin years and emphasized his desire for friendly relations with America, but increasingly feared that we were entering into a new Cold War, even more dangerous than the last.
As far back as 2017, the late Prof. Cohen argued that no foreign leader had been as greatly vilified in recent American history as Putin, and Russia’s invasion of Ukraine two weeks ago has exponentially raised the intensity of such media denunciations, almost matching the hysteria our country experienced two decades ago after the 9/11 attack on New York City. Larry Romanoff has provided a useful catalog of some examples.
Until recently, this extreme demonization of Putin was largely confined to Democrats and centrists, whose bizarre Russiagate narrative had accused him of installing Donald Trump in the White House. But the reaction has now become entirely bipartisan, with enthusiastic Trump-backer Sean Hannity recently using his prime-time FoxNews show to call for Putin’s death, a cry soon joined by Sen. Lindsey Graham, the ranking Republican on the Senate Judiciary Committee. These are astonishing threats to make against a man whose nuclear arsenal could quickly annihilate the bulk of the American population, and the rhetoric seems unprecedented in our postwar history. Even in the darkest days of the Cold War, I don’t recall such public sentiments ever being directed towards the USSR or its top Communist leadership.
In many respects the Western reaction to Russia’s attack has been closer to a declaration of war than merely a return to Cold War confrontation. Russia’s massive foreign reserves held abroad have been seized and frozen, its civilian airlines excluded from Western skies, and its leading banks disconnected from global financial networks. Wealthy Russian private citizens have had their properties confiscated, the national soccer team has been banned from the World Cup, and the longtime Russian conductor of the Munich Philharmonic was fired for refusing to denounce his own country.
Such international retaliation against Russia and individual Russians seems extremely disproportionate. As yet the fighting in Ukraine has inflicted minimal death or destruction, while the various other major wars of the last two decades, many of them American in origin, killed millions and completely destroyed several countries, including Iraq, Libya, and Syria. But the global dominance of American media propaganda has orchestrated a very different popular response, producing this remarkable crescendo of hatred.
Indeed, the closest parallel that comes to mind would be the American hostility directed against Adolf Hitler and Nazi Germany after the outbreak of World War II, as indicated by the widespread comparisons between Putin’s invasion of Ukraine and Hitler’s 1939 attack on Poland. A simple Google search for “Putin and Hitler” returns tens of millions of webpages, with the top results ranging from the headline of a Washington Post article to the Tweets of pop music star Stevie Nicks. As far back as 2014, Andrew Anglin of the Daily Stormer had documented the emerging meme “Putin is the new Hitler.”
Although enormously popular, such Putin-Hitler analogies have hardly gone unchallenged, and some media outlets such as the London Spectator have strongly disagreed, arguing that Putin’s strategic aims have been quite limited and reasonable.
Many sober-minded strategic analysts have made this same point at length, and very occasionally their contrary views have managed to slip through the media blockade.
Although FoxNews has become one of the outlets most rabidly hostile to Russia, a recent interview with one of their regular guests provided a very different perspective. Col. Douglas Macgregor, a former top Pentagon advisor, forcefully explained that America had spent nearly fifteen years ignoring Putin’s endless warnings that he would not tolerate NATO membership for Ukraine, nor the deployment of strategic missiles on his border. Our government had paid no heed to his explicit red lines, so Putin was finally compelled to act, resulting in the current calamity.
Prof. John Mearsheimer of the University of Chicago, one of our most distinguished political scientists, had spent many years making exactly these same points and blaming America and NATO for the simmering Ukraine crisis, but his warnings had been totally ignored by our political leadership and media. His hour-long lecture explaining these unpleasant realities had quietly sat on Youtube for six years, attracting relatively little attention, but then suddenly exploded in popularity over the last few weeks as the conflict unfolded, and has now reached a worldwide audience of over 17 million. His other Youtube lectures, some quite recent, have been watched by additional millions.
Such massive global attention finally forced our media to take notice, and the New Yorker solicited an interview with Mearsheimer, allowing him to explain to his disbelieving questioner that American actions had clearly provoked the conflict. A couple of years earlier, that same interviewer had ridiculed Prof. Cohen for doubting the reality of Russiagate, but this time he seemed much more respectful, perhaps because the balance of media power was now reversed; his magazine’s 1.2 million subscriber-base was dwarfed by the global audience listening to the views of his subject.
During his long and distinguished career at the CIA, former analyst Ray McGovern had run the Soviet Policy Branch and also served as the Presidential Briefer, so under different circumstances he or someone like him would would currently be advising President Joe Biden. Instead, a few days ago he joined Mearsheimer in presenting his views in a video discussion hosted by the Committee for the Republic. Both leading experts agreed that Putin had been pushed beyond all reasonable limits, provoking the invasion.
Prior to 2014 our relations with Putin had been reasonably good. Ukraine served as a neutral buffer state between Russia and the NATO countries, with the population evenly divided between Russian-leaning and West-leaning elements, and its elected government oscillating between the two camps.
But while Putin’s attention was focused on the 2014 Sochi Olympic Games, a pro-NATO coup overthrew the democratically-elected pro-Russian government, with clear evidence that Victoria Nuland and the other Neocons grouped around Secretary of State Hillary Clinton had orchestrated it. Ukraine’s Crimea peninsula contains Russia’s crucial Sevastopol naval base, and only Putin’s swift action allowed it to remain under Russian control, while he also provided support for break-away pro-Russian enclaves in the Donbass region. The Minsk agreement later signed by the Ukrainian government granted autonomy to those latter areas, but Kiev refused to honor its commitments, and instead continued to shell the area, inflicting serious casualties upon the inhabitants, many of whom held Russian passports. Diana Johnstone has aptly characterized our policy as years of Russian bear-baiting.
As Mearsheimer, McGovern, and other observers have persuasively argued, Russia invaded Ukraine only after such endless provocations and warnings were always ignored or dismissed by our American leadership. Perhaps the final straw had been the recent public statement by Ukraine President Volodymyr Zelenskyy that he intended to acquire nuclear weapons. How would America react if a democratically-elected pro-American government in Mexico had been overthrown in an coup backed by China, with the fiercely hostile new Mexican government spending years killing American citizens in its country and then finally announcing plans to acquire a nuclear arsenal?
Moreover, some analysts such as economist Michael Hudson have strongly suspected that American elements deliberately provoked the Russian invasion for geostrategic reasons, and Mike Whitney advanced similar arguments in a column that went super-viral, accumulating over 800,000 pageviews. The Nord Stream 2 pipeline carrying Russian natural gas to Germany had finally been completed last year and was about to go into operation, which would have greatly increased Eurasian economic integration and Russian influence in Europe, while eliminating the potential market for more expensive American natural gas. The Russian attack and the massive resulting media hysteria have now foreclosed that possibility.
So although it was Russian troops who crossed the Ukrainian border, a strong case can be made that they did so only after the most extreme provocations, and these may have been deliberately intended to produce exactly that result. Sometimes the parties responsible for starting a war are not necessarily those that eventually fire the first shot.
Ironically enough, the arguments of Mearsheimer and others that Putin was greatly provoked or possibly even manipulated into attacking Ukraine raise certain intriguing historical parallels. The legions of ignorant Westerners who mindlessly rely upon our disingenuous media may be denouncing Putin as “another Hitler” but I think they may have inadvertently backed themselves into the truth.
A couple of months ago I finally read Gerd Schultze-Rhonhof’s outstanding 2011 volume analyzing the years leading up to the outbreak of World War II, a work that I would highly recommend. The author spent his career as a fully mainstream professional military man, rising to the rank of major general in the German army before retiring, and his account evoked eerie parallels to the current conflict with Russia.
As most of us know, the Second World War began when Germany attacked Poland in 1939 over Danzig, an almost entirely German border city controlled by the Poles.
But less well known is that Hitler had actually made enormous efforts to avoid war and settle that dispute, spending many months on fruitless negotiations and offering extremely reasonable terms. Indeed, the German dictator had made numerous concessions that none of his democratic Weimar predecessors had been willing to consider, but these were all rejected, while provocations increased until war with Poland seemed the only possible option. And just as in the case of Ukraine, politically influential elements in the West almost certainly sought to provoke that war, using Danzig as the spark to ignite the conflict much like the Donbass may have been used to force Putin’s hand.
We should recognize that in many respects the standard historical narrative of World War II is merely a congealed version of the media propaganda of that era. If Russia were defeated and destroyed as a result of the current conflict, we can be sure that the subsequent history books would utterly demonize Putin and all the decisions that he had taken.
Although I was very impressed by Schultze-Rhonhof’s meticulously detailed analysis of the circumstances leading up to the outbreak of war in 1939, his account merely reinforced my existing views, which had already been along entirely similar lines.
For example, back in 2019 I had used Pat Buchanan’s controversial 2008 bestseller on World War II as the starting point for a very long and detailed discussion of the true origins of that conflict:
However, the bulk of the book focused on the events leading up to the Second World War, and this was the portion that had inspired such horror in McConnell and his colleagues. Buchanan described the outrageous provisions of the Treaty of Versailles imposed upon a prostrate Germany, and the determination of all subsequent German leaders to redress it. But whereas his democratic Weimar predecessors had failed, Hitler had managed to succeed, largely through bluff, while also annexing German Austria and the German Sudetenland of Czechoslovakia, in both cases with the overwhelming support of their populations.
Buchanan documented this controversial thesis by drawing heavily upon numerous statements by leading contemporary political figures, mostly British, as well as the conclusions of highly-respected mainstream historians. Hitler’s final demand, that 95% German Danzig be returned to Germany just as its inhabitants desired, was an absolutely reasonable one, and only a dreadful diplomatic blunder by the British had led the Poles to refuse the request, thereby provoking the war. The widespread later claim that Hitler sought to conquer the world was totally absurd, and the German leader had actually made every effort to avoid war with Britain or France. Indeed, he was generally quite friendly towards the Poles and had been hoping to enlist Poland as a German ally against the menace of Stalin’s Soviet Union.
Although many Americans might have been shocked at this account of the events leading up to the outbreak of the Second World War, Buchanan’s narrative accorded reasonably well with my own impression of that period. As a Harvard freshman, I had taken an introductory history course, and one of the primary required texts on World War II had been that of A.J.P. Taylor, a renowned Oxford University historian. His famous 1961 work Origins of the Second World War had very persuasively laid out a case quite similar to that of Buchanan, and I’d never found any reason to question the judgment of my professors who had assigned it. So if Buchanan merely seemed to be seconding the opinions of a leading Oxford don and members of the Harvard history faculty, I couldn’t quite understand why his new book would be regarded as being beyond the pale.
The recent 70th anniversary of the outbreak of the conflict that consumed so many tens of millions of lives naturally provoked numerous historical articles, and the resulting discussion led me to dig out my old copy of Taylor’s short volume, which I reread for the first time in nearly forty years. I found it just as masterful and persuasive as I had back in my college dorm room days, and the glowing cover-blurbs suggested some of the immediate acclaim the work had received. The Washington Post lauded the author as “Britain’s most prominent living historian,” World Politics called it “Powerfully argued, brilliantly written, and always persuasive,” The New Statesman, Britain leading leftist magazine, described it as “A masterpiece: lucid, compassionate, beautifully written,” and the august Times Literary Supplement characterized it as “simple, devastating, superlatively readable, and deeply disturbing.” As an international best-seller, it surely ranks as Taylor’s most famous work, and I can easily understand why it was still on my college required reading list nearly two decades after its original publication.
Yet in revisiting Taylor’s ground-breaking study, I made a remarkable discovery. Despite all the international sales and critical acclaim, the book’s findings soon aroused tremendous hostility in certain quarters. Taylor’s lectures at Oxford had been enormously popular for a quarter century, but as a direct result of the controversy “Britain’s most prominent living historian” was summarily purged from the faculty not long afterwards. At the beginning of his first chapter, Taylor had noted how strange he found it that more than twenty years after the start of the world’s most cataclysmic war no serious history had been produced carefully analyzing the outbreak. Perhaps the retaliation that he encountered led him to better understand part of that puzzle.
I very recently reread Pat Buchanan’s 2008 book harshly condemning Churchill for his role in the cataclysmic world war and made an interesting discovery. Irving is surely among the most authoritative Churchill biographers, with his exhaustive documentary research being the source of so many new discoveries and his books selling in the millions. Yet Irving’s name never once appears either in Buchanan’s text or in his bibliography, though we may suspect that much of Irving’s material has been “laundered” through other, secondary Buchanan sources. Buchanan extensively cites A.J.P. Taylor, but makes no mention of Barnes, Flynn, or various other leading American academics and journalists who were purged for expressing contemporaneous views not so dissimilar from those of the author himself.
During the 1990s, Buchanan had ranked as one of America’s most prominent political figures, having an enormous media footprint in both print and television, and with his remarkably strong insurgent runs for the Republican presidential nomination in 1992 and 1996 cementing his national stature. But his numerous ideological foes worked tirelessly to undermine him, and by 2008 his continued presence as a pundit on the MSNBC cable channel was one of his last remaining footholds of major public prominence. He probably recognized that publishing a revisionist history of World War II might endanger his position, and believed that any direct association with purged and vilified figures such as Irving or Barnes would surely lead to his permanent banishment from all electronic media.
A decade ago I had been quite impressed by Buchanan’s history, but I had subsequently done a great deal of reading on that era and I found myself somewhat disappointed the second time through. Aside from its often breezy, rhetorical, and unscholarly tone, my sharpest criticisms were not with the controversial positions that he took, but with the other controversial topics and questions that he so carefully avoided.
Perhaps the most obvious of these is the question of the true origins of the war, which laid waste to much of Europe, killed perhaps fifty or sixty million, and gave rise to the subsequent Cold War era in which Communist regimes controlled half of the entire Eurasian world-continent. Taylor, Irving, and numerous others have thoroughly debunked the ridiculous mythology that the cause lay in Hitler’s mad desire for world conquest, but if the German dictator clearly bore only minor responsibility, was there indeed any true culprit? Or did this massively-destructive world war come about in somewhat similar fashion to its predecessor, which our conventional histories treat as mostly due to a collection of blunders, misunderstandings, and thoughtless escalations.
During the 1930s, John T. Flynn was one of America’s most influential progressive journalists, and although he had begun as a strong supporter of Roosevelt and his New Deal, he gradually became a sharp critic, concluding that FDR’s various governmental schemes had failed to revive the American economy. Then in 1937 a new economic collapse spiked unemployment back to the same levels as when the president had first entered office, confirming Flynn in his harsh verdict. And as I wrote last year:
Indeed, Flynn alleges that by late 1937, FDR had turned towards an aggressive foreign policy aimed at involving the country in a major foreign war, primarily because he believed that this was the only route out of his desperate economic and political box, a stratagem not unknown among national leaders throughout history. In his January 5, 1938 New Republic column, he alerted his disbelieving readers to the looming prospect of a large naval military build-up and warfare on the horizon after a top Roosevelt adviser had privately boasted to him that a large bout of “military Keynesianism” and a major war would cure the country’s seemingly insurmountable economic problems. At that time, war with Japan, possibly over Latin American interests, seemed the intended goal, but developing events in Europe soon persuaded FDR that fomenting a general war against Germany was the best course of action. Memoirs and other historical documents obtained by later researchers seem to generally support Flynn’s accusations by indicating that Roosevelt ordered his diplomats to exert enormous pressure upon both the British and Polish governments to avoid any negotiated settlement with Germany, thereby leading to the outbreak of World War II in 1939.
The last point is an important one since the confidential opinions of those closest to important historical events should be accorded considerable evidentiary weight. In a recent article John Wear mustered the numerous contemporaneous assessments that implicated FDR as a pivotal figure in orchestrating the world war by his constant pressure upon the British political leadership, a policy that he privately even admitted could mean his impeachment if revealed. Among other testimony, we have the statements of the Polish and British ambassadors to Washington and the American ambassador to London, who also passed along the concurring opinion of Prime Minister Chamberlain himself. Indeed, the German capture and publication of secret Polish diplomatic documents in 1939 had already revealed much of this information, and William Henry Chamberlin confirmed their authenticity in his 1950 book. But since the mainstream media never reported any of this information, these facts remain little known even today.
Roosevelt’s economic problems had led him to seek a foreign war, but it was probably the overwhelming Jewish hostility to Nazi Germany that pointed him in that particular direction. The confidential report of the Polish ambassador to the U.S. as quoted by John Wear provides a striking description of the political situation in America at the beginning of 1939:
There is a feeling now prevalent in the United States marked by growing hatred of Fascism, and above all of Chancellor Hitler and everything connected with National Socialism. Propaganda is mostly in the hands of the Jews who control almost 100% [of the] radio, film, daily and periodical press. Although this propaganda is extremely coarse and presents Germany as black as possible–above all religious persecution and concentration camps are exploited–this propaganda is nevertheless extremely effective since the public here is completely ignorant and knows nothing of the situation in Europe.
At the present moment most Americans regard Chancellor Hitler and National Socialism as the greatest evil and greatest peril threatening the world. The situation here provides an excellent platform for public speakers of all kinds, for emigrants from Germany and Czechoslovakia who with a great many words and with most various calumnies incite the public. They praise American liberty which they contrast with the totalitarian states.
It is interesting to note that in this extremely well-planned campaign which is conducted above all against National Socialism, Soviet Russia is almost completely eliminated. Soviet Russia, if mentioned at all, is mentioned in a friendly manner and things are presented in such a way that it would seem that the Soviet Union were cooperating with the bloc of democratic states. Thanks to the clever propaganda the sympathies of the American public are completely on the side of Red Spain.
Given the heavy Jewish involvement in financing Churchill and his allies and also steering the American government and public in the direction of war against Germany, organized Jewish groups probably bore the central responsibility for provoking the world war, and this was surely recognized by most knowledgeable individuals at the time. Indeed, the Forrestal Diaries recorded the very telling statement by our ambassador in London: “Chamberlain, he says, stated that America and the Jews had forced England into the war.”
The ongoing struggle between Hitler and international Jewry had been receiving considerable public attention for years. During his political rise, Hitler had hardly concealed his intent to dislodge Germany’s tiny Jewish population from the stranglehold they had gained over German media and finance, and instead run the country in the best interests of the 99% German majority, a proposal that provoked the bitter hostility of Jews everywhere. Indeed, immediately after he came into office, a major London newspaper had carried a memorable 1933 headline announcing that the Jews of the world had declared war on Germany, and were organizing an international boycott to starve the Germans into submission.
In recent years, somewhat similar Jewish-organized efforts at international sanctions aimed at bringing recalcitrant nations to their knees have become a regular part of global politics. But these days the Jewish dominance of the U.S. political system has become so overwhelming that instead of private boycotts, such actions are directly enforced by the American government. To some extent, this had already been the case with Iraq during the 1990s, but became far more common after the turn of the new century.
Although our official government investigation concluded that the total financial cost of the 9/11 terrorist attacks had been an absolutely trivial sum, the Neocon-dominated Bush Administration nonetheless used this as an excuse to establish an important new Treasury Department position, the Under Secretary for Terrorism and Financial Intelligence. That office soon began utilizing America’s control of the global banking system and dollar-denominated international trade to enforce financial sanctions and wage economic warfare, with these measures typically being directed against individuals, organizations, and nations considered unfriendly towards Israel, notably Iran, Hezbollah, and Syria.
Perhaps coincidentally, although Jews comprise merely 2% of the American population, all four individuals holding that very powerful post over the last 15 years since its inception—Stuart A. Levey, David S. Cohen, Adam Szubin, Sigal Mandelker—have been Jewish, with the most recent of these being an Israeli citizen. Levey, the first Under Secretary, began his work under President Bush, then continued without a break for years under President Obama, underscoring the entirely bipartisan nature of these activities.
Most foreign policy experts have certainly been aware that Jewish groups and activists played the central role in driving our country into its disastrous 2003 Iraq War, and that many of these same groups and individuals have spent the last dozen years or so working to foment a similar American attack on Iran, though as yet unsuccessfully. This seems quite reminiscent of the late 1930s political situation in Britain and America.
Individuals outraged by the misleading media coverage surrounding the Iraq War but who have always casually accepted the conventional narrative of World War II should consider a thought-experiment I suggested last year:
When we seek to understand the past, we must be careful to avoid drawing from a narrow selection of sources, especially if one side proved politically victorious in the end and completely dominated the later production of books and other commentary. Prior to the existence of the Internet, this was an especially difficult task, often requiring a considerable amount of scholarly effort, even if only to examine the bound volumes of once popular periodicals. Yet without such diligence, we can fall into very serious error.
The Iraq War and its aftermath was certainly one of the central events in American history during the 2000s. Yet suppose some readers in the distant future had only the collected archives of The Weekly Standard, National Review, the WSJ op-ed page, and FoxNews transcripts to furnish their historical understanding of that period, perhaps along with the books written by the contributors to those outlets. I doubt that more than a small fraction of what they would read could be categorized as outright lies. But the massively skewed coverage, the distortions, exaggerations, and especially the breathtaking omissions would surely provide them with an exceptionally unrealistic view of what had actually happened during that important period.
Another striking historical parallel has been the fierce demonization of Russian President Vladimir Putin, who provoked the great hostility of Jewish elements when he ousted the handful of Jewish Oligarchs who had seized control of Russian society under the drunken misrule of President Boris Yeltsin and totally impoverished the bulk of the population. This conflict intensified after Jewish investor William F. Browder arranged Congressional passage of the Magnitsky Act to punish Russian leaders for the legal actions they had taken against his huge financial empire in their country. Putin’s harshest Neocon critics have often condemned him as “a new Hitler” while some neutral observers have agreed that no foreign leader since the German Chancellor of the 1930s has been so fiercely vilified in the American media. Seen from a different angle, there may indeed be a close correspondence between Putin and Hitler, but not in the way usually suggested.
Knowledgeable individuals have certainly been aware of the crucial Jewish role in orchestrating our military or financial attacks against Iraq, Iran, Syria, and Russia, but it has been exceptionally rare for any prominent public figures or reputable journalists to mention these facts lest they be denounced and vilified by zealous Jewish activists and the media they dominate. For example, a couple of years ago a single suggestive Tweet by famed CIA anti-proliferation operative Valerie Plame provoked such an enormous wave of vituperation that she was forced to resign her position at a prominent non-profit. A close parallel involving a far more famous figure had occurred three generations earlier:
These facts, now firmly established by decades of scholarship, provide some necessary context to Lindbergh’s famously controversial speech at an America First rally in September 1941. At that event, he charged that three groups in particular were “pressing this country toward war[:] the British, the Jewish, and the Roosevelt Administration,” and thereby unleashed an enormous firestorm of media attacks and denunciations, including widespread accusations of anti-Semitism and Nazi sympathies. Given the realities of the political situation, Lindbergh’s statement constituted a perfect illustration of Michael Kinsley’s famous quip that “a gaffe is when a politician tells the truth – some obvious truth he isn’t supposed to say.” But as a consequence, Lindbergh’s once-heroic reputation suffered enormous and permanent damage, with the campaign of vilification echoing for the remaining three decades of his life, and even well beyond. Although he was not entirely purged from public life, his standing was certainly never even remotely the same.
With such examples in mind, we should hardly be surprised that for decades this huge Jewish involvement in orchestrating World War II was carefully omitted from nearly all subsequent historical narratives, even those that sharply challenged the mythology of the official account. The index of Taylor’s iconoclastic 1961 work contains absolutely no mention of Jews, and the same is true of the previous books by Chamberlin and Grenfell. In 1953, Harry Elmer Barnes, the dean of historical revisionists, edited his major volume aimed at demolishing the falsehoods of World War II, and once again any discussion of the Jewish role was almost entirely lacking, with only part of one single sentence and Chamberlain’s dangling short quote appearing across more than 200,000 words of text. Both Barnes and many of his contributors had already been purged and their book was only released by a tiny publisher in Idaho, but they still sought to avoid certain unmentionables.
Even the arch-revisionist David Hoggan seems to have carefully skirted the topic of Jewish influence. His 30 page index lacks any entry on Jews and his 700 pages of text contain only scattered references. Indeed, although he does quote the explicit private statements of both the Polish ambassador and the British Prime Minister emphasizing the enormous Jewish role in promoting the war, he then rather questionably asserts that these confidential statements of individuals with the best understanding of events should simply be disregarded.
In the popular Harry Potter series, Lord Voldemort, the great nemesis of the young magicians, is often identified as “He Who Must Not Be Named,” since the mere vocalization of those few particular syllables might bring doom upon the speaker. Jews have long enjoyed enormous power and influence over the media and political life, while fanatic Jewish activists demonstrate hair-trigger eagerness to denounce and vilify all those suspected of being insufficiently friendly towards their ethnic group. The combination of these two factors has therefore induced such a “Lord Voldemort Effect” regarding Jewish activities in most writers and public figures. Once we recognize this reality, we should become very cautious in analyzing controversial historical issues that might possibly contain a Jewish dimension, and also be particularly wary of arguments from silence.
Another aspect of Schultze-Rhonhof’s important study that was new to me but further solidified my previous conclusions was his analysis of Hitler’s public speeches. Although the German Fuhrer is notoriously portrayed as a horrific warmonger, his actual statements provide absolutely no evidence of any plans for aggressive war, and instead emphasized the importance of maintaining international peace in order to foster internal German economic development. In another 2019 article, I had similarly suggested that any examination of the reputable contemporary sources reveals that the Hitler of our history books is merely a grotesque political cartoon, similar to the one now increasingly drawn of Putin:
Although the demonic portrayal of the German Kaiser was already being replaced by a more balanced treatment within a few years of the Armistice and had disappeared after a generation, no such similar process has occurred in the case of his World War II successor. Indeed, Adolf Hitler and the Nazis seem to loom far larger in our cultural and ideological landscape today than they did in the immediate aftermath of the war, with their visibility growing even as they become more distant in time, a strange violation of the normal laws of perspective. I suspect that the casual dinner-table conversations on World War II issues that I used to enjoy with my Harvard College classmates during the early 1980s would be completely impossible today.
To some extent, the transformation of “the Good War” into a secular religion, with its designated monsters and martyrs may be analogous to what occurred during the final decay of the Soviet Union, when the obvious failure of its economic system forced the government to increasingly turn to endless celebrations of its victory in the Great Patriotic War as the primary source of its legitimacy. The real wages of ordinary American workers have been stagnant for fifty years and most adults have less than $500 in available savings, so this widespread impoverishment may be forcing our own leaders into adopting a similar strategy.
But I think that a far greater factor has been the astonishing growth of Jewish power in America, which was already quite substantial even four or five decades ago but has now become absolutely overwhelming, whether in foreign policy, finance, or the media, with our 2% minority exercising unprecedented control over most aspects of our society and political system. Only a fraction of American Jews hold traditional religious beliefs, so the twin worship of the State of Israel and the Holocaust has served to fill that void, with the individuals and events of World War II constituting many of the central elements of the mythos that serves to unify the Jewish community. And as an obvious consequence, no historical figure ranks higher in the demonology of this secular religion than the storied Fuhrer and his Nazi regime.
However, beliefs based upon religious dogma often sharply diverge from empirical reality. Pagan Druids may worship a particular sacred oak tree and claim that it contains the soul of their tutelary dryad; but if an arborist taps the tree, its sap may seem like that of any other.
Our current official doctrine portrays Adolf Hitler’s Nazi Germany as one of the cruelest and most relentlessly aggressive regimes in the history of the world, but at the time these salient facts apparently escaped the leaders of the nations with which it was at war. Operation Pike provides an enormous wealth of archival material regarding the secret internal discussions of the British and French governmental and military leadership, and all of it tends to suggest that they regarded their German adversary as a perfectly normal country, and perhaps occasionally regretted that they had somehow gotten themselves involved a major war over what amounted to a small Polish border dispute.
During late 1939, a major American news syndicate had sent Stoddard to spend a few months in wartime Germany and provide his perspective, with his numerous dispatches appearing in The New York Times and other leading newspapers. Upon his return, he published a 1940 book summarizing all his information, seemingly just as even-handed as his earlier 1917 volume. His coverage probably constitutes one of the most objective and comprehensive American accounts of the mundane domestic nature of National Socialist Germany, and thus may seem rather shocking to modern readers steeped in eighty years of increasingly unrealistic Hollywood propaganda.
Into the Darkness An Uncensored Report from Inside the Third Reich At War
Lothrop Stoddard • 1940 • 79,000 Words
And although our standard histories would never admit this, the actual path toward war appears to have been quite different than most Americans believe. Extensive documentary evidence from knowledgeable Polish, American, and British officials demonstrates that pressure from Washington was the key factor behind the outbreak of the European conflict. Indeed, leading American journalists and public intellectuals of the day such as John T. Flynn and Harry Elmer Barnes had publicly declared that they feared Franklin Roosevelt was seeking to foment a major European war in hopes that it would rescue him from the apparent economic failure of his New Deal reforms and perhaps even provide him an excuse to run for an unprecedented third term. Since this is exactly what ultimately transpired, such accusations would hardly seem totally unreasonable.
And in an ironic contrast with FDR’s domestic failures, Hitler’s own economic successes had been enormous, a striking comparison since the two leaders had come to power within a few weeks of each other in early 1933. As iconoclastic leftist Alexander Cockburn once noted in a 2004 Counterpunch column:
When [Hitler] came to power in 1933 unemployment stood at 40 per cent. Economic recovery came without the stimulus of arms spending…There were vast public works such as the autobahns. He paid little attention to the deficit or to the protests of the bankers about his policies. Interest rates were kept low and though wages were pegged, family income increased by reason of full employment. By 1936 unemployment had sunk to one per cent. German military spending remained low until 1939.
Not just Bush but Howard Dean and the Democrats could learn a few lessons in economic policy from that early, Keynesian Hitler.
By resurrecting a prosperous Germany while nearly all other countries remained mired in the worldwide Great Depression, Hitler drew glowing accolades from individuals all across the ideological spectrum. After an extended 1936 visit, David Lloyd George, Britain’s former wartime prime minister, fulsomely praised the chancellor as “the George Washington of Germany,” a national hero of the greatest stature. Over the years, I’ve seen plausible claims here and there that during the 1930s Hitler was widely acknowledged as the world’s most popular and successful national leader, and the fact that he was selected as Time Magazine’s Man of the Year for 1938 tends to support this belief.
Only International Jewry had remained intensely hostile to Hitler, outraged over his successful efforts to dislodge Germany’s 1% Jewish population from the stranglehold they had gained over German media and finance, and instead run the country in the best interests of the 99% German majority. A striking recent parallel has been the enormous hostility that Vladimir Putin incurred after he ousted the handful of Jewish Oligarchs who had seized control of Russian society and impoverished the bulk of the population. Putin has attempted to mitigate this difficulty by allying himself with certain Jewish elements, and Hitler seems to have done the same by endorsing the Nazi-Zionist economic partnership, which laid the basis for the creation of the State of Israel and thereby brought on board the small but growing Jewish Zionist faction.
In the wake of the 9/11 Attacks, the Jewish Neocons stampeded America towards the disastrous Iraq War and the resulting destruction of the Middle East, with the talking heads on our television sets endlessly claiming that “Saddam Hussein is another Hitler.” Since then, we have regularly heard the same tag-line repeated in various modified versions, being told that “Muammar Gaddafi is another Hitler” or “Mahmoud Ahmadinejad is another Hitler” or “Vladimir Putin is another Hitler” or even “Hugo Chavez is another Hitler.” For the last couple of years, our American media has been relentlessly filled with the claim that “Donald Trump is another Hitler.”
During the early 2000s, I obviously recognized that Iraq’s ruler was a harsh tyrant, but snickered at the absurd media propaganda, knowing perfectly well that Saddam Hussein was no Adolf Hitler. But with the steady growth of the Internet and the availability of the millions of pages of periodicals provided by my digitization project, I’ve been quite surprised to gradually also discover that Adolf Hitler was no Adolf Hitler.
It might not be entirely correct to claim that the story of World War II was that Franklin Roosevelt sought to escape his domestic difficulties by orchestrating a major European war against the prosperous, peace-loving Nazi Germany of Adolf Hitler. But I do think that picture is probably somewhat closer to the actual historical reality than the inverted image more commonly found in our textbooks.
For more than a hundred years, all of America’s many wars have been fought against totally outmatched adversaries, opponents that possessed merely a fraction of the human, industrial, and natural resources that we and our allies controlled. This massive advantage regularly compensated for many of our serious early mistakes in those conflicts. So the main difficulty our elected leaders faced was merely persuading the often very reluctant American citizenry to support a war, which is why many historians have alleged that such incidents as the sinkings of the Maine and the Lusitania, and the attacks at Pearl Harbor and in the Gulf of Tonkin, were orchestrated or manipulated for exactly that purpose.
This huge advantage in potential power was certainly the case when World War II broke out in Europe, and Schultze-Rhonhof and others have emphasized that the British and French empires, backed by America, commanded potential military resources vastly superior to those of Germany, a mid-size country smaller than Texas. The surprise was that despite such overwhelming odds Germany proved highly successful for several years, before finally going down to defeat.
However, matters almost took a very different turn. As I discussed in a 2019 article, for more than three generations all our history books have entirely excluded any mention of one of the most crucial turning points of the twentieth century. In early 1940, the British and French were on the very verge of launching a major attack against the neutral USSR, hoping to destroy Stalin’s Baku oil fields by means of the largest strategic bombing campaign in world history, and perhaps overthrow his regime as a consequence. Only Hitler’s sudden invasion of France forestalled this plan, and if that Panzer thrust had been delayed for a few weeks, the Soviets would have been forced into the war on Germany’s side. A full German-Soviet military alliance would have easily matched the resources of the Allies including America, thereby probably ensuring Hitler’s victory.
But this very narrow escape from strategic disaster in World War II has been entirely flushed down the memory-hole, and I doubt whether one current DC policy-maker in a hundred is even aware of it, let alone properly recognizes its significance. This reinforces the enormous hubris that America will never have to confront opposing forces of comparable power.
Consider the attitude taken during the current conflict with Russia, a severe Cold War confrontation that might conceivably turn hot. Despite its great military strength and enormous nuclear arsenal, Russia seems just as out-matched as any past American foe. Including the NATO countries and Japan, the American alliance commands a 6-to-1 advantage in population and 12-to-1 superiority in economic product, the key sinews of international power. Such an enormous disparity is implicit in the attitudes of our strategic planners and their media mouthpieces.
But this is a very unrealistic view of the true correlation of forces. Prior to the outbreak of the Ukraine war, America had spent years primarily focusing its hostility against China, forming a military alliance against that country, deploying sanctions to cripple Huawei, China’s global technological champion, and working to ruin the Beijing Olympics, while also drawing very near to the red line of actively promoting Taiwanese independence. I have even argued that there is strong, perhaps overwhelming, evidence that the Covid outbreak in Wuhan was probably the result of a biowarfare attack by rogue elements of the Trump Administration. So just two weeks before the Russian attack on Ukraine, Putin and Chinese leader Xi Jinping held their 39th personal meeting in Beijing and declared that their partnership had “no limits.” China will certainly support Russia in any global conflict.
Meanwhile, America’s endless attacks and vilification of Iran have gone on for decades, culminating in our assassination two years ago of the country’s top military commander, Qasem Soleimani, who had been mentioned as a leading candidate in Iran’s 2021 presidential elections. Together with our Israeli ally, we have also assassinated many of Iran’s top scientists over the last decade, and in 2020 Iran publicly accused America of having unleashed the Covid biowarfare weapon against their country, which infected much of their parliament and killed many members of their political elite. Iran would certainly side with Russia as well.
America, together with its NATO allies and Japan, does possess huge superiority in any test of global power against Russia alone. However, that would not be the case against a coalition consisting of Russia, China, and Iran, and indeed I think the latter group might actually have the upper hand, given its enormous weight of population, natural resources, and industrial strength.
Since the fall of the Soviet Union in 1991, America has enjoyed a unipolar moment, reigning as the world’s sole hyperpower. But this status has fostered our overweening arrogance and international aggression against far weaker targets, finally leading to the creation of a powerful bloc of states willing to stand up against us.
One of America’s greatest strategic assets has been our overwhelming control of the global media, which shapes the perceived nature of reality for many billions, including most of the world’s elites. But one inherent danger of such unchallenged propaganda-power is the likelihood that our leaders may eventually come to believe their own lies and exaggerations, thereby making decisions based upon assumptions that do not match reality.
When we finally departed Afghanistan after twenty years of occupation and trillions of dollars spent, our military planners were confident that the heavily-armed client regime we had left behind would remain in power for at least six months or more; instead, it fell to the Taliban within days.
A much more important example was highlighted by Ray McGovern in his March 3rd presentation. During last June’s Biden-Putin summit, our president told the Russian leader that we fully understood the terrible pressure he was facing from the Chinese, and his fear of their military threat. Such statements must have been regarded as sheer lunacy by the Russian national security leadership, and as a strong sign of the completely delusional nature of the American foreign policy establishment they faced. Since such bizarre beliefs might prompt America to take actions detrimental to Russian interests, Putin attempted to puncture this bubble of unreality by organizing a joint public statement with his close Chinese counterpart affirming that their relationship was “more than an alliance.”
This highly visible declaration was intended to force the DC establishment to recognize the existence of a powerful Russia-China bloc, and thereby persuade it to secure important concessions from its Ukraine client state, but apparently to no avail. Instead, Ukraine publicly declared its intention to acquire nuclear weapons, and Putin decided that war was his only option.
Bismarck allegedly once quipped that there is a special Providence for drunkards, fools, and the United States of America. But I fear that we have now drawn down on that Providence one too many times, and may be about to suffer the consequences.
Two myths have hindered investigations into the origins of the SARS-CoV-2 virus: one, that viruses seldom escape from laboratories; and two, that most pandemics are zoonotic, caused by a natural spillover of a virus from animals to humans.
Promoters of the first myth include the World Health Organization (WHO). At a press conference in Wuhan, China, in February 2021, Peter Ben Embarek, the head of the WHO inspection team tasked with looking into the origins of the virus, said it was “extremely unlikely” that it had leaked from a lab and as a result the lab escape hypothesis would no longer form part of the WHO’s continuing investigations.[1]
Dr Peter Daszak, president of the EcoHealth Alliance, has promoted both myths. As long ago as 2012, Dr Daszak co-authored a paper in The Lancet claiming that “Most pandemics – e.g. HIV/AIDS, severe acute respiratory syndrome, pandemic influenza – originate in animals”.[2] Since the start of the pandemic, he has claimed that “lab accidents are extremely rare”, and that they “have never led to large scale [disease] outbreaks”. He also said that suggestions that SARS-CoV-2 might have come out of a lab are “preposterous”, “baseless”, “crackpot”, “conspiracy theories”, and “pure baloney”.[3]
In September 2020 Dr Anthony Fauci, director of the US National Institutes of Health’s (NIH) National Institute of Allergy and Infectious Diseases (NIAID), and his co-author wrote in a paper about COVID’s origins, “Infectious diseases prevalent in humans and animals are caused by pathogens that once emerged from other animal hosts.”[4] Fauci has tried to quash the notion that SARS-CoV-2 could have come from a lab. In May 2020 he said that the virus “could not have been artificially or deliberately manipulated” and in October that year that the lab leak theory was “molecularly impossible”.[5]
But emails uncovered this year by a Freedom of Information request in the US reveal a wide gap between what Fauci was being told by experts about the virus’s origins and what he was saying publicly. In January 2020, a group of four virologists led by Kristian G. Andersen of the Scripps Research Institute told Fauci that they all “find the genome inconsistent with expectations from evolutionary theory”[6] – in other words, it likely didn’t come from nature and could have come from a lab.
Fauci hastily convened a teleconference with the virologists on 1 February 2020.[7] As the New York Post reported, “Something remarkable happened at the conference, because within three days, Andersen was singing a different tune. In a Feb. 4, 2020, email, he derided ideas about a lab leak as ‘crackpot theories’ that ‘relate to this virus being somehow engineered with intent and that is demonstrably not the case’.”[8]
Andersen and his colleagues then published an article on 17 March 2020 in the journal Nature Medicine that declared, “Our analyses clearly show that SARS-CoV-2 is not a laboratory construct or a purposefully manipulated virus.”[9] The article was highly influential in persuading the mainstream press not to investigate lab leak theories.[10]
While the emails do not prove a conspiracy to mislead the public, they certainly make it more plausible. Just one day after the teleconference at which his experts explained why they thought the virus seemed manipulated, Francis Collins, then-director of the NIH, complained about the damage such an idea might cause.
“The voices of conspiracy will quickly dominate, doing great potential harm to science and international harmony,” he wrote on 2 February 2020, according to the emails.[11]
But there is another reason why Fauci and Collins might not want the lab leak idea to take hold. Dr Daszak’s EcoHealth Alliance had channelled funding from the NIH’s NIAID to the Wuhan Institute of Virology (WIV) in China, for dangerous gain-of-function (GoF) research on bat coronaviruses. So money from organisations headed by Fauci, Collins, and Daszak funded research that could have led to the lab leak that some believe caused the pandemic.[12]
While it should have been clear from the beginning that Drs Fauci and Daszak have strong vested interests in denying the lab leak theory, until recently their assertions were taken as objective fact by most science writers and media.
But a brief look at the history of lab leaks and the origins of pandemics confirms that their claims are highly misleading. Research shows that the escape of viruses from laboratories and supposedly contained experiments, such as vaccine research and programmes, is a common occurrence. In addition, many pandemics have arisen from lab escapes, and almost none has been directly zoonotic. Even when viruses do ultimately originate in animals and make the jump into humans, they mostly fester in a separated community of human beings for many years – centuries or millennia – before spreading during abnormal movements of people due to wars and famines.
What is GoF research?
In its broadest definition, GoF research provides a virus or other microbe with a new function, such as making it more virulent or transmissible, or widening its host range (the types of hosts that the organism can infect).[13] Through GoF, researchers can create new diseases in the laboratory.
GoF can be achieved by any selection process that results in changes in the genes of the organism and as a result, its characteristics. One example of such a process is passing a virus through different animal cells, which can result in a loss of function (weakening it) or a gain of function (making it more able to replicate in a new host species). The researcher can then select the altered organism, depending on the purpose of the research.
In the last decade, GoF researchers have used genetic engineering to directly intervene in the genome of viruses to enhance a desired function.
But long before GoF studies involving deliberate genetic alteration, researchers had started to experiment with widening the host range of certain viruses, in order to develop vaccines. Often these experiments had unintended outcomes, including causing outbreaks of the disease being targeted.
Smallpox
An example is the development of the smallpox vaccine. Most of us are aware of how Edward Jenner in 1796 put cowpox to work in a new way, to infect humans. This led to the successful vaccination programme that eventually eliminated smallpox from the world.
But what many people do not know is that the experiments of 1796 were not his first attempts at using an animal pox in humans. His first subject was his baby son, who had been born in 1789. He inoculated the lad with swinepox and later tested the inoculation’s effectiveness with smallpox. As Greer Williams pointed out in the book Virus Hunters, “The best we can say for this experiment is that it muddied the water… whether the experimental infections had anything to do with [the son’s] mental retardation it is impossible to say.”[14]
Vaccination does not give immunity from smallpox for life: A booster is required every few years. The last person to die from smallpox was Janet Parker, a photographer who worked on the floor above a lab in Birmingham, UK, where research on the virus was being conducted. She had been vaccinated against smallpox in 1966 but contracted the disease in 1978 when the virus escaped from the lab by an unknown route. She died some days later (see Table 1).
Introducing a virus or other microbe to a new host has historically been associated with problems. Before Jenner, inoculation with variola minor (smallpox from a sufferer with minor disease) had been used as a preventive measure in China as early as the tenth century.[15] Variolation, as it was termed, was introduced to the UK in 1717, but is reported to have killed 1 in 25. So Jenner’s experiments have to be viewed in the light of the contemporary practice, which was killing 4% of those inoculated.
What is more, as Greer Williams noted, variolation was an “excellent way of spreading the disease and starting new epidemics”.[16]
Yellow fever
In 1900 the French had given up on building the Panama Canal due to yellow fever decimating the workers. Eventually the disease was conquered in the region by a mosquito eradication programme based on the experiments of the US Army surgeon Major Walter Reed.[17] This success was crucial to the completion of the project in 1914.
But what is often forgotten is that a series of doctors and laboratory workers died trying to combat yellow fever. In 1900 Dr Jesse W. Lazear was the first researcher to die from yellow fever after he apparently allowed himself to be bitten by an infected mosquito as part of his experiments.[18] Between 1927 and 1930, yellow fever caused 32 laboratory infections, killing five people.[19]
As the research into viruses continued, so did the infections amongst the researchers, and the death toll among researchers and those inoculated against diseases rose. I do not doubt that the final outcome was to the good of mankind, but occasionally a “vaccine” would go spectacularly wrong.
Polio
In the 1930s, 40s and 50s the infection that seemed to most frighten Western society was poliomyelitis. Perhaps it was because unlike with most infectious diseases, cleanliness did not seem to be a protection and exercising could be positively harmful. In fact polio struck those who were healthy and wealthy and was worse if the person was fit and active. Much effort was put into finding a vaccine and among the first to succeed was Dr Jonas Salk. There had been abortive attempts in the 1930s but the 1935 vaccination programme had actually killed people.
Salk was a meticulous researcher and his technique was excellent. Unfortunately this was not the case with all of the laboratories that prepared the vaccine for public use. In particular, the Cutter Laboratories failed to kill the virus and poliomyelitis was spread by their version of the Salk vaccine, paralysing and killing the recipients. Eventually the proper controls permitted the successful rollout of the killed vaccine. It was later replaced by an attenuated polio virus vaccine, which has nearly eliminated polio from the world. It will not, however, succeed in completely eliminating the disease, as the attenuated virus can revert to a wild form. Thus the final push may require the use, once again, of the killed virus polio vaccine.
The infection of laboratory workers with the microbes they were working on was so common that steps were introduced in the 1940s to prevent escape of the organisms. According to Wikipedia, the first prototype Class III (maximum containment) biosafety cabinet was fashioned in 1943 by Hubert Kaempf Jr., then a US Army soldier.[20] The regulations were enhanced and the escape of dangerous organisms decreased, but has never disappeared. This is clearly demonstrated in Table 1, which lists some, but by no means all, of the known lab leaks since the 1960s.
Escapes from bioweapons facilities
Whilst all of the incidents in the table are of interest, some are more worrying than others. In 1971 and 1979 there were outbreaks of smallpox and anthrax in the Soviet Union, caused by escapes of weaponised smallpox and weaponised anthrax from their own bioweapons facilities. In 1977 it is believed that a laboratory somewhere on the border of China and Russia put the H1N1 virus back together and it escaped and caused at least two pandemics. SARS1, which erupted first in 2003, later escaped from laboratories six times, four of which were in China, plus Singapore and Taiwan.[21]
The more you look at the table, the more you wonder if there is any virus that has not at some time escaped from a laboratory. Laboratory workers have told me that it is common for technicians to become infected with the organisms they are working with and their usual response in the past has been to take multivitamins and hydroxychloroquine.
Table 1: Some serious leaks of viruses from laboratories[22]
The recent history of gain-of-function studies
Since 2010, GoF studies have increasingly focused on finding out whether non-pathogenic strains of viruses could be made infective and harmful to human beings.[23] This was supposedly in order to know whether or not the microbe was likely to be hazardous to human beings and then, if it was, devise vaccines and drugs against it.
In my opinion, such work simply increases the sum total of different pathogens that can affect human beings. When medical doctors are made aware of this type of research, they are usually speechless at the stupidity that anybody would contemplate doing such work. I now call such studies Make Another Disease (MAD) research.
This type of MAD research dramatically increased in laboratories in the USA between 2012 and 2014. The resulting accidents in which small outbreaks of novel viral diseases occurred led to three hundred scientists writing to the Obama administration asking for GoF to be stopped. The US Government responded by announcing a pause on the research in 2014 because of the inherent dangers.[24]
In the same year Dr Fauci, whose recorded belief was that the studies were worth the risk,[25] gave money from the NIH to Dr Daszak of the EcoHealth Alliance to continue GoF research on coronaviruses.[26] This was carried out at the Wuhan Institute of Virology using genetically engineered humanized mice, culminating in reports in 2017 and 2018 that the researchers had successfully made harmless coronaviruses pathogenic to humans.[27]
In the autumn of 2019 the Covid-19 pandemic, caused by SARS-CoV-2, started in Wuhan and, to date, over five million people across the world have died from the virus.
Are pandemics ever zoonotic?
In addition to stating erroneously that viruses only rarely escape from laboratories and/or that SARS-CoV-2 was unlikely to have done so, Drs Daszak and Fauci hold that most pandemics are zoonotic in origin. They say that pandemics start from a disease spreading from an animal, but they do not state the time period involved. I would suggest that pandemics never occur from the immediate spread from an animal. In order for a pandemic to occur, a reservoir of the infection, adapted to human beings, must develop. This usually takes many years. Moreover, the spread usually occurs due to the unnaturally large movement of people that occurs during wars and famines.
I will give just a couple of well known examples.
When the Europeans invaded the Americas, 90% or more of the indigenous people of America died from the introduced diseases, which included measles, smallpox and mumps. In return, syphilis spread to Europe. Yes, the diseases had all arisen from animals initially, but the adaptation to make them pathogenic enough to cause a pandemic must have occurred over a period of the several thousand years during which the populations of Europe and America were separated.
AIDS was discovered in the early 1980s and it was soon clear that the Human Immunodeficiency Virus had arisen from the Simian Immunodeficiency Virus. However, studies have concluded that the first transmission of SIV to humans, giving rise to HIV, took place around 1920 in Kinshasa in the Democratic Republic of Congo (DR Congo),[28] so that it had at least 40–50 years of sporadic infection of human beings before it started to spread round the world as a pandemic. During that time there were many local wars in Africa and, of course, the Second World War.
In my book PANDEMIC, I document the world’s worst pandemics and conclude that it is only malaria that seems to be indifferent to wars, killing people whether or not there are hostilities. All other historical pandemics have at least some connection with war and occur when isolated groups with an endemic disease meet another group without the disease.
Conclusion
Thus historically we come to an impasse with SARS-CoV-2. This arose in a city many miles away from an animal population that might have harboured a similar virus, at a time when the supposed original host was dormant (late autumn), near a laboratory known to be working on the viruses. It then spread from person to person at an alarming rate and was seen to be totally adapted to human beings, to the extent that it was unable to even infect the bat it was supposed to have arisen from.
As a person who has studied the history of pandemics and lab leaks, imagine my surprise when authorities, not only in China but also in the USA and UK, stated categorically that the virus was obviously zoonotic and we were conspiracy theorists if we proposed the opposite. I had to conclude that they were misguided or purposely lying.
About the author: Professor Paul R Goddard BSc, MBBS, MD, DMRD, FRCR, FBIR, FHEA is Emeritus Professor, University of the West of England, Bristol; retired consultant radiologist; and former president of the Radiology Section of the Royal Society of Medicine. He is the author of PANDEMIC, A Personalised History of Plagues, Pestilence and War, Clinical Press Ltd, August 2020, and PANDEMIC, 2nd Edition 2021, Clinical Press, Bristol, available from Gazelle Book Services Ltd and good bookshops, ISBN 978-1-85-457105-2. On a similar theme, see The Origin of the Virus, Clinical Press 2021.
The above article is adapted from material that was first presented as the Long Fox lecture to The Bristol Medico-Chirurgical Society and Bristol University (2017) and to the British Society for the History of Medicine Biennial Congress (September 2021).
As the old saying goes: “there’s no rest for the wicked.”
That’s certainly the case with vaccine mogul Bill Gates.
As the world finally gets an opportunity to breathe easy – after being suffocated by two years of pandemic theatre and 24/7 government and corporate pharmaceutical propaganda – the notorious architect of the global COVID-19 ‘vaccine’ roll-out, billionaire tech monopolist turned pharmaceutical scion Bill Gates, is still determined to realize his life’s ambition of achieving 100% global vaccine compliance.
To keep the game going, Gates has reemerged from the shadows this week to prepare the public for “the next pandemic.”
Bill Gates said Friday that the risks of severe disease from Covid-19 have “dramatically reduced” but another pandemic is all but certain.
Speaking to CNBC’s Hadley Gamble at Germany’s annual Munich Security Conference, Gates, co-chair of the Bill & Melinda Gates Foundation, said that a potential new pandemic would likely stem from a different pathogen to that of the coronavirus family.
But he added that advances in medical technology should help the world do a better job of fighting it — if investments are made now.
“We’ll have another pandemic. It will be a different pathogen next time,” Gates said.
Initially, Gates had been actively promoting each and every ‘variant’ – constantly talking up the crisis in order to help maintain the perception of a constant demand for the highly controversial experimental COVID-19 ‘vaccine’ gene-jabs. This includes the most recent media creation known as the “Omicron” variant. But as the public began shunning the booster shots en masse, the media gradually began to abandon the Omicron narrative. Gates has clearly read the propaganda tea leaves, and has since started backtracking from some of his previous positions – he is even admitting that natural immunity is more effective than the dubious pharmaceutical-based synthetic immunity he has been pushing for the last two years through the media and his proxy organizations, the World Health Organization (WHO) and the GAVI vaccine alliance.
Two years into the coronavirus pandemic, Gates said the worst effects have faded as huge swathes of the global population have gained some level of immunity. Its severity has also waned with the latest omicron variant.
However, Gates said that in many places that was due to the virus itself, which creates a level of immunity, and has “done a better job of getting out to the world population than we have with vaccines.”
In order to further shield himself from an increasing public backlash over his role in shamelessly promoting the ‘global pandemic’ and vaccine narratives, Gates has also carefully admitted the existence of comorbidities among the alleged COVID deaths.
“The chance of severe disease, which is mainly associated with being elderly and having obesity or diabetes, those risks are now dramatically reduced because of that infection exposure,” he said.
However, the vaccine kingpin is still lamenting his failure to reach 70% ‘penetration’ of the experimental mRNA toxic injections into the arms of the global population.
Gates said it was already “too late” to reach the World Health Organization’s goal to vaccinate 70% of the global population by mid-2022. Currently, 61.9% of the world population has received at least one dose of a Covid-19 vaccine.
He added that the world should move faster in the future to develop and distribute vaccines, calling on governments to invest now.
“Next time we should try and make it, instead of two years, we should make it more like six months,” Gates said, adding that standardized platforms, including messenger RNA (mRNA) technology, would make that possible.
It’s important for people to realize that Gates and his network are not finished with their plan to establish a global conveyor belt for experimental gene-based pharmaceutical injections – and he is already eyeing ‘the next pandemic’ in order to roll out the next phase of this globalist agenda. There is no shortage of funds either:
“The cost of being ready for the next pandemic is not that large. It’s not like climate change. If we’re rational, yes, the next time we’ll catch it early.”
Gates, through the Bill & Melinda Gates Foundation, has partnered with the U.K.’s Wellcome Trust to donate $300 million to the Coalition for Epidemic Preparedness Innovations, which helped form the Covax program to deliver vaccines to low- and middle-income countries.
The CEPI is aiming to raise $3.5 billion in an effort to cut the time required to develop a new vaccine to just 100 days.
“I’ve been following COVID since the early days of the outbreak, working with experts from inside and out of the Gates Foundation who are championing a more equitable response and have been fighting infectious diseases for decades. I’ve learned a lot in the process—both about this pandemic and how we stop the next one—and I want to share what I’ve heard with people. So, I started writing a book about how we can make sure that no one suffers through a pandemic ever again.”
To mark the occasion, Gates released a disturbing propaganda video – littered with many of the staged images and government tropes used to reinforce the COVID ‘global pandemic’ crisis narrative since the winter of 2020.
In their Fifth Assessment Report the IPCC, the ‘internationally accepted scientific authority on climate change’, gave their opinion of how much of the recent global warming was caused by human activity: ‘It is extremely likely [95-100 percent confidence] more than half of the observed increase in global mean surface temperature from 1951 to 2010 was caused by the anthropogenic [i.e. man-made] increase in greenhouse gas concentrations and other anthropogenic forcings together’. Reflecting that opinion Wikipedia states that the ‘Scientific consensus on climate change’ is that ‘the Earth is warming and… this warming is mainly caused by human activities’. It claims that 97-100% of actively publishing climate scientists endorse this opinion. Similarly, NASA claim that, ‘A consensus on climate change and its human cause exists… human activities are the primary cause of the observed climate-warming trend over the past century.’ And in an October 2020 interview on CBS’s 60 Minutes climatologist Dr Michael Mann said, ‘There’s about as much scientific consensus about human-caused climate change as there is about gravity.’ So is it actually true that 97-100% of climate scientists explicitly or implicitly endorse this key IPCC opinion?
Although science is not remotely democratic (it only needs one scientist to prove that the ‘consensus view’ is wrong and it is wrong) the fact remains that if this 97-100% consensus assertion is true then it is indeed very powerful. If the ‘internationally accepted scientific authority on climate change’ says something is almost certainly true and almost all climate scientists in the world agree then it almost certainly must be true – mustn’t it? Whilst there is undoubtedly almost total scientific consensus amongst the scientific authorities (literally dozens of scientific academies from around the world explicitly or implicitly endorse the IPCC’s opinions) that does not necessarily reflect the consensus view amongst climate scientists themselves. So what exactly is it that climate scientists agree on?
The consensus argument is epitomized by Barack Obama’s 2013 tweet that, ‘Ninety-seven percent of scientists agree: climate change is real, man-made and dangerous’. He tweeted this immediately after the publication of the most famous climate change consensus survey, Quantifying the consensus on anthropogenic global warming in the scientific literature (John Cook et al, 2013), conducted by Skeptical Science, a small group of climate change activists who, despite their name, are precisely the opposite of climate change skeptics (their strapline is ‘Getting skeptical about global warming skepticism’). This study examined the Abstracts from 11,944 climate science papers published over the twenty-year period from 1991 to 2011. It concluded that 97.1% of the Abstracts (that actually expressed an opinion on the causes of global warming) endorsed the view that man-made greenhouse gas emissions (or, at least, greenhouse gases) cause global warming. Although this was 97% of Abstracts, not 97% of climate scientists, it is not unreasonable to suppose, based on this survey, that about 97% of climate scientists endorse the view that man-made greenhouse gas emissions (or, at least, greenhouse gases) cause global warming. It said nothing whatsoever about how much warming those emissions were causing or whether such warming was ‘dangerous’. It is probably the case that at least 99.9% of people who might describe themselves as climate scientists (including those most skeptical about the climate change crisis idea) endorse the view that man-made greenhouse gas emissions (or, at least, greenhouse gases) cause some global warming. That is not in any serious dispute. The dispute is about how much global warming human activity is causing and whether or not it is ‘dangerous’. So the study revealed nothing that was not already well known and uncontroversial.
Skeptical Science summarized their findings with the statement, ‘97% of climate papers expressing a position on human-caused global warming agree: global warming is happening and we are the cause’ – where ‘we are the cause’ clearly implied ‘we are the sole cause’ rather than what the study actually found, viz. that we are the cause of some of the global warming. If the study had been able to show convincingly that 97% of climate scientists endorsed the IPCC’s opinion that human activity was the predominant cause of global warming between 1951 and 2010, that would certainly have strongly supported the view that there was almost total scientific consensus that the IPCC was right. But of all the Abstracts reviewed in this study only 0.3% explicitly endorsed that central IPCC opinion [1]. Even (ex-IPCC) Mike Hulme has noted that, ‘The Cook et al study is hopelessly confused… in one place the paper claims to be exploring “the level of scientific consensus that human activity is very likely causing most of the current GW [Global Warming]” and yet the headline conclusion is based on rating abstracts according to whether “humans are causing global warming”. These are two entirely different judgements.’ The recently published paper Greater than 99% consensus on human caused climate change in the peer-reviewed scientific literature (Lynas et al, 2021) claims that the consensus is actually 2% higher – but once again it only finds a 99% consensus that human activity contributes to climate change to some extent [2]; in fact about 99% of the papers reviewed in that study failed to explicitly quantify the extent. A survey [3] of more than 1,800 climate scientists conducted in 2015 concluded that just 43% of them would endorse the IPCC opinion about our recent predominant role in global warming (and how many of them were agreeing based primarily on their faith in the IPCC and/or their self-interest in staying ‘on message’ with the climate change crisis narrative?)
Mike Hulme has stated that, ‘Claims such as “2,500 of the world’s leading scientists have reached a consensus that human activities are having a significant influence on the climate” are disingenuous. That particular consensus judgement, as are many others in the IPCC reports, is reached by only a few dozen experts.’ Supporting that view, an independent study [4] found that the views expressed by the IPCC were the consensus of a leadership cadre of just 53 (about 2%) of them, 44 of whom were very closely linked professionally, having co-authored papers with one another and so very likely to share the same opinions. The author of the study, John McLean (climate data analyst at the Australian Climate Science Coalition and an Expert Reviewer for the IPCC’s Fifth Assessment Report), concluded that ‘Governments have naively and unwisely accepted the claims of a human influence on global temperatures made by a close-knit clique of a few dozen scientists, many of them climate modellers, as if they were representative of the opinion of the wider scientific community.’
One of the most comprehensive reviews [5] ever performed of surveys of the scientific consensus on climate change concluded:
The articles and surveys most commonly cited as showing support for a ‘scientific consensus’ in favor of the catastrophic man-made global warming hypothesis are without exception methodologically flawed and often deliberately misleading.
There is no survey or study showing ‘consensus’ on the most important scientific issues in the climate change debate.
Extensive survey data show deep disagreement among scientists on scientific issues that must be resolved before the man-made global warming hypothesis can be validated. Many prominent experts and probably most working scientists disagree with the claims made by the United Nations’ Intergovernmental Panel on Climate Change (IPCC).
So what is the real scientific consensus on climate change? There is almost total scientific consensus that carbon dioxide concentrations in the atmosphere are increasing, that the increase is predominantly due to human activity, that the climate system is warming, that climate change is happening and that human activity has contributed to some extent to the warming, changing climate. Note again that skeptical scientists like Dr Roy Spencer, Dr Judith Curry and Dr Richard Lindzen are part of this ‘scientific consensus on climate change’; the idea that they constitute the 3% of scientists who do not support the scientific consensus on climate change is false, and misrepresents what the ‘scientific consensus on climate change’ actually is [6]. This misrepresentation is designed to bolster the ‘climate change crisis’ narrative and to marginalize and neutralize the skeptical scientists by making their views appear to fall far outside the overwhelming consensus view, even though they actually share that consensus view. Basically, the ‘consensus’ breaks down over the issue of whether or not human activity has been predominantly responsible for recent warming – and whether or not that warming is ‘dangerous’. The power of the false ‘97% scientific consensus that human activity has been predominantly responsible for climate change’ meme, perpetuated by Wikipedia, NASA, Facebook and many others, is that it can be used very effectively to strangle at birth any debate about the science. As Dr Richard Lindzen has put it, ‘The claim is meant to satisfy the non-expert that he or she has no need to understand the science. Mere agreement with the 97 percent will indicate that one is a supporter of science and superior to anyone denying disaster. This actually satisfies a psychological need for many people.’
So if we return to Dr Michael Mann’s statement that, ‘There’s about as much scientific consensus about human-caused climate change as there is about gravity’ this is very disingenuous. Whilst there is almost total scientific consensus that climate change is ‘real’ and happening and that there has been some human-caused influence, there is no such scientific consensus over the extent of the human-caused influence and whether or not it could reasonably be described as ‘dangerous’, let alone a ‘crisis’.
References
[1] Legates et al. (2015), Science & Education; ‘Consensus? What Consensus?’, GWPF Note 5, thegwpf.org, September 2013; ‘Richard Tol’s Excellent Summary of the Flaws in Cook et al. (2013) and ‘The Infamous 97% Consensus Paper’’, wattsupwiththat.com, 26 March 2015; ‘The Cook ‘97% consensus’ paper, exposed by new book for the fraud that it really is’, wattsupwiththat.com, 12 March 2016
[2] ‘Cooked Up Consensus: Lynas et al “Should Rather Be Classified As Propaganda, Bad Science”’, wattsupwiththat.com, 26 October 2021
By Kit Klarenberg and Wyatt Reed | The Grayzone | October 5, 2025
A roving reporter who covered Italy’s top politicians explains to The Grayzone how his country was reduced to a joint US-Israeli “aircraft carrier,” and raises troubling questions about an Israeli role in the killing of Prime Minister Aldo Moro.
For years, Israel’s Mossad monitored and secretly influenced a violent communist faction that carried out the March 16, 1978 kidnapping and murder of Italian statesman Aldo Moro, veteran investigative journalist Eric Salerno has documented.
Having worked closely alongside multiple Italian heads of state during his 30-year career as a correspondent, Salerno published an exposé of their secret relationship with Israeli intelligence in 2010 called Mossad Base Italy.