
Latest Modelling on Omicron Ignores All Evidence of Lower Severity, Among Numerous Other Problems

By Mike Hearn | The Daily Sceptic | December 11, 2021

Today the Telegraph reported that:

Experts from the London School of Hygiene and Tropical Medicine (LSHTM) predict that a wave of infection caused by Omicron – if no additional restrictions are introduced – could lead to hospital admissions being around twice as high as the previous peak seen in January 2021.

Dr Rosanna Barnard, from LSHTM’s Centre for the Mathematical Modelling of Infectious Diseases, who co-led the research, said the modellers’ most pessimistic scenario suggests that “we may have to endure more stringent restrictions to ensure the NHS is not overwhelmed”.

As we’ve come to expect from LSHTM and from epidemiology in general, the model underlying this ‘expert’ claim is unscientific and contains severe problems, making its predictions worthless. Just as predictably, the press ignores these issues and indeed gives the impression of not having read the underlying paper at all.

The ‘paper’ was uploaded an hour ago as of writing, but I put the word paper in quotes because not only has this document not been peer reviewed in any way, it isn’t even a fixed document. Instead, it’s a file that the authors say will be continually updated, yet which carries no version numbers. That might make it tricky to discuss, because by the time you read this the document may well have changed. Fortunately, the files are being uploaded via GitHub, meaning we can follow any future revisions that are uploaded here.

Errors

The first shortcoming of the ‘paper’ becomes apparent on page 1:

Due to a lack of data, we assume Omicron has the same severity as Delta.

In reality, there is data and so far it indicates that Omicron is much milder than Delta:

Early data from the Steve Biko and Tshwane District Hospital Complex in South Africa’s capital Pretoria, which is at the centre of the outbreak, showed that on December 2nd only nine of the 42 patients on the Covid ward, all of whom were unvaccinated, were being treated for the virus and were in need of oxygen. The remainder of the patients had tested positive but were asymptomatic and being treated for other conditions.

The pattern of milder disease in Pretoria is corroborated by data for the whole of Gauteng province. Eight per cent of Covid-positive hospital patients are being treated in intensive care units, down from 23% throughout the Delta wave, and just 2% are on ventilators, down from 11%.

Financial Times, December 7th

The LSHTM document claims to be accurate as of today, yet it simply ignores the data available so far and replaces it with an assumption, one that conveniently lets the authors argue for more restrictions.

What kind of restrictions? The LSHTM modellers are big fans of mask wearing:

All scenarios considered assume a 7.5% reduction in transmission following the introduction of limited mask-wearing measures by the U.K. Government on November 30th 2021, which we assume lasts until April 30th 2022. This is in keeping with our previous estimates for the impact of increased mask-wearing on transmission.

I was curious how they arrived at this number, given the abundant evidence that mask mandates have no impact at all (example one, example two). But no such luck: the reference at the end of the above paragraph points to this document, which doesn’t contain the word “mask” anywhere, and “7.5%” likewise cannot be found. I wondered whether this was simply a typo, but the claim that the relevant reference supports the mask assumption appears several times, and the word “mask” isn’t mentioned in the references immediately before or after either.
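
For context on what an assumption like this does mechanically: in a transmission model, a flat percentage reduction of this kind is typically applied as a simple multiplier on the transmission rate, so the entire effect of the mask policy enters as a single fixed number rather than being estimated from the epidemic data. A toy sketch in Python (my own illustration, not code from the LSHTM model; the R0 and susceptible-fraction values are made up):

    # Toy illustration only -- not the LSHTM model. It shows how a flat
    # "7.5% reduction in transmission" typically enters a model: as a fixed
    # multiplier on the reproduction number.

    def effective_r(r0: float, susceptible_fraction: float,
                    mask_reduction: float = 0.075) -> float:
        """Effective reproduction number with an assumed flat reduction
        in transmission (the hypothesised mask effect)."""
        return r0 * (1.0 - mask_reduction) * susceptible_fraction

    # Hypothetical numbers: an R0 of 5 and 60% of the population susceptible.
    print(effective_r(5.0, 0.6))       # with the assumed 7.5% mask effect
    print(effective_r(5.0, 0.6, 0.0))  # the same scenario with no mask effect

However plausible or implausible the 7.5% figure is, in a set-up like this the benefit of masks is an input rather than an output: the model cannot find that the mandate did nothing, because the reduction is assumed from the start.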

There are many other assumptions of dubious validity in this paper. I don’t have time today to try to list them all, although maybe someone else wants to have a go. A few that jumped out on a quick read-through are:

  • An assumption that S gene drop-outs, i.e. cases where a PCR test doesn’t detect the spike protein gene at all, are always Omicron. That doesn’t follow logically: given the variant’s very high number of mutations, and given that PCR testing is in theory very precise, a missing S gene could just as well be interpreted as “not Covid”. Of course, in reality – as is by now well known – PCR results are routinely presented in a have-your-cake-and-eat-it way, in which they’re claimed to be both highly precise and capable of detecting viruses with near-arbitrary levels of mutation, depending on which argument the user wishes to support.
  • “We use the relationship between mean neutralisation titre and protective efficacy from Khoury et al. (7) to arrive at assumptions for vaccine efficacy against infection with Omicron” (see the sketch after this list). The cited paper was published in May and has nothing to say about vaccine effectiveness against Omicron, which is advertised as being heavily mutated. Despite not citing any actual measured data on real-world vaccine effectiveness, the modelling team proceeds to make arguments for widespread boosting with a vaccine targeted at the original 2019 Wuhan version of SARS-CoV-2.
  • They construct scenarios that vary based on unmeasurable quantities such as the “rate of introduction of Omicron”, making their predictions effectively unfalsifiable. Whatever happens, they can claim to have projected a scenario that anticipated it, and because such a rate is unknowable, nobody can prove otherwise. Predictions have to be falsifiable to be scientific; these are not.
  • Their conclusion says “These results suggest that the introduction of the Omicron B.1.1.529 variant in England will lead to a substantial increase in SARS-CoV-2 transmission” even though earlier in the ‘paper’ they say they assume anywhere from 5–10% lower transmissibility than Delta to 30–50% higher (page 7). In other words, they have no idea what the underlying difference in transmissibility is – and that’s assuming it’s something that can be summed up in a single number to begin with.
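
On the Khoury et al. point, it helps to spell out what that ‘relationship’ actually is: a logistic curve that maps a mean neutralisation titre (expressed as a fold of the mean convalescent titre) onto a protective efficacy. Here is a minimal sketch of that functional form in Python (my own illustration; the parameter values and the assumed Omicron titre drop are made-up numbers, not figures from Khoury et al. or from the LSHTM document):

    import math

    def logistic_efficacy(fold_of_convalescent: float,
                          titre_50: float = 0.2,
                          slope: float = 3.0) -> float:
        """Khoury-style logistic mapping from log10 neutralisation titre to
        protective efficacy. titre_50 (the titre giving 50% protection) and
        slope are illustrative values, not fitted estimates."""
        x = math.log10(fold_of_convalescent) - math.log10(titre_50)
        return 1.0 / (1.0 + math.exp(-slope * x))

    # The exercise amounts to guessing how far Omicron cuts the titre and
    # reading an assumed vaccine efficacy off the curve -- no Omicron-specific
    # effectiveness data enters at any point.
    assumed_titre = 2.0       # hypothetical post-booster titre, fold of convalescent
    assumed_fold_drop = 10.0  # hypothetical reduction in neutralisation for Omicron
    print(logistic_efficacy(assumed_titre / assumed_fold_drop))

In other words, the Omicron vaccine-efficacy numbers in the model are assumptions layered on assumptions: a guessed fold-drop in titre fed through a curve published months before Omicron existed.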

Analysis

If you’re new to adversarial reviews of epidemiology papers some of the above points may seem nit-picky, or even made in bad faith. Take the problem of the citation error – does it really matter? Surely, it’s just some sort of obscure copy/paste error or typo? Unfortunately, we cannot simply overlook such failures. The phenomenon of apparently random or outright deceptive citations is one I’ve written about previously. This problem is astoundingly widespread in academia. Most people will assume that a numerical claim by researchers that has a citation must have at least some level of truth to it, but in fact, meta-scientific study has indicated the error rate in citations is as high as 25%. A full quarter of scientific claims pointing to ‘evidence’ turn out when checked to be citing something that doesn’t support their point! This error rate feels roughly in line with my own experiences and that’s why it’s always worth verifying citations for dubious claims.

The reality is that academic output, especially anything involving statistical modelling, frequently turns out not merely to be unreliable but to leave the reader with the impression that the authors must have started with a desired conclusion and then worked backwards to find sciencey-sounding points to support it. Inconvenient data is claimed not to exist, convenient data is cherry-picked, and where no convenient data can be found it’s simply conjured into existence. Claims are made and cited, but the citations don’t contain supporting evidence, or turn out to be just more assumptions. Every possible outcome is modelled and all but the most alarming are discarded. The scientific method is inconsistently applied, at best, and instead scientism rules the day; meanwhile, universities applaud and defend this behaviour to the bitter end. Academia is in serious trouble: huge numbers of researchers have no standards whatsoever and there are no institutional incentives to care.

Some readers will undoubtedly wonder why we’re still bothering to do this kind of analysis given that there’s nothing really new here. On the Daily Sceptic alone we’ve covered these sorts of errors here, here, here, here, here and here – and that’s not even a comprehensive list. So why bother? I think it’s worth continuing to do this kind of work for a couple of reasons:

  1. Many people who didn’t doubt the science last year have developed newfound doubts this year, but won’t search through the archives to read old articles.
  2. The continued publication of these sorts of ‘papers’ is itself useful information. It shows that academia doesn’t seem to be capable of self-improvement and that, despite a long run of prediction failures, nobody within the institutions cares about the collective reputation of professors. The appearance of being scientific is what matters. Actually being scientific, not so much.

