U of T News

Why science takes so long: U of T's Ross Upshur on our evolving understanding of COVID-19

Ross Upshur
(photo courtesy of the Dalla Lana School of Public Health)

As COVID-19 swept across the world, scientists scrambled to learn as much as they could about the new disease and share their findings with policy-makers and the public.

Many researchers shared their work on preprint servers before it had the chance to undergo the lengthy peer-review process required before research is published in academic journals.

The practice was intended to quickly make available potentially critical information about a deadly, fast-spreading virus so it could inform public policy decisions.

However, as Ross Upshur, a professor and bioethicist at the University of Toronto's Dalla Lana School of Public Health, points out, preprint articles can sometimes be misleading because they haven't been thoroughly vetted by other researchers.

He adds that, even in the case of peer-reviewed studies, it's unrealistic for policy-makers, or the public, to expect scientists to produce definitive information about COVID-19 in the near-term.

"Science is not about the production of certain truths," he says.

Upshur recently spoke to U of T News writer Geoffrey Vendeville about how our understanding of COVID-19 has evolved since last spring and the process by which researchers make sense of a new disease.


There has been an explosion of interest in preprint studies about COVID-19. What are the advantages and disadvantages of relying on these studies for information?

The critique of the current process by which scientific evidence becomes available for public scrutiny is that it has to go through a peer review process, which tends to take a long time, anywhere up to a year. So, the principal advantage of preprint servers is speed, in the sense that as soon as a manuscript is completed it can be posted to a preprint server. There has been talk for some time about using this method in the medical sciences.

Those are the virtues. On the other side of the coin are a larger number of pitfalls.

One is that, unlike in many scientific disciplines, people will make decisions to change their behaviour on the basis of science published in the health domain, particularly when it comes to therapeutic interventions.

Two, there's a massive interest in whatever's being published that's being picked up and disseminated through media channels as well as social media.

Three, none of these preprint materials have been through rigorous peer review, which means that studies with very severe methodological flaws get broadly disseminated, stripped of methodological content and reduced simply to the claim that "X" works. A good example would be the hydroxychloroquine and azithromycin study, which was widely touted as indicating that the combination of drugs was effective in treating COVID-19. But the study showed no such thing because it wasn't really a clinically relevant outcome that was being reported on, and that was left out of the discussion. And it was a deeply methodologically flawed study.

So, the peril of this strategy is a profusion of conflicting, non-peer-reviewed studies that actually may confuse people more than it benefits them. It also provides a very poor basis for sound policy and public health interventions.

Although preprint studies have yet to go through peer review, are there safeguards to make sure their findings are accurate?

There should at least be a first cut of review to determine whether the study was methodologically sound. But when you think of the process of actually monitoring that, it takes a lot of human resources. And, of course, many of the people who would have the critical skills to be able to do that sort of work are busy doing their own research on COVID-19 and posting their results to a preprint server.

What checks and balances are involved in peer review?

I'll give you a good example, one where some of those checks and balances failed. At journals like The Lancet and The New England Journal of Medicine, two of the most prestigious scientific journals, you would submit your paper to the journal and the receiving editor would look at it and, most of the time, send you an email saying, "Thanks, but no thanks."

But let's say you make it through the first assessment. Then it goes to a panel of editors who decide if the paper is worthy of going out for peer review. Then it goes out for peer review, usually to two or three reviewers. They provide extensive comments on the paper and they feed that back to the editors. The editors give it back to the authors. And the authors are expected to revise their paper in light of both the editorial comments and the peer-review comments.

When all of those comments have been satisfactorily addressed, then it's deemed ready for publication in a peer-reviewed journal.

But of course that process has flaws as well, as the Surgisphere case showed. A team of researchers put forward an analysis of data that made it through both the New England Journal of Medicine and The Lancet, but when the community of scientists read it (this is where external monitoring and criticism from the scientific community plays a very important role) they said this can't possibly be legitimate.

There were several flaws with the study, notwithstanding the fact that nobody had ever heard of some of the investigators or the data source.

The other important role here was the role of the media. It was a Guardian investigation into Surgisphere that raised serious concerns about the legitimacy of the enterprise.

So, there are external checks and balances throughout the peer review mechanism and they, for the most part, work and they work well. But it's hard to detect fraud if it's done extremely well.

With preprints, on the other hand, you have no idea. You or I could sit down and create a table of data and invent a process by which we had a study and post it [to a preprint server] and nobody would be the wiser.

Given that preprint studies have these shortcomings, how should the media report on them?

That is a great question. I was reading something about the different time frames in which different interests operate. The media likes things today, tomorrow or within a week at most. Politicians like days or even hours, usually. But science sometimes takes years, decades or in some cases centuries to come to a consensus on what is a credible result.

So, we need to ask ourselves why we're so consumed with getting the latest information about the coronavirus. And the media kind of perpetuates a view of science that I think is highly problematic and probably false, so people are disappointed when science doesn't provide the kind of "certitude" that they think science is intended to provide.

I think that's what Vivek Goel's most recent podcast very nicely highlights. Science is not about the production of certain truths. It's about bounding uncertainty and limiting uncertainty; not the production of certainty.

We need a much broader discourse and engagement on how we think about science and the role that science plays in the public sphere.

As for the public, we have probably the most highly educated populace in the history of our species. But we still need to reinforce critical appraisal skills and critical thinking. With respect to the media, there have been a lot of courses that train journalists on how to properly interpret statistics and clinical studies. I think it's important that those efforts continue. I think we also need scientists to provide really good, easily accessible guides on how to understand the scientific process and how it's functioning within the COVID-19 response.

Can you give me some examples of how our understanding of COVID-19 has changed just since this spring?

There are several.

When you have a completely uncertain phenomenon, there are many things that are not known. With infectious diseases we work within an established framework, so there are some critical first steps such as determining the mode of transmission, the incubation period, the duration of symptoms and so on. These are questions we know how to answer with reasonable precision. So it is important to recognize that uncertainty is not the same as ignorance. Our knowledge of COVID-19 was built on what we had learned about infectious diseases in the past. In this sense, it was not a mystery.

There are things that we now believe we have a fairly good grip on that we didn't have at the beginning of the pandemic, when there was a lot of uncertainty. You might recall there was a big debate about whether it was spread from person to person. That was back in January when, quite clearly, it was determined that it was spread from person to person and that it probably was respiratory droplet transmission. But there is still an ongoing discussion about whether it's airborne or droplet borne, and about how much risk surfaces or fomites present.

Everybody would come in and take Lysol wipes to everything they bought at the grocery store, right?

There's still a lot of uncertainty, but we do have a fair amount of solid, trustworthy, empirical claims concerning incubation period, mode of transmission, duration of viral shedding.

But the next level of uncertainty is around immunity. If you become ill and recover, are you then immune?

Another area of uncertainty that just opened up in the last little while is re-infection. First people were saying it's not likely you can be re-infected with COVID-19. Now we've got a couple of very credible, well-documented case reports of individuals being re-infected after recovering from a previous infection.

Those are all good examples of how our understanding is changing rapidly. But we've got some fairly solid, actionable knowledge that we're using to guide policy.

At what point can we say scientists have reached a consensus?

That's a great question. There are two thousand years of scientific inquiry and philosophy of science exactly on that question. When do experiments end? How is it that we're reassured that we have sufficient evidence and that we don't need to seek more?

There's an approach called evidence-based medicine that more or less says if you've got a large number of randomized trials, and you put those into a meta-analysis, and that meta-analysis has tight confidence limits, and it looks like the acquisition of more evidence isn't going to change that estimate, then you might have consensus.

But that doesn't end the story.

You might have very tight confidence limits around a treatment effect, but that doesn't mean you have the best possible therapy. There's still research that needs to go on.

Take, for example, the use of a steroid, dexamethasone, in ventilator-dependent people with COVID-19 disease. Well-designed randomized trials have shown that it produces about a 30 per cent reduction in mortality. Probably randomizing more people won't change that estimate. But 30 per cent means that 70 per cent still don't get the benefit.

So even if you've got consensus on a finding, that finding may still need improvement. So, the story isn't over by any means just because you have some credible evidence.

Unlike hydroxychloroquine, which probably harms people, dexamethasone benefits a particular subset of patients with severe COVID-19 disease, but it doesn't benefit everybody.

There is no treatment that 100 per cent of people respond to and get better.

But we want to get as close to that as possible. So even having good evidence isn't good enough, so to speak. It's not the end of the story.

Listen to Vivek Goel's podcast on understanding pandemic science
