By Olavo Amaral
How should we deal with data that is too good to be true?
The chorus touting early treatment of Covid-19 took a hit two weeks ago with news that an Egyptian clinical trial purporting to demonstrate ivermectin’s efficacy against the disease had been pulled from the Research Square preprint platform. The study claimed a 90% reduction in mortality among patients with severe disease compared to a group that had received hydroxychloroquine.
Many people hadn’t taken the work seriously from the start, whether because it came from obscure researchers, because it was written in poor English, or because it produced a result too spectacular to be true. None of this, however, prevented it from being included in several meta-analyses defending ivermectin, where it accounted for much of the positive effect they observed.
The article was only withdrawn from circulation after English journalist Jack Lawrence decided to investigate it upon noticing signs of plagiarism. One version of the article included a link to the original data, paywalled and password-protected. On a hunch, Lawrence tried the ever-creative “1234” and watched the Excel spreadsheet with the raw data materialize on his screen.
From there, the work of “data cop” Nick Brown revealed not only inconsistencies but strong evidence of fraud: several patients appeared to be clones created by copy-paste, with some data altered to disguise the duplication. As a result, the platform removed the article, and the authors have yet to respond.
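The kind of check Brown ran can be illustrated in a few lines: flag records whose clinical fields match another record exactly even though the patient IDs differ. The sketch below uses pandas on a made-up toy table; the column names and values are hypothetical, not those of the actual trial spreadsheet.

```python
import pandas as pd

# Hypothetical toy dataset -- illustrative columns, not the real trial data.
records = pd.DataFrame({
    "patient_id": [101, 102, 103, 104, 105],
    "age":        [54, 61, 54, 61, 47],
    "sex":        ["F", "M", "F", "M", "F"],
    "d_dimer":    [1.8, 0.9, 1.8, 0.9, 2.4],
    "outcome":    ["died", "recovered", "died", "recovered", "recovered"],
})

# Mark every row whose clinical fields (everything except the ID)
# coincide exactly with another row's -- a crude "clone" detector.
clinical = records.drop(columns=["patient_id"])
records["suspect_clone"] = clinical.duplicated(keep=False)

print(records[records["suspect_clone"]]["patient_id"].tolist())
```

Real forensics is subtler, since fraudsters may tweak a field or two to disguise the copying, but even this naive pass would have caught the crudest duplicates.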
The story is illustrative for analyzing another case that has been gaining space in the Brazilian media. In March, a team of researchers led by endocrinologist Flávio Cadegiani reported at a press conference spectacular results for proxalutamide, an antiandrogen drug originally developed to treat prostate cancer, which was said to have produced a 92% reduction in mortality in patients hospitalized with Covid-19.
The degree of success soon caught the attention of critics, who called it unlikely. They pointed to the delay in publishing the data (which only appeared as a preprint more than three months later), the high mortality in the placebo group, the meteoric recruitment of more than 600 patients in less than a month, and evidence of deviations from the protocol approved by the ethics committee.
Part of the skepticism, however, stems from factors unrelated to the data. Since the beginning of the pandemic, Cadegiani had already claimed positive results for early treatment with hydroxychloroquine, ivermectin, nitazoxanide and dutasteride, in addition to proxalutamide itself in outpatients – a winning streak that is unlikely, to say the least. His collaborator Ricardo Zimerman was a guest of the government party at the parliamentary inquiry (CPI) into the pandemic and became a digital influencer on social networks and right-wing media, appearing on channels such as Osmar Terra’s. It is also worth noting that the president’s repeated mentions of proxalutamide hardly serve as an academic seal of approval.
Are these good reasons, however, to turn a blind eye to a study that claims over 90% efficacy against a disease that causes millions of deaths? The Royal Society’s motto, after all, is “nullius in verba” (“on no one’s word”): scientific data should matter more than who presents it.
Judging by the reception of the article, however, impersonality is at a low point. In an article in the journal Science, cardiologist and digital medicine guru Eric Topol says the results are “too good to be true” and that “there are almost no interventions in the history of medicine with benefits of this magnitude”. The same article mentions that the New England Journal of Medicine rejected the paper on the grounds that “the results are unexpectedly good”, which would require reviewing the primary data – something the journal says it lacks the capacity to do.
Having had its reputation bruised in the Surgisphere scandal, it’s understandable that the New England Journal doesn’t want to take chances with articles that raise suspicions. Still, the heuristic behind the decision seems unjustifiable – as does the claim that the world’s largest medical journal is unable to check the original study data, which Cadegiani claims to have offered the editor.
That said, the offer doesn’t seem to extend to everyone. Even though the preprint states that the data are available upon justified request, my request to receive them was met with the response that “the authors prefer not to share it at this time” – a false availability that echoes the password-protected link to the Egyptian file. When questioned on Twitter, Cadegiani justified the denial by citing “inequity in the treatment of different studies”, suggesting that the fact that I had not requested data from other works called my impartiality into question.
Incredibly, refusing to provide a study’s original data is common practice in academic science. When the data cannot be accessed, belief in an article’s claims rests almost entirely on the authors’ word. “Nullius in verba” may be the ideal, but in practice, as a viral commercial from a few decades ago put it, “la garantía soy yo”. Which makes the speaker’s reputation count, and a lot, in deciding what to believe.
With that, the debate ends up migrating to investigative journalism – or to social media, where virulent “ad hominem” arguments from both sides try to resolve an unresolvable issue by attacking the reputations of authors and critics. And as with any topic, each group ends up finding the truth that suits it, deepening the polarization between doctors and lay people.
The first step toward solving the problem is obvious – anonymized data from a study should be accessible to anyone who wants to analyze them. Although such data are routinely requested by regulatory agencies, and most articles claim they can be obtained, they are rarely actually available.
Even with open data, however, fraud more sophisticated than the crude copy-paste of the Egyptian article can be difficult to detect. We therefore need to evolve toward auditing systems that allow checking whether what is written in an article reflects reality. In a world where millions of secret votes are counted in hours, it shouldn’t be hard to check whether the people who took a drug in a study are alive or dead. Strangely, however, this does not seem to be a priority in academia, which remains satisfied with a trust-based system that ends up sowing discord.
Eventually we will know whether Cadegiani and his colleagues’ claims are true – proxalutamide was approved by Anvisa for new trials, and the Paraguayan government granted an emergency authorization for the drug’s use. Until then, however, we will spend several months harming thousands of people, either by depriving them of an effective treatment or by selling the false hopes and side effects of an ineffective drug and its commercial analogues, which are already being prescribed off-label in Brazil.
Both alternatives are unacceptable and attest to the failure of academic science to exercise the most basic degree of quality control – knowing whether published data are true. That should be a right for anyone, requiring no passwords, investigations or pleas to the authors.
Olavo Amaral is a professor at the Leopoldo de Meis Institute of Medical Biochemistry at UFRJ and coordinator of the Brazilian Reproducibility Initiative.
Subscribe to Serrapilheira’s newsletter to follow more news from the institute and from the Ciência Fundamental blog.