Bold blue and white capital letters tell us that this woman with thick, black-rimmed glasses and a bleach-blonde plait is a nurse. She speaks in a serious, urgent tone. “Women and men who come into contact with people who have had this ‘vaxx’ … have suddenly become covered in strange bruises,” she says, looking directly into the camera lens, before spending three further minutes falsely claiming that COVID-19 vaccine particles “shed” to others, making them ill or infertile, and directing viewers to find more information on her website.
Her website’s blog is littered with further misleading health content, from claims that the coronavirus pandemic is a “scam” to the far-right QAnon conspiracy theory. As with many social media hoaxes, the clip initially appeared on the digital fringes before bursting onto mainstream platforms, including the Meta-owned social media platform Facebook, where it has been shared at least 650 times. The video is still live on the conspiracy theory video-sharing site BitChute, where it has racked up more than 44,300 views.
Social platforms have provided fertile ground for the seeding and propagation of extreme content, particularly over the course of the COVID-19 crisis. In response to the resulting “infodemic”, big tech companies have made commitments to clean up their sites. But largely left to self-regulate, experts say platforms are falling short of dismantling misinformation, instead prioritising business interests.
False information has boomed alongside coronavirus cases, as inaccuracies plug gaps left by medical unknowns and global lockdown regulations spur COVID-denialism. Misinformation (false information shared unintentionally) has reached new, harmful heights, while agents of disinformation (false information fabricated and spread deliberately) have seen their audiences surge.
For years, researchers have shown how far-right conspiracy theories radicalise people globally, sowing division and undermining democracy. Most recently, such online networks helped fuel the pro-Trump US Capitol riot in which five people died. While some have long warned of misinformation’s threat to public health, only recently has it emerged so visibly as a key, universal threat, with COVID-19 vaccines helping long-standing anti-vaccine movements find new messengers for their deceptions.
With unreliable news shown to travel faster than factual stories, the problem presents a major threat to democracy and public health. Misinformation sources attracted six times as many likes, shares and interactions on Facebook as trustworthy news outlets during the 2020 US elections. Misleading content has also diminished confidence in COVID-19 vaccines. “Unfortunately misinformation gets a lot of engagement because it’s sensational and it can be targeted at people who are likely to believe it,” says Matt Skibinski, general manager of NewsGuard, which tracks online misinformation and rates the credibility of publishers.
The problem plagues the entire social media ecosystem. Twitter has hosted hundreds of thousands of anti-vaccine tweets, while false COVID-19 claims have spread across at least 51,000 TikTok posts. YouTube has also been home to harmful misinformation throughout the pandemic, with one study finding a strong association between the platform and COVID-19 conspiracy theory content. Another found that misleading videos about the virus had been viewed more than 62 million times. And these are far from the only networks where falsehoods flourish.
Opaque algorithms have exacerbated misinformation online, deciding which posts to promote to users based on their most intimate interests and behaviours. Facebook has not only hosted inflammatory content but promoted QAnon groups and anti-vaccine propaganda pages to users. It is “the worst offender”, says Skibinski. “[Its] algorithm rewards that kind of content.”
But all algorithmically powered feeds contribute, says Aoife Gallagher, an analyst at the Institute for Strategic Dialogue (ISD), a non-profit researching global disinformation. That includes TikTok’s “For You” page and Instagram’s “Explore” page. YouTube’s recommendations have also been found to push extremist content and misinformation to users. According to Gallagher, “These algorithms, as far as we know, cannot tell the difference between reliable content and content that is full of falsehoods and will therefore continue to target the user with content that will keep their eyes on screens for as long as possible.”
The misinformation industry has exploded over the course of the pandemic. Analysis by NewsGuard and Comscore found that $2.6 billion in advertising revenue is sent annually to publishers of mis- and disinformation, including those pushing health claims, anti-vaccine myths, partisan propaganda and election falsehoods. NewsGuard also found more than 4,000 “top brands” advertising on websites containing COVID-19 misinformation. It’s a lucrative business model.
Tech platforms themselves have also benefited from disinformation agents and their content. In 2020, Facebook promised to stop users and companies profiting directly from misinformation about COVID-19 vaccinations. Despite this, a Bureau of Investigative Journalism investigation found 430 pages, including some that were verified, spreading false theories and claims about COVID-19 and vaccines while using Facebook’s money-making tools.
Many social networks’ revenues rely on scrolls and clicks, retained through carefully crafted algorithms built on user habits, which are in turn used to sell targeted advertising to brands. But those same algorithms also contribute to the amplification of misinformation.
Former Facebook employees have exposed how the platform makes money from misinformation and profits from amplifying lies. As one of them, Yaël Eisenstat — former Facebook elections integrity head and CIA officer — has put it, “[platform] business models have exploited our biases and weaknesses and abetted the growth of conspiracy-touting hate groups and outrage machines.”
According to a Wall Street Journal report, part of a series based on documents leaked by Facebook whistleblower Frances Haugen, chief executive Mark Zuckerberg resisted proposed fixes to a 2018 algorithm change when they threatened the business, leaving in place a system that rewarded sensationalism, outrage and misinformation with increased interactions and reshares. “[The research] shows that Facebook is more than aware that their platform promotes toxicity, lies and misinformation, but is unwilling to tackle this because of its effect on user engagement,” says Gallagher.
“Facebook lives and dies by its algorithm,” adds Skibinski.
Nevertheless, Silicon Valley’s tech monoliths have made some moves to combat the “infodemic”. Platform responses include using third-party fact-checking teams to vet posts and banning false claims about COVID-19 and vaccines outright. Some have allowed users to report posts lacking credibility. Meanwhile, several have partnered with health authorities, including the World Health Organization, to promote trustworthy information.
Content removal numbers are high: Facebook recently removed 20 million posts containing COVID-19 misinformation. Twitter has reportedly taken down at least 43,000 misleading posts about COVID-19 and suspended more than 1,500 accounts. And YouTube reported removing one million videos related to “dangerous coronavirus information”.
Despite the efforts, many platforms are falling short of effectively combating widespread misinformation. In February 2021, Facebook vowed to remove false or misleading claims about the coronavirus and vaccines. But at least 3,200 posts containing forbidden claims about COVID-19 vaccines received over 12,400 interactions before some were taken down. Meanwhile on Twitter, anti-mask messages were amplified by its “trending” algorithm in the face of policies against harmful COVID-19 misinformation.
“Platforms have been making empty promises for years to tackle this issue,” says Gallagher. “Their lack of meaningful action and unwillingness to really get to the root of these issues indicates that there is no business or financial incentive for them to do so.”
A lack of accessible, dependable data also makes it difficult to decipher the overall reach and impact of misinformation, while the grand take-down figures announced by social platforms are hard to put into context. In August 2021, Facebook launched a report on its most widely viewed content, framed as part of its transparency efforts. But experts, including former Facebook employee Brian Boland and media scholar Ethan Zuckerman, said it failed to deliver transparency and didn’t share enough data to support any meaningful conclusions.
Facebook has also shut down one research institute’s project and given external researchers access to a dataset that turned out to be seriously flawed, jeopardising years-long studies of how misinformation spreads on the site. More recently, the platform has walked back some features of its monitoring tool CrowdTangle, used by researchers to track misinformation, and has been criticised for unreliable data and guarded access.
“The data available to researchers is pretty abysmal across the board,” says Gallagher, “and makes it difficult to truly understand the scale of the spread of misinformation and the effect it’s having on people.” She notes TikTok provides no API access to researchers, making analysis of the platform “laborious” and the extent of the problem unclear.
To deal with disinformation during the pandemic, some policy-makers have stepped in. Among other initiatives, the White House teamed up with influencers to promote pro-vaccine posts. The UK government has pushed social media companies to promote reliable messages and focused its efforts to counter vaccine hesitancy on communities with historically lower levels of uptake. Others have turned to the law. In March 2021, Malaysia criminalised the creation, publication and dissemination of false news. In nearly 40 countries, including Ecuador and Argentina, people have been arrested for spreading COVID-19 falsehoods. But human rights groups have argued such legislation impedes freedom of expression.
All of this frames one of the most significant legislative debates of the decade: whether and how world leaders should regulate social media as a whole. The UK has already started, with a draft bill to tackle online harms. The European Commission's Code of Practice on Disinformation is also likely to morph into a co-regulation agreement. In the US, campaigners are pushing for a reevaluation of Section 230 of the Communications Decency Act, which has been interpreted to shield tech companies from liability for third-party content.
“The best scenario would be giving platforms a clear mandate that they’re on the hook if their platform causes harm,” says Skibinski. But there are difficult questions being asked about who would be in control. “It should definitely not be about governments deciding what is and what isn’t misinformation, or even platforms alone deciding that, because it puts a lot of power in their hands.” US politicians on all sides of the spectrum support Big Tech regulation. But managing large companies or an entire industry is complicated. “Those are powerful entities and there’s going to be pressure on politicians not to ruffle feathers,” Skibinski adds.
Research suggests even the toughest interventions to counteract misinformation can be ineffective at stopping its spread. When Twitter flagged President Donald Trump’s misleading tweets between November 2020 and January 2021, a Harvard Kennedy School peer-reviewed study found that his claims spread further and for longer elsewhere. Its authors highlighted the “importance of considering content moderation at the ecosystem level”.
Concentrating on removing individual pieces of content doesn’t tackle the issue’s root cause, says Gallagher. “Overall, this approach is not sustainable and acts like a plaster on a gushing wound.” Instead, she suggests platforms take action against those responsible for creating and spreading false information and provide greater transparency and access to reliable and robust data.
Disinformation was a problem long before the pandemic, but, riding on the social web’s current framework, it has reached new heights and is likely to persist long after COVID-19 recedes. “It’s great that Covid has shone a light on the problem,” says Skibinski. “But when we think about solving misinformation it’s not just about Covid. It’s a much broader problem that extends to a lot of areas of information.”
Removing the profile of a conspiracy theorist nurse and taking down webs of lies may be a small step towards securing an accurate picture of the world online. But a long-term, multidimensional fix may be the big leap. The question is, who is ready to make the jump?