[Image: A couple look astonished at fake news on their mobile phones]
Comment

Fighting the Covid-19 ‘infodemic’ isn’t just a job for experts

Image credit: Dreamstime

Tackling the tidal wave of online misinformation around the current pandemic will require collaboration between government, business and the public.

When we’re in a crisis and we’re afraid, we crave information. We hang on to every word of our government officials in their daily press briefings, check our social media constantly and pore over the daily stats about infection rates and deaths. But with so much still unknown about Covid-19, both misinformation and disinformation are thriving, creating what the World Health Organization’s director general Tedros Adhanom Ghebreyesus calls an ‘infodemic’. At a time when trust in our institutions, the media and political leaders has never been so important, we need to tackle this challenge head-on by rallying our innovation community and acting as an ecosystem.

State-sponsored disinformation campaigns are changing the very nature of war. At least 70 nation states have some form of disinformation programme, which often sees fake news, competing narratives or deepfakes being disseminated across multiple platforms by anonymous bots. Information warfare in itself isn’t new, but the pervasiveness of technology within the core fabric of our society means that misinformation and disinformation can reach more people and have a much higher impact.

As the issue is moving up the innovation agenda for the defence sector, the private sector is starting to grapple with the same challenge. Commercial entities are borrowing the same tactics to launch technology-led smear campaigns to gain a competitive advantage.

These campaigns, whoever is waging them, erode trust and pose very real threats to society by undermining elections or attacking an organisation’s (or individual’s) reputation. Sometimes, they’re simply designed to create an atmosphere of confusion and instability where people distrust everything and everyone.

And we’ve seen these sorts of tactics being used in light of Covid-19: a report from the Commons Foreign Affairs Committee warned that disinformation campaigns by China in the early days of the outbreak have cost lives, while pro-Kremlin media have been spreading disinformation about the virus to stoke fear in the West.

We’re also seeing a huge amount of dangerous misinformation (false or partly true information that’s spread accidentally), from advice about killing the virus with a hairdryer to worried citizens destroying 5G infrastructure because of false links to the transmission of Covid-19. Misinformation spreads rapidly thanks to social media and our 24-hour news cycle, but in a time of national crisis the implications of somebody not following official advice could be deadly.

The difficulty lies in what to do about this challenge, and where the responsibility sits. The UK government has set up specialist units to “combat false and misleading narratives about coronavirus, ensuring the public has the right information to protect themselves and save lives” – it’s dealing with about 70 cases a week. But when it comes to determining what’s fake and what’s real, do we really want a Singapore-style approach, which some critics say could deter people from speaking freely online? Most of us live in liberal democracies, which means having the state or the media as the arbiter of truth is politically and ethically complicated.

Meanwhile, social media platforms, which have largely preferred to stay neutral, have shown more willingness to take action in light of Covid-19. Facebook-owned messaging app WhatsApp says it will put a much stricter limit on message forwarding to slow the spread of fake news, while Twitter will remove tweets that could cause harm by spreading dangerous misinformation related to the virus.

There’s clearly been a realisation that we’re in different territory and allowing content to be shared without oversight or intervention can have grave consequences. Mark Zuckerberg could never have imagined that the social network he was building in his college dorm room could wield enough geopolitical power to topple a president, but there’s no denying the reach that these platforms have now. This is an opportunity for Big Tech and the sector as a whole to be part of the solution by working with industry and government to take a more imaginative and proactive stance and help slow the flow of disinformation.

We’re fast entering a new paradigm for technology where advances in AI, 5G and quantum computing will hugely impact the way we live. Now’s the time for the tech sector, industry, society and governments to work as a collective to write the new ethical, regulatory and technical frameworks that will determine what the internet looks like – and tackling our current infodemic has to be top of the agenda.

We also need to call on the cyber-security community to offer up solutions and recognise disinformation as an emerging challenge. There will no doubt be plenty of security technologies already in existence that could be re-applied to address the challenge of disinformation – approaches that use AI, algorithmic pattern recognition, automation and the orchestration of complex tools, for example, could flag suspicious data and untrusted sources. We need more innovation in this space.
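To make the idea concrete, here is a deliberately simple sketch of the kind of rule-based flagging such tools might start from. Everything in it – the field names, the thresholds, the allow-list of trusted domains – is an illustrative assumption for this article, not the logic of any real platform or product; production systems would layer learned models on top of signals like these.

```python
# Toy sketch of heuristic flagging of suspicious posts.
# All thresholds, field names and the trusted-domain list below are
# illustrative assumptions, not any real platform's logic.
from dataclasses import dataclass

TRUSTED_DOMAINS = {"who.int", "gov.uk", "nhs.uk"}  # hypothetical allow-list


@dataclass
class Post:
    source_domain: str    # domain the shared link points to
    forward_count: int    # how many times the message has been forwarded
    account_age_days: int # age of the posting account


def suspicion_score(post: Post) -> int:
    """Accumulate simple heuristic signals; higher means more suspicious."""
    score = 0
    if post.source_domain not in TRUSTED_DOMAINS:
        score += 1   # unrecognised source
    if post.forward_count > 100:
        score += 2   # viral forwarding pattern, as WhatsApp's limits target
    if post.account_age_days < 30:
        score += 1   # newly created account, a common bot trait
    return score


def flag_for_review(post: Post, threshold: int = 3) -> bool:
    """Queue a post for human review rather than removing it outright."""
    return suspicion_score(post) >= threshold
```

Note that the sketch only flags content for review; deciding what is actually true remains the politically and ethically complicated part discussed above.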

But this isn’t just a technology issue that requires a tech-centric solution – it’s also about educating people so we all understand the risks involved when we encounter information online, and the real-world consequences hitting the share button can have. The internet is accessible to everyone in this country, as it should be, but we have a collective responsibility to be more thoughtful about the way we use it. We need to empower individuals to do this by equipping them to spot fake news themselves.

This matters now more than ever because Western democracies like the UK and US are even more susceptible to trust-busting information wars, and there’s evidence that suggests that trust is already being threatened. A recent poll showed that only 36 per cent of the British population trusted Covid-19 advice given by Prime Minister Boris Johnson. We don’t know to what extent this is down to misinformation and disinformation, but with malicious actors actively amplifying sources of conflicting guidance, you can see the impact this can have – both on day-to-day compliance with lockdown laws and on democracy in the longer term.

The internet is a vital lifeline to individuals craving connection and our crisis-struck economy right now. So let’s act as an ecosystem to shape its future, build in trust and weed out some of its worst aspects.

Saj Huq is programme director for LORCA, a UK cyber-security initiative delivered by Plexal and backed by the Department for Culture, Media & Sport.
