
Why PR Teams Need To Be Prepared To Fight Fake News And Weaponised Misinformation

For the past decade, global organisations have emphasised building brand reputation, investing heavily in both time and money towards this endeavour. However, the threat of misinformation and disinformation can dismantle all of this work in a matter of days.

Misinformation and disinformation are amongst the biggest threats facing organisations today. In fact, 87% of business leaders agree that the spread of disinformation is one of the greatest reputational risks to businesses, and it is costing the global economy billions of dollars every year.

With disinformation and misinformation a significant problem – or at least a worry – for many brands, PR and communication professionals need to understand how fake news and weaponised misinformation campaigns spread, and develop a strategy to safeguard decades of goodwill.

Social media as fertile ground for amplifying fake news

The old aphorism often attributed to Churchill, “a lie can spread halfway around the world before the truth has got its boots on”, holds true in today’s digital landscape. Misinformation is not a novel phenomenon. The difference now is the massive amplifier known as social media. In 2018, researchers from the Massachusetts Institute of Technology received access to a full historical archive of tweets to track the diffusion of true and false news stories on Twitter between 2006 and 2017.

The study found that falsehoods spread “significantly farther, faster, deeper and more broadly” than true claims in all categories of information. False news stories were 70% more likely to be retweeted than true ones. On average, it took the truth six times as long as false stories to reach 1,500 people.

In today’s media landscape, what goes viral on social media tends to rear its ugly head again and again, and can make the rounds in mainstream media more than once. Earlier this year, NTUC FairPrice was accused of insensitivity towards the Muslim community after a picture of the grocer’s arabiki pork sausages circulated across social media.

Although the word arabiki is Japanese for ‘coarsely ground’, some netizens inferred it to be an attack on the Arab community. Notably, this complaint first surfaced in 2020 but resurfaced in 2023. As fresh complaints emerged, the communications team behind NTUC FairPrice proactively clarified the origin and meaning of the word, debunking the idea that it was racially insensitive.

Another local brand, Toast Box Singapore, grappled with a boycott campaign after a viral photo circulated on WhatsApp, accusing the chain of significantly increasing its food prices to take advantage of the GST hike. According to Toast Box, the side-by-side picture of its food prices was not a recent before-and-after GST comparison, as the photo alleged, but a comparison with prices from years before.

These recent examples show that brands are one WhatsApp message away from a crisis. This points to a real need for marketing, PR and communication professionals to pay attention to how their brand is being talked about outside the realm of mainstream media and to proactively monitor social media conversations – especially since these seemingly innocent misunderstandings are sometimes actually weaponised campaigns.

Disinformation-for-hire is a booming industry

Recent trends also reveal that disinformation is rarely the work of a lone actor. In fact, disinformation-for-hire is a booming shadow industry in which firms are paid to sow discord by spreading false information and manipulating content online. Much like how businesses can hire PR agencies and marketing firms to build up their reputation, threat actors can hire disinformation agencies to spread fake news, half-truths and everything in between.

Their tactics include creating batches of fake social media accounts to spread falsified information, or even setting up fake news and fact-checking websites that promote these ‘key messages’. Since 2018, more than 65 private companies in 48 different countries have emerged offering these services. One firm even promised to “use every tool and take every advantage available in order to change reality according to our client’s wishes.”

Such disinformation typically originates in the seedy underbelly of the dark web. Often it moves on to social media platforms, jumping from one to another before spreading via mainstream news. This makes media monitoring and social listening all the more essential. Communications teams should tap dark web crawlers or engage fake news monitoring services that provide real-time alerts when targeted disinformation is gaining traction and momentum.

The era of AI-enabled disinformation begins

Looking towards the near future, the threat of disinformation is rising as AI goes mainstream. Like any technological tool, AI can be put to beneficial or malicious use depending on the human behind it. While social media content creators cheered the arrival of ChatGPT for its ability to automate content ideation, so too did threat actors. With generative AI, bad actors can tap openly available AI platforms to produce a near-infinite amount of low-effort, low-cost content designed to misinform or deceive readers. For instance, the GPT-2 software can generate convincing fake news articles from just a summary sentence.

Deepfakes are another example of how AI can be used to create realistic but fake videos or images. While Kendrick Lamar famously used deepfakes to superimpose celebrity faces onto his body for a music video, the technology has vast potential for harm: discrediting known figures and influencing public opinion. As social media is loosely regulated when it comes to this form of content, communication professionals need to be aware of this trend and have strategies in place to identify and address any deepfakes that may circulate.

However, the antidote for weaponised AI is AI. Cybersecurity and threat intelligence experts have been building AI-powered solutions to combat the threat of disinformation. For instance, Blackbird.AI’s solutions use artificial intelligence and deep contextual insights to help brands identify emerging risks within narratives through toxic language, hate speech and bot behaviours. Communication teams should consider investing in risk intelligence to shore up their defences.

With these recent developments, it is unsurprising that the 2022 Asia-Pacific Communications Index found that crisis and issues management edged past corporate reputation as the top PR agency service called on by clients that year.

It is crucial for companies to prepare for a communications crisis rooted in misinformation and disinformation – including understanding who may target the company and why – and to have a strategy in their crisis toolkit for addressing misinformation. Brands need messaging, responses and safeguards in place so that when an attack happens, it can be swiftly nipped in the bud before negative stories get out of hand.

This information warfare demands the rise of a new generation of PR practitioners, armed with technological tools of their own, who are tasked not only with raising brand awareness and strengthening reputation – but also with monitoring and fighting misinformation.

Want to continue the conversation? Talk to us at hello@mutant.com.sg
