Disinformation: An old security threat made new

Beyond the threats of terror and cyber attacks, a new kind of threat has come to the fore: disinformation and influence campaigns to subvert societies.

As head of a centre on security issues in Singapore, I am often asked what the most serious threats facing Singapore are.

As time passes, I become increasingly reluctant to enumerate the most obvious ones: terror attacks and cyber hacks. These are capable of causing great damage, but recent developments should cause us to rethink what should lie within the scope of core national security issues.

It is increasingly evident that we should not neglect threats that fly under the radar - slow-burn issues that can be equally, if not more, pernicious.

A case in point: sophisticated disinformation campaigns and influence operations aimed at subverting the resilience of societies.


During last year's United States presidential election campaign, Facebook users in key constituencies were targeted with personalised messages or fake news that played on their existing biases. The originating information came, it seems, from companies that had developed models to translate social media data into personality profiles that could be used to predict, and then influence, user behaviour.

By correlating subjects' Facebook Likes, building profiles and harvesting data, some companies have claimed to be able to identify an individual's gender, sexuality, political beliefs and personality traits. The method also uses artificial intelligence to learn about the individual and to predict which kinds of advertisements are most likely to persuade him or her. Some argue, not without reason, that the Trump campaign was successful because it used micro-targeting and online persuasion efforts that were precisely tailored to the right people in the right places - especially those who were sitting on the fence, or who could be persuaded.
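To make the mechanism concrete, here is a deliberately simple sketch of how Likes can be turned into trait predictions. Everything in it - the page names, the toy data and the naive log-odds model - is invented for illustration, and is far cruder than the commercial systems described above.

```python
# Hypothetical sketch: inferring a binary trait from a user's page "Likes".
# All pages, users and labels below are invented for illustration.
from collections import defaultdict
import math

# Toy training data: (set of pages a user has Liked, observed trait 0/1)
training = [
    ({"pageA", "pageB"}, 1),
    ({"pageA", "pageC"}, 1),
    ({"pageC", "pageD"}, 0),
    ({"pageB", "pageD"}, 0),
    ({"pageA", "pageD"}, 1),
    ({"pageC"}, 0),
]

def train(data, smoothing=1.0):
    """Compute a per-page log-odds weight for the trait, with Laplace smoothing."""
    pos, neg, pages = defaultdict(float), defaultdict(float), set()
    for likes, label in data:
        for page in likes:
            pages.add(page)
            (pos if label == 1 else neg)[page] += 1.0
    n_pos = sum(1 for _, y in data if y == 1)
    n_neg = len(data) - n_pos
    weights = {}
    for page in pages:
        p_pos = (pos[page] + smoothing) / (n_pos + 2 * smoothing)
        p_neg = (neg[page] + smoothing) / (n_neg + 2 * smoothing)
        weights[page] = math.log(p_pos / p_neg)
    return weights

def score(weights, likes):
    """Sum of log-odds over the user's Likes; a positive score predicts the trait."""
    return sum(weights.get(page, 0.0) for page in likes)

weights = train(training)
prediction = score(weights, {"pageA", "pageB"})  # positive score leans toward the trait
```

A real profiling operation would use far richer features and models, but the principle is the same: each Like shifts the estimate, and enough Likes together yield a confident prediction that can then drive ad targeting.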

Some researchers think they have found fake Facebook groups almost entirely populated by bots. These fake groups, convincingly operated and orchestrated, leveraged existing filter bubbles and echo chambers, eventually attracting real fans. It is possible, as some researchers have posited, that many Trump fans were emboldened to declare their support for the candidate by the artificially created perception of a swell in support for him. In this way, some of these originally fake pages or groups filled with real people, and their "fake" aspects withered away.

There are indications, according to the writer, that those behind fake news are beginning to evolve their methods in ingenious ways - telling fewer lies and more truth, with the same objectives and possibly even more success, using slant, interpretation, or weasel words. ST PHOTO: KUA CHEE SIONG

A shadowy market exists in which these private-sector entities make their methods available to governments seeking to influence the course of affairs in other countries. It is theoretically possible that such methodologies could be used by other actors to probe for vulnerabilities within Singapore society, with influence efforts subsequently exploiting them.


The actual extent of online influence operations conducted by Russia to influence the 2016 US presidential election will be debated for years to come. What needs to be understood, however, is that these operations are simply one component of a suite of tools that state actors employ to influence domestic affairs in other countries.

At the S. Rajaratnam School of International Studies (RSIS), our interactions with experts in this field suggest that these tools can work synergistically or independently. They include conventional intelligence operations, cyber attacks, disinformation operations, leveraging political allies, agents of influence and non-governmental organisations (NGOs) in the targeted country, support for local extremists, fringe groups and disenfranchised ethnic minorities, and economic operations with political goals.

They can also pick politically sensitive times to do so - witness the alleged subversion of the democratic - and specifically electoral - process in the US and France.

Whether or not an influence campaign is state-sponsored, the nature of the disinformation is not static.

Consider Germany, which has faced a spate of fake news on a range of issues, from asylum seekers to information in support of an anti-Islamist agenda.

Even as Germany has clamped down on fake news through legislation, fact-checking websites and NGOs that put out correctives, the actors behind fake news appear to have evolved their methods.

There are indications that those behind fake news are beginning to evolve their methods in ingenious ways - telling fewer lies and more truth, with the same objectives and possibly even more success, using slant, interpretation, or weasel words. This has implications for how we should think about legal regimes and legislation - any legislation introduced in Singapore would have to be future-proofed.


There is of course no silver bullet.

The methods employed against the other threats, cyber attacks and radicalisation, cannot simply be transplanted to disinformation. Against cyber threats, advanced defences can sometimes sandbox a virus, observe it and then inoculate against it. We cannot do this for disinformation, which unfolds in real time within the fabric of our society.

The countering violent extremism (CVE) experience has also taught us that the source of the counter-messaging matters. Sometimes official sources are needed - trusted facts.

But equally, this sometimes triggers the "backfire" effect - where a person confronted with a rational, fact-based rebuttal becomes even more entrenched in his or her original beliefs. The same applies more generally to the fake news phenomenon.

What might work better in some cases - both with CVE and with fake news - is not official messaging. In both cases, the extremist or subversive messaging can out-evolve the official counter-narrative. What is needed just as much, if not more, are credible voices.

Counter-terror experts remark that while there have been cases where individuals have been radicalised wholly online, without human contact, there do not (yet) appear to have been instances where individuals have been deradicalised solely online through counter-narratives.

Face-to-face contact matters. Extrapolating from this, it would be useful to consider the extent to which any initiative to combat fake news or disinformation can be successful if only the media and online platforms (or legislation, for that matter) are used in the rebuttal.

It could instead be argued that critical discourse in the real world (and not simply the increasingly touted critical thinking) will be required in order to extract individuals from cognitive bubbles.


RSIS' contacts - particularly those who face the subversion threat with larger powers at their doorstep - have pointed to the importance of governments having open, frank discussions with the people about subversion. People thereby become attuned, but not paranoid.

Singapore saw the Our Singapore Conversation (OSC) initiative (2012-2013). The OSC was born out of the need for the Government to discuss issues and find out from the ground the views of the people - their hopes and fears. Individuals involved in OSC sessions found it a useful thought exercise to come up against the sometimes very different views of their fellow citizens, helping all concerned realise that their own worldview, however rational it seemed to them, was not the only one.

We have never had a Singapore Security Conversation. Shouldn't we?

Two suggestions come to mind.

One would be using SGSecure, which has at its core national resilience, to talk openly about disinformation and subversion, and not simply terrorism.

The second would be identifying grassroots actors and trusted authorities, and involving them heavily in the process. The Government need not and should not completely vacate this space, but one wonders if taking some of the agency and placing it in the hands of people seen to be impartial arbiters (NGOs, even the private sector - because they too should have a role) may have some additional inoculative effect.

Researchers from the Baltic states - which have become used to disinformation - point to the importance of investing behind the scenes in groups of people and tackling disinformation within the community.

In Europe, some of the key advocacy has been done by think-tanks.

Some of their activities include publicly challenging supporters of Russian-sponsored disinformation, disclosing the substance and vehicles of disinformation campaigns, and systematically building social resilience.

Researchers from these states with whom I have interacted say that building trust is key. What these states have, and what we have too, is a high degree of trust and loyalty from citizens. Because of this underlying trust, their citizens are less disposed to believe fake news.

Efforts against fake news and disinformation, then, must go hand in hand with ongoing efforts at shoring up resilience and a national consensus.

In February this year, the Russian defence minister acknowledged the existence of a corps of Russian information troops, declaring at the same time that propaganda needs to be "clever, smart and efficient".

We can safely assume that many states are watching the Russian playbook with great interest - and realising that these means are less bloody and cheaper than warfare, while sometimes achieving the desired result.

It is not just the big powers that have the means. Unlike traditional weapons, the technological and psychological tools to carry out these operations are available to nations of all sizes with the requisite technological capability and imagination.

More study is needed too on the particular effect that organised disinformation campaigns can have on states that are polyglot and multiracial, and which are also data rich - states that aim to be smart nations. These would be tempting targets.

If all concerned - think-tanks, governments and the public - study these and have rational, open conversations about the threat, then it is possible that we will get somewhere.

Some of the best disinformation campaigns proceed long before they are noticed, and long before any kinetic action or visible damage is registered. Counteraction must likewise begin early. We have to make a start.

•The writer is head of the Centre of Excellence for National Security and oversees future issues and technology research at S. Rajaratnam School of International Studies, Nanyang Technological University.

A version of this article appeared in the print edition of The Straits Times on August 08, 2017, with the headline 'A slow-burn menace, a real security risk'.