Mind Games – How hacking perceptions is becoming a weapon of choice for online warfare

NOTE: This piece was originally published in the E-Zine HVCK. You can see that original version here.

There was a time when I’d open my phone first thing in the morning, not to read messages from family or work, or to do my morning meditation, but to read the Twitter feeds of a middle-aged man called Marco and a Sikh woman called Nupur Kaur.

At least, that’s what ‘they’ pretended to be, and despite the interactions they had on Twitter with what appeared to be real users, I knew these accounts were far from a real Marco or a real Nupur.

In fact, I knew they were run by groups of people, some from marketing firms, others likely from an organisation that specialises in running accounts to manipulate and deceive.

Their friends, their posts, memes and videos were all part of a hidden agenda to wage information warfare campaigns against civilians and hack perceptions using covert propaganda tactics.

You see, Marco and Nupur were not real people. They were what I like to refer to as sock puppets. We used to make these in school as kids, where we’d put a sock over our hand, draw eyes on it, and pretend to be something different for a few brief seconds. That’s what Marco and Nupur were, only this time we are not able to see the person holding or controlling the puppet.

What I knew back then, and what we definitely know now, is that those sock puppet accounts were part of well-planned campaigns with set objectives, actions and standard operating procedures all to shift our perceptions on issues where someone has an agenda.

Marco was part of a marketing firm run out of Jakarta, Indonesia. The campaign’s purpose was to drown out and discredit any human rights issues happening in West Papua at the time. This was done through a two-pronged approach: using automation tools to repeatedly post promotional content about the Indonesian Government’s support for West Papua through videos and websites, and discrediting activists who spoke out against the government’s role in the region.

Caption: Marco267 on Twitter and the other places his image was used. 

You might be wondering how I know Marco was created by a Jakarta-based marketing firm. Well, he and the several hundred other accounts that were part of that campaign were created by individuals with pretty poor operational security skills.

Sure, they could run an effective social media network on an automated script (a bot network), but they forgot to leave their phone numbers off the registration records of the hundreds of propaganda and fake news sites they were promoting, and they were all connected on LinkedIn.
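For those curious what pulling on that kind of thread looks like in practice, below is a minimal sketch of the registration pivot: extract phone numbers from WHOIS records and group domains that share one. It assumes a Unix-style whois client is installed, and the domain names are placeholders rather than the real sites from the campaign.

```python
# Minimal sketch: group suspect domains by shared WHOIS phone numbers,
# a common operational security slip in influence campaigns.
# Assumes the standard `whois` CLI is available; domains are placeholders.
import re
import subprocess
from collections import defaultdict

SUSPECT_DOMAINS = ["example-news-portal.com", "example-papua-updates.com"]

def whois_phones(domain: str) -> set:
    """Run `whois` and pull out anything that looks like a phone field."""
    raw = subprocess.run(["whois", domain],
                         capture_output=True, text=True).stdout
    return set(re.findall(r"(?i)phone:\s*(\+?[\d.\-]{7,})", raw))

# Invert the mapping: phone number -> domains registered with it.
pivot = defaultdict(set)
for domain in SUSPECT_DOMAINS:
    for phone in whois_phones(domain):
        pivot[phone].add(domain)

for phone, domains in pivot.items():
    if len(domains) > 1:
        print(f"{phone} registered: {', '.join(sorted(domains))}")
```

Nothing clever, but when a few hundred ‘independent’ news sites resolve to the same handful of registrant details, you have a network.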

It is worth mentioning that at the height of this network’s activity online, the on-ground situation in West Papua was obscured. Severe internet restrictions had been imposed on the region amidst a crackdown by Indonesian security forces on pro-independence activists, and foreign journalists and humanitarian agencies had been banned from entering as well.

After we exposed the actors behind this bot network in a Bellingcat report, Twitter, Facebook and Google announced takedowns of accounts, pages and channels. Facebook found that the accounts in its takedown targeting West Papua had spent about $300,000 on advertising.

That’s Marco’s story; now over to Nupur.

Nupur was part of a network of very active accounts who called themselves ‘Real Sikhs’. Their purpose was to discredit the Sikh independence movement around the world and promote Indian Government values, as well as doing a bit of PR for the Indian Army. They did this by pretending to be Sikh influencers, using images of celebrities in India and repeatedly posting about what it means ‘to be a real Sikh’ – sadly, there were many who fell for these so-called influencers.

Much like the West Papua case, this was conducted at a time when internet and media censorship were at their height and tensions were running high during the 2020-2021 Farmer Protests in India.

Caption: Nupur and some of the other self-labelled “RealSikhs” fake accounts.

The investigations into the West Papua and ‘Real Sikh’ networks resulted in widely covered BBC collaborations, with the original reports and methodologies published by Bellingcat (for the West Papua network) and the Centre for Information Resilience (for the RealSikh network).

These are just two of the many campaigns I’ve researched and investigated, working with journalists and civil society groups to tackle them and raise awareness of how these networks attempt to control the information space on international platforms.

These influence operations are not isolated cases, and they are on the rise, especially among authoritarian states. Much as on-ground military operations focus on cutting supply lines and gaining superiority on the ground, these examples of information warfare are the online pursuit of information dominance once a state has restricted or censored freedom of speech, media reporting and access to the internet.

These campaigns are waged by both state and non-state actors, and more players are entering and advancing in the field in what appears to be a digital information arms race.

But before we fall deeper down the information warfare rabbit hole, let’s zoom out a bit and look at some basics. First, what do the terms sock puppet, campaign, influence operation and information warfare actually mean?

It should be noted that much of the research in this field has been written by very different stakeholders, which results in contrasting terminology. For example, military actors provide definitions that differ from those of civil society groups, NGOs and social media platforms.

However, put simply, a campaign is the mission of the network attempting to hack perceptions and gain an information advantage over a subject. 

RAND describes influence operations as the “collection of tactical information about an adversary as well as the dissemination of propaganda in pursuit of a competitive advantage over an opponent”. Information warfare is similarly described by NATO as “an operation conducted in order to gain an information advantage over the opponent”.

In practical terms, not all operations are carefully planned out, and they’re not all waged by governments or big marketing firms; there are also influence operations waged by terrorist-listed organisations, smaller marketing firms, scammers, hacking groups and more. These campaigns can play out in a number of ways, for example:

  • A campaign to get you to vote for a certain individual
  • A campaign to convince you that a genuine human rights issue is false
  • A campaign to make you think a person you are talking to online is real
  • A campaign to get you to buy a product or put money into an investment that might be a scam
  • Or a campaign to enrage you and many others, causing social chaos and division, as we will see in the example from Russia further down in this piece.

What is my involvement in this world? Well, for a job, I use digital open source intelligence techniques to hold authoritarian states to account, and to dig into where hostile states and bad actors target communities. This might sound thrilling, but in practice it doesn’t look that exciting when I spend hours sitting in front of a computer. I’ve often met documentary makers and journalists who have asked if they can ‘see me at work’, thinking it’s some Jason Bourne thing, but the job is far from that.

The majority of my time in this job is spent squinting at a computer screen looking at a satellite image, a street sign in a reflection of a window, or just endlessly scrolling through news feeds in the hope that I come across that pin in the stack of pins. My screen is often a collage of the worst content on the internet, mixed with protests, cats on Roombas, a village on fire and people dancing on the street. 

When I emerge from the depths of my digital journeys to chat to folks about how my weekend was, my eyes are strained red and I’m spaced out from just absorbing the world’s worst videos. This reality just doesn’t make for gripping television.

But sometimes, just sometimes, amidst that stack of grim content, we find a loose thread, and we pull that thread until we end up finding things we shouldn’t be finding – that’s the itch that feels good to scratch.

So now that you know a bit more about this world, you can start to think about how important accounts like Marco and Nupur, and their thousands of copies, are, and how they can hack the perceptions of vulnerable audiences who don’t see them for what they really are.

Caption: A network visualisation of the “RealSikhs” network on Twitter, and its supporting accounts that retweeted and amplified the content for further reach.
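For readers who want to try mapping a network like this themselves, a visualisation such as the one above can start from something as simple as a directed graph of who retweets whom. Here is a rough sketch using the networkx library; the account names are made up and the data layout will depend on how you collected the posts.

```python
# Sketch: build a retweet/amplification graph and surface the accounts the
# network exists to boost. Account names and fields are illustrative.
import networkx as nx

posts = [
    {"author": "amplifier_01", "retweet_of": "influencer_main"},
    {"author": "amplifier_02", "retweet_of": "influencer_main"},
    {"author": "amplifier_01", "retweet_of": "influencer_alt"},
]

G = nx.DiGraph()
for post in posts:
    if post.get("retweet_of"):
        # Edge points from the amplifier to the account being boosted.
        G.add_edge(post["author"], post["retweet_of"])

# High in-degree centrality = heavily amplified accounts, the likely
# 'influencers' the sock puppets were built to support.
ranked = sorted(nx.in_degree_centrality(G).items(),
                key=lambda kv: kv[1], reverse=True)
for account, score in ranked:
    print(account, round(score, 2))
```

From there, tools like Gephi can render the same graph visually, which is how clusters of amplifier accounts become obvious at a glance.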

The danger of influence operations and attempts to distort perceptions is that they undermine free thought and democratic systems. They can have a disastrous impact on people’s beliefs and behaviours, especially around political and social issues.

These operations, if successful, can be used to sway elections, incite violence, and undermine trust in institutions. On top of that, they can be used to promote products or services, influence policy decisions, or shape public perceptions of societal issues.

Caption: Facebook posts from the ‘RealSikhs’ network spamming several hashtags.

Let’s take a look at a much larger example of this type of influence. 

We met Marco and Nupur at the beginning of this piece. I did that deliberately, because they are two very basic influence networks with simple purposes: to promote the values of a given government, and to discredit and undermine the values of minority targets. A much bigger and more complex network of accounts built to target perceptions is the campaign run by Russian threat actors to interfere in US social issues and undermine America’s democratic values.

In this specific operation, thousands of accounts, pages and other digital fronts from the Russian troll farm the Internet Research Agency (IRA) were created to appear as authentic US-based personas, some with left-leaning views, others with right-leaning views.

The campaign was unique in that it didn’t just target one issue from one side, but rather the accounts would target both sides of the political spectrum. Accounts on the left would amplify content related to that specific group (note the images below) and the accounts on the right would amplify and post right-leaning content. 

Some accounts would post about the Black Lives Matter movement, while others would post about blue lives or white lives matter issues, in a bid to drive a social wedge between communities. It was also identified that the accounts attempted to organise rallies and demonstrations in the US.

All of this activity, amidst existing social debates, amplified and inflamed underlying tensions. It also allowed Russian trolls to capitalise on this digital tension by feeding their own narratives and divisive content into both sides of the social spectrum.

Caption: analysis of accounts posting on social issues in the US between 2015 and 2016. Left shows the entirety of left-leaning and right-leaning accounts. Right shows where Russian trolls were identified as being in the polarised groups. Source: University of Washington (Stewart, Arif, Starbird).

Outside of these networks, there were already quite serious, polarising issues in American society around inequality, police brutality, conservatism, wars and politics, so it should be highlighted that the campaign in no way undermines the reality of what was, and still is, happening in the US.

In this space, where societal emotions are running high, perceptions are susceptible to influence through the power of an image or a few words.

By now you are probably thinking this is a serious issue, and you’re right. 

So how do you combat this form of weaponised information? Solutions to this type of global threat are big and resource-demanding, and not always at the forefront of decisions for profit-driven social media platforms.

However, there are large communities working together to combat the problem head on, as well as working towards a more resilient future. 

Many in the digital community are working to mitigate and combat influence operations, cyber operations and information warfare, and to reduce the risks and impact associated with them. I’ve had the pleasure of working with many groups and communities from around the world who are creating inspiring solutions and strategies to counter influence operations.

These communities are crucial, especially given the advances in techniques for delivering better-camouflaged influence operations, the innovations in online information warfare, and the growing ability to push narratives and propaganda at scale to hack consumer perceptions, particularly from states with high expenditure on digitised propaganda.

Caption: Twitter accounts identified from a pro-China network that used StyleGAN AI-generated profile pictures, created by sites similar to thispersondoesnotexist.com. These accounts were identified in an investigation we conducted into pro-China networks, which was published by the Centre for Information Resilience.
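One quirk that helps investigators spot these GAN-generated faces: StyleGAN avatars of the thispersondoesnotexist variety tend to render the eyes at near-identical pixel positions. Average a folder of suspect profile pictures and you get a blurry face with an unnervingly sharp pair of eyes. A minimal sketch, assuming the avatars have already been downloaded locally (the folder name is a placeholder):

```python
# Sketch: average suspect avatars to reveal the fixed eye alignment that
# betrays StyleGAN-generated faces. The folder path is a placeholder.
from pathlib import Path

import numpy as np
from PIL import Image

SIZE = (256, 256)
avatars = [Image.open(p).convert("RGB").resize(SIZE)
           for p in Path("suspect_avatars").glob("*.jpg")]

# Stack into one array and take the per-pixel mean across all images.
stack = np.stack([np.asarray(img, dtype=np.float64) for img in avatars])
composite = Image.fromarray(stack.mean(axis=0).astype(np.uint8))
composite.save("composite.png")  # sharp, aligned eyes suggest GAN faces
```

If the composite shows crisp eyes floating in a smear of averaged faces, there is a good chance the batch came off a generator rather than a camera.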

So what can be done at the community and individual level to combat these threats?

One way to support the countering of these threats is to promote digital media literacy, so that vulnerable communities stay familiar with, and understand, online information systems. So whatever community, group or workplace you are in, encourage workshops on simple lessons, such as how to critically assess what you encounter online and how to spot fake news, propaganda and other forms of manipulation.

This simple strategy might be what stops someone from being scammed online, being phished, or unknowingly spreading content created by a hostile state.

This leads into my ambition in writing for HVCK Magazine, which is to promote the uptake of OSINT techniques in the community. I’ve been creating YouTube tutorials on simple techniques like reverse image search, Google dorking, geolocation and the use of satellite imagery, all in an effort to spread these skills far and wide. I have no doubt many of the readers here already champion these techniques, or are able to, but sharing the basics with your family, friends and colleagues works towards a stronger future generation.
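To give a flavour of how simple some of these techniques are, Google dorking is nothing more than combining search operators to narrow results. A few well-known combinations (the domain here is a placeholder):

```
site:example.com filetype:pdf       -> only PDFs hosted on one domain
intitle:"index of" backup           -> exposed directory listings
"exact phrase" -site:twitter.com    -> an exact phrase, excluding one platform
```

Nothing exotic, but chained together these operators can surface pages, files and reused text that a casual search would never find.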

Going back to accounts like Marco and Nupur, I was able to get into this field and investigate and expose their networks not because of any courses or training, but because those before me shared their time, skills and expertise – and it was through those people that I became pretty good at pulling on the threads of operations attempting to hack perceptions on behalf of hostile states and malign actors.

Having presented these sorts of things at conferences and workshops, I always like to provide lists for those who like action points. So here is a list of steps and prompts you can take, or share with others, to minimise the risk of falling prey to the actors who want to mislead you and influence you into taking an action:

1. Source check: who or what is posting the content? Is the report you are about to share from a trusted source? Have they been transparent about their own claims and sources? Is the account providing a review of a product you are about to buy trusted or verified? If in doubt, don’t act on the post.

2. Patterns: If you see a particular pattern, it could be a sign of coordinated activity as part of an operation. Examples might be a lot of the same posts promoting a hashtag, or identical reviews or comments under a post (see the sketch after this list for one way to spot this automatically).

3. Research: Before forming an opinion about something you see online, do some research. Look for other media sources that corroborate a claim, check what a photo or video really shows before sharing it, and before buying a product make sure there are other reviews from reputable sources.

4. Be skeptical: Don’t believe everything you read online. Use your critical thinking skills to evaluate the information and decide whether it’s credible. The person you have met on a dating site could be out to steal your banking details, the email you have received might not actually be from a real Nigerian prince, and the post you are about to share could have been written by someone from a very different country to the one you think they are from.

5. Report suspicious activity: If you suspect that an account, person, brand or page is engaging in suspicious activity and attempting to deceive or influence you, report it to the appropriate authorities, such as social media platforms or consumer protection agencies.
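As promised under point 2, here is a small sketch of what pattern-spotting can look like when automated: grouping near-identical post texts that land within a tight time window. The post data, account names and threshold are all illustrative.

```python
# Sketch: flag possible coordinated posting by grouping identical
# (normalised) texts that appear within a short window of each other.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    {"author": "acct_a", "text": "Proud to support this cause! #hashtag",
     "time": datetime(2021, 1, 5, 9, 0)},
    {"author": "acct_b", "text": "Proud to support this cause!  #hashtag",
     "time": datetime(2021, 1, 5, 9, 2)},
]

def normalise(text: str) -> str:
    """Lowercase and collapse whitespace so near-duplicates match."""
    return " ".join(text.lower().split())

groups = defaultdict(list)
for post in posts:
    groups[normalise(post["text"])].append(post)

WINDOW = timedelta(minutes=10)
for text, same in groups.items():
    times = sorted(p["time"] for p in same)
    authors = {p["author"] for p in same}
    if len(authors) > 1 and times[-1] - times[0] <= WINDOW:
        print(f"{len(authors)} accounts posted the same text "
              f"within {WINDOW}: {text!r}")
```

Real investigations layer on more signals – account creation dates, shared profile images, posting cadence – but identical text in identical minutes is often the first thread to pull.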
