The cat-and-mouse game of ferreting out influence operations

Fake accounts that pop up by the thousand overnight. Content farms for hire that churn out material to tear down opponents or as clickbait for ad revenue. Perception hacking to fool people into an alternate reality. Welcome to the evolving world of influence operations.

Chua Mui Hoong
Associate Editor

Oct 1, 2021

Way back in 2016, Facebook founder Mark Zuckerberg famously dismissed as "pretty crazy" the notion that Russian networks were spreading divisive messages to influence that year's United States presidential election. Much later, he had to admit the platform had been a conduit for fake election news.

In the five years since, Facebook and other social media companies have awakened to the threat that influence operations (IOs) pose to their platforms. Most now have dedicated teams to protect their platforms from such manipulation; some are working together; and all want to work with government, academics and civil society groups.

Over the past week, I took part in four virtual events on IO-related issues, organised by Facebook and the Lee Kuan Yew School of Public Policy, National University of Singapore.

My takeaway: This is a newfangled cloak-and-dagger world with unknown operators using shady means to spread misinformation, to derail opponents or just create traffic for profit. Digital sleuthing is needed to sniff out, stamp out and outwit such operations.

It is a cat-and-mouse game.

Every action has a reaction: as fast as the "defender community" (as the folk engaged in this IO-fighting enterprise are called) comes up with tools to fight IOs, the threat actors (the bad guys) morph and come up with new methods.

CIB: Coordinated inauthentic behaviour

Facebook has come a long way since its denial in 2016. It now has over 40,000 people working on safety and security issues, four times as many as in 2017. It has invested at least US$13 billion (S$17.7 billion) in teams and technology to enhance safety since 2016.

Facebook defines influence operations as "coordinated efforts to manipulate or corrupt public debate for a strategic goal".

Of particular concern is what it terms coordinated inauthentic behaviour (CIB) - defined as any coordinated network of accounts, pages and groups on Facebook's platforms that relies on fake accounts to mislead people about who is behind the operation and what it is doing.

The team leading anti-CIB efforts is over 200 strong, with expertise in open-source research, threat investigations, cyber security, law enforcement and national security, investigative journalism, engineering, data science and academic studies in disinformation.

Facebook's head of cyber-security policy Nathaniel Gleicher said that from 2017 to mid-2021, the company took down and publicly reported on over 150 covert IOs that violated Facebook's policy against CIB. These originated from over 50 countries. In the Asia-Pacific region, most targeted domestic audiences.

For every such network takedown, other efforts such as automated account detection would have stopped hundreds more in their tracks. Facebook now even has a threat ideation team actively looking out for new threats.

From wholesale fakes to retail fraud

One common way to spread misinformation is through the use of multiple fake accounts to share content and comment on it. Done at sufficient scale, this can discredit a candidate, an ideology or a government, or create a buzz over an idea or product.

The main platform companies like Facebook, Twitter and Google have all developed automated tools to detect fake accounts and bots, take them down and ban the people responsible from the networks.

At Twitter, a proactive detection and enforcement framework screens accounts, looking at whether the actors (account holders, individuals, organisations or governments), their behaviours (how accounts interact with one another) and the content they put out are inauthentic - for example, whether they artificially amplify or suppress information, or engage in behaviour that manipulates or disrupts people's experience on Twitter.

One way Twitter does this is to track behavioural signals like how accounts interact with one another, to pick up suspicious behaviours such as high-volume tweeting, repetitive use of the same hashtags, or tweeting to someone's handle without a corresponding reply. Twitter may then ask the account holders to confirm they control the accounts using identity verification.
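To make that concrete, here is a minimal sketch in Python of how screening on such behavioural signals might work in principle. It is not Twitter's actual system: the signal names, thresholds and weights below are illustrative assumptions, and a real pipeline would draw on far richer data.

# Minimal sketch of behavioural-signal screening (illustrative only, not Twitter's system).
from dataclasses import dataclass

@dataclass
class AccountActivity:
    tweets_per_day: float            # average tweeting volume
    hashtag_repeat_ratio: float      # share of tweets reusing the same hashtags (0 to 1)
    unanswered_mention_ratio: float  # share of mentions of other handles that drew no reply (0 to 1)

def suspicion_score(a: AccountActivity) -> float:
    # Combine the signals into a single score; thresholds and weights are made up.
    score = 0.0
    if a.tweets_per_day > 500:            # unusually high-volume tweeting
        score += 0.4
    if a.hashtag_repeat_ratio > 0.8:      # repetitive use of the same hashtags
        score += 0.3
    if a.unanswered_mention_ratio > 0.9:  # tweeting at handles without corresponding replies
        score += 0.3
    return score

def accounts_to_challenge(activities: dict, threshold: float = 0.6) -> list:
    # Return account IDs whose score crosses the threshold, i.e. candidates for
    # an identity-verification challenge rather than an automatic ban.
    return [acct for acct, act in activities.items() if suspicion_score(act) >= threshold]

Accounts flagged this way would face a verification step rather than an automatic ban, since legitimate high-volume users can also trip simple thresholds; that matches the point above that Twitter may ask account holders to confirm they control the accounts.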

Ms Kathleen Reen, senior director of public policy and philanthropy in the Asia-Pacific at Twitter, said: "Our latest Transparency Centre update from July to December 2020 showed that Twitter removed 3.8 million tweets that violated the Twitter Rules; of these, 77 per cent received fewer than 100 impressions prior to removal."

Such upstream detection has forced IO actors to move from "wholesale" production of mass fake accounts, to "retail" efforts that take over existing users' accounts, or create a few higher-quality accounts used to share content. Such accounts take effort to maintain and are harder to scale up.

One common way to gain access to bona fide accounts is through old-fashioned methods such as password hacking or phishing.

Google designs products with built-in security features like protections against phishing or features enhancing safe browsing in its browser. In addition, says Google's Asia-Pacific information policy lead Jean-Jacques Sahel: "We dedicate substantial resources to develop new tools, new technology to help users identify and track and to stop this kind of activity, as it evolves, and it constantly does. So we have to move with it and continuously improve."

Influence for hire

One issue of growing concern is the rise in companies offering their services to sway social media agendas for clients.

An ecosystem has grown around such efforts: Strategists coordinate the campaigns, and hire content farms operating out of lower-cost countries, which hire digitally savvy workers on gigs to set up fake accounts or create misleading content cut-and-paste style. Political candidates, parties, businesses and even governments are said to use these.

Such commercial, influence-for-hire entities create a buffer of deniability for clients. They can be successful in domestic campaigns. Philippine President Rodrigo Duterte's election in 2016 was in part supported by a social media manager who controlled an Internet brigade that, among other things, circulated the falsehood that Mr Duterte was endorsed by the Pope.

But many of these operations struggle to succeed in other countries because they lack convincing domestic context. For example, Chinese networks targeting Taiwanese audiences may use simplified Chinese characters or terms more commonly used on the mainland.

Facebook's May 2021 Threat Report on The State of Influence Operations 2017-2020 said: "In May 2019, for example, we identified and removed an Israeli firm - Archimedes Group - that was running campaigns on behalf of its clients in Nigeria, Senegal, Togo, Angola, Niger and Tunisia, along with some activity in Latin America and South-east Asia. This network repeatedly made blatant mistakes in posts regarding the on-the-ground reality in the countries targeted."

The campaigns did not gain much local traction.

The Singapore context

So is Singapore a target?

When I asked the platforms this at the webinars, I heard the same answer: not directly.

Singapore is, however, home to part of the Chinese diaspora that may be targeted by broad campaigns from China. But China tends to use overt campaigns, not covert CIB, to drive its narratives. China's efforts take the form of Chinese state entities or related individuals putting out their perspectives on, say, the origins of the Sars-CoV-2 virus that causes Covid-19, denigrating the Hong Kong protests, or pronouncing that US democracy and society are in decline.

Facebook director of global threat disruption David Agranovich said: "We actually haven't seen foreign coordinated inauthentic behaviour targeting Singapore.

"It's not for want of looking. We're constantly looking for these types of operations, constantly monitoring threat actors in the region. And so, it doesn't mean that they don't exist, but that we just haven't seen them. That said, it's certainly a threat that we know we need to be prepared for."

While targeted campaigns seek to discredit a specific individual or government, broad-based campaigns to reshape narratives can be both more insidious (shadowy) and invidious (divisive).

Analysts now talk about perception hacking, when threat actors capitalise on the public's fear to create the perception that everything is tainted and nothing is true. For example, they may foster the view that the electoral system is hacked even when there is no such evidence. Such efforts, if successful, can erode trust invisibly and quickly.

As influence operations to manipulate public opinion can now be mounted with ease and anonymity using influence networks for hire, every state will want to safeguard its political process from shadowy threat actors.

In Singapore, legislation is one tool to combat this threat. The Foreign Interference (Countermeasures) Bill is to be debated in Parliament next week. But the law should not overreach and stifle legitimate debate on political issues.

At the same time, as those in the defender community know, when it comes to this cat-and-mouse game of staying one step ahead of the bad guys, it's all hands on deck.

Laws are not enough, and the Government and regulators have to work with platform companies, academics, corporations, civil society, schools and the public to combat this emerging information infection.

Two case studies

Twinmark Media, The Philippines

On Jan 10, 2019, Facebook's head of cyber-security policy Nathaniel Gleicher announced that the company had banned a digital marketing group in the Philippines - Twinmark Media Enterprises and all its subsidiaries - from Facebook.

He said: "This organisation has repeatedly violated our misrepresentation and spam policies - including through coordinated inauthentic behaviour, the use of fake accounts, leading people to ad farms, and selling access to Facebook pages to artificially increase distribution and generate profit.

"We do not want our services to be used for this type of behaviour, nor do we want the group to be able to re-establish a presence on Facebook."

The company was set up in 2015 and, when taken down, had 220 Facebook pages, 73 Facebook accounts and 29 Instagram accounts. About 43 million accounts followed at least one of these Facebook pages.

In a report after the takedown, Nikkei Asia said Twinmark was one of the major sources of spam and fake news in the Philippines.

Before it was banned, Twinmark made millions on Facebook and Google, reported ABS-CBN News, a Philippine media company. An employee said the company could earn as much as US$100,000 (S$136,000) for a Facebook page in a month, via clicks on "Instant Articles" that let users read stories or view videos within the Facebook site, keeping them exposed to advertisements within the ecosystem.

The takedown of Twinmark in 2019 put the spotlight on Philippine content farms, outfits that tap a young, digitally savvy, English-speaking workforce to create social media accounts, content and comments to fuel engagement. Such activity might be used to talk up a client, discredit opponents, or simply to generate traffic to sell advertisements.

In March 2019, Facebook took down another network that was linked to the social media manager of Philippine President Rodrigo Duterte's election campaign.

Announcing this, Facebook said: "The individuals behind this activity used a combination of authentic and fake accounts to disseminate content across a variety of pages and groups.

"They frequently posted about local and political news, including topics like the upcoming elections, candidate updates and views, alleged misconduct of political opponents, and controversial events that were purported to occur during previous administrations.

"Although the people behind this activity attempted to conceal their identities, our investigation found that this activity was linked to a network organised by Nic Gabunada."

Despite the action against his network, Mr Gabunada was awarded a contract in June this year by the Philippine Department of Finance to carry out communications campaigns for the Duterte administration.

The Philippines has a large, influential disinformation ecosystem that is "embedded within the political system and the creative industries", according to a report released in August by the Australian Strategic Policy Institute, titled Influence For Hire: The Asia-Pacific's Online Shadow Economy.

The report said media strategists and even government departments use disinformation tools, sometimes collaboratively.

"State disinformation producers or political strategists may collaborate with specialists operating clickbait websites, just as local PR firms worked with Chinese business entities to promote specific political candidates in 2019," wrote Dr Jonathan Corpus Ong, one of the report's co-authors.

China spam network

Google's threat analysis group reported on its blog last October that since summer 2019, it had been tracking a large spam network linked to China that was attempting to run an influence operation, primarily on YouTube.

"This network has a presence across multiple platforms, and acts by primarily acquiring or hijacking existing accounts and posting spammy content in Mandarin such as videos of animals, music, food, plants, sports, and games," wrote Mr Shane Huntley from the threat analysis group.

"A small fraction of these spam channels will then post videos about current events. Such videos frequently feature clumsy translations and computer-generated voices.

"Researchers at Graphika and FireEye have detailed how this network behaves - including its shift from posting content in Mandarin about issues related to Hong Kong and China's response to Covid-19, to including a small subset of content in English and Mandarin about current events in the US (such as protests around racial justice, the wildfires on the West Coast and the US response to Covid-19)."

FireEye is a cyber-security firm while Graphika specialises in analysis of the social media landscape.

Google's teams terminated more than 3,000 YouTube channels linked to this network. "As a result, this network hasn't been able to build an audience. Most of the videos we identify have fewer than 10 views, and most of these views appear to come from related spam accounts rather than actual users," wrote Mr Huntley.