PRC-Disinformation and U.S. Elections: What to Watch For (Part II)
Why fake U.S. voters, AI-altered videos, and audio deepfakes could be go-to tactics
Since 2017, the Chinese Communist Party has increased its use of online information operations, fake accounts, and disinformation (the deliberate spreading of false and misleading content) to target the United States. With under two months to go before U.S. presidential and legislative elections, my last post focused on WHO the regime might target, arguing that while an attempt to tilt the presidential election is unlikely, campaigns to undermine Congressional candidates or simply sow discord and distrust among U.S. voters are much more likely—even certain—to occur.
This post will focus on the HOW: the key tactics to watch for.
The CCP has a robust toolbox, backed by significant financial and human resources, that it could use to disrupt the U.S. information space ahead of the polls, though its tools vary considerably in their likely efficacy.
A review of recent research and investigations into actual incidents in the United States, Taiwan, and elsewhere, however, points to three key technical tactics that are especially important to watch for in the run-up to November:
fake accounts impersonating U.S. voters,
artificial intelligence (AI)-enhanced or altered videos,
and AI-generated audio deepfakes.
Fake U.S. voters gaining traction
At the center of this tactic are fake social media accounts that impersonate Americans, but which in fact are part of Spamouflage, a large and persistent network of inauthentic accounts linked to the Chinese government. Spamouflage accounts have typically amplified each other’s content but struggled to get engagement from real users. This may be changing.
Investing more heavily in imitating real people is a relatively new tactic for the network, but recent investigations show clear evidence of experimentation and increased use. In at least a few cases, such accounts have garnered massive organic attention from real users.
An April 2024 report by the Institute for Strategic Dialogue (ISD), for example, identified a handful of accounts “posing as right-wing Americans and exploiting domestic divisions.” The accounts often use first-person language, describing themselves as an American, a military veteran, or a fireman. The content posted can include memes (some of them AI-generated) or text copy-pasted from real users’ posts that have already gone viral elsewhere.
Among the accounts ISD highlighted were three that took on the monikers “Ben,” “Gail,” and “Liam,” and succeeded in garnering between 7,000 and 12,000 followers within a relatively short period of time.
One example ISD found demonstrates the reach such accounts have occasionally managed, while tying together multiple strands of social media platforms’ vulnerability to manipulation by malign actors. The fake “Ben” account shared a video by the Russian state-owned outlet RT containing disinformation about the CIA and the war in Ukraine. The post was shared 567 times and gathered 416,100 views. It was then retweeted by U.S. conspiracy theorist Alex Jones, earning an additional 3,200 shares and 365,500 views and further amplifying its reach.
Earlier this month, the cybersecurity firm Graphika published a new report with eerily similar examples, concluding that the PRC-linked Spamouflage network is “expanding its use of personas that impersonate U.S. voters.” It cites a cross-platform persona whose account on X accrued 11,200 followers. One video the same persona posted to TikTok garnered 1.5 million views; two others collected 256,000 and 356,300 views.
Such cross-platform amplification can increase a fake persona’s perceived credibility. Meta reported in November that fake China-linked accounts posing as U.S. military families had managed to garner 300 signatures on a petition criticizing U.S. support for Taiwan.
While most of the above examples sit on the right-leaning end of the political spectrum, Graphika’s report reinforces other research finding that PRC influence operations targeting U.S. elections tend to be party-agnostic, noting that:
These accounts have seeded and amplified content denigrating Democratic and Republican candidates, sowing doubt in the legitimacy of the U.S. electoral process, and spreading divisive narratives about sensitive social issues including gun control, homelessness, drug abuse, racial inequality, and the Israel-Hamas conflict. This content, some of which was almost certainly AI-generated, has targeted President Joe Biden, former President Donald Trump, and, more recently, Vice President Kamala Harris.
Avatars versus AI-altered videos
As generative AI technology has become more accessible, experts have warned of its potential use in disinformation campaigns by both domestic and foreign actors. When it comes to China-linked accounts experimenting with its deployment, I would argue that the greater risk appears to be its use to alter or enhance existing footage of real people, rather than to create fully artificial avatars.
There have been several reported cases of artificial avatars being used by PRC-linked accounts. In the United States, AI-generated avatars appeared in a campaign uncovered in February 2023, in which Spamouflage accounts circulated videos from a fictitious outlet called Wolf News that used fake male and female anchors to present reports on gun violence in the United States and U.S.-China relations. More recently, ahead of elections in Taiwan in January, avatars were used in an initiative aimed at spreading falsehoods about then-president Tsai Ing-wen in an effort to more broadly discredit her political party and its candidates, which are disfavored by Beijing.
In these instances, experts noted that the avatars were fairly “crude”: while they might deceive a mobile user quickly scrolling through a social media feed, they are for the most part easily detectable, even to an amateur eye.
By contrast, AI-altered or enhanced videos of actual people can be more convincing and harder to debunk, rendering them more treacherous. When it comes to foreign influence operations trying to create fake news reports or outlets, experts have warned that AI-enhanced or manipulated footage of real TV anchors could prove more effective than fully AI-generated avatars.
The tactic can also be applied to politicians, and several PRC-linked examples have already appeared. In Taiwan, a manipulated video of a speech by then-candidate and now president Lai Ching-te removed the word “no” from his remarks, changing their meaning. In Canada, multiple manipulated videos of a Chinese dissident changed his words in an apparent attempt to discredit both him and Canadian politicians critical of the CCP.
Notably, there is at least one precedent involving a U.S. lawmaker’s image. In January, AFP FactCheck debunked a video of Congressman Rob Wittman, a Republican from Virginia, in which he appeared to say that the United States would increase military aid to Taiwan if Lai won.
In fact, the footage was from 2022 and concerned the war in Ukraine and the U.S. economy. The manipulated video appeared on TikTok, and the Canadian watchdog DisinfoWatch speculated that it served Beijing’s goals and was “intended to stoke false voter concerns about US meddling in the Taiwanese election.”
Deepfake audio
If at some point the CCP did indeed wish to do real damage to a U.S. electoral candidate, or to sway U.S. public opinion at the 11th hour, deepfake audio could be a go-to tactic.
When it comes to generative AI technology, its ability to replicate someone’s voice convincingly is more advanced than its ability to create a video likeness, an advantage that those wishing to influence potential voters can also exploit.
I have yet to find an example of this tactic being deployed by PRC-linked actors in the U.S. context. But several high-profile and carefully timed examples from the recent elections in Taiwan indicate it is important to watch for. The first hint of its use emerged in August 2023, when an audio recording supposedly “leaked” from an internal meeting was sent to local news media. It purported to capture the voice of third-party candidate Ko Wen-je criticizing Democratic Progressive Party (DPP) candidate Lai’s visit to the United States and accusing him of embezzlement. Ko’s party immediately disavowed the recording and reported it to the police, who within days announced that the recordings were highly likely to be deepfakes.
More notable was a supposed recording of Terry Gou, a tech tycoon who had not publicly backed any presidential candidate in the election, that appeared on election day. It portrayed Gou as announcing his endorsement of Beijing’s preferred candidate, days after a fake letter making similar claims had circulated online. The recording was quickly detected, disavowed by Gou himself, and taken down by tech firms, but its potential to cause mischief was real.
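How were these fakes identified so quickly? The forensic details from Taiwan are not public, but as a purely illustrative sketch, the Python snippet below computes the kind of signal-level statistics an analyst might glance at when first screening a suspicious clip. The file name is hypothetical and the librosa audio library is assumed; real-world detection relies on trained classifiers and comparison against verified recordings, not on any single statistic.

```python
# Illustrative only -- not the method used by Taiwanese investigators.
# Assumes the librosa audio library and a hypothetical file "suspect_clip.wav".
import librosa

y, sr = librosa.load("suspect_clip.wav", sr=16000)  # load mono audio at 16 kHz

# Spectral flatness: synthetic speech can be unusually uniform or noise-like
# in frames where natural speech is strongly harmonic.
flatness = librosa.feature.spectral_flatness(y=y)

# Spectral rolloff: vocoder artifacts sometimes shift where high-frequency
# energy drops off compared with a genuine recording of the same speaker.
rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)

print(f"mean spectral flatness: {flatness.mean():.4f}")
print(f"mean spectral rolloff:  {rolloff.mean():.0f} Hz")
# These numbers only become meaningful when compared against verified
# recordings of the same speaker and combined with a trained detection model.
```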
Closing thoughts
As we consider the potential deployment of these and other technical tactics by the regime and its proxies in the run-up to the elections, two final points are worth mentioning.
First, these tactics are not deployed in isolation. Even if large-scale networks of fake accounts like Spamouflage do not typically garner real user engagement, they are useful for simultaneously flooding an ecosystem and multiple platforms with certain content (including AI-generated fakes), for amplifying problematic content created by domestic political actors, and potentially for affecting how search and other automated algorithms display and prioritize the information offered to would-be voters.
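To make the flooding pattern concrete, here is a minimal Python sketch of one simple signal researchers use when flagging coordinated inauthentic behavior: many distinct accounts posting near-identical text within a short window. The account names and posts are invented for illustration, and production pipelines weigh far more signals (posting schedules, account creation dates, shared infrastructure, and so on).

```python
# A toy sketch of one coordination signal: several distinct accounts posting
# near-identical text within a short time window. All account names and posts
# are invented; real detection pipelines combine many more signals than this.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    ("ben_patriot",  datetime(2024, 9, 1, 12, 0), "The election is rigged, folks!"),
    ("gail_1776",    datetime(2024, 9, 1, 12, 4), "the election is rigged folks"),
    ("liam_veteran", datetime(2024, 9, 1, 12, 7), "The election is RIGGED folks"),
    ("real_user",    datetime(2024, 9, 3, 8, 0),  "Great game last night."),
]

def normalize(text: str) -> str:
    """Crude normalization so trivially edited copies still cluster together."""
    cleaned = "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace())
    return " ".join(cleaned.split())

# Group posts by normalized text, then flag clusters where several distinct
# accounts posted the same content inside a one-hour window.
clusters = defaultdict(list)
for account, ts, text in posts:
    clusters[normalize(text)].append((ts, account))

WINDOW = timedelta(hours=1)
MIN_ACCOUNTS = 3
for text, hits in clusters.items():
    hits.sort()
    accounts = {account for _, account in hits}
    if len(accounts) >= MIN_ACCOUNTS and hits[-1][0] - hits[0][0] <= WINDOW:
        print(f"possible coordination on {text!r}: {sorted(accounts)}")
```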
Second, it is impossible to fully separate the tactics, or even the individual accounts, that PRC-linked actors use to influence elections from those deployed against the CCP’s other prime targets and perceived “enemies.” The same accounts used to spread anti-DPP content in Taiwan have also been used in campaigns targeting Chinese dissidents, Japan’s release of treated wastewater, and the ruling party in India. Several Spamouflage accounts that recently turned to impersonating U.S. voters previously posted in Chinese, promoting pro-CCP content or smearing pro-democracy Hong Kongers. And even as fake personas are created to disrupt elections or domestic political debates, NGOs and tech firms are reporting fake accounts impersonating Tibetan activists, an imaginary Human Rights Watch advisor, or Falun Gong practitioners; these have then been used to spread misleading claims aimed at discrediting the Dalai Lama or the Shen Yun Performing Arts company.
Such realities once again serve as a reminder that the CCP’s apparatus takes a holistic approach to its foreign influence operations. Those trying to detect, disrupt, and counter these efforts should do the same.