For Beijing’s Foreign Disinformation, The Era of AI-Driven Operations Has Arrived
From content generation to operational refinement, China-linked accounts are increasingly using generative AI to support influence operations
After a summer hiatus, this issue of UnderReported China explores the latest research on how generative artificial intelligence (AI) tools are being deployed and integrated into China-linked foreign-facing disinformation campaigns, the details of two sophisticated malware campaigns targeting exiled Tibetan communities, and a disturbing new report on the mistreatment of prisoners, including political prisoners, in Hong Kong.
Thank you for reading, and if you find this information helpful, please subscribe and share with others.
China-linked Disinformation and AI Tools: Five Key Findings from Recent Investigations
By Sarah Cook
In early August, two professors from Vanderbilt University published an essay outlining a trove of Chinese documents linked to the private firm GoLaxy. The sources revealed a sophisticated and troubling use of artificial intelligence (AI) not only to generate misleading content for target audiences—such as in Hong Kong and Taiwan—but also to extract information about U.S. lawmakers, creating profiles that might be used for future espionage or influence campaigns. The article received significant coverage, and rightfully so.
Yet, those findings represent only the tip of the iceberg in an emerging phenomenon. A series of reports, incidents, and takedowns over the summer—spanning OpenAI, Meta, and Graphika—shed further light on the latest uses of AI by China-linked actors focused on foreign propaganda and disinformation. Notably, generative AI tools are now employed not only for content production but also for operational purposes like data collection and drafting internal reports to the party-state apparatus. This evolution marks a new frontier in Beijing’s information warfare tactics, offering insights into what a more AI-dominated future could yield and why urgent attention is needed from social media platforms, software developers, and democratic governments.
A close review of these reports reveals five key dimensions:
1. Using AI for Content Generation
While prior China-linked disinformation campaigns had deployed AI tools to generate false personas or deepfakes, these latest disclosures point to a more concerted effort to leverage these tools for creating entire fake news websites that distribute Beijing-aligned narratives simultaneously in multiple languages. Graphika’s “Falsos Amigos” report, published last month, identified a network of 11 fake websites, established between late December 2024 and March 2025, using AI-generated pictures as logos or cover images to enhance credibility.
The websites published almost exclusively spin-offs from China Global Television Network (CGTN), the international arm of China’s broadcaster and CCP mouthpiece China Central Television (CCTV). Graphika documented that the sites were “systematically publishing AI-generated summaries of CGTN articles in English, French, Spanish, and Vietnamese,” alongside machine translations of full articles. The summaries varied in tone and style to suit diverse audiences, while the websites presented them as original content with CGTN cited only as a reference—an attempt to launder propaganda through a façade of independence.
OpenAI’s threat report published in June cites the use of similar tactics, noting that now-banned ChatGPT accounts had used prompts (often in Chinese) to generate names and profile pictures for two pages posing as news outlets, as well as for individual persona accounts of U.S. veterans critical of the Trump administration, in a campaign the firm dubbed “Uncle Spam.” These efforts aimed to fuel political polarization in the United States, with AI-crafted logos and profiles amplifying the illusion of authenticity.
Another key strategy involved simulating organic engagement. OpenAI detected China-linked accounts bulk-generating social media posts, with a “main” account posting a comment followed by replies from others to mimic discussion. The “Uncle Spam” operation generated comments from supposed American users both supporting and criticizing U.S. tariffs.
Another striking case involved Pakistani activist Mahrang Baloch, who has criticized China’s investments in the disputed territory of Balochistan. Meta documented a TikTok account and Facebook page posting a false video accusing her of appearing in pornography, followed by hundreds of apparently AI-generated comments in English and Urdu to simulate engagement.
In another multi-layered campaign, which OpenAI named “Sneer Review,” Chinese operatives used ChatGPT to generate comments critical of a Taiwanese game in which players work to defeat the CCP. They then disingenuously used the tool to write a “long-form article claiming it had received widespread backlash.”
These examples point to how China-linked information operations are increasingly using generative AI tools to further refine previously deployed tactics like content laundering, covert dissemination of state propaganda, smear campaigns, and development of fake social media personas.
2. Using AI Models for Operational Purposes
Beyond content creation, generative AI tools are beginning to serve as a vehicle to improve operational efficiency in the China-linked disinformation ecosystem. OpenAI disrupted four China-linked operations from March to June 2025 that had used ChatGPT. Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, told National Public Radio (NPR) that the China-linked operations “combined elements of influence operations, social engineering, [and] surveillance.”
This multifaceted approach includes attempts at mass dissemination through coordinated networks. The 11 fake news domains that Graphika identified also had 16 matching social media accounts, active on Facebook, Instagram, Mastodon, Threads, and X. The network displayed similar designs and synchronized posting patterns, matching CGTN English’s own posting schedule almost to the minute. OpenAI also noted cross-platform activity on TikTok, X, Reddit, Facebook, and Bluesky, one of the first such documented examples on that platform, indicating a broadening digital footprint.
Some of this activity was likely facilitated by China-linked accounts querying AI tools. According to OpenAI, they asked for advice to optimize posting schedules and content distribution to maximize user engagement.
OpenAI also reported that the China-linked users sought assistance with data collection. They requested code to extract personal data—profiles and follower lists—from X and Bluesky, possibly aiming to analyze characteristics for targeted influence. The Vanderbilt essay revealed GoLaxy’s compilation of profiles for 117 U.S. members of Congress and over 2,000 political figures, opening up the potential to personalize propaganda and disinformation to specific individual users.
AI tools were also found to support internal strategizing and reporting. OpenAI documented ChatGPT use to draft internal documents, including an essay on embodying Xi Jinping’s teachings for public security agencies and a performance review describing in detail the steps taken to run the operation—matching the actual behavior observed. Such blatant self-documentation can aid firms like OpenAI in disrupting these campaigns. Left undetected, however, using AI in this capacity could allow CCP-linked actors to refine tactics in real time, adapt to platform countermeasures, and maximize impact.
3. Wide Range of Targeted Topics and Audiences—Many Unrelated to China
This particular sample of campaigns targeted diverse issues and regions, often unrelated to Beijing’s direct interests or communities systematically persecuted by the CCP. Graphika noted content on U.S. food insecurity, tariffs, and a youth conference promoting “win-win” ties with China, as well as geopolitical topics like Iran-Israel tensions and Ukraine. OpenAI highlighted debates over the closure of USAID.
Notably, many of the detected campaigns sought to meddle in the internal affairs and political debates of foreign countries, especially democracies. OpenAI’s “Uncle Spam” sought to amplify U.S. polarization. Meta’s May 2025 report detailed the removal of a network of 157 Facebook and 17 Instagram accounts that targeted Myanmar, Taiwan, and Japan. While posing as local citizens, the accounts (some using profile images likely generated by AI) posted about current events in these countries, while criticizing civil resistance to Myanmar’s junta and political leaders in Japan and Taiwan.
Graphika also identified pro-China, anti-West content tailored for young audiences in the Global South. Content on the fake news sites explicitly targeted youth in Africa, the Americas, and Asia, using terms pitched to “tech-savvy and socially conscious young professionals across South and Southeast Asia,” alongside hashtags like “independent media” that mask state backing. Graphika’s analysis of domain source code revealed prompts targeting “children aged 8-16,” reflecting an intent to shape long-term perceptions in regions with growing geopolitical significance.
In one example, ActuMeridien, a fake French news site from the network exposed by Graphika, announced on March 25 its focus on francophone youth who are “connected, curious, and engaged.” The post, written in French, said the website and social media account would be “highlighting the voices, ideas, and realities of the Global South.” It garnered 1,400 likes—possibly through artificial amplification.

If successful, such use of generative AI tools to tailor content to local languages and cultural contexts alongside a focus on youth could leverage social media’s popularity to deceptively build trust in pro-Beijing sources and influence future leaders in developing regions.
4. Multi-Faceted Hints of Ties to the Chinese Party-State
Although each of the investigative reports is careful to avoid absolute attribution, taken together, the available evidence suggests involvement from the regime’s propaganda and security apparatus. Graphika found all 11 fake news domains registered via Alibaba Cloud in China, with 10 registrants in Beijing, including one linked to Global International Video Communications, a state-owned enterprise tied to CGTN. Four of five Facebook pages had managers in China, and one ad beneficiary’s name matched a CGTN digital employee’s LinkedIn profile. OpenAI noted a now-banned ChatGPT user linked to the detected operations who claimed affiliation with the CCP propaganda department, though this could not be verified. Lastly, the Vanderbilt essay ties GoLaxy to the state-controlled Chinese Academy of Sciences, with documented collaboration with intelligence, military, and CCP entities.
5. Resilience and Vulnerabilities on Display
At the moment, the details of these operations and their disruption reveal both resilience and vulnerabilities. Graphika reported limited organic traction, with Facebook pages gaining 3,000–6,000 followers but minimal likes or shares, and almost no followers on Instagram, Mastodon, or Threads. OpenAI’s teams also caught operations early, rating their actual impact at a 2 or 3 on a 1–6 scale (1 lowest, 6 highest).
Yet outliers emerged: OpenAI noted TikTok videos with 25,000 likes and an X account with 10,000 views. Notably, Bluesky’s VeteransforJustice account—whose logo was generated by a ChatGPT user linked to the PRC network—reached over 11,000 followers, a relatively high total for a newer and smaller platform.

Moreover, the resources and willingness to invest in detecting and disrupting such influence operations remain uneven across platforms and tools. While the investigations from Meta and OpenAI demonstrate the potential importance of investing in these defenses, it remains unclear whether China-linked operatives are making similar use of tools like X’s Grok or Proton’s more private Lumo. At the same time, apps owned by China-based companies like DeepSeek or TikTok have even fewer incentives to disrupt CCP-linked campaigns and face a higher risk of reprisals if they do.
Closing thoughts
As use of generative AI tools expands globally—for benign, productive, and malicious purposes—a close read of these studies makes clear that the Chinese party-state apparatus remains committed to engaging in foreign information influence campaigns and to using whatever tools are available to maximize their reach, improve their efficacy, smear CCP critics, and disrupt democratic debate.
From that perspective, any private initiatives or government regulatory action that could level the playing field and create a baseline for transparency and cross-platform collaboration would be a welcome next step. It is one of the many actions needed to enhance resilience to the inevitable future manipulation campaigns that will emanate from Beijing.
This article was also published in the Diplomat on September 9, 2025.
What I’m reading: How attackers exploited the Dalai Lama’s 90th birthday to spread malware targeting Tibetans
In June 2025, Zscaler ThreatLabz and TibCERT investigated two cyberattack campaigns targeting the Tibetan community in the run-up to the Dalai Lama’s 90th birthday on July 6. A blog post published on July 23 outlines their findings and the elaborate tactics used. Reading the details of the attacks is eye-opening in terms of the sophistication and multi-layered deception involved.
One of the campaigns impersonated a legitimate website where members of the global Tibetan community were invited to send birthday greetings to the Dalai Lama. The malicious copy also included the option to download an encrypted chat application, which in reality was malware equipped with backdoors and other capabilities to remotely control an infected device. An image in the blog post shows the two pages side-by-side, demonstrating the careful attention paid to rendering the impersonation convincing.

The second campaign also involved a fake website, on which a malicious application masqueraded as a “special prayer check-in” interface. It showed an interactive map that in actuality collected users’ information, such as location, IP address, and email address. It remains unclear from the analysis how many users were deceived by these tactics. Still, the complexity and effort devoted to making them convincing demonstrate once again the resources that CCP-linked actors are investing to spy on Tibetan and other exile communities, while showing no qualms about exploiting a sacred day to do so.
Prisoners to remember: Worsening conditions for Hong Kong’s prisoners
On Monday, the Committee for Freedom in Hong Kong published the most detailed report on prison conditions in the territory that I've seen to date. Authored primarily by researchers Francis Hui and Samuel Bickett, the study draws on numerous in-depth interviews with former prisoners, as well as open-source records. The findings are chilling, especially for anyone familiar with the treatment of prisoners in mainland China and Hong Kong's previously more humane, professional, and depoliticized system.
The report itself is well worth a read but a few points stood out in particular. First is the sheer scale of political imprisonment, reaching over 1,900 people out of an average prison population of 8,250. That amounts to nearly one-quarter of prisoners.
Second are the worrisome parallels to mistreatment and specific tactics used in mainland China. A new and compulsory political indoctrination program called "Project PATH" sounds eerily similar to so-called "re-education" and "transformation" programs in other parts of China that target not only political prisoners, but also religious and ethnic minorities, notably Tibetans, Uyghurs, and Falun Gong practitioners.
Another element reminiscent of mainland tactics is the reference to guards delegating brutality to orderlies or other prisoners, including beatings and sexual violence. This practice was endemic to China’s former “re-education through labor” camp system, in which those jailed for drug offenses were deployed to monitor, torment, and abuse cellmates who were petitioners, political prisoners, or Falun Gong practitioners. To my knowledge, it continues in other detention facilities in China even after that system’s closure in 2013. Lastly, references to psychiatric detention for political prisoners, including those who sought to file complaints about prior mistreatment, bring to mind the research of Robin Munro. These should raise alarm bells for any international organization collaborating with the Siu Lam Psychiatric Centre or other such institutions in Hong Kong.
Third, although a large proportion of those imprisoned and targeted for mistreatment are political prisoners, the report relays many accounts of other prisoners, including foreigners, subjected to mistreatment and neglect. This includes one account of a man from Guinea who was suffering from mental illness and died after what appears to have been repeated medical neglect. Such findings indicate more systemic problems that go beyond the recent political crackdown.
The report includes a section on recommendations at the end. In addition to action needed from democratic governments and international institutions like the United Nations human rights mechanism, I hope that the global business, academic, legal, and medical communities can give these findings a close read and take necessary actions.
From past experience, when authorities in China—and Hong Kong—hear concerns, censure, and warnings of international isolation from such unexpected quarters, the effect in driving change can be even stronger than pressure from foreign governmental bodies. Meanwhile, ignoring the accounts outlined in this report would call international associations’ own professional credibility into question.

