
Claude AI Used to Power Global Influence Network of 100+ Fake Political Personas

Anthropic Uncovers AI-Driven Influence Campaign Using Claude Chatbot 

Artificial intelligence firm Anthropic has disclosed a sophisticated "influence-as-a-service" operation that exploited its Claude chatbot to engage with genuine users on Facebook and X (formerly Twitter). Unidentified threat actors reportedly used Claude to manage and coordinate a network of roughly 100 politically aligned personas across both platforms, which in turn interacted with tens of thousands of real accounts.

The campaign, believed to be financially motivated, aimed to sustain long-term influence rather than generate viral content. According to Anthropic researchers, the operation supported and undermined various geopolitical narratives, with a focus on European, Iranian, U.A.E., and Kenyan issues. Examples include promoting the U.A.E. as a business-friendly nation, criticizing European regulations, pushing cultural themes for Iranian audiences, supporting political figures in Albania and Kenya, and discrediting their opposition. 

Anthropic emphasized the operation’s novel use of Claude—not just for generating content, but also for orchestrating strategic decisions like when bot accounts should comment, like, or share posts. The AI tool was also employed to produce native-language responses in line with each persona's political stance and to craft prompts for image-generation tools. 

The company believes the campaign was likely operated by a commercial vendor offering influence services to clients in multiple countries. At least four distinct campaigns were identified using a shared programmatic infrastructure. This framework relied on structured JSON files to maintain consistent behavior across platforms, enabling realistic, human-like engagement patterns and scalable persona management.

One notable tactic included directing bot accounts to respond with sarcasm or humor when accused of being inauthentic—an effort to deflect suspicion and appear more human. 

Anthropic warns that this case signals the growing threat of AI-powered manipulation and calls for updated frameworks to assess influence operations that focus on relationship building and community infiltration. As AI tools become more accessible, such influence campaigns could become increasingly common. 

In a separate incident, Anthropic banned a threat actor who used its models to process leaked passwords, build scripts to brute-force internet-facing systems, and scan credential dumps from Telegram. Claude was also misused to improve targeting and automation. 

In March 2025, Anthropic reported two additional misuse cases: 

  • A recruitment scam targeting Eastern European job seekers, where Claude was used to refine fraudulent messages. 
  • A low-skill attacker who used Claude to build advanced malware capable of evading detection and maintaining persistent access to systems. 

Anthropic concluded that AI tools are lowering the barrier for cybercriminals, allowing even inexperienced actors to develop sophisticated capabilities more rapidly than ever before. 



Cybersecurity Insights covers current news and trends in cybersecurity, recent system breaches and cyber-attacks, artificial intelligence (AI), and technology innovation around the world, keeping readers abreast of developments in technology and system security and how they affect our lives and digital ecosystem.
