
Artificial intelligence swarms: a new front in the information war

The development of generative artificial intelligence is changing the way disinformation is created and spread. A recent study published in the journal Science points to a growing threat from so-called AI swarms: coordinated artificial intelligence systems capable of mass infiltration of social media to manipulate public opinion and undermine the foundations of democratic debate.

Scientists are sounding the alarm that the development of generative artificial intelligence is opening a new phase of threats to public debate and democracy. In a commentary published in the journal Science, researchers describe the phenomenon of so-called "artificial intelligence swarms" (AI swarms): sophisticated, multi-agent AI systems that can infiltrate social media en masse, mimicking human behavior, manipulating emotions and amplifying selected narratives.

Unlike traditional bots, which typically perform simple, repetitive actions, AI swarms are expected to be coordinated by large language models (LLMs) and to act like a self-sustaining organism. Such a system can learn in real time and adapt to the platform environment and user responses while avoiding detection by moderation mechanisms. This is a qualitative shift: no longer automatons replicating messages, but an adaptive, distributed influence system capable of engaging in conversations and building credibility.

Evolution from bots to "malicious AI swarms"

The authors of the Science commentary emphasize that generative language models and the development of multi-agent systems enable the emergence of so-called "malicious AI swarms": autonomous, coordinated groups of artificial intelligence agents capable of acting like an adaptive, collective organism. Such swarms can coordinate their actions and adapt to the context and reactions of their environment. They can maintain memory and consistent identities across multiple agents, generate content tailored to specific audiences, and operate long-term and in parallel across multiple platforms.

As the authors of the publication note, it is these capabilities that create a new category of threat:
"With these capabilities, a destabilizing threat is emerging: swarms of cooperative, malicious AI agents. By adaptively mimicking human social dynamics, these systems threaten democracy."

"Synthetic consensus" as a mechanism of manipulation

The central risk is not just the production of false content. The publication identifies a much deeper problem: the manipulation of social structures through the production of a so-called synthetic consensus, i.e. the impression that "everyone thinks this way," even if in reality this is not the case. In this model, influence is not about convincing with arguments, but about creating the sense of a dominant opinion.

As a result, AI swarms can influence citizens' social norms, beliefs and decisions, undermining rational debate and trust in independent voices. As the researchers point out:
"The main threat is not just false content. It is the synthetic consensus: the illusion of universal agreement, produced on a mass scale."

How AI swarms work

In the authors' terms, an "AI swarm" is a network of agents controlled by artificial intelligence, functioning in a distributed but goal-directed manner. The key is that agents can maintain persistent identities and memory of relationships and previous interactions, so that their presence in a community does not resemble random spam.

The swarm can coordinate activities around a single goal while varying the tone, content and roles of individual accounts to resemble the natural diversity of human users. The system works adaptively: it observes platform signals and audience reactions, learns, and optimizes its strategies with minimal human oversight. As the authors define it:
"A malicious AI swarm is a network of agents controlled by artificial intelligence, capable of maintaining persistent identities and memories, coordinating around common goals with varying tone and content of messages, and adapting to user engagement and response."
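The components named in that definition (persistent identity, memory, coordination around a shared goal, adaptation to engagement) can be sketched in a deliberately simplified model. Everything below is a hypothetical illustration of the concepts the authors describe, not code from the study; the class and parameter names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class SwarmAgent:
    """Toy model of one agent as the paper characterizes it:
    a stable persona, memory of past interactions, and a strategy
    parameter that adapts to audience response."""
    identity: str                               # persistent persona
    memory: list = field(default_factory=list)  # retained interactions
    aggressiveness: float = 0.5                 # hypothetical strategy knob

    def observe(self, interaction: str, engagement: float) -> None:
        # Persistent memory: every interaction is kept.
        self.memory.append((interaction, engagement))
        # Adaptation: nudge the strategy toward what earned engagement,
        # clamped to [0, 1].
        self.aggressiveness += 0.1 * (engagement - 0.5)
        self.aggressiveness = min(max(self.aggressiveness, 0.0), 1.0)

def coordinate(agents, shared_goal: str):
    """Coordination: every agent pursues one goal, but each keeps its
    own identity and strategy, mimicking natural user diversity."""
    return [(a.identity, shared_goal, round(a.aggressiveness, 2))
            for a in agents]
```

Even in this stripped-down form, the qualitative difference from a classic bot is visible: the agent's behavior after a hundred interactions depends on what those interactions were, so two agents with the same code diverge into distinct, harder-to-fingerprint personas.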

Scale and effects: from manipulation to harassment

The scale at which AI swarms can operate depends on the technical resources and barriers built into the platforms. While hundreds of thousands or millions of agents are often mentioned, the authors point out that it is not just the number that is critical, but also the reliability and adaptability of the system. Even a relatively small swarm can effectively generate majority pressure and influence the dynamics of the debate.
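The claim that even a small swarm can generate majority pressure follows from simple arithmetic: coordinated accounts can post far more often than ordinary users, so their share of visible messages exceeds their share of accounts. The toy model below is an assumption of this article's editor, not a calculation from the publication.

```python
def visible_share(swarm_fraction: float, post_multiplier: float) -> float:
    """Fraction of visible messages authored by the swarm, assuming each
    swarm account posts `post_multiplier` times as often as an ordinary
    account. Toy model for illustration only."""
    human_volume = (1 - swarm_fraction) * 1.0
    swarm_volume = swarm_fraction * post_multiplier
    return swarm_volume / (swarm_volume + human_volume)

# 5% of accounts posting 10x as often author about a third of what
# readers actually see.
print(round(visible_share(0.05, 10), 2))  # → 0.34
```

Under these assumed numbers, a swarm controlling one account in twenty would supply roughly one message in three, which is enough to shift the perceived tone of a thread without the swarm ever being a literal majority.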

The publication points out that AI swarms can harm on many levels. They can manipulate public opinion and affect democratic processes, lead to the erosion of independent critical debate, and fragment shared information reality, especially in already highly polarized societies. The authors also note the broad catalog of possible applications - from commercial to political purposes to state actions.

As they conclude:
"Advances in AI development create the possibility of manipulating beliefs and behaviors at the level of entire populations. Generative tools make it possible to increase the scale of propaganda without losing credibility."

In practice, AI swarms can combine two vectors of influence: manipulating perceptions of what is "common" and exerting social pressure. The latter can take the form of mass harassment, simulating an "angry mob," and attacking those presenting dissenting views, discouraging them from participating in debate or pushing them off platforms.

Proposed defense measures

The publication's authors stress that defense against AI swarms cannot be based solely on deleting individual accounts or reactive content moderation. Due to the adaptive and distributed nature of swarms, a multi-layered approach combining technical and institutional solutions is necessary.

They point to the need to develop detection based on behavioral analysis, including the identification of abnormal patterns of coordination of activities that may indicate the existence of a swarm. Strengthening user identity verification and developing proof-of-personhood solutions is also an important direction. The authors also advocate the use of simulations and robustness tests to predict the evolution of AI swarm behavior. This would be complemented by the establishment of an AI Influence Observatory, a global network to monitor and analyze influence operations conducted using artificial intelligence.
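Behavioral detection of the kind the authors call for could, in its simplest form, look for groups of accounts posting near-identical content within a short time window. The sketch below is a minimal illustration of that idea; the normalization, the 60-second window and the three-account threshold are arbitrary assumptions, not values from the publication.

```python
from collections import defaultdict

def coordination_clusters(posts, window_s=60, min_accounts=3):
    """Flag any text posted by at least `min_accounts` distinct accounts
    within `window_s` seconds. `posts` is a list of
    (account, unix_timestamp, text) tuples."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        key = " ".join(text.lower().split())  # crude text normalization
        by_text[key].append((account, ts))
    flagged = []
    for key, items in by_text.items():
        items.sort(key=lambda pair: pair[1])
        accounts = {account for account, _ in items}
        if (len(accounts) >= min_accounts
                and items[-1][1] - items[0][1] <= window_s):
            flagged.append((key, sorted(accounts)))
    return flagged

posts = [
    ("a1", 100, "Everyone agrees policy X failed"),
    ("a2", 110, "everyone agrees policy x failed"),
    ("a3", 130, "Everyone agrees  policy X failed"),
    ("u9", 500, "I actually liked policy X"),
]
print(coordination_clusters(posts))
# → [('everyone agrees policy x failed', ['a1', 'a2', 'a3'])]
```

A real detector would need fuzzier similarity measures than exact normalized matches, since LLM-generated swarm content paraphrases rather than repeats; this sketch only shows why coordination patterns, rather than individual accounts, are the natural unit of analysis.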

As the researchers point out: „Defenses against these systems must be multi-layered and pragmatic so as to raise the cost, risk and visibility of manipulation.”

What this means for public debate and companies

If AI swarms begin to operate on a large scale, the pressure on platforms, institutions and organizations will steadily increase. For democracy, this means the risk of a gradual erosion of trust, a weakening of pluralism, and the displacement of rational debate by emotional pressure and the illusion of the majority. From the perspective of companies and brands, it also means greater vulnerability to artificially generated reputational crises and the difficulty of distinguishing authentic public sentiment from coordinated influence operations.

In a world where the line between human and artificially generated activity is increasingly blurred, information resilience is becoming a strategic asset. AI swarms thrive on chaos, and the most effective response remains to consistently strengthen the credibility, transparency and quality of debate in the spaces for which we are responsible.

Bibliography

  1. Science (2026). How malicious AI swarms can threaten democracy. Policy Forum.
  2. Live Science (2026). Next-generation AI swarms will invade social media by mimicking human behavior and harassing real users, researchers warn.
    https://www.livescience.com/technology/artificial-intelligence/next-generation-ai-swarms-will-invade-social-media-by-mimicking-human-behavior-and-harassing-real-users-researchers-warn
  3. EurekAlert! (2026). AI swarms could quietly distort democracy, science policy forum warns.
  4. City St George's, University of London (2026). AI swarms could fake public consensus and undermine democracy.