
In the near future, social networks may face large-scale attacks by advanced bots capable of mimicking ordinary users and steering audience behavior. Such systems could spread disinformation, pressure dissenters, and interfere in political processes.
Researchers warn that these bot swarms will become a new kind of weapon in information conflicts. They emphasize that AI agents will mimic natural online behavior, evading detection and creating the appearance of activity by real people. Norwegian professor Jonas Kunst explains that such bots can exploit the human tendency to follow the majority opinion: users who do not go along with the “virtual crowd” may be targeted by AI agents attempting to suppress their objections.
The researchers do not give a precise timeline for the emergence of such networks, but they caution that identifying them will be difficult, and they cannot rule out that such systems already exist. An added risk is that the digital environment already suffers from a decline in critical thinking and in a shared understanding of events. Even today, rudimentary bots account for a substantial share of web traffic, despite performing only simple tasks and being easy to detect.
Bots built on large language models will be far more sophisticated. They will be able to adapt to specific communities, assume diverse personas, and retain a memory of past interactions. According to Kunst, such a swarm is effectively a “self-sufficient organism,” capable of learning, coordinating its actions, and exploiting human vulnerabilities.
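To make that description concrete, here is a minimal sketch of the structure such an agent might take: a persona tuned to one community plus a running memory of interactions that is fed back into each generated reply. Everything in it, including the llm_complete call, is a hypothetical stand-in for illustration, not any specific system described by the researchers.

```python
from dataclasses import dataclass, field

def llm_complete(prompt: str) -> str:
    """Placeholder for any text-generation API; returns a canned string here."""
    return "(generated reply)"

@dataclass
class BotPersona:
    name: str
    community: str                      # the forum the agent adapts to
    style_notes: str                    # tone and vocabulary mimicking that forum
    memory: list[str] = field(default_factory=list)  # running interaction log

def reply(persona: BotPersona, incoming_post: str) -> str:
    # Fold the persona and the last few remembered exchanges into the prompt,
    # so each reply stays consistent with the account's invented history.
    context = "\n".join(persona.memory[-10:])
    prompt = (f"You are {persona.name}, a long-time member of {persona.community}. "
              f"Style: {persona.style_notes}\n{context}\nReply to: {incoming_post}")
    answer = llm_complete(prompt)
    persona.memory.append(f"them: {incoming_post}")
    persona.memory.append(f"me: {answer}")
    return answer
```

The point of the sketch is that neither piece is exotic: a persona and a memory are ordinary chatbot machinery, which is precisely why such agents are hard to tell apart from human users.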
Researchers stress that the threat is already real. Last year, Reddit's administration threatened legal action against a research group that had deployed AI chatbots in an experiment to manipulate the opinions of four million users. Preliminary data indicated that the bots' responses were significantly more persuasive than human ones. Depending on the attacker's resources, an AI swarm might comprise anywhere from hundreds to millions of agents, though in smaller communities a handful of such accounts could be enough to have a noticeable impact.
Because AI agents can operate around the clock, individual users will be unable to counter them on their own. In the researchers' view, platforms will soon need to introduce more sophisticated identity verification. This measure is imperfect, however: in some countries, anonymity is essential for protecting dissidents.
Additional protective measures include real-time traffic analysis systems that can detect statistical anomalies, and professional communities that study such attacks and raise public awareness. Ignoring the threat, the researchers warn, could lead to the disruption of elections and other major events.
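As a rough illustration of what detecting statistical anomalies can mean in practice, the sketch below flags accounts whose posting rate is an extreme outlier relative to the community's median. The account names, rates, and threshold are invented for the example; real detectors combine many more signals than posting frequency.

```python
import statistics

def flag_anomalous_accounts(posts_per_hour: dict[str, float],
                            threshold: float = 3.5) -> list[str]:
    """Flag accounts whose posting rate is an outlier under a modified
    z-score (median and MAD), which stays robust even when the bots
    themselves skew the average."""
    rates = list(posts_per_hour.values())
    med = statistics.median(rates)
    mad = statistics.median(abs(r - med) for r in rates)
    if mad == 0:                      # all accounts identical; nothing to flag
        return []
    return [account for account, rate in posts_per_hour.items()
            if 0.6745 * abs(rate - med) / mad > threshold]

# Invented example: three human-like accounts and a small coordinated swarm.
observed = {"user_a": 1.2, "user_b": 0.8, "user_c": 1.5,
            "bot_001": 40.0, "bot_002": 38.5}
print(flag_anomalous_accounts(observed))   # -> ['bot_001', 'bot_002']
```

The median-based form of the z-score is chosen deliberately: a sizable swarm would inflate an ordinary mean and standard deviation enough to hide itself, whereas the median stays anchored to the behavior of the human majority.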