
Researchers have concluded that cybercriminals have largely failed to integrate artificial intelligence tools into their operations, based on an analysis of over 100 million messages from illicit forums and cybercrime-related communities. The study draws its data from the CrimeBB database, which compiles posts from both dark web and private platforms.
The research, conducted by a team from the University of Edinburgh, the University of Cambridge, and the University of Strathclyde, examined discussions dating back to November 2022, coinciding with the widespread emergence of ChatGPT and other large language models. The objective was to document precisely how members of criminal communities attempt to leverage AI and whether this usage is altering the structure of their activities.
The primary finding is one of unexpected inertia: while AI is indeed being deployed, it is not broadly lowering the barrier to entry into cybercrime. Instead, the most notable applications involve tasks that were already automated and technically sophisticated, such as obscuring patterns typically detected by cybersecurity defenses and managing the social media botnets used for scams and coordinated-action campaigns.
The researchers separately note that AI code-generation assistants primarily benefit already-experienced participants. Using such systems effectively still requires a deep understanding of attack infrastructure and the logic of defensive mechanisms, meaning a “democratization of cybercrime” has not yet occurred.
The authors also observe that illicit online communities were already heavily industrialized: many attacks rely on pre-built toolkits, automated services, and purchased templates. Against this backdrop, the introduction of AI looks more like an evolution of established practices than a radical overhaul.
However, the investigation reveals a mixed risk profile. On one hand, the built-in limitations and safeguards of major chatbots already noticeably mitigate some potential harm. On the other, there are documented attempts within closed communities to circumvent these restrictions and manipulate model outputs.
A separate red flag concerns not the criminals themselves but the legitimate sector. According to the researchers, the main risk area is shifting toward poorly secured “agentic” systems: models capable of independently executing actions and making decisions to fulfill a task. Also highlighted are “vibe-coded” products, software written partly with AI and shipped without adequate security vetting. The sketch below illustrates the kind of unguarded agent wiring this concern points to.
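To make the risk concrete, here is a minimal, hypothetical Python sketch. No such code appears in the paper, and every name in it (run_tool, run_tool_safer, the allow-list) is illustrative; it simply contrasts an agent tool that executes model output directly with a minimally guarded variant.

# Hypothetical sketch of the "poorly secured agentic system" pattern the
# researchers warn about: an agent tool that pipes model output straight
# into a shell. All names here are illustrative, not from the paper.
import subprocess

def run_tool(model_output: str) -> str:
    """Anti-pattern: executes whatever command the model proposes,
    with no allow-list, sandbox, or human confirmation."""
    # shell=True plus untrusted input means a prompt-injected model
    # response becomes arbitrary command execution on the host.
    result = subprocess.run(model_output, shell=True,
                            capture_output=True, text=True)
    return result.stdout

def run_tool_safer(command: list[str]) -> str:
    """Minimal mitigation: an explicit allow-list and no shell."""
    ALLOWED = {"ls", "cat", "grep"}  # illustrative allow-list only
    if not command or command[0] not in ALLOWED:
        raise PermissionError(f"command not permitted: {command!r}")
    result = subprocess.run(command, capture_output=True, text=True)
    return result.stdout

The first function is exactly the exposure the researchers describe: a manipulated model response becomes arbitrary command execution. The second shows that even a crude allow-list and avoiding the shell substantially narrow that exposure.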
The paper also notes a sociological effect: some members of criminal forums worry that AI could leave them jobless in the legitimate IT sector, which could push some of them toward illegal activity.
The study’s authors plan to present their findings at the Workshop on the Economics of Information Security in Berkeley this June. The overarching conclusion is cautiously worded: at present, the chief threat comes not from “AI criminals” but from the widespread deployment of AI systems that lack adequate protection.