Generative AI used by hackers on the Dark Web
Hackers are increasingly using generative artificial intelligence (AI) in their criminal activities. A Kaspersky investigation of the Dark Web reveals that the use of AI, particularly generative AI tools, has become both common and concerning.
Thousands of discussions on the illegal use of AI
Kaspersky Digital Footprint Intelligence, a service of the Russian cybersecurity company Kaspersky, analyzed the Dark Web to identify discussions about the use of AI by hackers. Researchers discovered thousands of conversations about the illegal and malicious use of AI.
In 2023, more than 3,000 such discussions were recorded, peaking in March. Although their volume declined over the course of the year, they remain active on the Dark Web.
AI at the service of cybercriminals
These discussions mainly focus on malware development and the illegal use of language models. Hackers are exploring different avenues, such as processing stolen data or analyzing files taken from infected devices.
These exchanges demonstrate hackers' growing interest in AI and their desire to exploit its technical capabilities to carry out criminal activities more effectively.
Selling Stolen ChatGPT Accounts and Jailbreaks on the Dark Web
Beyond these discussions, the Dark Web is also a thriving marketplace for stolen ChatGPT accounts. Kaspersky identified more than 3,000 ads selling stolen accounts for the paid version of ChatGPT.
Hackers also offer automated registration services that create accounts in bulk on demand. These services are distributed through channels such as Telegram.
Researchers have also observed an increase in sales of jailbroken chatbots such as WormGPT, FraudGPT, XXXGPT, WolfGPT, and EvilGPT. These malicious ChatGPT alternatives are stripped of safety restrictions, uncensored, and marketed with additional features.
A growing threat to cybersecurity
The use of AI by hackers represents a growing threat to cybersecurity. Maliciously exploited language models lower the barrier to entry for attackers and could increase the volume of cyberattacks.
It is therefore essential to strengthen cybersecurity measures to counter these new AI-based forms of attack. Experts must remain vigilant in the face of these constantly evolving threats and work to develop effective strategies against cybercriminals.