A new wave of black-market chatbots is rapidly gaining traction in the AI sector, with malicious large language models (LLMs) generating significant profits for their creators.
These underground LLMs, which operate outside the boundaries of mainstream AI companies such as OpenAI and Perplexity, have found a lucrative market in an environment with little to no regulatory oversight.
The rise of unregulated LLMs
A recent American study has uncovered the extent of the AI black market, where the operators of unregistered LLMs earn substantial sums by covertly offering their services. Unlike registered, public-facing AI platforms that adhere to strict safety standards, these underground LLMs operate without such constraints, allowing them to evade security measures, including antivirus software. The study focused on the Malla ecosystem, a network notorious for exploiting legitimate LLMs, and analyzed 212 black-market LLMs listed on underground marketplaces between April and October 2023.
Zilong Lin, a researcher at Indiana University and one of the study's authors, said that most of these illegal services are primarily profit-driven. The research found that some platforms, including DarkGPT and EscapeGPT, generated as much as $28,000 in just two months from subscriptions and purchases by users seeking to sidestep the restrictions of regulated AI systems.
The dangers of malicious LLMs
These black-market LLMs are not just a financial concern; they also pose a significant cybersecurity threat. A separate study has shown that these illicit models can be used for various malicious activities, such as writing phishing emails. DarkGPT and EscapeGPT, for instance, were found to generate accurate code nearly 75% of the time, and antivirus software was unable to detect that code. Another model, WolfGPT, was identified as a potent tool for crafting phishing emails that evade spam filters.
Andrew Hundt, an expert in computing innovation at Carnegie Mellon University, emphasized the need for robust legal frameworks to prevent the replication of LLMs by individuals with malicious intent. Hundt called for a legal requirement that companies developing LLMs implement strong safeguards to mitigate the risks posed by these underground models.
Understanding and mitigating the threat
Professor Xiaofeng Wang from Indiana University, who also contributed to the study, stressed the importance of understanding how these black-market LLMs operate in order to develop targeted solutions. Wang noted that while the manipulation of LLMs is inevitable, the focus should be on building robust guardrails that minimize the harm from cyberattacks facilitated by these models.
According to Wang, it is crucial to study these threats now to prevent significant damage in the future. He emphasized that every technological advancement, including LLMs, has both positive and negative aspects, and that the industry must take proactive steps to address the darker side of AI.
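To make Wang's point about guardrails concrete, the sketch below shows one minimal form such a safeguard can take: a wrapper that screens both the prompt sent to a model and the text it returns. This is a hypothetical illustration, not a production defense; the guarded_generate function, the generate callable, and the example patterns are invented for this sketch, and real guardrails rely on trained classifiers and moderation services rather than keyword matching.

```python
import re

# Hypothetical example patterns for this sketch only; a production
# guardrail would use trained classifiers, not a handful of regexes.
DISALLOWED_PATTERNS = [
    re.compile(r"\bphishing\b", re.IGNORECASE),
    re.compile(r"\bkeylogger\b", re.IGNORECASE),
    re.compile(r"disable (the )?antivirus", re.IGNORECASE),
]

def guarded_generate(prompt: str, generate) -> str:
    """Screen a prompt before, and a response after, a text-generation call.

    `generate` is a stand-in for any text-generation function; it is not
    a real library API.
    """
    # Input guardrail: refuse prompts that match a disallowed pattern.
    if any(p.search(prompt) for p in DISALLOWED_PATTERNS):
        return "Request refused: the prompt appears to seek disallowed content."

    response = generate(prompt)

    # Output guardrail: withhold responses that slip past the input check.
    if any(p.search(response) for p in DISALLOWED_PATTERNS):
        return "Response withheld: the output matched a disallowed pattern."
    return response

if __name__ == "__main__":
    # Stand-in for a real model, used here only to exercise the wrapper.
    echo_model = lambda prompt: f"Model output for: {prompt}"
    print(guarded_generate("Write a phishing email", echo_model))
    print(guarded_generate("Summarize this article", echo_model))
```

The double check matters because the underground services in the Malla study succeed precisely where such input and output screening is absent or has been stripped away.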
Cybersecurity experts are increasingly urging the AI industry to pause and reflect on the growing risks associated with the technology. A Forbes article published last year echoed these concerns, advocating for responsible AI innovation and emphasizing the role of regulation in ensuring the ethical development of LLMs.
The article called for AI developers and providers to slow the rush to create increasingly complex models and instead collaborate with regulators to adopt frameworks that promote ethical and responsible AI use. The emergence of these black-market chatbots highlights the urgent need for stronger regulations and industry collaboration to safeguard against the misuse of AI technology.