A malicious AI tool that claims to offer advanced capabilities without the need for jailbreaks was discovered by SlashNext researchers, according to a blog post published Monday.

Xanthorox AI, which was first spotted on dark web cybercrime forums in Q1 2025, purports to offer five models specifically designed to aid malicious activity. These models include Xanthoroxv4, Xanthorox Coder, Xanthorox Vision and Xanthorox Reasoner Advanced, as well as a real-time voice mode and search capabilities. The Xanthorox developer claims the models are fully custom-built and do not rely on other models such as ChatGPT, Llama or Claude, according to SlashNext.

“It’s easy to think of the cybercriminal ecosystem as one big amorphous blob of badness, when in reality it operates much like any service and platform industry — with different groups focusing on and specializing in their unique contribution to the overall kill chain, and ‘startups’ like this one popping up to create a competitive advantage for criminals,” said Bugcrowd Founder Casey Ellis, in comments to SC Media.

Screenshots published by SlashNext show the Xanthorox Coder model complying with a user’s request to generate ransomware that can bypass Windows Defender. Screenshots also show Xanthorox Vision recognizing an image of a diagram, Xanthorox Reasoner Advanced “thinking,” similar to other reasoning models, and Xanthoroxv4 displaying a real-time voice module, web search button and code interpreter option.

Xanthorox purportedly uses more than 50 search engines to gather information when online and can also be operated offline, unlike other malicious large language model (LLM) tools that rely on public cloud infrastructure or abuse of legitimate APIs. The tool is also said to be able to process files in formats including .c, .txt and .pdf.

SlashNext Field CTO Stephen Kowski told SC Media the SlashNext team was able to obtain the screenshots and observe some of Xanthorox’s capabilities through videos posted by the developer.
If the threat actor’s claims are true, Xanthorox is less susceptible to detection and takedown than similar malicious tools that act as wrappers for jailbroken versions of legitimate tools like ChatGPT.

“Even if Xanthorox doesn’t meet every expectation, the technology to build something similar is available, and we’ll likely see systems like it emerge soon,” wrote SlashNext Security Researcher Daniel Kelley.

Previous research has shown that the use of LLMs by threat actors has significantly driven up the volume of phishing attacks. One previous SlashNext report found a 1,265% increase in such attacks between Q4 2022 and Q1 2023, corresponding with the release of ChatGPT. Another found that phishing attacks grew 856% and business email compromise (BEC) attacks grew 27% between December 2023 and May 2024, which SlashNext also attributed to GenAI. VIPRE Security Group reported in August 2024 that an estimated 40% of BEC lures were AI-generated.

While suspected AI-assisted malware code has been observed in several cases, reports by OpenAI and Google on threat actors’ use of their respective ChatGPT and Gemini LLM platforms have noted that attackers have not yet used AI to develop fundamentally new malware capabilities.

Cyber defenders are urged to counter the surge in AI-generated phishing emails facilitated by tools such as Xanthorox with robust email security technology, especially solutions that use AI to detect AI-generated phishing lures and other malicious content.