Researchers at Guardio Labs have exposed a new cybersecurity scam, dubbed “Grokking,” in which attackers exploit the Grok AI assistant on the platform X to distribute malicious links. The tactic lets cybercriminals bypass X’s security measures and push harmful content to millions of users.

The scam begins with deceptive video advertisements that contain no clickable links within the main post. This design choice helps the ads evade detection by X’s security filters. Instead, attackers conceal the malicious links in a small “From:” metadata field, which appears to be overlooked by the platform’s scanning systems.

In a clever twist, the attackers then engage Grok by posing a simple question, such as “What is the link to this video?” in a comment on the ad. Grok retrieves the hidden link from the metadata and responds with a fully clickable reply. Because Grok operates as a trusted system-level account on X, its endorsement significantly enhances the credibility and visibility of the malicious link.

Prominent cybersecurity experts, including Ben Hutchison and Andrew Bolster, have noted that this manipulation transforms Grok into a “megaphone” for harmful content. Instead of merely exploiting a technical flaw, attackers leverage the inherent trust users place in AI systems. As a result, links that would typically be blocked are instead promoted, potentially leading users to dangerous websites that employ fake CAPTCHA tests or facilitate the download of information-stealing malware.

Experts indicate that some of the affected ads have garnered millions of impressions, with certain campaigns surpassing 5 million views. This alarming trend demonstrates that while AI-powered services can provide valuable assistance, they can also be co-opted by cybercriminals for malicious purposes.

Expert Insights on the Grokking Scam

In light of these findings, cybersecurity professionals have shared their insights on the implications of this scam. Chad Cragle, Chief Information Security Officer at Deepwatch, elucidated the scam’s mechanics, stating, “Attackers hide links in the ad’s metadata and then ask Grok to ‘read it out loud.’” He emphasized that security teams must enhance scanning capabilities to include hidden fields, and organizations should educate users about the potential for deception, even from verified assistants.
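Cragle’s recommendation to scan hidden fields can be illustrated with a short sketch. The snippet below is a hypothetical example, not X’s actual scanner or ad schema: it walks every string field of an ad object, including nested metadata such as a “from” field, and reports any URLs it finds, catching links that a body-only check would miss.

```python
import re

# Simple URL pattern; a production scanner would use a stricter parser.
URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def find_urls(ad: dict) -> dict:
    """Scan every string field of an ad object, including nested
    metadata, and return the URLs found per field path."""
    hits = {}

    def walk(obj, path):
        if isinstance(obj, dict):
            for key, value in obj.items():
                walk(value, f"{path}.{key}" if path else key)
        elif isinstance(obj, list):
            for i, value in enumerate(obj):
                walk(value, f"{path}[{i}]")
        elif isinstance(obj, str):
            found = URL_RE.findall(obj)
            if found:
                hits[path] = found

    walk(ad, "")
    return hits

# Hypothetical ad: no link in the visible body, one hidden in metadata.
ad = {
    "body": "Watch this amazing video!",
    "metadata": {"from": "https://malicious.example/landing"},
}
print(find_urls(ad))
# → {'metadata.from': ['https://malicious.example/landing']}
```

The point of the recursive walk is that it makes no assumptions about which fields are “visible”: any string anywhere in the ad object is checked, which is exactly the property a body-only filter lacks.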

Meanwhile, Andrew Bolster, Senior R&D Manager at Black Duck, categorized Grok as a high-risk AI system that embodies the “Lethal Trifecta.” He pointed out that unlike traditional software vulnerabilities, this type of manipulation is almost a “feature” in the AI landscape, as the model is designed to respond irrespective of the content’s intent.

As cyber threats continue to evolve, the Grokking scam serves as a stark reminder of the vulnerabilities that can emerge at the intersection of AI technology and social media. While researchers and cybersecurity experts work to address these challenges, users are urged to remain vigilant and cautious about the links they encounter online.