,
Grok-4 AI Jailbroken Within 48 Hours: Researchers Expose Security Flaws in Elon Musk’s AI Model.

Grok-4 AI Jailbroken Within 48 Hours: Researchers Expose Security Flaws in Elon Musk’s AI Model.

Just two days after its public release, Grok-4, the latest AI model from Elon Musk’s xAI, has been successfully jailbroken by researchers. The AI was reportedly bypassed using prompt injection and red-teaming techniques, allowing users to access restricted data and extract instructions for creating dangerous items—a major blow to the model’s safety and compliance claims. […]