Just two days after its public release, Grok-4, the latest AI model from Elon Musk’s xAI, has been jailbroken by researchers. The model’s safeguards were reportedly bypassed using prompt-injection and red-teaming techniques, allowing users to access restricted data and extract instructions for creating dangerous items, a major blow to the model’s safety and compliance claims. […]