New Study Shows 78% of Multimodal AI Safety Guards Fail Against Contextual Image Attacks
A new research paper shows how subtle visual context can bypass the safety mechanisms of leading multimodal AI systems. The Contextual Image Attack method demonstrates that images, not just text, can be used to exploit fundamental vulnerabilities in today's most advanced models.