
Researchers bypassed the security of Google’s latest AI model, Gemini 3 Pro, in just five minutes, getting it to generate hazardous instructions. Specialists from the startup Aim Intelligence uncovered the vulnerabilities during a stress test, as reported by Androidauthority.com.
The breach began with a prompt asking for instructions on creating a smallpox virus. The model responded with detailed guidance the team described as “nearly actionable.” The researchers then asked Gemini 3 to prepare a satirical presentation about the weaknesses in its own security system, and the model complied, titling the deck “Excused Stupid Gemini 3.”
“Following this, the team used Gemini’s coding capabilities to build a website containing instructions for manufacturing both sarin gas and improvised explosives,” the report states.
In both cases, the model not only sidestepped its internal prohibitions but also disregarded its own stated safety protocols. The testers argue that the core problem is the pace of model development, which is outstripping the rollout of safeguards. Aim Intelligence noted that Gemini 3 can be steered with circumvention tactics and disguised prompts, diminishing the effectiveness of the precautions in place.