“In Antigravity,” Mindgard argues, “‘trust’ is effectively the entry point to the product rather than a conferral of privileges.” The problem, it pointed out, is that a compromised workspace becomes a long-term backdoor into every new session. “Even after a complete uninstall and re-install of Antigravity,” says Mindgard, “the backdoor remains in effect. Because Antigravity’s core intended design requires trusted workspace access, the vulnerability translates into cross-workspace risk, meaning one tainted workspace can impact all subsequent usage of Antigravity regardless of trust settings.”
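The persistence Mindgard describes is easiest to see in outline: if an agent’s global state lives in the user’s home directory rather than the application folder, uninstalling and reinstalling the application never touches it. The Python sketch below is illustrative only; the article does not disclose where Antigravity actually stores this state, so the paths here are hypothetical stand-ins for user-level agent configuration.

```python
# Illustrative sketch only: the persistence locations below are hypothetical
# stand-ins, since Antigravity's actual storage paths are not public here.
from pathlib import Path

# Hypothetical user-level locations where an agent's global rules or memory
# might live; a reinstall replaces the application, not these files.
CANDIDATE_PERSISTENCE_PATHS = [
    Path.home() / ".antigravity",               # hypothetical global config dir
    Path.home() / ".config" / "antigravity",    # hypothetical XDG-style location
]

def find_surviving_agent_state() -> list[Path]:
    """Return user-level agent state that would outlive an app reinstall."""
    hits: list[Path] = []
    for root in CANDIDATE_PERSISTENCE_PATHS:
        if root.exists():
            hits.extend(p for p in root.rglob("*") if p.is_file())
    return hits

if __name__ == "__main__":
    for path in find_surviving_agent_state():
        print(f"survives reinstall: {path}")
```

Anything such a scan turns up is, by construction, outside the application’s install footprint, which is why a “complete uninstall and re-install” leaves it intact.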
For anyone responsible for AI cybersecurity, says Mindgard, this highlights the need to treat AI development environments as sensitive infrastructure, and to closely control what content, files, and configurations are allowed into them.
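That advice can be made concrete with a simple gate: scan a workspace for unreviewed content before any AI tooling is allowed to trust it. The sketch below is a minimal illustration of the idea; the allow-listed extensions and suspect filenames are assumptions for the example, not Antigravity specifics.

```python
# Minimal sketch of "closely control what enters the environment": gate a
# workspace behind an allow-list of reviewed file types before an AI
# development tool is pointed at it. All names here are illustrative.
from pathlib import Path

ALLOWED_SUFFIXES = {".py", ".md", ".toml", ".json"}   # reviewed content only
SUSPECT_NAMES = {".rules", "agent.md"}                # hypothetical agent-instruction files

def vet_workspace(root: str) -> list[Path]:
    """Return files that should be reviewed before the workspace is trusted."""
    flagged: list[Path] = []
    for p in Path(root).rglob("*"):
        if not p.is_file():
            continue
        if p.name.lower() in SUSPECT_NAMES or p.suffix.lower() not in ALLOWED_SUFFIXES:
            flagged.append(p)
    return flagged

if __name__ == "__main__":
    import sys
    for path in vet_workspace(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(f"review before trusting: {path}")
```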
Process ‘perplexing’
In his email, Portnoy acknowledged that Google is now taking some action. “Google is moving through their established process, although it was a bit perplexing on the stop-and-start nature. First [the reported vulnerability] was flagged as not an issue. Then it was re-opened. Then the Known Issues page was altered in stealth to be more all-encompassing. It’s good that the vulnerability will be reviewed by their security team to ascertain its severity, although in the meantime we would recommend all Antigravity users to seriously consider the vulnerability found and means for mitigation.”