Using threat modeling and prompt injection to audit Comet

Trail of Bits··

Before launching their Comet browser, Perplexity hired us to test the security of their AI-powered browsing features. Using adversarial testing guided by our TRAIL threat model, we demonstrated how four prompt injection techniques could extract users’ private information from Gmail by exploiting the browser’s AI assistant. The vulnerabilities we found reflect how AI agents behave when external content isn’t treated as untrusted input. We’ve distilled our findings into five recommendations that a...

Read full article →

Related Articles

Google Chrome silently installs a 4 GB AI model on your device without consent
john-doe · Hacker News · 3d ago
DNSSEC disruption affecting .de domains – Resolved
warpspin · Hacker News · 3d ago
US healthcare marketplaces shared citizenship and race data with ad tech giants
ZeidJ · Hacker News · 4d ago
Security through obscurity is not bad
mobeigi · Hacker News · 5d ago
The text mode lie: why modern TUIs are a nightmare for accessibility
SpyCoder77 · Hacker News · 5d ago