L'attaque par injection de prompt : un document (page web, PDF...) contient du texte caché (derrière une balise spoiler, en blanc sur fond blanc etc.). Si on utilise la fonction « fais-moi un résumé du doc » du navigateur/visualiseur, l'IA peut interpréter le prompt. Pour peu que l'agent IA ait un peu de pouvoir (le genre à qui on peut dire « vazy réserve-moi un avion pour Londres » et qui va le faire), ça peut poser problème.
« Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet »,
Brave.com, 20 août 2025
https://brave.com/blog/comet-prompt-injection/
The AI doesn’t just read, it browses and completes transactions autonomously. [...]
This kind of agentic browsing is incredibly powerful, but it also presents significant security and privacy challenges. As users grow comfortable with AI browsers and begin trusting them with sensitive data in logged in sessions—such as banking, healthcare, and other critical websites—the risks multiply. What if the model hallucinates and performs actions you didn’t request? Or worse, what if a benign-looking website or a comment left on a social media site could steal your login credentials or other sensitive data by adding invisible instructions for the AI assistant?
[...]
The attack demonstrates how easy it is to manipulate AI assistants into performing actions that were prevented by long-standing Web security techniques, and how users need new security and privacy protections in agentic browsers.
[...]
When an AI assistant follows malicious instructions from untrusted webpage content, traditional protections such as same-origin policy (SOP) or cross-origin resource sharing (CORS) are all effectively useless. The AI operates with the user’s full privileges across authenticated sessions, providing potential access to banking accounts, corporate systems, private emails, cloud storage, and other services.
Pour en savoir une peu plus sur Brave.com :
https://fr.wikipedia.org/wiki/Brave_(navigateur_web)