A new kind of cyberattack has been discovered called “Man in the Prompt.” It sounds like a thriller title, but it’s a real threat.
What’s happening?
A company called LayerX found that browser extensions (those little tools you install in Chrome, Firefox, etc.) can secretly access what you type into AI tools like ChatGPT, Gemini, Claude, and others.
How?
Because many AI tools live in your browser and accept typed input in the same space where browser extensions operate, a sneaky extension can read what you type—or worse, inject its own prompts into your conversations. That means it can:
- Steal sensitive data (e.g. business ideas, documents, personal details)
- Send it to hackers
- Cover its tracks by deleting chat history
Who’s affected?
Anyone using these AI tools in a browser could be vulnerable—especially companies using internal AI models with confidential data.
How do the attacks happen?
There are 3 main routes:
- A user installs a shady extension unknowingly.
- A hacker buys access to a legit extension and poisons it.
- A post-hack script adds the extension without your knowledge.
What can be stolen?
Basically anything you put into the AI: emails, files, private notes, internal legal or financial docs… yikes.
What to do:
- Review your installed extensions and remove ones you don’t need.
- Use browser protection tools that watch DOM activity (that’s how these extensions sneak in).
- Choose AI tools with better security guardrails and app-level monitoring.