A question about agent identity and safety boundaries in AI-powered browsing

Sidddd

Hi everyone,

I’ve been following Mozilla’s recent discussions around adding AI-driven features to Firefox, especially the emphasis on user control, privacy, and safety as browsers become more agent-like.

I’ve been working on autonomous agent workflows outside the browser, and one question keeps coming up that feels very relevant to AI-powered browsing:

Where should the identity and execution boundary for an AI agent live?

In many agent systems today, the risk comes not only from prompt injection or model behavior, but from agents ending up operating directly with a user's real credentials or accounts. Once that happens, the blast radius is hard to reason about, and workarounds such as shared inboxes or throwaway accounts tend to appear.

One observation we’ve made is that if agents operate with their own delegated, scoped identities and assets, instead of touching real user credentials, a lot of these risks become easier to contain by design. Even if an agent is manipulated, its actions are limited to what that delegated identity is allowed to do.
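To make the idea concrete, here is a minimal sketch in Python of what "delegated, scoped identity" could mean. Everything in it (the `DelegatedIdentity` class, the scope strings, the `perform` gate) is hypothetical and just for illustration, not any real browser or API:

```python
# Hypothetical sketch: an agent acts through a delegated identity whose
# scopes are fixed at creation time, instead of with the user's own
# credentials. Even a fully manipulated agent can only invoke actions
# the delegation explicitly grants.

from dataclasses import dataclass


@dataclass(frozen=True)
class DelegatedIdentity:
    """A scoped stand-in for the user; it never holds real credentials."""
    agent_name: str
    scopes: frozenset  # e.g. {"calendar:read", "mail:draft"}


class ScopeError(PermissionError):
    pass


def perform(identity: DelegatedIdentity, action: str) -> str:
    """Gate every action on the delegation's scopes, not the user's full powers."""
    if action not in identity.scopes:
        raise ScopeError(f"{identity.agent_name} lacks scope {action!r}")
    return f"executed {action} as {identity.agent_name}"


# Usage: a prompt-injected agent requesting "mail:send" is contained by design.
agent = DelegatedIdentity("trip-planner", frozenset({"calendar:read", "mail:draft"}))
print(perform(agent, "calendar:read"))   # allowed by the delegation
try:
    perform(agent, "mail:send")          # denied: scope was never granted
except ScopeError as e:
    print("blocked:", e)
```

The point of the sketch is that containment follows from construction: the check lives at the identity boundary, so it holds regardless of what the model was tricked into asking for.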

I’m curious how folks thinking about AI features in Firefox view this boundary:

  • Should agents in the browser ever act directly as the user?

  • Or should they operate with separate, constrained identities that represent delegated intent?

  • How does this intersect with prompt injection, account safety, and user trust?

I'm not pitching anything here; I'm just genuinely interested in how the Firefox community and Mozilla engineers think about this problem space as AI capabilities in the browser deepen.

Looking forward to learning from the discussion.

Thanks,
Sid
