The security researchers are genuinely alarmed. SquareX found AI agents are now the "weakest link"—more vulnerable than humans because they lack our intuitive suspicion of weird URLs or excessive permissions. They just... trust and execute.
Think about what that means: We've spent decades training employees to spot phishing emails. Now we're deploying AI that falls for them without hesitation. It's like giving car keys to someone who's never heard of traffic lights.
The competitive dynamics are telling. OpenAI went broad with Operator. Microsoft pushed enterprise integration. Anthropic? They're blocking financial sites entirely and limiting to 1,000 trusted users. That's not weakness—that's intellectual honesty.
Here's the deeper question: When 79% of orgs already use browser AI (per PwC), and Gartner predicts 15% of workflows will be AI-managed by 2028... are we automating faster than we're securing?
The human element here matters. These systems promise to democratize automation—no more expensive RPA or custom integrations. Just AI that works with any interface. That's genuinely transformative for smaller organizations without big IT budgets.
But transformation without trust is just chaos. Anthropic's approach—acknowledge the risks, test thoroughly, deploy carefully—might lose them market share. It might also be the only responsible path forward when the stakes are this high.
Amazing thread. Racing AI agents into browsers feels like déjà vu from the early cloud rush: speed first, controls bolted on later. But speed without trust isn't velocity, it's just chaos. Anthropic slowing down actually gives me a little hope. History may not repeat, but it sure rhymes.