On the same day last week, two announcements landed that together mark a shift most organisations have not yet priced in.
- Visa published its When AI Becomes the Customer report, introducing what it calls B2AI: business-to-AI commerce. 53% of surveyed US businesses said they would allow AI agents to negotiate directly with other AI agents, and 71% said they are willing to optimise products and offers specifically for AI agents. A global payments company is no longer treating agentic commerce as a thought experiment; it is positioning trust, override controls and machine-to-machine negotiation as infrastructure.
- Microsoft released its Agent Governance Toolkit, an open-source project built to secure autonomous agents at runtime rather than at build time. The toolkit addresses the ten risks in the OWASP Agentic AI Top 10 and includes identity controls, policy enforcement, circuit breakers and a kill switch for emergency termination. The market signal is clear: the question has moved from "how do we build agents" to "how do we safely govern fleets of them once they act inside live systems."
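To make the runtime framing concrete, here is a minimal sketch of what policy enforcement, a circuit breaker and a kill switch wrapped around an agent's actions could look like. This is an illustration of the pattern, not the toolkit's actual API; every name in it (GovernancePolicy, CircuitBreaker, GovernedAgent) is hypothetical.

```python
import time
from dataclasses import dataclass, field


@dataclass
class GovernancePolicy:
    """Hypothetical runtime policy: hard limits on what an agent may do."""
    max_refund: float = 100.0  # per-action spend ceiling
    allowed_actions: set = field(default_factory=lambda: {"quote", "refund"})


class CircuitBreaker:
    """Trips after `threshold` policy violations within `window` seconds."""

    def __init__(self, threshold: int = 3, window: float = 60.0):
        self.threshold, self.window = threshold, window
        self.failures: list[float] = []
        self.open = False

    def record_failure(self) -> None:
        now = time.monotonic()
        # Keep only violations inside the rolling window, then add this one.
        self.failures = [t for t in self.failures if now - t < self.window]
        self.failures.append(now)
        if len(self.failures) >= self.threshold:
            self.open = True  # halt the agent until a human resets it


class GovernedAgent:
    """Wraps every agent action in policy checks, a breaker and a kill switch."""

    def __init__(self, policy: GovernancePolicy):
        self.policy = policy
        self.breaker = CircuitBreaker()
        self.killed = False

    def kill(self) -> None:
        self.killed = True  # emergency termination: refuse all further actions

    def act(self, action: str, amount: float = 0.0) -> str:
        if self.killed or self.breaker.open:
            return "BLOCKED: agent halted"
        if action not in self.policy.allowed_actions or amount > self.policy.max_refund:
            self.breaker.record_failure()
            return f"DENIED: {action} ({amount}) violates policy"
        return f"EXECUTED: {action} ({amount})"


agent = GovernedAgent(GovernancePolicy())
print(agent.act("refund", 50.0))   # EXECUTED: within the policy ceiling
print(agent.act("refund", 500.0))  # DENIED: over cap, counts toward the breaker
```

The design point is that the controls sit outside the agent: the breaker and the kill switch do not ask the agent's permission, which is exactly the shift from build-time to runtime governance the announcement describes.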
For organisations, the translation is uncomfortable but necessary. In your organisation today, who is accountable for what an agent does on your behalf?
If an agent negotiates a poor contract at 2am on a Sunday, authorises a refund it should not have, or misrepresents your brand in a customer interaction, who owns that outcome?