The Competition to Prevent AI Agents from Misusing Your Credit Cards Is Heating Up

With threats like malware, online impersonation, and account takeovers, digital security challenges abound. The emergence of agentic AI introduces another layer of complexity: as more activity is carried out by agents acting on a human's behalf, new opportunities for misuse and error arise.
On Tuesday, the FIDO Alliance, with initial backing from Google and Mastercard, announced its plan to establish two working groups aimed at creating industry standards for authenticating and safeguarding payments and transactions conducted by AI agents.
The initiative aims to create a protective framework that can be universally adopted, so users can authorize agent activity through mechanisms that resist phishing and account takeover, and malicious actors can't feed agents inappropriate instructions. The forthcoming standards will incorporate cryptographic tools that let digital services verify an agent is carrying out the authenticated user's actual directives, along with privacy-preserving systems that let users, merchants, and service providers confirm transactions initiated by agents. In essence, the mission is to guard against agent hijacking and other abuse while providing the transparency and accountability needed to resolve disputes.
“Agents are becoming increasingly common and are entering mainstream usage, yet existing models may not be suited for this new paradigm—they weren’t designed to account for actions taken on a user’s behalf,” Andrew Shikiar, CEO of the FIDO Alliance, shared with WIRED.
He added, “Reflecting on our past efforts addressing the significant issues surrounding passwords, which originated decades ago, it’s clear that the security infrastructure of what evolved into our connected economy was inadequate. We find ourselves at a similar crossroads with AI agents, agentic interactions, and agentic commerce, where we have a chance to establish foundational principles for more reliable interactions, avoiding previous pitfalls.”
Creating widely applicable technical standards that enable interoperability is a labor-intensive endeavor that often spans years. However, given the swift progress and adoption of agentic AI, representatives from the FIDO Alliance, Google, and Mastercard all insisted on accelerating this process. To support this, both firms are contributing open-source tools to the initiative. Google’s Agent Payments Protocol (AP2) provides a method for cryptographically confirming that a user truly intended for a specific agent-led transaction to occur. Mastercard’s Verifiable Intent framework, co-developed with Google for compatibility with AP2, offers a secure avenue for user authorization and control over agent activities.
“Our goal is to deliver cryptographic proof that a transaction was authorized by the user while maintaining privacy through selective disclosure,” explains Stavan Parikh, Google’s vice president and general manager of payments. “Different stakeholders within the ecosystem—platforms, merchants, payment processors, networks—will only access information pertinent to them, ensuring that the correct action is executed promptly. The payments landscape is a multifaceted challenge.”
Parikh illustrates with the scenario of a shopper wanting to buy sneakers that are out of stock. The buyer can instruct an AI agent to automatically purchase the sneakers if they return to stock priced at $100 or below. The aim is to ensure authentication and transparency in this transaction, so that when the ideal sneaker drop occurs, the consumer secures the desired shoes at their intended price.
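The sneaker scenario can be sketched in code. The structure below is purely illustrative: the `mandate` field layout is hypothetical, not the actual AP2 or Verifiable Intent wire format, and an HMAC with a shared secret stands in for the public-key signatures a real deployment would use, just to keep the example self-contained.

```python
import hashlib
import hmac
import json
import time

# Stand-in for a secret held on the user's device (hypothetical; real
# protocols would use an asymmetric key pair, not a shared secret).
USER_SECRET = b"user-device-key"


def sign_mandate(item: str, max_price_cents: int, expires_at: float) -> dict:
    """User side: authorize an agent to buy `item` at or below a price cap."""
    payload = {"item": item, "max_price_cents": max_price_cents,
               "expires_at": expires_at}
    msg = json.dumps(payload, sort_keys=True).encode()  # canonical form
    payload["sig"] = hmac.new(USER_SECRET, msg, hashlib.sha256).hexdigest()
    return payload


def verify_purchase(mandate: dict, item: str, price_cents: int,
                    now: float) -> bool:
    """Merchant/processor side: accept only what the user authorized."""
    unsigned = {k: v for k, v in mandate.items() if k != "sig"}
    msg = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(USER_SECRET, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mandate["sig"], expected):
        return False  # mandate was forged or tampered with
    return (mandate["item"] == item
            and price_cents <= mandate["max_price_cents"]
            and now < mandate["expires_at"])


# The shopper authorizes: sneakers, $100 cap, valid for 24 hours.
mandate = sign_mandate("sneakers", 10_000, time.time() + 86_400)
print(verify_purchase(mandate, "sneakers", 9_500, time.time()))   # True
print(verify_purchase(mandate, "sneakers", 12_000, time.time()))  # False
```

The point of the sketch is that the merchant never has to trust the agent's word: the signed mandate binds the user's intent (item, price ceiling, expiry), so an agent that has been hijacked or mis-instructed can't complete an out-of-policy purchase.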
According to Parikh, establishing these essential protections is crucial for fostering confidence in agentic AI and encouraging the adoption of AI-driven tools. Regardless of whether users wish to embrace AI capabilities, the reality of their growing presence necessitates basic safeguards.
