OpenAI Introduces Enhanced Security Feature for Vulnerable Accounts

OpenAI on Thursday announced a new, optional tier of account protection for users worried about their ChatGPT and Codex accounts being targeted by hackers. Called Advanced Account Security, the feature imposes strict access controls that make account takeover attempts significantly harder.
Such measures aren't entirely new in account security (Google, for instance, has offered its Advanced Protection program for nearly a decade), but the rapid mainstream adoption of AI services worldwide has made comparable protections urgent. OpenAI says the rollout is part of the broader cybersecurity strategy it unveiled earlier this month.
“As people increasingly turn to AI for personal inquiries and high-stakes tasks,” the company noted in a blog post on Thursday, “a ChatGPT account can accumulate sensitive personal and professional information, serving as a hub for interconnected tools and workflows. For specific users—like journalists, elected officials, political dissidents, researchers, and those particularly concerned about security—the risks are heightened.”
Users who activate Advanced Account Security will no longer be able to rely on traditional passwords. Instead, they must set up two physical security keys or passkeys, substantially lowering the chances of successful phishing attempts. This feature also removes email and SMS routes for account recovery, mandating the use of recovery keys, backup passkeys, or physical security keys. OpenAI has partnered with Yubico to provide cost-effective YubiKey bundles for users with Advanced Account Security.
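The phishing resistance of passkeys and physical security keys comes from origin binding: every authentication response is tied to the site the browser actually connected to, so credentials stolen via a look-alike domain are useless. The sketch below illustrates that idea in simplified form; real passkeys (WebAuthn/FIDO2) use asymmetric signatures rather than the HMAC used here, and none of the names reflect OpenAI's actual implementation.

```python
import hashlib
import hmac
import os

# Illustrative only: a real authenticator signs with a private key that
# never leaves the device, but the origin-binding principle is the same.
device_secret = os.urandom(32)  # lives on the security key / passkey device

def sign_assertion(secret: bytes, origin: str, challenge: bytes) -> bytes:
    """Bind the server's login challenge to the origin the browser saw."""
    return hmac.new(secret, origin.encode() + challenge, hashlib.sha256).digest()

challenge = os.urandom(16)  # hypothetically issued by chatgpt.com at sign-in

# The browser, not the user, supplies the origin, so a phishing page
# on a look-alike domain produces an assertion the real server rejects.
legit = sign_assertion(device_secret, "https://chatgpt.com", challenge)
phish = sign_assertion(device_secret, "https://chatgpt-login.example", challenge)

assert hmac.compare_digest(
    legit, sign_assertion(device_secret, "https://chatgpt.com", challenge)
)
assert not hmac.compare_digest(phish, legit)
```

This is also why the feature can drop email and SMS recovery: those channels carry codes a victim can be tricked into relaying, while an origin-bound assertion cannot be replayed elsewhere.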
Importantly, once a user enables Advanced Account Security, they will not be able to request assistance from OpenAI’s support team for account recovery, as the support team will no longer have access to any recovery methods. This prevents attackers from attempting to infiltrate accounts through social engineering attacks targeting support portals.
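One common way to make support-driven recovery impossible, even for insiders, is to store only a one-way hash of the recovery key, so no employee can ever read it back out. The sketch below shows that pattern under that assumption; the function names and key format are hypothetical and do not describe OpenAI's actual system.

```python
import hashlib
import hmac
import secrets

def issue_recovery_key() -> tuple[str, bytes]:
    """Generate a recovery key; the server keeps only its hash."""
    key = secrets.token_urlsafe(24)                  # shown to the user once
    digest = hashlib.sha256(key.encode()).digest()   # all the server stores
    return key, digest

def redeem(candidate: str, stored_digest: bytes) -> bool:
    """Check a presented key against the stored hash in constant time."""
    return hmac.compare_digest(
        hashlib.sha256(candidate.encode()).digest(), stored_digest
    )

user_copy, server_copy = issue_recovery_key()
assert redeem(user_copy, server_copy)        # the real key still works
assert not redeem("guessed-key", server_copy)  # support has nothing to hand over
```

Because the stored digest cannot be reversed, a social-engineered support agent has nothing useful to disclose, which is the property the article describes.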
Advanced Account Security also enforces shorter session lifetimes, so users must sign back in more often on each device. It generates an alert each time someone signs into the secured account and directs users to a dashboard for monitoring active ChatGPT and Codex sessions. And while OpenAI lets any user opt out of having their ChatGPT conversations used for model training, that exclusion is applied automatically for Advanced Account Security users.
Participants in OpenAI’s Trusted Access for Cyber program, which provides cybersecurity professionals, researchers, and others with early access to new models, will be required to enable Advanced Account Security starting June 1 or provide an alternative certification confirming they implement phishing-resistant authentication via an enterprise single sign-on system.

