A New AI Documentary Challenges CEOs—Yet Lacks Tough Critique

Securing an interview with Sam Altman is no small feat—just ask filmmaker Adam Bhala Lough, who recently released the documentary Deepfaking Sam Altman.
Lough initially aimed to create a feature that would delve into the promises and dangers of AI, fronted by a conversation with the OpenAI CEO. However, after months of unanswered requests, he decided to use a chatbot designed to imitate Altman’s speech and facial expressions through a digital avatar.
The real Altman did, nevertheless, agree to participate in the upcoming feature The AI Doc: Or How I Became an Apocaloptimist, scheduled for a March 27 release. Alongside him are Dario Amodei, CEO of Anthropic, and Demis Hassabis, cofounder and CEO of Google DeepMind. (The filmmakers did seek interviews with Meta’s Mark Zuckerberg and X’s Elon Musk, but neither joined the conversation.)
This marks a significant level of access for codirector and documentary protagonist Daniel Roher, whose 2022 film Navalny, about Russian opposition leader Alexei Navalny, received an Academy Award. The catch is that once in front of the camera, Altman and others offer little new insight, often deflecting questions regarding their obligations to society. For instance, when Roher asks Altman why anyone should trust him with steering the rapid growth of AI, Altman simply responds, “You shouldn’t.” The questioning ends there.
The AI Doc is framed by Roher’s anxiety about the impending arrival of his first child with his wife, filmmaker Caroline Lindy. He contemplates the world his son will inherit, questioning whether the rise of AI will rob him of the experiences fundamental to becoming a self-sufficient adult. In his initial interviews, Roher’s deepest fears appear validated. Tristan Harris, cofounder of the nonprofit Center for Humane Technology, delivers a harsh reality check: “I know people who work on AI risk who don’t expect their children to make it to high school,” alluding to a future where technology may dismantle traditional education.
Despite a growing sense of unease, Roher and codirector Charlie Tyrell provide a solid overview of AI and its pressing questions, aided by Roher’s commitment to using clear language over jargon. Visually, the film is delightfully human, featuring colorful drawings and paintings by Roher, while whimsical stop-motion animations suggest the influence of producer Daniel Kwan, the Oscar-winning co-director of Everything Everywhere All at Once. This vibrant creativity amidst ominous themes offers some of the hope Roher seeks.
Yet later interviews with Silicon Valley techno-optimists touting AI’s potential to cure diseases and combat climate change—alongside the CEOs balancing hype with cautionary tones—pass without thorough scrutiny of their grand claims. Scarcely any time is devoted to questioning why or how we might expect today’s flawed large language models to evolve into the long-promised “artificial general intelligence” (AGI) that would surpass human thought. At best, there are vague admissions (for instance, from venture capitalist Reid Hoffman) that any advantages could come with unspecified risks.
Even as industry leaders assert that the near-term effects of AI are as critical as those of nuclear weapons, they resort to a familiar strategy, portraying their products as uniquely significant—implying that only they can be trusted to further develop them.
