Perspectives on the Future of AI: Insights from Tech Leaders and Students

The future is never entirely predictable. In this era of swift and intense change—across political, technological, cultural, and scientific realms—it’s more challenging than ever to grasp what lies ahead.
At WIRED, our fascination with the future drives us. Our quest for what’s next often manifests as thoroughly researched articles, detailed videos, and conversations with the individuals shaping it. This is why we recently adopted the tagline: For Future Reference. We aim to tell stories that not only clarify what’s to come but also influence it.
In this context, we engaged with a variety of prominent figures from the diverse sectors WIRED encompasses—participants in our recent Big Interview event in San Francisco—as well as students who have been surrounded by technologies likely to disrupt their futures and careers. While artificial intelligence was the key focus, our discussions also covered various aspects of culture, technology, and politics. Consider it a reflection of current perspectives on the future—and perhaps a preliminary guide to our trajectory.
AI Everywhere, All the Time
One thing is evident: AI is now as embedded in daily life as search has been since the days of AltaVista. And as with search, the uses tend to be practical and everyday. “I frequently use LLMs to answer questions that come up during my day,” shares Angel Tramontin, a student at UC Berkeley’s Haas School of Business.
Several respondents had used AI recently, some just moments before our conversation. Anthropic cofounder and president Daniela Amodei recently turned to her company’s chatbot for parenting help. “Claude actually helped my husband and me potty-train our older son,” she notes. “And I’ve also used Claude to rapidly search for symptoms regarding my daughter.”
She’s not alone. Wicked director Jon M. Chu has consulted LLMs “for advice on my children’s health, which may not be ideal,” he admits. “But it serves as a decent starting reference.”
AI companies see health as a promising growth area. OpenAI announced ChatGPT Health earlier this month, saying that “hundreds of millions of individuals” use the chatbot for health-related questions every week. (ChatGPT Health comes with enhanced privacy protections, a nod to the sensitivity of such conversations.) Anthropic’s Claude for Healthcare, meanwhile, is aimed at hospitals and other health care providers.
Not everyone was so enthusiastic, though. “I try to avoid using it entirely,” says UC Berkeley undergraduate Sienna Villalobos. “When it comes to individual work, it’s easy to form your own opinions. AI shouldn’t dictate those. It’s essential to develop your own viewpoints.”
That perspective may be increasingly rare. Nearly two-thirds of US teens report using chatbots, according to a recent Pew Research Center study, with around 30 percent using them daily. (Given how tightly Google Gemini is now woven into search, many may be using AI without even realizing it.)
Ready to Launch?
The pace of AI progress is staggering, despite worries about its effects on mental health, the environment, and society at large. In this mostly hands-off regulatory landscape, companies are left to police themselves. So, absent guidance from lawmakers, what questions should AI companies ask themselves before every launch?
“‘What could potentially go wrong?’ is a crucial and insightful question that I wish more firms would consider,” says Mike Masnick, founder of the tech and policy news outlet Techdirt.
