Exploring the Man vs. Machine Hackathon

Then there’s Eric Chong, a 37-year-old with a dentistry background who co-founded a startup to streamline medical billing for dentists. He ended up on the “machine” team.
“To be honest, I’m extremely relieved to be part of the machine team,” Chong says.
During the hackathon, Chong was building software that uses voice and facial recognition to detect signs of autism. Naturally, my first question was: wouldn’t that raise a host of concerns, like biased data producing false positives?
“Short answer, yes,” Chong acknowledges. “There may be some false positives, but I believe that using voice and facial expressions could actually enhance the accuracy of early detection.”
The AGI ‘Tacover’
The coworking space, like many AI-related venues in San Francisco, is linked to effective altruism.
If you’re unfamiliar with the movement beyond the sensational fraud headlines, it aims to maximize the good that can be achieved using participants’ time, money, and resources. The day after the event, the space hosted a discussion on leveraging YouTube “to communicate important ideas like why people should eat less meat.”
On the fourth floor, flyers adorned the walls—“AI 2027: Will AGI Tacover” advertised a recently held taco party, while another titled “Pro-Animal Coworking” offered no additional context.
Half an hour before the submission deadline, coders enjoyed vegan meatball subs from Ike’s as they hurried to finalize their projects. One floor below, the judges arrived: Brian Fioca and Shyamal Hitesh Anadkat from OpenAI’s Applied AI team, Marius Buleandra from Anthropic’s Applied AI team, and Varin Nair, an engineer from the AI startup Factory, which co-hosted the event.
As the judging commenced, Nate Rush, a member of the METR team, showed me a spreadsheet tracking contestant scores, with AI-assisted teams highlighted in green and human-only projects in red. Teams shuffled up and down the list as the judges deliberated. “Do you see it?” he asked. I didn’t. Even half an hour into judging, the mixed colors revealed no clear winner. That was his point. To everyone’s surprise, the race between man and machine was remarkably close.
Show Time
In the end, the finalists were split evenly: three from the “man” side and three from the “machine” side. After each demo, the audience was asked to raise their hands if they thought the team had used AI.
First was ViewSense, a tool to help visually impaired people navigate their surroundings by converting live video feeds into text for a screen reader. Given the brief build time, it was technically remarkable, and 60 percent of the audience (by the emcee’s count) guessed it had used AI. It hadn’t.
Next, a team presented a platform for designing websites with pen and paper, using a camera to track sketches in real time; no AI was used in the coding process. A pianist’s project reached the finals with a system that let users upload recordings of their piano sessions for AI-generated feedback; that one came from the machine side. Another team showed a tool that renders heat maps of code changes, with critical security issues in red and routine edits in green. This one did use AI.
