An East Bay man says OpenAI pressured him into dropping two AI ballot measures he introduced that would’ve created stricter regulations, after it was revealed that he is the stepbrother of a senior Anthropic employee. He says he wrote the proposals himself, with legal help from chatbots.

Alexander Oldham, who was born and raised in the East Bay, gave an interview to Politico last week detailing his motivation for authoring two recent state AI ballot initiatives despite having no experience with political campaigns, and how intimidation from OpenAI led him to abandon them.

As Politico reports, Oldham’s proposals would’ve established new state entities to act as watchdogs, ensuring that all AI companies, not just OpenAI, stay true to their commitments: developing tools that are beneficial to the public’s wellbeing, releasing advanced models securely, and maintaining a human workforce.

Oldham’s proposals were approved by the state attorney general to move on to the petitioning phase; qualifying for the California ballot requires hundreds of thousands of signatures, a process that typically costs millions of dollars, per Politico.

Once the proposals were made public, Oldham was flooded with inquiries from people in the field curious about his lack of campaign experience, and many advocates dismissed the proposals outright. Understandably so: Oldham told Politico the effort was never intended as a serious campaign, only as a way to raise awareness.

“I thought basically, it gets seen by people, and they’d like it, or it just wouldn’t … and it’d just be whatever,” Oldham told Politico. “My main thing is, I’m afraid that a big world of AI is a big world of zero accountability.”

He did succeed in raising awareness: OpenAI began probing into his identity and found that his stepsister, Zoe Blumenfeld, is a senior employee at Anthropic, and that Oldham’s mother is a friend of, and past investor in, tech entrepreneur Guy Ravine, who lost a trademark case against OpenAI.

Oldham explained to Politico that he and his stepsister haven’t been close since his stepfather died 20 years ago, and that he met Ravine only a few times, about ten years ago.

“I didn’t even think of her,” he said. “It is just a pure coincidence that she works for Anthropic, like I honestly didn’t even clock that.”

He told Politico he wrote the proposals himself without any lawyers, consultants, or professionals, but he did seek assistance from chatbots regarding legal specifications. He emphasized that he’s a “nobody” who worked at his family’s small boat chartering business for years and was previously an aspiring filmmaker.

Regardless, as the New York Post reported last week, OpenAI filed a complaint with California’s political watchdog, the Fair Political Practices Commission, asking it to investigate Oldham’s tech connections further.

OpenAI also called into question Oldham’s connection to the Coalition for AI Nonprofit Integrity (CANI), the anonymous group behind a separate AI ballot measure, which was filed by Poornima Ramarao, the mother of deceased OpenAI whistleblower Suchir Balaji, on the same day as Oldham’s measures. Oldham said he had never heard of CANI.

Anthropic released a statement saying it was not associated with the ballot measures, which it also opposed, and rejecting “what appears to be a personal attack on one of our employees.”

Per Politico, Oldham withdrew the measures last Tuesday “due to threats and intimidation from primarily OpenAI.”

“I was naive,” he said. “I don’t want any more negative consequences because I was stupid enough to think that I could just put an idea out for people to look at in today’s world.”

Image: Thai Liang Lim/Getty Images

Related: Parents of OpenAI Whistleblower Don't Believe He Died By Suicide, Order Second Autopsy