The unstable Green Beret with PTSD who blew up his Tesla outside the Trump International Hotel in Las Vegas last year was not the only attacker to use ChatGPT in his planning: the Canadian woman who committed a mass school shooting earlier this month also asked the chatbot for help.
In the case of Master Sergeant Matthew Alan Livelsberger, no OpenAI employee checked logs to see whether he had used ChatGPT in his planning until after the New Year's Day 2025 attack had occurred. Livelsberger, a supporter of both Trump and Elon Musk who nonetheless, confusingly, decided to blow himself up in a rented Cybertruck in front of Trump's Las Vegas hotel as a "wake-up call" to the country, had used ChatGPT to find out how much of the explosive Tannerite he would need to buy, what caliber weapon he would need to detonate it, and where to find his supplies along his route from Colorado to Las Vegas.
It was perhaps after this incident that OpenAI established an internal flagging module for such queries, as the New York Times reports this week. And at least once since then, the company's own judgment or internal protocols failed, and it neglected to alert Canadian authorities when 18-year-old Jesse Van Rootselaar began discussing gun violence with ChatGPT.
The Wall Street Journal was the first to report on the alarm bells about Van Rootselaar within OpenAI, which apparently led to one or more employees urging company leaders to alert Canadian law enforcement about the young woman's online activity — the exact text of which has not been publicized.
It turns out that Van Rootselaar was already known to local law enforcement: police had visited her home to address mental-health concerns and had temporarily removed guns from it.
But the fact that OpenAI had information about her violent ideations, months before Van Rootselaar would go on to kill eight people, including children, at a school in rural British Columbia, raises some serious legal and safety questions.
As Tim Marple, a former OpenAI employee who worked on its investigations team, tells the Times, it should be police who make the call about whether threats are credible or not, not OpenAI executives or staffers. And, he says, it's clear that OpenAI has been reluctant to come forward with too many of these disturbing findings for fear that it will shift the larger conversation about AI safety.
"It forces them to share information about how their product is potentially exacerbating the threat environment," Marple says, speaking to the Times, and he suggests that a sophisticated chatbot could "be providing strategically valuable, illustrative scenarios" to potential mass shooters or other violent criminals.
Marple now runs Maiden Labs, a nonprofit that studies AI risk, and he tells the Times that AI companies should probably begin submitting suspicious activity reports to federal investigators, the same way banks do.
As for the legal and privacy issues at play — OpenAI has said they don't want to unnecessarily escalate situations where people are simply fantasizing or speaking as if to a therapist — we are in brand new territory.
Ryan Calo, a law professor at the University of Washington, tells the Times, as for the therapist argument, "If you are a therapist and you know someone will get hurt, you have an obligation to warn them."
It is only a matter of time before we see the first case in which OpenAI employees are called to testify about why they failed to recognize or flag a credible threat that later became all too real.
Related: AI Insiders Are Sounding Alarms, and the Guy Who Wrote That Viral Post Says He's Not Being Alarmist
Top image: A woman with a dog lies down to record images next to a Las Vegas Metropolitan Police Department vehicle blocking the road near the Trump International Hotel & Tower Las Vegas after a Tesla Cybertruck exploded in front of the entrance on January 01, 2025 in Las Vegas, Nevada. A person who was in the vehicle died and seven people were injured. Authorities are investigating the incident as a possible terrorist attack and are looking for a possible connection to a deadly crash in New Orleans. (Photo by Ethan Miller/Getty Images)
