Anthropic, the growing AI concern founded by two siblings who are former OpenAI execs, has inked a deal for an entire office building at Howard and Fremont streets, marking one of the largest office deals in the city since 2019.
As the Chronicle reports, Anthropic, which currently has 2,500 employees worldwide and 1,300 locally, has been seeking space to grow its SF presence beyond its headquarters at 500 Howard, which sits beside Salesforce Park. The AI company has now signed a lease for all 420,000 square feet of 300 Howard Street — the building that had formerly been known as 199 Fremont, when it was previously home to StubHub and Fitbit.
The deal reportedly closed Friday, and Anthropic has taken a 13-year lease on the 27-story building, following efforts by its owners to renovate it and upgrade its amenities.
Anthropic was founded by SF natives Dario and Daniela Amodei, who are both Lowell High School alums, and has grown in recent years into a major player in the AI space, with its Claude chatbot and other products.
"Dario and I were born and raised in San Francisco — it's where Anthropic was founded, and where so much of our story has unfolded," says Daniela Amodei in a statement to the Chronicle. "With over 1,300 employees in the Bay Area and counting, I'm especially excited about what this growth means for the local community as we look to deepen our partnerships with the incredible businesses and organizations doing meaningful work in the city that we love and call home."
Mayor Daniel Lurie celebrated the deal as a great "vote of confidence" in SF, which will "bring more people to our downtown, strengthen our innovation economy, and reinforce our position as the global leader in AI."
Anthropic now claims over 300,000 business customers, and it has grown exponentially just in the last two years — with revenues around $9 billion as of 2025.
The Amodeis say they founded the company after becoming concerned about the lack of safety controls around AI models as they were being developed rapidly at OpenAI. Anthropic's mission, they have said, is to prioritize safety and research, and to build "helpful, honest, and harmless" AI systems.
Anthropic has already produced research that casts a negative light on its own Claude chatbot as well as those of competitors, showing how AI agents can go rogue and engage in harmful behavior in order to prevent themselves from being deactivated.
Related: Anthropic Says Its AI Chatbot Was Used By Chinese Hackers for Large-Scale Cyber Attack
