Three weeks ago, Calvin French‑Owen—a talented engineer who helped build one of OpenAI’s most promising new products—departed the company. In a compelling blog post, he shares his experiences from a year at OpenAI, including the intense, sleepless sprint to develop Codex, the coding agent intended to rival tools like Cursor and Anthropic’s Claude Code.
French‑Owen emphasized that his departure wasn’t because of any internal “drama,” but rather a desire to return to his roots as a startup founder. Previously, he co-founded Segment, a customer data startup that was acquired by Twilio in 2020 for $3.2 billion.
Some of his observations about OpenAI’s culture align with expectations, while others challenge common misconceptions. (French‑Owen could not be reached for immediate comment.)
Scale at breakneck speed
During his year there, OpenAI grew from around 1,000 to 3,000 employees. It has become one of the fastest‑growing consumer‑facing companies, with ChatGPT reportedly topping 500 million active users as of March.
Organizational growing pains
He writes: “Everything breaks when you scale that quickly—how to communicate as a company, the reporting structures, how to ship product, how to manage and organize people, the hiring processes, etc.”
Startup energy still alive
Despite its size, OpenAI retains a small‑startup vibe. Employees are empowered to execute ideas with minimal red tape. On the flip side, this freedom means multiple teams often build overlapping tools—he counted half a dozen libraries for things like queue management or agent loops.
Code quality and technical debt
The engineering talent varies—from seasoned Google engineers building at massive scale to freshly minted PhDs with limited real‑world software experience. Coupled with Python’s flexibility, the central “backend monolith” has become a bit of a dumping ground. Services can break or run slowly, though senior engineering leadership is actively working to improve stability.
A “Move Fast, Break Things” mentality
OpenAI still behaves like a giant startup—everything runs through Slack, and there’s a “launching spirit” reminiscent of early Facebook. Many hires are former Meta employees. French‑Owen’s team—consisting of eight engineers, four researchers, two designers, two go‑to‑market staff, and one product manager—launched Codex in just seven weeks, operating almost entirely without sleep. The result was magical: “I’ve never seen a product get so much immediate uptick just from appearing in a left‑hand sidebar, but that’s the power of ChatGPT.”
Culture of secrecy amid scrutiny
With ChatGPT under intense public and media scrutiny, OpenAI cultivates a secretive culture to prevent leaks. At the same time, the company stays vigilant on X (formerly Twitter)—if something goes viral, they see it and sometimes even respond. “A friend joked, ‘this company runs on Twitter vibes,’” he says.
The biggest misconception: safety
According to French‑Owen, the biggest misunderstanding about OpenAI is that it doesn’t care about safety. While critics—often former employees—have lamented lax processes, the internal focus is on practical safety issues: hate speech, abuse, political manipulation, bio‑weapons, self‑harm, prompt injection, and more.
That’s not to say long‑term risks are ignored; dedicated researchers are looking into them. OpenAI is fully aware that hundreds of millions of people use its LLMs for everything from medical advice to mental health support.
Governments and competitors are watching closely—and so is OpenAI. “The stakes feel really high,” French‑Owen concludes.