- May 08, 2026
Staff Report: PNN
A legal dispute involving artificial intelligence company OpenAI has raised new questions about its safety policies and internal governance. At a hearing in Elon Musk’s lawsuit, former employees and board members testified that the organization has gradually shifted away from its research and safety priorities to become a product-focused company.
In federal court in Oakland, California, former OpenAI employee Rosie Campbell said she joined the company’s AGI preparation team in 2021. At the time, she testified, there were deep internal discussions about artificial general intelligence (AGI) and safety, but over the years the company became increasingly commercial and product-oriented.
She stated that her team was disbanded in 2024, around the same time another safety-focused team, known as “Superalignment,” was also shut down. In her view, developing highly powerful AI without adequate safety mechanisms is not aligned with the company’s original mission and principles.
Campbell also described a specific incident, stating that Microsoft deployed GPT-4 technology in its Bing search engine in India without approval from OpenAI’s safety review board. Although that deployment did not directly create major risks, she said, strict safety processes will be necessary for more powerful future technologies.
For her part, former OpenAI board member Tasha McCauley alleged at the hearing that CEO Sam Altman did not present all information transparently to the board. She said the nonprofit board was responsible for overseeing the for-profit arm, but that the lack of transparency prevented effective governance.
She also claimed that ChatGPT was released without the board being informed, and that concerns over potential conflicts of interest were raised within the board.
The core issue in the lawsuit is whether OpenAI has deviated from its founding mission and prioritized commercial interests. Elon Musk’s legal team argues that although the company began as a research organization focused on human welfare, it has since become one of the world’s largest private technology companies.
Expert witness David Schizer, former dean of Columbia Law School, said that safety commitments must not remain only on paper; they must be reflected in real decision-making.
OpenAI has not provided detailed comments on its current safety policies but is known to regularly publish evaluations and safety frameworks for its AI models.
Analysts believe the case is not merely an internal corporate dispute but one that may further intensify global debates over AI safety, accountability, and regulation.