By WIRED
In its investigation, which involved reviewing more than 30,000 documents and interviewing dozens of people, WilmerHale found that the four board members had accurately portrayed their reasoning when they fired Altman, citing his lack of candor with the board. The report found that they had not expected that firing Altman “would destabilize the company,” according to OpenAI’s blog post on Friday. OpenAI released only a summary of the findings, not the complete report.
“WilmerHale also found that the prior board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners,” OpenAI’s summary says. “Instead, it was a consequence of a breakdown in the relationship and loss of trust between the prior board and Altman.”
OpenAI said the investigation found the board acted on its concerns with an “abridged timeframe, without advance notice to key stakeholders, and without a full inquiry or an opportunity for Altman to address” them.
New Oversight
Desmond-Hellmann served on Facebook’s board from 2013 through 2019, stepping down to focus on her role at the Gates Foundation. Her connection to Microsoft cofounder Bill Gates could help Microsoft, which has pledged $13 billion to OpenAI, steer the partnership. After Altman’s ejection last year, Microsoft CEO Satya Nadella complained of being surprised by the move. Microsoft did not immediately respond to a request for comment.
Seligman brings to OpenAI’s board experience operating within the media and entertainment industries, which could prove beneficial as OpenAI battles numerous lawsuits from content publishers alleging that it has ripped off their content to develop systems such as ChatGPT.
Simo’s previous work at Facebook included overseeing its video projects and, for a couple of years, managing the company’s primary mobile app. She also sits on the board of Shopify, one of the leading providers of ecommerce software.
Bret Taylor, chair of OpenAI’s board, who recently launched his own generative AI company, said during the press call on Friday that additional governance changes made alongside the expansion of the board to seven members would help better manage the nonprofit. He said they included new corporate governance guidelines, a new and enhanced conflict of interest policy, and a whistleblower hotline. He added that the board would continue to expand and that it had created several new committees, including one named Mission & Strategy.
OpenAI’s nonprofit arm had for years noted in regulatory filings that its governance documents and conflict rules were open to public inspection, but said that policy had changed when WIRED asked to see the documents in the wake of last year’s drama. That policy change was cited by Elon Musk, who helped found OpenAI but is no longer involved with it, when he sued the ChatGPT maker last week for allegedly breaching its mission.
On Friday, OpenAI did not immediately respond to a request for comment on whether it would be publishing the new and updated policies. Taylor told reporters when asked for more details on the updates, “I’m not an expert in this place. A lot of our policies are public documents. I’m not sure what is and what isn’t? So I apologize, I don’t have a great answer for you right now.”
Additional reporting by Steven Levy.
Updated March 6, 2024, 7:20 pm EST: This article was updated with additional material from OpenAI’s news briefing.
Updated March 6, 2024, 6:40 pm EST: This article was updated with new details about OpenAI’s new board members and the company’s announcements today.