Ex-OpenAI and Google DeepMind Employees Demand “Right to Warn” About AI Risks

On Tuesday, a coalition of ex-OpenAI and Google DeepMind staff released an open letter urging AI companies to adopt policies allowing employees to voice concerns about AI risks without fear of retaliation. Titled “A Right to Warn about Advanced Artificial Intelligence,” the letter has garnered 13 signatures, with some signatories choosing anonymity due to fear of repercussions.

The letter highlights the dual nature of AI, acknowledging its potential benefits while warning of severe risks: entrenching existing inequalities, spreading misinformation, and losing control of autonomous AI systems, which the signatories say could potentially result in human extinction.

The signatories argue that AI companies hold significant non-public information regarding their systems’ capabilities, limitations, and risks but currently have minimal obligations to share this data with governments and none with civil society.

Among the non-anonymous signatories are former OpenAI employees Jacob Hilton, Daniel Kokotajlo, William Saunders, Carroll Wainwright, and Daniel Ziegler, as well as former Google DeepMind employees Ramana Kumar and Neel Nanda.

The group calls on AI companies to commit to four principles: not enforcing agreements that prohibit criticism of the company over risk-related concerns, providing an anonymous process for employees to raise such concerns, fostering a culture of open criticism, and refraining from retaliation against employees who publicly share risk-related confidential information after other reporting avenues have failed.

In May, Vox’s Kelsey Piper reported on OpenAI’s restrictive exit agreements for departing employees, which threatened to revoke their vested equity if they criticized the company. OpenAI CEO Sam Altman responded by clarifying that the company had never clawed back vested equity and would not do so even if employees refused to sign the separation agreement or non-disparagement clause.

However, critics remained dissatisfied, prompting OpenAI to publicly announce the removal of the non-disparagement clause and equity clawback provisions from its separation agreements. This change was seen as aligning with the company’s stated values of transparency and accountability, likely facilitating the current open letter.

Dr. Margaret Mitchell, an AI ethics researcher at Hugging Face who was dismissed from Google in 2021 after raising concerns about diversity and censorship, spoke with Ars Technica about the challenges whistleblowers face in the tech industry. “Legally, retaliation for whistleblowing shouldn’t happen, but in practice, it often does,” Mitchell said. “The laws tend to favor large companies over workers. Pursuing justice against a powerful corporation can be psychologically devastating and career-ending.” She emphasized how daunting it is for fired employees to prove retaliation against well-resourced corporations.

The open letter has drawn support from prominent AI figures, including Yoshua Bengio, Geoffrey Hinton, and Stuart J. Russell. Notably, some AI experts, such as Meta’s Yann LeCun, dispute the idea that AI poses an existential risk, arguing that present-day harms like bias and dangerous hallucinations are more pressing.

Despite differing opinions on AI risks, Mitchell believes the letter emphasizes the urgent need for increased transparency, oversight, and protection for employees who speak out about potential dangers. “While I support this letter,” she said, “significant legal changes are necessary to prevent large corporations from unjustly punishing workers who act ethically.”