Following the revelation of OpenAI’s use of restrictive non-disclosure and non-disparagement agreements, a group of thirteen AI workers – eleven current and former OpenAI employees and two current and former Google DeepMind employees – published an open letter on June 4 underscoring their concerns about the rapid pace of development in the artificial intelligence industry. They argue that the industry lacks adequate oversight mechanisms and whistleblower protections for those who speak up, and they published the letter to bring immediate attention and action to those concerns.
Titled “A Right to Warn about Advanced Artificial Intelligence,” the letter emphasizes that the ability of current and former employees of AI companies to blow the whistle is critical both to the oversight of AI and to ensuring that new technology is developed and deployed in ways that directly benefit the public.
In the letter, the AI workers state, “AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.”
They further explain that AI companies hold substantial nonpublic information about their systems’ capabilities, limitations, and risks, yet under current internal and governmental oversight regimes have only “weak obligations to share some of this information with governments, and none with civil society.” The group therefore does not believe the companies can be relied upon to share it voluntarily.
Asserting their pivotal role, the AI workers declare: “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public.”
The AI workers claim, however, that OpenAI and other AI companies prevent this accountability through non-disclosure and non-disparagement agreements that chill employees from speaking up. These agreements bar them from voicing their concerns anywhere except through the same internal channels that have failed to meaningfully address the issues.
They also note that existing whistleblower protections focus on illegal activity, whereas many of the risks that concern them are not yet regulated. They therefore fear they would not be granted the protections those laws afford.
However, according to whistleblower attorney Stephen M. Kohn, under certain state laws AI whistleblowers can in fact raise concerns to authorities even if no law has been violated. In roughly 45 states, California included, public policy exceptions cover a range of issues, including the potential impacts of AI technology.
If an AI developer has “a valid concern that something could have a catastrophic impact,” they should be covered under this public policy exception, Kohn, a founding partner at Kohn, Kohn & Colapinto and Chairman of the Board of the National Whistleblower Center, told Patrick Thibodeau at TechTarget.
The authors of the letter urge AI companies to:
- End the practice of using non-disparagement agreements that silence potential whistleblowers and hinder accountability.
- Establish anonymous reporting channels through which current and former employees can confidentially raise concerns about potential wrongdoing to the company’s board, regulators, or other relevant entities.
- Foster a culture that embraces open criticism to effectively identify and address issues.
- Refrain from retaliating against whistleblowers who go public, and stop punishing those who seek external avenues to report problems.
The letter’s message is clear: a pervasive chilling effect suppresses the raising of serious safety concerns within the AI industry. The signatories are subject matter experts on highly complex technology, yet when they blow the whistle on its safety, they have seen their companies ignore their concerns to protect bottom-line profits.