Need for Whistleblower Protections in Artificial Intelligence Industry Discussed in Senate Judiciary Hearing


On September 17, the Senate Committee on the Judiciary’s Subcommittee on Privacy, Technology, and the Law held a hearing titled “Oversight of AI: Insiders’ Perspective.” Led by subcommittee chair Senator Richard Blumenthal (D-CT), the hearing sought to understand, from “experts of conviction,” how and why the government can and should regulate the burgeoning AI industry. 

The witnesses were Helen Toner, former member of OpenAI’s nonprofit board; William Saunders, former member of technical staff at OpenAI; David Evan Harris, Senior Policy Advisor for the California Initiative for Technology and Democracy; and Margaret Mitchell, Researcher and Chief Ethics Scientist at Hugging Face and former Staff Research Scientist at Google AI. 

Senators Hawley, Durbin, Kennedy, Klobuchar, Padilla, and Blackburn also questioned the witnesses. 

In their opening statements, Toner, Saunders, and Mitchell each raised the need for government-led whistleblower protections in the AI industry.  

  1. Toner suggested that the government “Bolster whistleblower protections for employees of AI companies, to ensure that they have clear legal channels to raise safety concerns that are not covered by existing whistleblower law.” 
  2. Saunders told the committee that “If you want insiders to communicate about problems within AI companies, you need to make such communication safe and easy: That means a clear point of contact, and legal protections for whistleblowing employees.” 
  3. Mitchell suggested policy to “Support whistleblowing by creating stronger protections and more available resources to help workers responsibly disclose the development of high risk systems without appropriate transparency.” 

The subject was raised again later in the hearing by Senator Blumenthal, who opened the floor to all witnesses to elaborate on the need for whistleblower protections. Below is a transcription from the hearing, edited for clarity. 

Saunders: There are several important things that employees want in this [legal] situation, such as knowing who you can talk to. It is very unclear what parts of the government may have expertise in specific kinds of issues, and you want to know that you are going to talk to somebody who understands the issues that you have and has some ability to act on them. You also want to know the legal protections. 

This is where it is important to define protections that apply not only when there is a suspected violation of law, but also when there is a suspected harm or risk imposed on society. I think the legislation needs to include establishing whistleblower points of contact and these protections. 

Mitchell: Mr. Saunders is making one of the important points here – there just isn’t a lot of knowledge about when and how to blow the whistle. As part of my ethics studies, I tried to familiarize myself with when disclosure would be whistleblowing versus when it would be breaking your NDA. This is something I had to learn on my own, and ideally I would have had some sort of resource. 

Ideally, there would be some agency you could call and say, “Theoretically, if I think there’s an issue, now what do I do?” But as it stands, if you’re considering whistleblowing, it’s you and you alone against a company making a ton of money, with lawyers set up to harm you if you make the smallest move incorrectly.    

I think it needs to be very clear to people working internally when and how to blow the whistle, and it needs to be very clear at the highest levels of the company that doing so is supported. I could even imagine requiring orientations that provide information on whistleblowing. Currently there is no information internally, and you are very much on your own in a situation where you might lose your job.    

Toner: To put a finer point on something that both Mr. Saunders and Dr. Mitchell are describing, I think core to the problem here is that the lack of regulation on tech means many of these concerning practices are not illegal, and so it is unclear whether existing whistleblower protections apply at all. If you’re a whistleblower or potential whistleblower sitting inside one of these companies, you don’t want to go out on a limb and guess whether this is enough of a financial issue for the SEC to cover, or whether you have something novel related to AI development or other technology development where you have serious concerns but there is no clear statute on the books saying the company is breaking the law. In that case, your options are limited. I think the need for whistleblower protections goes hand in hand with the lack of other rules. 

Blumenthal: I think the point you’ve just made is important: the failure to develop safety and control features in a product is perhaps not illegal, and therefore may not be covered by a strict reading of whistleblower laws, even if the practice is unethical and harmful. 

Given the risks associated with the advancement of AI, advocates and lawmakers argue there is an urgent need to ensure that employees understand they can raise concerns with federal regulatory or law enforcement authorities, so that the technology is developed and deployed safely.

Alongside comprehensive regulation of the tech and AI industries, the witnesses highlighted the need for a better understanding of whistleblower law, suggesting training during onboarding as one solution, so employees can learn the whens and hows of whistleblowing.  
