New York’s Proposed RAISE Act Includes Employee Protections for AI Whistleblowers


A bill moving through the New York State legislature addressing the training and use of artificial intelligence frontier models contains anti-retaliation protections for whistleblowers who report activities that pose a “substantial risk of critical harm.”

Dubbed the “RAISE Act,” short for the “Responsible AI Safety and Education Act,” the bill establishes transparency requirements for frontier models and large developers, exempting academic research conducted by accredited colleges and universities. The bill is co-sponsored by Assemblymembers Micah Lasher, Rebecca Seawright, Amy Paulin, and Yudelka Tapia and is currently in the Assembly Science and Technology Committee.

The bill defines a “frontier model” as an AI model trained using more than 10^26 computational operations at a compute cost exceeding $100 million. A “large developer” is an entity that has trained at least one frontier model costing over $5 million and has spent more than $100 million in aggregate on frontier model training. The definition also covers persons who are not yet large developers but are training frontier models that, if completed as planned, would qualify them as such.

The RAISE Act would also make it a violation to knowingly make false or materially misleading statements or omissions in, or regarding, documents required under the bill.

Scope of the Law

The bill outlines a number of safety and transparency requirements for developers. It requires that large developers implement written safety and security protocols before deploying frontier models, and it prohibits them from deploying frontier models that create an “unreasonable risk of critical harm,” defined as the death or injury of 100 or more people or at least $1 billion in damages.

Developers must conduct annual reviews of all safety and security protocols to account for changes to their frontier models’ capabilities and to industry best practices, modifying the protocols as necessary and maintaining detailed records of testing procedures and results. Large developers must publish redacted versions of their safety protocols and provide unredacted versions to the Attorney General upon request.

When such modifications are made, the developer must publish the updated safety and security protocols as required by subdivision (c)(1) of the bill. Alongside the annual protocol reviews, developers must retain an independent auditor to review their compliance with the bill’s requirements and must submit the audit report, together with the annual total compute costs used to train their models, to the Attorney General.

Developers must disclose every safety incident affecting their frontier models to the Attorney General within 72 hours of learning of the incident. These requirements do not apply to products and services where compliance would conflict with the terms of a contract between the federal government and a large developer.

The act authorizes the Attorney General to bring civil actions for violations. Penalties for safety violations can reach 5 to 15 percent of total compute costs, and penalties for employee retaliation can reach $10,000 per violation per employee. The act also voids contract provisions that attempt to waive liability under it.

Whistleblower Protections

Section 1422 of the bill prohibits large developers from retaliating against employees who disclose information about AI risks to their employer or the Attorney General when the whistleblowers have reasonable cause to believe that the activities they witnessed or participated in pose a “substantial risk of critical harm.” The bill also prohibits employers from preventing employees from blowing the whistle on such issues, requires employers to post notices informing employees of their rights, and preserves employees’ rights under other laws and contracts.

Whistleblowers harmed by a violation of this provision may petition a court for temporary or preliminary injunctive relief.

The language of the bill notably does not use the word “whistleblower.” 

Advocates Push for an AI Whistleblower Bill

Given the risks associated with the advancement of AI, advocates and lawmakers alike have emphasized the urgent need for employees to be able to raise concerns to regulatory or law enforcement authorities so that the technology is developed and deployed safely.

While the RAISE Act would advance protections for a specific range of AI developers within New York state, advocates believe that a federal AI whistleblower bill is necessary to protect insiders looking to raise safety concerns about the technology. 

The National Whistleblower Center is calling on Congress to implement a best-practice AI whistleblower bill. It has set up an Action Alert allowing individuals to write to Congress urging the passage of such a bill.

Join NWC in Taking Action:

Demand Protections for Artificial Intelligence Whistleblowers
