On September 12, 2023, the U.S. Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing in which Subcommittee Chair Richard Blumenthal and Ranking Member Josh Hawley sought input from experts on their framework for legislation on Artificial Intelligence (AI). The hearing featured testimony from Boston University professor Woodrow Hartzog and Microsoft Vice Chair and President Brad Smith, both of whom touched on the importance of regulating AI to prevent fraudsters, money launderers, and other criminals from exploiting the technology and harming the public.
In his remarks to the Senate, Professor Hartzog underscored the importance of whistleblower protections in regulating Artificial Intelligence:
“Lawmakers must protect researchers and whistleblowers. They should refine the Computer Fraud and Abuse Act and create an avenue for researchers to discover abuses of and within AI systems while preserving the trust of people exposed to those systems. They should expand public policy exceptions to NDAs for whistleblowers to report those abuses as in California and Washington’s Silenced No More Acts.”
Hartzog’s comments suggest that AI systems need the same diligent oversight the United States applies to any other system vulnerable to fraud and abuse. Oversight agencies with whistleblower programs already exist for markets (the SEC and CFTC Dodd-Frank whistleblower programs), for taxes (the IRS Whistleblower Program), and for banks (the Anti-Money Laundering Act Whistleblower Program). Shouldn’t we also, then, have an agency whose mission is to oversee abuse of AI and which has its own whistleblower program to fulfill that mission?
Smith’s comments likewise lauded the Blumenthal-Hawley framework for seeking to guard against the misuse of AI, and he suggested using the “Know Your Customer” (KYC) obligations that exist for financial institutions as a blueprint for what he calls “Know your customers, your cloud, and your content” (KY3C) obligations for AI.
KYC was originally established as part of the Patriot Act to safeguard financial institutions from terrorism and money laundering threats. It requires financial institutions to authenticate the identity of their clients, evaluate each client’s risk profile for suspicious account activity, and collect additional information on customers deemed to be at higher risk for money laundering or terrorist financing.
Adapted to AI, “Know Your Customer” rules would require operators of the cloud infrastructure on which an AI model runs to know who is accessing the model and to manage access for sensitive uses. “Know Your Cloud” rules would require developers to use licensed AI cloud infrastructure; to be licensed, cloud providers would have to comply with regulatory requirements for protecting their infrastructure against malicious attacks. “Know Your Content” rules would require those who deploy AI systems to be able to recognize when deepfakes and other fraudulent content have been created through their models.
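To make the three obligations concrete, here is a minimal sketch, in Python, of how a cloud provider might combine them into a single access check before serving a model request. Every name, field, and threshold below is invented for illustration; neither the Blumenthal-Hawley framework nor Smith’s testimony specifies any implementation.

```python
from dataclasses import dataclass

# Hypothetical illustration only: the fields and the 0.7 risk threshold are
# invented for this sketch, not drawn from any statute or testimony.

@dataclass
class Customer:
    verified_identity: bool   # "Know Your Customer": identity has been authenticated
    risk_score: float         # 0.0-1.0; higher = more suspicious account activity

@dataclass
class Deployment:
    cloud_licensed: bool      # "Know Your Cloud": model runs on licensed infrastructure
    provenance_tagged: bool   # "Know Your Content": outputs carry provenance metadata


def ky3c_gate(customer: Customer, deployment: Deployment, sensitive_use: bool) -> bool:
    """Return True if a model request may proceed under a KY3C-style policy."""
    if not customer.verified_identity:
        return False          # unverified customers are denied access outright
    if not deployment.cloud_licensed:
        return False          # the model must run on licensed cloud infrastructure
    if not deployment.provenance_tagged:
        return False          # generated content must be traceable (e.g., watermarked)
    if sensitive_use and customer.risk_score > 0.7:
        return False          # sensitive uses get enhanced scrutiny, as in financial KYC
    return True


if __name__ == "__main__":
    customer = Customer(verified_identity=True, risk_score=0.2)
    deployment = Deployment(cloud_licensed=True, provenance_tagged=True)
    print(ky3c_gate(customer, deployment, sensitive_use=True))  # prints: True
```

The point of the sketch is structural: the three “know your” obligations operate at different layers (who the customer is, where the model runs, what the model produces), and a provider would need all three checks to pass before a request is served.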
Smith relatedly suggested that the Blumenthal-Hawley framework seek to limit “Section 230 immunity” for AI. Section 230 of the Communications Decency Act of 1996 protects online platforms from being held liable for content posted by third parties using the platform. The provision has become contentious as bad actors increasingly use internet platforms to engage in criminal activity such as election interference, hate crimes, terrorism, and money laundering. If Section 230 immunity were limited for AI providers, Smith suggested, those providers would have a responsibility to monitor the use of their models and to report when the models are used for criminal purposes, or risk being held liable for negligence.
“As with all new technologies, there are clear ways that AI can be used for fraud and abuse,” says Benjamin Calitri, whistleblower attorney at Kohn, Kohn & Colapinto. “New regulations will help the public by enabling whistleblowers to expose fraud in this new arena.”