As artificial intelligence reshapes global industries, whistleblower advocates warn that the technology’s rapid growth has far outpaced the development of legal protections for those who report wrongdoing. Without such protections, they say, employees face mounting pressure when deciding whether to speak out about potential harms tied to unregulated AI.
These issues took center stage during a webinar hosted by Americans for Responsible Innovation (ARI) on October 16th. Sophie Luskin, Senior Research Specialist at the Princeton Center for Information Technology Policy and Senior Tech Policy Analyst at Kohn, Kohn & Colapinto LLP, moderated the event.
Luskin was joined by Holly and Will Alpine, former Microsoft employees and co-founders of the Enabled Emissions Campaign; Jacob Hilton, a former OpenAI employee; and Jennifer Gibson, director and co-founder of Psst.org. Together, the panelists shed light on how existing laws fail to protect AI whistleblowers and what must change to ensure accountability in the industry.
Panelists pointed to flaws in current legislation. Gibson cited California’s Transparency in Frontier Artificial Intelligence Act (SB 53) as an example, calling its protections “limited” and “ambiguous”: it covers only those “working on catastrophic risk” while excluding “subcontractors or contractors.”
She also pointed to the tech sector’s use of unethical and repressive non-disclosure agreements (NDAs). Upwards of 70% of tech industry workers are required to sign NDAs, Gibson said, and the agreements have drifted from their original purpose of protecting trade secrets to “effectively gagging someone to protect the brand.” She warned that insufficient protections for AI whistleblowers, combined with the unchecked growth of AI, will lead to more serious problems, including environmental damage and safety hazards. Microsoft’s 2024 Sustainability Report underscores her point: the company’s emissions have risen 29% since 2020, driven by “the construction of more datacenters” built to “support AI workloads.”
While at Microsoft, Holly and Will Alpine sought to disclose these detrimental environmental impacts of AI. Will had been a product manager on the team that built the technology behind Microsoft’s AI platform, where he led the GreenAI software initiative, helping developers run their software on cleaner energy. Holly worked at Microsoft for a decade, leading global sustainability work. She launched the company’s first Community Environmental Sustainability Program, scaling nature-based investments across global communities. Yet, no matter what the two of them did, they realized that the use of AI for fossil fuels “dwarfed all the positive use-cases.”
AI technology works like a “Swiss army knife,” Will said: it has many beneficial functions and is easily accessible, but it can also be dangerous without proper precautions. A recent Wood Mackenzie article reports that AI could help produce an extra trillion barrels of oil, potentially at the expense of the environment. Holly had previously seen Microsoft’s troubling contracts with large oil companies, which in part sparked her decision to leave the company.
In 2024, the two left Microsoft and co-founded the Enabled Emissions Campaign. Demanding accountability for how technology is used, the campaign’s resolution and advocacy efforts urge Microsoft to report on the risks of working with the fossil fuel industry.
Holly also discussed her own experience and the challenges whistleblowers face: retaliation, peer alienation, damage to future career prospects, legal uncertainty, and the emotional toll. When she sought advice from previous whistleblowers, those who had spoken up as individuals described speaking out as the thing that “ruined their lives.” In contrast, those who had acted collectively told her it was the “most important thing they’ve ever done.”
Jacob Hilton highlighted this power of collective action. He joined 11 other former OpenAI employees in filing an amicus brief in the Musk v. Altman case, urging the court to halt OpenAI’s restructuring. Rather than coming forward alone, he emphasized, it is easier on employees to organize with coworkers, plan before speaking, and seek legal advice to mitigate risk before blowing the whistle.
Change may be coming from Capitol Hill. Senator Chuck Grassley introduced the Artificial Intelligence Whistleblower Protection Act, with bipartisan support and companion legislation in the House. The bill is endorsed by leading whistleblower NGOs, including the National Whistleblower Center, Center for AI Policy, and The Anti-Fraud Coalition.
“Protecting whistleblowers who report AI security vulnerabilities isn’t just about workplace fairness; it’s a matter of national security,” said Rep. Jay Obernolte (R-Calif.), a co-sponsor of the House companion bill.
The proposed legislation offers concrete safeguards. It shields employees who report dangerous practices that are not yet illegal, such as national security vulnerabilities or specific AI-related violations. As Gibson noted: “If nothing is currently illegal in AI, how do you know you’re covered? How do you know when to report or who to report to?”
The bill clarifies those reporting channels and promises reinstatement, back pay, and compensation for damages—while addressing restrictive NDAs that often silence employees.
“Whistleblowers are one of the best ways to ensure Congress keeps pace as the AI industry rapidly develops,” Grassley said. Luskin echoed the sentiment: “In an age where AI systems influence everything from elections to defense systems, their protections are more urgent than ever.”
As artificial intelligence continues to reshape industries and challenge regulatory frameworks, whistleblowers are emerging as the front line of ethical oversight. The stories shared during the panel, combined with legislative efforts like Grassley’s AI Whistleblower Protection Act, signal a shift: innovation can no longer come at the cost of accountability.