On March 8, a group of Google employees published an open letter demanding that state and national legislatures strengthen whistleblower protections for artificial intelligence (AI) researchers and other tech workers. The demand comes in the wake of Google’s recent firings of the two co-leads of its Ethical AI research team.
The open letter states that “[a]s others have written, the existing legal infrastructure for whistleblowing at corporations developing technologies is wholly insufficient. Researchers and other tech workers need protections which allow them to call out harmful technology when they see it, and whistleblower protection can be a powerful tool for guarding against the worst abuses of the private entities which create these technologies.”
Google fired Dr. Margaret Mitchell on February 19, 2021. Dr. Mitchell’s termination followed the departure of Dr. Timnit Gebru from the company in December 2020. According to Reuters, Dr. Gebru claims that “Google fired her after she questioned an order not to publish a study saying AI that mimics language could hurt marginalized populations.” Dr. Mitchell, who co-authored the study with Dr. Gebru, was openly critical of Google’s dismissal of Dr. Gebru.
Dr. Mitchell and Dr. Gebru were the co-leads of Google’s Ethical AI team, which Dr. Mitchell founded. Both researchers were critical of Google even before their departures, calling for more diversity in Google’s research staff and expressing concerns that the company was censoring research that was critical of its products.
Dr. Gebru gained prominence for exposing bias in facial analysis systems. Other examples of racial bias in AI and algorithms include a health-care algorithm found to have disadvantaged millions of Black Americans and, relatedly, an algorithm found to have prevented Black patients from receiving kidney transplants. According to VentureBeat, “[i]n the past year or so, Google’s Ethical AI team has explored a range of topics, including the need for a culture change in machine learning and an internal algorithm auditing framework, algorithmic fairness issues specific to India, the application of critical race theory and sociology, and the perils of scale.”
The open letter calling for increased whistleblower protections was published by Google Walkout for Real Change, a group of Google employees formed in 2018 to advocate for change within the company. The letter states: “Google has chosen to dismantle its AI ethics work, making it clear that the company will only tolerate research that supports its bottom line. This is a matter of urgent public concern.”
In a paper published in the UCLA Law Review, UC Berkeley Center for Law and Technology co-director Sonia Katyal argues that whistleblower protections are a valuable tool in ensuring accountability in AI. “Given the issues of opacity, inscrutability, and the potential role of both trade secrecy and copyright law in serving as obstacles to disclosure, whistleblowing might be an appropriate avenue to consider in AI,” she wrote.
“I would say very strongly that existing law is totally insufficient,” Katyal told VentureBeat. “What we should be concerned about is a world where all of the most talented researchers like [Gebru] get hired at these places and then effectively muzzled from speaking. And when that happens, whistleblower protections become essential.”