Following Whistleblower Disclosure, Senators Demand Answers from OpenAI

On July 22, five senators sent a letter to OpenAI CEO Sam Altman demanding that the company turn over information about its efforts to build safe and secure artificial intelligence. Specifically, the senators requested information on what the company is doing to meet its public commitments on safety, how it is internally evaluating its progress on those commitments, and details on its identification and mitigation of cybersecurity threats.

Led by Senator Brian Schatz (D-HI), the letter follows employee warnings that the company rushed the safety testing of its latest model, as reported by The Washington Post on July 12. It also comes in the wake of reporting that anonymous OpenAI whistleblowers had filed a complaint against the company with the Securities and Exchange Commission (SEC).

Given that the company now has partnerships with the U.S. government and with national security and defense agencies, the five lawmakers stress that "unsecure or otherwise vulnerable" AI systems are unacceptable.

The senators also asked Altman for information about employee agreements that may have suppressed the speech of employees who wished to report risks to regulators. These restrictive agreements were first obtained and published by Vox in May.

The letter states, “Given OpenAI’s position as a leading AI company, it is important that the public can trust in the safety and security of its systems. This includes the integrity of the company’s governance structure and safety testing, its employment practices, its fidelity to its public promises and mission, and its cybersecurity policies.”

A July 13 story by The Washington Post revealed that OpenAI whistleblowers had filed a complaint alleging the illegal use of restrictive severance, nondisclosure, and other employment agreements that could penalize workers who wished to raise concerns with federal regulators.

OpenAI spokesperson Hannah Wong told The Post that the company has "made important changes to our departure process to remove nondisparagement terms" from its agreements.

In the letter, the senators asked that OpenAI commit to not enforcing its nondisparagement agreements and to "removing any other provisions" from employee agreements that could be used to penalize workers who raise concerns about company practices.

The letter also addresses rising concerns that OpenAI is prioritizing profit over safety in the development of its technology. It specifically cites a July report in The Washington Post describing the rushed release of GPT-4 Omni, the company's latest model: to meet a set May release date, OpenAI compressed comprehensive safety testing into a limited time frame, worrying employees. That rushed process undermined a July 2023 safety pledge the company made to the White House.

Lawmakers, including Senator Chuck Grassley (R-IA), have said that the firsthand knowledge of AI employees gives Congress a clearer understanding of the technology, and of its concerns and risks, as it attempts to regulate it. Nondisclosure and nondisparagement agreements impede employees' ability to disclose concerns and share their full wealth of knowledge with regulators, creating a chilling effect on employee culture.

Stephen M. Kohn, the attorney representing the OpenAI whistleblowers, told The Washington Post that the senators’ requests are “not sufficient” to cure the chilling effect of preventing employees from speaking about company practices. “What steps are they taking to cure that cultural message,” he said, “to make OpenAI an organization that welcomes oversight?”

“Congressional oversight on this is badly needed,” Kohn said. “It’s essential that when you have a technology that has the potential risks of artificial intelligence that the government get in front of it.”

The senators asked that OpenAI respond and fulfill their requests by August 13, including documentation of how it plans to meet its July 2023 voluntary pledge to the Biden administration to protect the public from the potential harms and abuses of generative AI.
