According to the letter, OpenAI had its employees sign employee agreements that required them to waive federal rights to whistleblower compensation. These agreements also required OpenAI employees to obtain consent from the company before disclosing information to federal authorities. OpenAI did not include an exemption in its employees’ non-disparagement clauses for disclosing securities violations to the Securities and Exchange Commission.
The letter noted that these overly broad agreements violate long-standing federal laws and regulations designed to protect whistleblowers who want to anonymously disclose damaging information about their companies without fear of retaliation.
“These contracts sent the message that we don’t want our employees to talk to federal regulators,” said one of the whistleblowers, who spoke on the condition of anonymity for fear of retaliation. “I don’t think AI companies can build technology that is safe and in the public interest if they protect themselves from oversight and dissent.”
“Our whistleblower policy protects employees’ rights to make protected disclosures. Moreover, we believe a rigorous debate about this technology is essential, and have already made significant changes to our exit process to remove non-disparagement clauses,” OpenAI spokesperson Hannah Wong said in a statement.
The whistleblower letter comes amid concerns that OpenAI, which was founded as a nonprofit with an altruistic mission, is prioritizing profit over safety in developing its technology. The Washington Post reported Friday that OpenAI rushed the release of the latest AI model powering ChatGPT to meet a May date set by company leaders, despite employee concerns that the company had “failed to adhere” to its own security testing protocols, which it says protect the AI from catastrophic harms such as teaching users how to make biological weapons or helping hackers develop new kinds of cyberattacks. “We didn’t skip any safety processes, but we recognize the release was stressful for the team,” OpenAI spokeswoman Lindsay Held said in a statement.
Tech companies’ strict non-disclosure agreements have long been a thorn in the side of employees and regulators. During the #MeToo movement and the nationwide protests over the killing of George Floyd, workers warned that such legal agreements limited their ability to report sexual misconduct or racism. Regulators, meanwhile, worry that the terms could silence tech employees who might flag misconduct in an opaque industry, particularly amid allegations that companies’ algorithms promote content that undermines elections, public health and child safety.
Rapid advances in artificial intelligence have sharpened policymakers’ concerns about the tech industry’s power and have prompted a flood of calls for regulation. In the United States, AI companies operate largely in a legal vacuum, and policymakers say they cannot effectively craft new AI policies without the help of whistleblowers who can explain the potential threats posed by the fast-moving technology.
“OpenAI’s policies and practices appear to have a chilling effect on the right of whistleblowers to speak out and receive fair compensation for protected disclosures,” Sen. Chuck Grassley (R-Iowa) said in a statement to The Washington Post. “If the federal government is to stay one step ahead in artificial intelligence, OpenAI’s non-disclosure agreements must be changed.”
A copy of the letter, addressed to SEC Chairman Gary Gensler, was sent to Congress. The Washington Post obtained a copy of the whistleblower letter from Grassley’s office.
A formal complaint detailing the letter’s claims was submitted to the SEC in June. Stephen Kohn, an attorney representing the OpenAI whistleblowers, said the SEC had responded to the complaint.
It is unclear whether the SEC has opened an investigation. The SEC did not respond to a request for comment.
The letter urges the SEC to take “prompt and aggressive” action to address these illegal contracts, arguing that they may be relevant to the entire AI industry and conflict with the October White House executive order requiring AI companies to develop their technology safely.
“At the heart of these enforcement efforts is the recognition that insiders must be able to freely report concerns to federal authorities,” the letter said. “Employees are best positioned to detect and warn of the types of dangers referenced in the Executive Order, and they are also best positioned to help ensure that AI benefits humanity, rather than works against it.”
The agreements threatened employees with criminal prosecution if they reported violations of trade secret laws to federal authorities, Kohn said. Employees were instructed to keep corporate information secret and were threatened with “severe sanctions,” without being informed of their right to report such information to the government, he said.
“We’re just getting started when it comes to AI oversight,” Kohn said. “We need employees to step up and we need OpenAI to be open.”
The SEC should require OpenAI to submit all employment, severance and investor agreements, including confidentiality clauses, to ensure it has not violated federal law, the letter said. Federal regulators should require OpenAI to notify all past and present employees of the violations the company has committed and inform them of their right to report violations of the law confidentially and anonymously to the SEC. The SEC should fine OpenAI for each “improper agreement” under the SEC Act and direct OpenAI to correct the “chilling effect” of its past practices, according to the whistleblower letter.
Several tech employees, including Facebook whistleblower Frances Haugen, have filed complaints with the SEC, which established its whistleblower program in the wake of the 2008 financial crisis.
The fight against Silicon Valley’s use of non-disclosure agreements to maintain an “information monopoly” is a long game, said Chris Baker, a San Francisco attorney who last December won a $27 million settlement for Google employees who alleged the company used onerous non-disclosure agreements to thwart whistleblowing and other protected activities. Tech companies, he said, are now turning to subtler ways of discouraging speech.
“Employers are willing to take the risk because they’re learning that the costs of a breach can be much higher than the costs of litigation,” Baker said.