OpenAI’s public image is mixed at the minute. To some, it is a pioneering company unlocking the future! To others, it is a profoundly unethical corporation driving a worryingly vast economic bubble and bringing about the social-media, billionaire, tech-bro-led end of days. While OpenAI has done a relatively good P.R. job in recent years, public sentiment is definitely swinging to the latter, and it is only getting worse. Recent whistleblowers have flagged some seriously questionable practices at the heart of the company. So, what on Earth is going on at OpenAI?
Let’s start with the recent revelations. Reuters recently reported that OpenAI whistleblowers have filed a complaint with the U.S. Securities and Exchange Commission (SEC), calling for an investigation into the company’s allegedly restrictive non-disclosure agreements (NDAs). Apparently, OpenAI has made its employees sign NDAs that waive their federal rights to whistleblower compensation!
The U.S. has legal protections for whistleblowers. Employers can’t legally retaliate against a whistleblower within their company, whether by firing or laying them off, demoting them, denying them overtime or promotion, or reducing their pay or hours. What’s more, if a company is found to have violated these protections, the whistleblower is entitled to uncapped compensation, determined by how much the retaliation has damaged their earning capacity.
Considering how tightly knit the A.I. industry is, how small many of the teams at OpenAI are, and how well-paid many of these A.I. engineers and executives are, removing this compensation has some potentially horrific consequences. Many of these people wouldn’t just lose their jobs if they blew the whistle under these NDAs; they would lose their entire lucrative careers. As such, these NDAs severely limit transparency and accountability within OpenAI.
This would be a problem on its own. But OpenAI’s history of shady practices, from allegedly breaching copyright law to shipping potentially hugely damaging products, makes whistleblowing a crucial mechanism for holding the company accountable.
What if an engineer notices that OpenAI is quietly enabling bad actors to use ChatGPT to create and disseminate vast amounts of damaging disinformation? Such actions could genuinely crush Western democracy. That engineer should be able to blow the whistle without risking their livelihood.
As such, Senator Chuck Grassley has urged the SEC’s Commissioners to immediately approve an investigation into OpenAI’s prior NDAs and to review the company’s apparent current efforts to ensure full compliance with SEC rules.
But what’s really worrying is how these NDAs look when set against OpenAI’s other recent questionable actions.
Former OpenAI safety researcher Leopold Aschenbrenner was recently fired after writing an internal memo detailing a “major security incident” and arguing that the company’s security is “egregiously insufficient” to protect against theft by foreign actors. Shortly after, OpenAI’s superalignment team, which was focused on keeping future A.I. systems compatible with human interests, disbanded after two prominent safety researchers quit, one of them saying that within OpenAI, “safety culture and processes have taken a backseat to shiny products.” The company has also appointed a former head of the NSA to its board! Considering that OpenAI’s apparent mission is to gather as much data as possible to train its A.I. models on, no matter how unethical the methods, this is deeply concerning.
As such, OpenAI seems to be setting itself up to take some seriously nefarious actions with zero mechanisms for transparency or accountability. The question has to be asked: what is Sam Altman up to that requires such harsh and potentially illegal secrecy? Is he planning on gathering data in highly illegal ways? Appointing a former head of the NSA strongly suggests as much. Is he planning on letting people use ChatGPT and OpenAI’s other products in profoundly damaging ways? It would be consistent with the company’s direction.
Either way, OpenAI’s attempt to shroud itself in secrecy is yet another glaring example of why we need to rapidly introduce policies, laws, and governmental bodies to ensure the A.I. industry acts ethically. After all, the industry has demonstrated that it can’t do that on its own.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and help get the word out by hitting the share button below.
Sources: Reuters, DOL, Planet Earth & Beyond