Remember Edward Snowden? The guy who had to flee to Russia for exposing how horrific the NSA was? Well, in a tweet (xeet?), he recently said, “Do not ever trust OpenAI or its products.” But it’s not just him ringing the alarm bells over the AI company; ex-board members are coming forward with deeply worrying tales about its CEO, Sam Altman, and the terrifying approach the company is taking. So, I think it is time to talk about OpenAI.
Let’s start with what prompted Snowden to post his warning: OpenAI’s appointment of former NSA head and retired U.S. Army Gen. Paul Nakasone to its new safety and security committee. You see, OpenAI safety researcher Leopold Aschenbrenner was recently fired after sending the board an internal memo detailing a “major security incident” and arguing that the company’s security was “egregiously insufficient” to protect against theft by foreign actors. Shortly after, OpenAI’s superalignment team, which focused on developing AI systems compatible with human interests, disbanded after two prominent safety researchers quit, with one of them saying that within OpenAI, “safety culture and processes have taken a backseat to shiny products.” This left OpenAI with no real safety lead and a badly tarnished data-security reputation, both of which matter deeply when dealing with complex AI. So, they brought in the recently retired Nakasone, who had served as director of the NSA and commander of United States Cyber Command. If anyone can improve OpenAI’s cybersecurity, it’s this guy. Or, at least, that was the idea.
Snowden doesn’t see it like this, and arguably, neither should you. You see, back in 2013, Snowden leaked that the NSA was illegally and possibly unconstitutionally spying on US citizens by collecting their electronic communications data. What’s more, despite Snowden’s disclosure of these programs, they are still in operation. In other words, Nakasone isn’t just highly experienced in cybersecurity, he is also extremely experienced at illegal mass surveillance and data collection.
Why is that a problem? Well, OpenAI’s advanced and complex AIs need truly gargantuan volumes of data to train on. For example, GPT-3 was reportedly trained on roughly 570 GB of filtered text. What’s more, for these AIs to keep getting more capable and advanced, they will need to be trained on ever-larger data sets. Now, OpenAI already struggled to obtain enough high-quality data legally, allegedly resorting to unauthorised web scraping and the use of copyrighted materials like books and articles. Snowden has also noticed these practices, stating at a conference, “It’s a poor joke, right? They refused to provide public access to their training data, their models, the weights and so on — but they’re a leader in the space. They’re being rewarded. They’re being rewarded for antisocial behaviour.”
But, with Nakasone advising them, they could gather substantially more data by intercepting your emails and messaging apps, or even tapping your phone. This would be highly illegal, but OpenAI is desperate for more data. Their next models will require vastly larger training sets to reach noticeable levels of improvement, and Nakasone knows how to get them that data at any cost, and how to get away with it.
It’s no wonder Snowden rounded off his tweet (xeet?) with, “There’s only one reason for appointing [an NSA director] to your board. This is a willful, calculated betrayal of the rights of every person on earth.” He even commented later, saying the “intersection of AI with the ocean of mass surveillance data that’s been building up over the past two decades is going to put truly terrible powers in the hands of an unaccountable few.”
What’s more, it’s not like Nakasone is trying to dispel these worries. He even stated that “OpenAI’s dedication to its mission aligns closely with my own values and experience in public service.” With both his and OpenAI’s history in mind, such a statement is deeply worrying!
This alone would be concerning, but ex-board member Helen Toner’s revelations about OpenAI CEO Sam Altman only exacerbate the situation. Last year, Toner was one of the board members who tried to remove Altman from the company. The reason? He was manipulative, toxic and dishonest. He repeatedly undermined the board, lied to it about his ownership of the OpenAI Startup Fund, repeatedly gave inaccurate information about the company’s safety processes and smeared any board member who angered him. Others at the company have even accused Altman of “psychological abuse.” What’s more, Toner claims the board had essentially no real oversight of the company, citing the fact that the board didn’t even know ChatGPT was going to launch until they read about it on Twitter.
This Machiavellian behaviour to undermine accountability and oversight, combined with the hiring of Nakasone, paints OpenAI in a profoundly worrying light. The company looks less like one acting “for the benefit of humanity,” as it claims, and more like a predatory, secretive and dangerous operation. What’s worse, governments seem ill-equipped to police and regulate AI and ensure companies like OpenAI don’t trample the public’s rights. As such, we should be deeply critical and wary of OpenAI, no matter how fancy or useful their products may seem.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and help get the word out by hitting the share button below.
Sources: Fortune, BI, The Verge, The Economist, CNBC, Invgate