
There are many risks to AI. In fact, barely a week goes by without some AI CEO warning the world that a Skynet-esque AI takeover is on the horizon. Thankfully, these warnings are more of a marketing ploy than a genuine concern. The real risks of AI are far more insidious. You see, it takes an ungodly amount of data to train advanced AIs, and as AIs get more capable, the amount of training data they need grows dramatically. Acquiring this vast amount of data ethically is far too expensive and far too much of a hassle for AI companies, particularly if they want to turn a profit. So, they become data pirates, harvesting your personal data for their own gain. Sadly, our legal system has no real mechanism to stop this new-age threat. Or at least, that's what we thought. You see, the EU has just stopped Meta (Facebook) from rolling out a profoundly worrying plan to train its AI.
Let's start by explaining why training AIs on people's personal data is so terrible. Firstly, every time data is transferred, or large amounts of it are stored in one place, there is a security risk. And when a lot of sensitive data sits in a single location, particularly when it is categorised and coherently interlinked, as AI training data is, it becomes of extreme interest to hackers. To top this off, developing AI is extremely expensive, so costs have to be cut somewhere, and data security typically takes the brunt of those cuts. We have seen these dynamics play out across the AI industry, which is plagued with data leaks.

You also don't want to give these companies your data because it means their generative AIs can replicate you. If they have images of your face, recordings of your voice or samples of your writing, their AI can mimic you worryingly well. This poses a huge copyright risk for creatives, but it also poses a huge fraud risk for everyone, as it could enable others to impersonate you. While these generative AIs have safeguards meant to ensure no malicious content can be produced, these can be easily bypassed.
As a side note, this is why I don't post any pictures of my family on social media. There is a chance images of my child would be used to train generative AI (Meta's or others) and, since malicious actors can get around the prompt blockers, used to create, for example, indecent content in their likeness, whether it was meant to look like them or not.
So, yeah, we do not want AI companies using our personal data!
Which is why Meta’s recently announced plans are alarming. It wants to use public posts and comments of anyone over the age of 18 on its platforms (such as Instagram and Facebook) to train its AI models.
Firstly, Meta doesn’t verify the ages of its users, and I know of many accounts with incorrect ages. What’s more, as these are social media platforms, many of the posts by people over 18 contain content from people under 18. One of the main uses of Facebook these days is to share family photos or insights into family life with friends and family, after all. So, this age limit is a deeply hollow gesture.
But secondly, just because this data is public doesn't mean people consent to it being used this way. Many consumers don't know the risks of letting their data be used for AI training. What's more, Meta appears to be classifying accounts locked so that only friends can see them as "public", meaning their content can be used despite the user obviously wanting data privacy.
But as I covered, there is no legal mechanism to stop Meta here. Most legal systems don’t know how to interpret this data use, so they are paralysed and can’t protect people.
However, that isn't true in the EU, which has a strict data privacy law: the General Data Protection Regulation (GDPR).
Advocacy group NOYB (None of Your Business) filed 11 complaints against Meta in several European countries. NOYB founder Max Schrems told the Irish Independent that "Meta is basically saying that it can use any data from any source for any purpose and make it available to anyone in the world, as long as it's done via AI technology" and that this is "clearly the opposite of GDPR compliance." Meta tried to cut this off at the pass by giving Europeans the option to opt out, something it hasn't offered the rest of the world. However, GDPR operates strictly on an opt-in basis: consent must be obtained before personal data is processed, not assumed unless users object. This spurred the Irish Data Protection Commission (DPC) to crack down on Meta and force it to pause its proposed plans, at least in Europe.
Needless to say, Meta was not happy, lambasting the EU for delaying the advancement of AI.
However, it is important to remember that Meta's business model is monetising your personal data. It has been from day one. That's why Facebook did nothing to stop Cambridge Analytica, which illegally harvested data from the site to "microtarget" people with political advertising, leading to democratic chaos. Meta views AI as another extremely profitable way to package and sell your data, and it is doggedly pursuing it, no matter the cost to you. Luckily, the EU seems to have stopped this, at least for now. But for anyone outside the EU, maybe you should think twice about how you use Meta's products.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and help get the word out by hitting the share button below.
Sources: The Register, Reuters, BPC, The Verge