
Meta, the social media company formerly known as Facebook, is pushing to become an AI leader. This move makes far more sense than their ‘metaverse’ idea. After all, they hold copious amounts of people’s personal data on which to train AI models, giving them a real advantage. But a looming question remains: can Meta be trusted with AI technology? The company has, after all, a long history of knowingly profiting from deeply damaging content. Remember Cambridge Analytica? A recent investigation and lawsuit have now firmly answered this question. You see, Instagram, part of Meta’s sprawling empire, is currently profiting from and promoting AI-generated child abuse images. So, does this mean Meta can’t be trusted? And how can we hold them accountable?
This lawsuit comes from the law firm Schillings, which is launching a ‘groundbreaking legal challenge’ against Meta on behalf of the children’s charity 5Rights, supported by evidence from a UK police investigation. Schillings claims Instagram is ‘complicit in the exploitation of children online’ by hosting ‘content which puts both adult and child users at risk’. How? Well, that UK police investigation found that AI-generated sexualised images of children are widespread on Instagram and that Instagram is connecting users with illegal content. Undercover officers found dozens of accounts with names like ‘pervy kinks’, which promoted links to sites selling AI-generated sexualised images of young children. These weren’t just a few images. Police found these paedophiles had used AI to generate thousands of indecent images of children, and they openly used Instagram to flaunt and sell them. In fact, the police found you could go from Instagram to these horrific images in just two clicks.
Normally, content like this is banned or blocked straight away. But these profiles are out in the open, and the police found that Instagram is, in fact, promoting them to other users. Undercover officers conducting the investigation were actually recommended other accounts with similar content and links to similar sites.
Some have questioned whether AI-generated indecent images of children are actually illegal, as many don’t depict real abuse. However, this argument falls completely flat. For one, most laws around such material are worded to make the creation, ownership and distribution of indecent images of minors illegal, so it doesn’t matter whether these images depict reality or not.
But, these images, no matter how fake they are, also create genuine abuse.
Reportedly, many of these images use AI to sexualise a pre-existing image of a minor. As such, it is very much abuse of that person, even if they are no longer a minor. But even if the images are entirely AI-generated, that doesn’t mean there isn’t a victim. AI can only produce output similar to what it has been trained on. As such, if you ask an AI image generator to produce a photorealistic picture of a child, it will produce one that looks similar to a real child it has seen extensively in its training data. I recently ran into this problem when using an AI image generator for an upcoming video project, and the AI kept creating images that looked identical to F1 driver Charles Leclerc. As such, even purely AI-created content like this still has a victim.
But more worryingly, this content seems to be a gateway. The police investigation found the sites hosting these images also had links to pay-per-view websites and encrypted Telegram channels that featured videos of real children being raped. There were also concerns children would stumble across these accounts, especially as they are being recommended to other users, potentially exposing them to severe abuse in the real world as well as online. Moreover, Schillings has said that this material’s openness, and the fact that Instagram, a mainstream app, is actively recommending it to other users, legitimises this deeply illegal and severely damaging behaviour.
The fact that the police found these accounts so easily, and that Meta didn’t close them down and still hasn’t acted despite a damning looming lawsuit, is telling. If Meta cannot even manage AI content on its own platform, how can it be trusted to build its own AI?
This is where the threat of AI starts to expose itself. It won’t lead to a Skynet-esque apocalypse, and it won’t outsmart humans. Instead, AI is incredibly good at creating and disseminating utterly vast amounts of insanely damaging content, such as child sexual abuse material, political misinformation, corporate misinformation, health misinformation, terrorism propaganda, spam and hate speech. We are unprepared for the tidal wave of this devastating content that will flood the internet in the coming years. Not only will it make the internet a far worse place to be, but it will also cause immeasurable damage to our society, politics, and lives. This is why we need proper AI regulation ASAP; otherwise we will face an information apocalypse.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and help get the word out by hitting the share button below.
Sources: Daily Mail, 5Rights, Eureka Alert, Planet Earth & Beyond, Gov Tech