Big tech and porn platforms are the real perpetrators behind 'sickening' deepfakes

June 14, 2024

By Dr Gemma McKibbin

Last week, a teenage boy was arrested, and later released without charge, for allegedly using artificial intelligence (AI) to create and distribute explicit deepfake images of around 50 female students at Bacchus Marsh Grammar in Victoria.

While the images have been described as “incredibly graphic” and “sickening”, they are also child sexual abuse material.

AI-generated deepfakes are a growing problem. Picture: Shutterstock

These events may highlight the growing problem of AI-generated child sexual abuse material (usually created and distributed by males), but they also serve as a wake-up call for us to hold to account the tech companies and AI tools enabling the creation of child sexual abuse material in the first place.

Last year’s Australian Child Maltreatment Study found that the fastest-growing form of child sexual abuse involves adolescent boys abusing peers or intimate partners.

And recent findings from the National Community Attitudes Survey tell us this is not happening in a vacuum – young people are significantly less likely than people aged over 25 to recognise technology-facilitated abuse as a form of violence against women and girls.

The Bacchus Marsh Grammar case is an example of this at scale.

Parents have expressed outrage and disgust, victims are traumatised, senior politicians and educators have declared the behaviour unacceptable and misogynistic, and principals have announced a redoubling of efforts to teach Respectful Relationships and consent education.

But is this really the best we can do? Doesn’t it place responsibility for online safety on children themselves – whether they are the victims or young people carrying out the abusive behaviour? Are we scapegoating our children and letting the 'real perpetrators' off the hook?

Big tech companies have a responsibility to protect children from abuse. Picture: Getty Images

Understanding 'harmful sexual behaviour'

As researchers in this area, we call the form of sexual abuse carried out at Bacchus Marsh Grammar 'harmful sexual behaviour' or 'child-on-child sexual abuse'.

The impact of this kind of abuse on victims is often devastating and lifelong.

But we also have to look at the person causing harm who, in this situation, is another child or young person. And we know that there are a lot of reasons why a child or young person might have developed this behaviour.

Some of our team are involved in a collaborative research program called the Worried About Sex And Pornography Project (WASAPP), an online early intervention for children and young people concerned about their sexual behaviours.

WASAPP identified 10 pathways that can lead to the onset of 'harmful sexual behaviour' – many of them driven by a child's own experiences of childhood trauma.

But there are others that appear to be driven by different factors, like the 'hypermasculinity' or 'pornography' pathways.

Let’s look at hypermasculinity first. This involves a child living in an environment dominated by an ideology that privileges men and boys over women and girls and supports rigid gender stereotypes.

If a child’s environment is saturated by this ideology, it can set them – usually an adolescent boy – on a path towards 'harmful sexual behaviour' onset. If that boy is then exposed to factors like 'having misogynistic fantasies' or 'feeling deeply rejected', then he can be pushed towards 'harmful sexual behaviour' more intensely.

Adolescent boys influenced by ‘hypermasculinity’ may be susceptible to harmful sexual behaviour. Picture: Getty Images

This onset can be triggered by all sorts of different experiences, but one is the boy feeling his hypermasculinity is threatened by girls not adhering to the misogynistic roles assigned to them within this ideological framework.

The pornography pathway is set in motion when a child – usually a boy but not always – accesses pornography. Mainstream pornography often depicts violence against women, setting boys up to see sexual violence as normal.

That boy may then encounter 'amplifiers', which may include living in a sexualised environment or having a father figure who validates pornography use. As a result, the boy’s risk of developing 'harmful sexual behaviour' increases.

The onset can be triggered by the boy making a conscious or unconscious decision to act out the pornography on another child, or to seek out child sexual abuse material.

One 17-year-old boy who participated in WASAPP and accessed child sexual abuse material described his journey on the pornography pathway:

“I had way too much time and used that to get to places on the internet that made me feel like I was important or that I had a special skill or that I had a place there that I was respected at a certain part of the internet, just because I had the skills to get there. So, that’s what really led me to going down that path.

“Coupled with the fact that I had been introduced to legal pornography quite a bit earlier than that. I would've been maybe eight years old. I think that was simply due to the fact that I had way too much interest in technology and there wasn’t really enough barriers in place that my parents could use. Because they knew about it, but they couldn't – no matter how many child locks or – they would put security measures on every item in the house, I would still find a way to get past it, go on a different device or something. So, they knew about it.

“But once I was in that space it was just an infinite amount of new things that I could learn. So, that started a sort of interest or addiction.”

Ready access to pornography can amplify harmful thoughts and lead to harmful behaviours. Picture: Getty Images

Who’s accountable for deepfake child sexual abuse material?

Our WASAPP participant also said:

“There’s a lot of porn that even encourages pushing the limits in the most borderline legal way ever. It pushes it so far then when it’s actually illegal it’s only this much [shows a small space between thumb and forefinger] different than what you've actually been watching legally.

“It’s almost like a seamless transition from legal to non-legal. That’s what the law is, there’s a solid line between what’s legal and what’s not legal in most cases. Legal porn almost encourages you to cross that line without noticing.”

These comments from a 17-year-old child suggest that pornography platforms use algorithms that lead users from legal to illegal content almost seamlessly, including child sexual abuse material. So, is the pornography industry the real perpetrator here? It may be no accident that the pornography industry is pouring money into AI.

The use of generative AI and online tools both reflects and amplifies broader social issues of gender inequality, disrespect and abuse. The experiences of our WASAPP participants also reveal the dark paths that exist in pornography and online spaces, especially where amplifiers pave the way for normalising misogynistic, objectifying and harmful behaviours.

Too often, the onus is placed solely on users to navigate these spaces safely. Just as we need multi-level and collaborative prevention and response in the physical world, online safety and accountability must be scaffolded by more than just individual responsibility.

This is particularly true of children and young people.

The government’s Safety by Design principles already provide a framework for the technology industry to move away from retrofitting safeguards after harmful incidents occur and towards proactively building products and tools with safety measures at their core. These principles include service provider responsibility, user empowerment and autonomy, as well as transparency and accountability.

The pornography industry is pouring money into artificial intelligence. Picture: Getty Images

If these principles were more strongly implemented in the design and creation of generative AI products and platforms, harms like those we have seen at Bacchus Marsh Grammar might be prevented in the first place.

We already know that the creation of child abuse material is a risk, and that young people are navigating complex online spaces where the line between legal and illegal content is blurred at best.

The technology industry must proactively build stronger safeguards and accountability into products like generative AI so that they cannot be so easily used to amplify the harms already present in our communities and relationships.

A step like this would help create shared accountability and safety.

Understanding who the 'real perpetrators' are helps to inform how we can best respond and move towards more effectively preventing harm in the first place.

Although incredibly important, educating children about deepfake child sexual abuse material is not enough, and banning children from social media just isn’t realistic for most people.

But there are two things we could do better: early intervention and regulation.

Our team has developed two programs designed to intervene early in 'harmful sexual behaviour'. Power to Kids is a collaboration between the University of Melbourne and MacKillop Family Services that aims to upskill parents, carers and educators in identifying 'harmful sexual behaviour'.

Educating children about deepfake material is not enough – early intervention and regulation are essential. Picture: Getty Images

By taking the program into Australian schools, we can help teachers learn how to identify the signs of 'harmful sexual behaviour' and intervene early, before it escalates and intensifies.

We are also developing a WASAPP early intervention online service for children and young people worried about their sexual thoughts and behaviours, which includes problematic pornography use. Much like similar international services – including Shore in the UK and What’s OK? in the US – this collaboration between Jesuit Social Services and the University of Melbourne is building and piloting a program for Australian children and young people.

But once again, targeting children, parents and educators is just one piece of the puzzle. To see real change, we must regulate.

Targeting the real perpetrators

Regulation should target the 'real perpetrators' – the pornography platforms and applications that are enabling the victimisation of our children and grooming boys to become abusive.

Australia’s eSafety Commissioner is doing excellent work in this area, including supporting the tech industry to adopt and implement Safety by Design principles, but she can’t do it alone. The power wielded by the pornography industry and major tech platforms, along with the global, interjurisdictional context, creates major challenges for regulation.

Regulators, not just here in Australia but around the world, must get involved in solving the problem of deepfake child sexual abuse material. They should be investigating the pornography industry and major tech platforms for peddling this material. They must negotiate international legislation that empowers us to challenge big tech and its complete disregard for child safety.

It seems like the 'real perpetrators' that lead these children into 'harmful sexual behaviour' are getting away with it time after time. And now is the moment to start holding big tech and pornography platforms to account.

This article was first published on Pursuit.