Excerpts From a Conversation with Julie Inman Grant, Australia’s eSafety Commissioner
from Women Around the World, Women and Foreign Policy Program, Diamonstein-Spielvogel Project on the Future of Democracy, and Women’s Political Leadership

A high school student poses with her mobile showing her social media applications in Melbourne, Australia, November 28, 2024. REUTERS/Asanka Brendon Ratnayake

A discussion with Commissioner Inman Grant, moderated by Senior Fellow Linda Robinson, on Australia’s landmark Online Safety Act and global efforts to advance digital safety. 

December 20, 2024 10:40 am (EST)

Blog posts represent the views of CFR fellows and staff and not those of CFR, which takes no institutional positions.

As the first country to pass an Online Safety Act and set up an independent regulator for online safety, can you provide an overview of Australia’s approach to ensuring a safe online environment for its citizens?   

I came to eSafety as what was then the Children's eSafety Commissioner, an office formed in 2015 after the tragic death of a female celebrity who took her own life after being terribly trolled on Twitter. Our Minister for Communications, Malcolm Turnbull, who eventually became our Prime Minister, knew that we really had to start with children first, because you can't really argue that children aren't vulnerable. He took a number of functions from across government and combined them with an education and communications function. We serve as the hotline to take reports of child sexual abuse material, and we created a novel cyberbullying scheme. It works so that if a child receives content online, particularly on social media, that is threatening, harassing, humiliating, or intimidating, and it doesn't come down, then we serve as the safety net. They or their parent can report to us, and we have a 90 percent success rate in getting that content down. We know it's often peer-to-peer, so we end up doing a lot of case management work with the children themselves, the parents, and the school community.

We have started to see much more dangerous language, much more violent rape threats, amongst those who are thirteen, fourteen, or fifteen, so we have started to gingerly use what we call our "end user notice powers," which are basically a cease and desist. We use them on the principle that you need to take this content down, we're watching, and you need to send an apology to the target. This sends a deterrent message that you can't abuse with impunity. Then I was asked to set up a novel program on revenge porn. I said I'll set it up, but I won't call it revenge porn. Revenge for what? We're talking about intimate images being shared without consent, and just like child sexual abuse, it is a very gendered crime impacting girls much more than boys, up to 90 percent. About 75 percent of all image-based abuse affects girls, and that includes deep fakes. The only anomaly there is that we're seeing a 1,300 percent increase in financial sexual extortion targeting young men between the ages of eighteen and twenty-four.

After the Christchurch tragedy, we were given more powers around terrorism and abhorrent violent conduct. We had a review that gave us more systemic powers, but what I would just say is that we've got these complaint schemes, which are the first in the world, and they allow us to remediate harm in real time. That helps us bridge the inherent power imbalance that exists between the great tech behemoths and the everyday person.

What's really valuable about our complaint schemes is that we see the trends that are happening in real time: how technology is being weaponized, where the abuse is happening, and how the companies are actually dealing with it. So we can see the systemic faults and then tailor our systemic powers accordingly. But that has to be bolstered by prevention and education at the front end and the gathering of a strong evidence base, which is covered through the Online Safety Act. What is not covered is any kind of scheme to flatten and harden the threat surface for the future, and that is what I call proactive and systemic change. That is where our Safety by Design initiative sits: really forecasting eighteen to twenty-four months out, what are the tech trends that are coming? How do we harness the benefits? How do we mitigate the risks? How do we bring the populace along, and how do we shape the laws to make sure they're accommodating these changes? Because we've done that, we now have powers to deal with synthetic CSAM and deep fakes. All of that is really important to anticipate the harm.

Can you discuss both the cooperation and pushback from the private sector as Australia has implemented its Online Safety Act? What successful results would you like to highlight?

We would not have the success that we do through our complaint schemes if we didn't have proactive relationships with the companies. It's always a little bit of a dance, but working with them we can get that content down much, much more expeditiously; it's simply an issue of volume. Often it's a terms of service violation that we notify to them, and a lot of the companies already have trusted flagger programs, so we actually haven't received a lot of resistance, yet.

I think many of you would be familiar with what we refer to as the Wakeley incident. We had a tragic mass stabbing at a major shopping mall at Bondi where seven individuals were killed. Two days later, an Assyrian bishop was giving a live-streamed service, and a radicalized teenager came up and very violently stabbed him in what was later determined to be a terrorist act. My investigators have to mobilize whenever and wherever there is a terrorist incident, a beheading, a bludgeoning, or a bombing to make sure that it's not going viral. We tend to work cooperatively with the companies, so we notify the URLs to them, and, in this case, everyone complied.

We were concerned that it was going a bit too viral on both X Corp and Meta, so we issued removal notices to both of them. Meta complied, but X Corp said they would see us in court. We had asked them to take reasonable steps to prevent people from being able to see the content, beyond interstitials and labels, and there was a debate about whether using country-withheld content was sufficient. We were not successful in that court case, but we now have six cases against them. We had also issued a transparency notice around child sexual abuse material to more than thirteen companies, including X Corp. X Corp didn't comply, so we sent them an infringement notice, and we have a fairness and due diligence process where we do our best to get the information we can. It was surprising, because the CEO at the time had said very publicly that child safety was his top priority, so wouldn't you want to talk about what you're doing? That ended up in judicial review, and, just to show you how this jurisdictional arbitrage works, they argued in court that when we issued the transparency notice they were Twitter, based in Delaware, but then merged under Nevada law to become X, so none of the liabilities or responsibilities transferred. My team had written to them when that change happened to make sure that the infrastructure and the people were the same, so the court threw that argument out with costs, and they're appealing. There are other tools we're starting to see that tie regulators up in red tape and lawsuits; we have seven lawsuits. There are front groups; one is called the Free Speech Union of Australia, and they've run a campaign that increased the number of FOI requests by 3,000 percent. So I think we'll see a number of different things employed, but beyond that, we haven't had significant pushback from the major companies.

Really, the challenges we have are with the rogue porn sites that were set up for the purpose of humiliating women, or the sites that were set up with a lot of gore content, in countries that have permissive hosting environments, of which the United States is one.

Can you describe the state of international collaboration on digital safety? You formed the Global Online Safety Regulators Network, and the European Union is initiating implementation of its Digital Services Act. Do you think that the size of these combined markets may lead companies to adopt standardized safety practices? 

When we were formed in 2015, we were the only online safety regulator in the world, and remained so for almost seven years, so we had to write the playbook as we went along. Once we started to see that the online safety bill in the UK was going to move forward, which took about six years, that Ireland was also coming on board, and that the DSA was happening, we formed something called the Global Online Safety Regulators Network, roughly based on what the Global Privacy Assembly does for privacy authorities and ICPEN does for consumer and competition authorities. We launched here in DC two years ago with Fiji, the UK, and Ireland. We now have nine regulators in the network and eighteen observers from around the world. A lot of the observers are government bodies, NGOs, or civil society organizations, for instance Canadian Heritage, the department trying to put forward C-63, the online harms bill in Canada, so that they get the benefit of understanding how we work and what works and what doesn't. Separately, we also have a Memorandum of Understanding with the European Union, and I had the opportunity to address all twenty-seven of the Digital Services Coordinators. In less than a decade, we've gone from zero online safety regulators to about thirty, and the combined population covered by the nine regulators we have now is roughly equal to the population of the United States. Obviously, the online world is global; the internet is global, and we have to work together.

The first year, our focus was really on governance and setting things up, but also on stating very clearly all of the myriad ways in which online safety is compatible with a range of human rights, including freedom of expression. Our Parliament decided that when freedom of expression veers into the lane of online harm and people are targeted, that itself suppresses speech, and that there would be an independent statutory authority in eSafety to investigate, with thresholds, transparency, and accountability, and that citizens could have recourse in multiple ways: through the courts, through the Tribunal, and through internal review. So there are lots of checks and balances in place there.

This year, we focused on what we call regulatory coherence, because just as privacy is driven by cultural mores, you can say the same of online safety. If you look at the debates in the U.S. Congress around the Kids Online Safety Act (KOSA) and around Section 230, it's really been about conservative voices and progressive voices rather than how do we mitigate the harm, which is a very bipartisan issue in Australia. I think countries around the world increasingly feel that these companies are extractive. In many ways, they're extracting our data, children's data, and extracting revenue. My Prime Minister was very clear when he put forward the age-restriction social media bill that he believes social media companies have a social responsibility. What we'll be looking at in the next version of the Online Safety Act is what that social contract looks like. To the extent we are concerned about more pushback, what do business disruption powers look like? What do significant penalties look like, and how do we approach this in a way that's going to work for everyone?

Can you explain the Safety by Design initiative you launched in 2019 to help organizations that want to embed the rights of users and user safety into the design and functionality of products and services? Can you explain the measures and whether you see major private sector players embracing this proactive approach?

I'm sure many of you who've worked in this field know that SD3, secure by design, by default, and in deployment, has been in place for a long time. Privacy by design was announced by Ann Cavoukian, the former Information and Privacy Commissioner of Ontario, in 2009.

When I became eSafety Commissioner, I thought we needed to put the burden back on the platforms, just like Ralph Nader did when he brought in all the data showing how many traffic fatalities would be prevented by embedding the humble seat belt into the car. It took Congress and parliaments around the world to mandate that, and the car manufacturers pushed back. Today, we get into our cars and take for granted that there are all these life-saving technologies, and the car companies compete on that. I thought about that in the context of this technological exceptionalism. Why aren't the platforms assessing the risk, understanding the harms, and embedding the safety protections at the front end rather than retrofitting after the damage has been done? Of course it has to do with the business model, and we see more stickiness-by-design or surveillance-by-design, but I want to do this with industry rather than to them. We started with four principles, and after nine months we whittled them down to three that were agreed to: service provider responsibility, user empowerment and autonomy, and meaningful transparency and accountability.

The latest paper we just put out was around technology-facilitated gender-based violence and the extent to which there are companies doing beneficial things, such as blurring imagery so that you don't have to receive unsolicited images. There is so much more to be done here. If you look at the demographics of Silicon Valley leadership, 80 percent of the product engineering and leadership teams are men; they don't have the lived experience. But if you can target advertising with the deadly precision that we see today, you can certainly target misogynistic, racist, homophobic, or any other kind of hate speech. We have the way; we haven't seen the will. So we think venture capital and investment companies should be huge levers. They're often the adults in the room, so we developed a checklist so that when they're in conversations with these new companies, they can get them to think about how their platforms could be misused and how that could be engineered out. We give them some due diligence clauses so that they can invest while managing their own risk and possibly preventing the next tech wreck.

But we've also incorporated it into education, so it's now taught with IT in the high schools. We've got a twelve-hour Massive Open Online Course (MOOC) for the ecosystem engineers of the future so that they can be designing and coding with a conscience. What I would say is that Safety by Design can be applied to every single technology that's out there. We're also applying it to government processes. We had this terrible automated system called Robodebt that was pursuing people for welfare debts they did not actually owe; they got it wrong, and it led to suicides and terrible situations. So now governments are starting to realize that we do need to understand the risks and the harms and prevent that kind of total misfiring. So it can apply to anything.

Check out the rest of the conversation here.  

Noël James is the research associate for the Women and Foreign Policy program and assisted in preparing this post. 

This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.