Your monthly round-up on Trust & Safety

Hi Name,  

 

With a wide array of T&S updates in the past few weeks, it is safe to say that April brought some major developments on child safety and age verification, both from regulators and industry... But there’s a lot more to dig into in this new edition of Safety Space. Here's a sneak peek:

 

+ In the 🇬🇧 UK, Ofcom published its new Protection of Children Codes of Practice, setting 24 July as the deadline for the completion of children’s risk assessments. Across the Channel, the 🇪🇺 European Commission is hiring to boost its DSA enforcement capacity. Meanwhile, on the other side of the Atlantic 🇺🇸, the Take It Down bill has been passed by Congress.

 

+ In the 🏭 industry, Meta rolls out AI tools to detect underage users, Google launches a trust report for Maps and upgrades ad fraud defenses, and Discord tests facial recognition for age checks. TikTok experiments with “footnotes” for added context, Bluesky develops its own blue check, and new parental controls land on Roblox.

 

+ 🔵 And some updates from Tremau: we are now confirmed as SOC 2 Type II compliant and Justin joined us as our new Head of Sales! 

🇮🇪 And we're heading to Dublin for the TSPA EMEA Summit! If you are there too, get in touch with our team...


... and come join us in our panels!

 

+ Unpacking the DSA Audits: Takeaways for T&S Professionals - 10:45 AM with Toshali Sengupta.

 

+ Securing the Digital Playground: Between Protecting and Empowering Children Online - 2:20 PM with Jess Lishak (Tech Coalition), Silvia Fukuoka (Ofcom), Karen McAuley (Coimisiún na Meán), Agne Kaarlep (Tremau)

 

+ Balancing Regulation and Innovation: The Future of Age Assurance in the Tech Stack - 1:30 PM with Julie Dawson (Yoti), Martin Drechsler (FSM.de), Michel Murray (Information Commissioner’s Office), Pal Boza (Tremau)

 

+ Is the DSA Changing Platforms' Relationship with Users? Lessons Learned From Year 1 of New Transparency & Redress Regulation - 4:40 PM with Kevin Koehler (Independent Consultant), Niklas Eder (User Rights), Steve Blythe (Automattic), Richard Earley (Meta), Agne Kaarlep (Tremau)


🇬🇧📄 Ofcom published its new Protection of Children Codes and Guidance, giving online services until 24 July to complete and record their first children’s risk assessments. The Codes outline over 40 measures that services likely to be accessed by children can adopt starting 25 July 2025, including: safer feeds, robust age checks, swift action on harmful content, better support and choice for children, easier reporting and complaints, and stronger governance across the board.

 

🇬🇧❗Ofcom has opened an investigation under the OSA, targeting the provider of an online suicide discussion forum. The investigation will assess whether the provider had appropriate safety measures to protect users from illegal content, completed and maintained a suitable and sufficient illegal harm risk assessment, and adequately responded to an information request.

 

🇬🇧👥 Ofcom established a new Online Information Advisory Committee under the OSA to provide guidance on how services should deal with disinformation and misinformation. The committee members include: Elisabeth Costa, Jeffrey Howard, Will Moy, Mark Scott, and Devika Shanker-Grandpierre.

 

🇪🇺👥 The European Commission is enhancing its capacity to enforce the Digital Services Act, opening 60 positions in its DSA enforcement unit. Applications close on 10 May.

 

🇪🇺🤝 Seven out-of-court dispute resolution bodies have joined forces to launch the ODS Network, a new coalition aimed at strengthening the enforcement of users’ rights under the DSA. The network includes User Rights, the Appeals Centre, ADROIT, among others. At Tremau, we keep track of who these bodies are, where they operate, and how to get in touch with them, in our DSA Database 📊.

 

🇪🇺🤖 Executive Vice-President for Tech Sovereignty, Security and Democracy of the EC, Henna Virkkunen, said integrating DeepSeek into platforms might result in breaching the DSA as “DeepSeek models are subject to ideological censorship and are therefore in conflict with the EU’s principles”.

 

🇺🇸🏛️ The Take It Down Act, a bill requiring online platforms to remove nonconsensual intimate images (including AI-generated ones), has now cleared Congress and is awaiting presidential sign-off. Once enacted, online services operating in the U.S. will have one year to roll out systems that let users report NCII, ensure its prompt removal, and put safeguards in place to prevent abuse of the reporting process.

 

🇺🇸👶 The Federal Trade Commission has announced amendments to the Children’s Online Privacy Protection Rule (the COPPA Rule), which will take effect on June 23, 2025. Companies will have until April 2026 to achieve full compliance. The amended rule introduces stricter requirements for privacy transparency: organizations must now clearly outline their data practices, including what personal information is collected, the entities with whom it is shared, and the purposes behind its collection. A central focus of the update is limiting the reach of digital advertising by restricting the flow of children’s data to third parties.

 

Plus, 2️⃣ key age verification updates from 🇫🇷 & 🇮🇹:

 

🇫🇷🔞 France’s new age verification law for adult websites is being phased in, with domestic platforms and non-EU services already subject to the requirements. Starting June 7, sites based in other EU countries will also need to comply. Under the new framework, France’s digital regulator Arcom will be empowered to block access to any platform that fails to implement effective age verification measures within the required timeline. 

 

🇮🇹🛡️ Italy’s telecommunications regulator, Agcom, has approved a resolution requiring online services that host content deemed harmful to minors - such as adult material or gambling - to implement age verification measures within six months. The regulation introduces a two-step system requiring both identification and authentication of users for every session in which the service is accessed.


🧒📲 Meta announced new measures for safeguarding children online:

 

+ AI tools will proactively look for underage users and automatically change their account settings.

+ Facebook and Messenger will have “teen accounts” in the US, UK, Australia, and Canada. The feature has been available on Instagram since September and includes restrictions on messaging and interactions with strangers, as well as tighter controls on viewing sensitive content.

 

📍📃 Google Maps has published its first Content Trust & Safety Report. In 2024, nearly one billion reviews were posted on the platform; 245 million were removed, and 949,000 accounts were restricted. According to the report, most fake reviews are detected by machine learning systems that analyze account behavior, identify links to suspicious networks, and flag anomalies such as sudden spikes in five-star ratings.

 

📲🤖 Google's latest Ads Safety Report highlighted a key new trend: AI is improving the ability to prevent fraudsters from ever showing ads to people. Over the past year, Google implemented 50 enhancements to its large language models, enabling the platform to proactively prevent abuse and speed up complex investigations.

 

💬🚩 Google is rolling out sensitive content warnings in its Messages app for Android to flag nudity. Powered by an on-device classifier to address privacy concerns, the feature blurs incoming images identified as explicit and prompts users with a reminder about the risks of sharing nude imagery when sending such content. The warnings are enabled by default for minors, and supervised users won’t be able to turn them off.

 

📝📱 TikTok is testing a new feature called “footnotes”, echoing the community notes found on X and Meta. Currently being trialed in the U.S., the feature lets users add extra context or relevant information to videos, aiming to provide more clarity around the content shared on the platform.

 

👶🎮 Roblox launched new parental control tools. These allow adults to block specific experiences and people on their child’s friends list, and to see which experiences their child spends the most time in.

 

🧑‍💻🔞 Discord is testing a facial recognition age verification process in the UK and Australia for users who encounter flagged explicit content or who try to change age-related settings. 

 

✅🤳 Bluesky is introducing a new layer of verification: a user-friendly blue check that will be displayed next to the names of authentic and notable accounts, as well as accounts verified by select independent "Trusted Verifiers" organizations.

 

🤖🧩 Mistral has launched Classifier Factory, a toolkit designed for organizations to build their own custom classifiers. It can be used across a range of applications, from content moderation and sentiment analysis to fraud detection and spam filtering.
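To give a flavor of where such custom classifiers slot into a moderation pipeline, here is a minimal sketch: it assumes a classifier (however it was built) returns per-label confidence scores, which the pipeline thresholds to decide whether to remove content, escalate it to a human reviewer, or let it through. The label names and thresholds are hypothetical, not part of Mistral's API.

```python
# Hypothetical routing step downstream of a custom text classifier.
# The classifier is assumed to return scores in [0, 1] per label;
# labels and thresholds below are illustrative only.

def route(scores: dict) -> str:
    """Map classifier label scores to a moderation action."""
    # High-confidence policy violations are removed automatically.
    if scores.get("hate", 0.0) >= 0.8 or scores.get("spam", 0.0) >= 0.9:
        return "remove"
    # Sensitive-but-ambiguous signals go to a human reviewer.
    if scores.get("self_harm", 0.0) >= 0.5:
        return "human_review"
    # Everything else is allowed through.
    return "allow"
```

The design choice worth noting is the asymmetric thresholds: clear-cut categories like spam can be auto-actioned at high confidence, while higher-stakes, context-dependent categories are routed to people at a much lower score.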


🔐🔵 We achieved SOC 2 Type II compliance - an essential step that reinforces our commitment to data security and scaling Nima, our AI-powered content moderation platform.

 

As our Co-founder & COO, Pal Boza, puts it, “SOC 2 compliance is key in our continuous effort to build Nima as a secure, trusted system that empowers client platforms to tackle harmful and illegal content effectively”.


🤝🥳 We welcomed Justin Samuel, our new Head of Sales, who brings experience in Enterprise SaaS, AI, Compliance Tech, and Digital Transformation. In his new role, Justin will drive our growth in T&S and bring Nima, our T&S platform, to the forefront of the market.

Meet Justin now

👶🔍 Online child safety is front and center in Trust & Safety, with Ofcom releasing its Protection of Children Codes of Practice and other regions moving toward new regulations; our latest blog breaks down the evolving landscape in the EU, UK, Australia, and the US, and the challenges platforms face.

Need help with child safety? Contact us

🧒👥 The Children’s Rights Alliance is monitoring online harms and assessing the progress made by Ireland and the tech industry in protecting children. Recommendations focus on online safety initiatives that account for marginalized children, including those with disabilities, LGBTQI+ youth, and children from low-income backgrounds.

 

🇪🇺👶 How to protect children under the DSA? The Centre on Regulation in Europe (CERRE) researched age assurance, age-appropriate design, and the ecosystem of rules at EU and non-EU levels.

 

🧑‍💻 The Internet Watch Foundation published its annual report, highlighting that 91% of the reports assessed as criminal contained ‘self-generated’ imagery. In total, the Foundation received 424,047 reports suspected of containing child sexual abuse imagery.

 

🗺️🏛️ How does online safety regulation compare across 19 jurisdictions? NYU analyzed 26 laws from different countries, offering a comparative overview, outlining the advantages and disadvantages of each approach, and giving recommendations for developing regulations consistent with international human rights standards.

 

📚🔎 Are you a researcher interested in the DSA? The DSA 40 Data Access Collaboratory provides key information and resources to understand the Act, allowing investigators to produce insights and knowledge about its application and enforcement. 

 

📎📑 The Stanford Cyber Policy Center published its tenth issue of the Journal of Online Trust & Safety, featuring three peer-reviewed articles discussing the regulation of image-based abuse, delisting public interest information and public opinion about it, and the dynamics of engagement and content removal on Facebook. 

 

🔐🧑‍💻 “There is an urgent need to combine existing content moderation techniques with more innovative methods, to combat evolving online threats”: The Centre for Emerging Technology and Security examined promising content moderation solutions that can help social media platforms and end-to-end encrypted (E2EE) services fulfil their new legal duties to remove illegal online content under the UK OSA. 

 

🧒📄 Want to know more about how online services are combating online child sexual exploitation and abuse? The Tech Coalition’s 2024 report highlights the 10 new members and how the Coalition is helping 51 non-member companies strengthen their child safety programs.

+ Theory and Practice of Social Media’s Content Moderation by Artificial Intelligence in Light of European Union’s AI Act and Digital Services Act by Gergely Gosztonyi, Dorina Gyetván and Andrea Kovács.

+ Policy-as-Prompt: Rethinking Content Moderation in the Age of Large Language Models by Konstantina Palla et al.


Tremau, 5 rue Eugène Freyssinet, Paris, France

        Unsubscribe Manage preferences