Your monthly round-up on Trust & Safety

Hi Name,  

 

While many took advantage of August to take some days off, the Trust & Safety world did not really slow down, with new regulations taking hold, important legal battles heating up, and platforms rolling out features to improve safety and compliance.

As usual, we've got you covered in Safety Space. Here is a sneak peek:

+ 🇪🇺 In the EU, the European Media Freedom Act is now live, while the EU General Court is preparing for important rulings on the DSA. 🇬🇧 Ofcom issued a provisional decision in its investigation into 4chan, which, in response, sued the UK regulator in the US. 🇫🇷 French prosecutors and 🇦🇺 Australian regulators are investigating Kick, following the death of a French streamer during a livestream. And the eSafety Commissioner issued an enforcement action against OmeTV.

 

+ 🗝️ INSIDE THE TECH: TikTok updated its community guidelines, rolled out new parental controls, and introduced a feature for creators to filter out toxic comments. Bluesky and Roblox strengthened their safety rules, YouTube is testing AI to spot underage users, and WhatsApp now shows a quick safety summary whenever users are added to a new group.

+ 📑 IN CASE YOU MISSED IT: Read a new guide to help families and platforms navigate age checks online, a major report on DSA data access, and a sharp look at how AI is changing content moderation.

+ 🌎 Finally, LET'S CONNECT as we travel the globe for multiple events - check it out below and come meet us if you'll be there too!


🇪🇺📌 The European Commission reports that, thanks to the DSA, users challenged 16 million content removals by TikTok and Meta in the second half of 2024. Of these, 35% of the posts were reinstated after the platforms' decisions were found to be unjustified.
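For a rough sense of scale, a back-of-the-envelope figure (assuming the 35% applies uniformly across the 16 million challenges):

$16{,}000{,}000 \times 0.35 \approx 5{,}600{,}000$ posts reinstated.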

 

🇪🇺🔍 The European Commission has opened a public consultation and a call for evidence to shape its Action Plan against cyberbullying, building on DSA guidelines on protection of minors and broader efforts like the Better Internet for Kids Strategy. Stakeholders are encouraged to share insights and best practices on protecting minors and vulnerable youth.

🇪🇺🗞️ The European Media Freedom Act started applying on 8 August. Among other obligations, it mandates VLOPs to notify media service providers (MSPs) before restricting or removing their content in the EU, giving them time to respond. VLOPs are also required to publish annual reports on MSP declarations and disputes, and to engage in dialogue over unjustified takedowns.


DSA court battles looming: the EU General Court is preparing to issue key rulings:

  • 🇪🇺🧑‍⚖️ Zalando's VLOP Status: On 3 September, the Court will rule on Zalando's challenge to its VLOP categorisation, potentially setting a precedent for how monthly active users are defined and counted.
  • 🇪🇺💻 Supervisory Fees: On 10 September, the Court will hear two cases from Meta and TikTok contesting the EU Commission's methodology for calculating their supervisory fees.

🇬🇧⛔ Ofcom issued a provisional decision to the online discussion board '4chan' for breaching the Online Safety Act (OSA). According to the UK regulator, the provider contravened its duties by failing to comply with two requests for information. The investigation continues into 4chan's compliance with its risk assessment and safety duties. Meanwhile, 4chan has filed a lawsuit against Ofcom in the US.

🇬🇧🔍 Ofcom also opened an investigation into Duplanto Ltd, an adult content site provider, in relation to compliance with new age-check requirements under the OSA. Ofcom will examine whether the provider complied with duties to prevent children from encountering pornographic content through the use of effective age assurance.

🇫🇷🇦🇺📸 The live streaming platform Kick is being investigated by Australia's eSafety Commissioner and French prosecutors after the on-air death of a French streamer. The Australian regulator cites failures to enforce safety standards, and Kick faces a potential $49 million penalty for not protecting users from abuse. French prosecutors stated that they would investigate whether Kick knowingly provided illegal services and whether it complied with the Digital Services Act's obligation to remove illegal content.

🇦🇺🎰 Australia's eSafety Commissioner issued a formal warning under the 'Online Safety Industry Codes and Standards' against OmeTV, which is accused of exposing children to predators and failing to implement required safety features. As part of this case, Apple and Google were also reminded of their obligations under the 'App Store Code', which include reviewing third-party apps for risks, providing clear age ratings, and enforcing developer compliance with Australian safety laws.


📱 TikTok rolled out several key safety updates:

  • Community Guidelines: New rules and standards to tackle misinformation, refine bullying policies, and consolidate sensitive topics such as gambling, alcohol, tobacco, drugs, firearms, and dangerous weapons under a single 'regulated goods and services' policy.

  • Family Pairing: Parents can now link their accounts with their teens to customise safety settings.

  • Creator Care Mode: Comments flagged as offensive, inappropriate, or disliked by users are automatically filtered.

  • Footnotes: TikTok's version of community notes, currently available only in the US, appears under video captions and links to source material.

🎥🤖 YouTube will use AI-powered age estimation to identify under-18 users in the US, based on account activity. Accounts flagged as minors will automatically receive protections such as blocks on age-restricted videos, non-personalized ads, 'take a break' notifications, and limits on repeated recommendations about sensitive topics.

💬⛔ WhatsApp launched new features to protect users from group chat scams: when an unknown contact adds a user to a group, a "safety overview" now appears before messages are visible, showing details such as the group creation date, the creator, and the number of members.

🌀💻 Bluesky updated its Terms and Policies to align with the Digital Services Act, UK Online Safety Act, and the US "Take It Down" Act. Changes include clearer Community Guidelines, new enforcement and appeals processes, updates on age assurance and dispute resolution, expanded privacy rights, and a streamlined copyright takedown procedure. The new policies will take effect on 15 September, with updated Community Guidelines effective from 15 October.

🎮👶 Roblox introduced new changes to its T&S policies, including a ban on content and behaviours that suggest sexual activities, age verification requirements for users and creators, and the implementation of systems to detect inappropriate user behaviour.

💬📲 Claude by Anthropic now ends chats if users persistently push for harmful content, shutting down conversations as a last resort. This update aims to protect model safety and cut off distressing interaction patterns, while still allowing users to retry or start new threads.


🍂 As summer winds down, we're preparing for the events we'll be part of this autumn! Check out where you can meet us next - reach out to book a meeting with us.

Let's meet!

🔍🌍 VLOPSEs are releasing their latest cycle of Transparency Reports! Most of these reports cover the period from January to June 2025. Want to stay updated on all of them? We've got you covered! Our DSA Database links to this round and previous rounds of transparency reports from all VLOPSEs.

Get all the DSA Transparency Reports here

🇪🇺 One of this summer's key developments: since July, all services publishing transparency reports under the DSA must use a new template to disclose data on their moderation efforts.

 

While this is a complex requirement, we've automated the process through Nima, our T&S platform, and created a comprehensive checklist with everything you need to stay compliant.
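To make template-based reporting concrete, here is a minimal sketch of how a team might roll raw moderation decisions up into the per-category counts that a transparency report discloses. The column names (category, automated, country) and the CSV layout are illustrative assumptions only, not the Commission's official template schema and not how Nima implements this.

```python
# Minimal sketch: aggregate raw moderation decisions into per-category counts
# of the kind a DSA transparency report discloses. Column names and CSV layout
# are illustrative assumptions, not the official template schema.
import csv
import json
from collections import Counter

def aggregate_decisions(path: str) -> dict:
    by_category = Counter()   # e.g. "illegal_hate_speech" -> number of decisions
    by_method = Counter()     # automated vs. human review
    by_country = Counter()    # EU Member State associated with the decision

    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            by_category[row["category"]] += 1
            by_method["automated" if row["automated"] == "true" else "manual"] += 1
            by_country[row["country"]] += 1

    return {
        "total_decisions": sum(by_category.values()),
        "by_category": dict(by_category),
        "by_detection_method": dict(by_method),
        "by_country": dict(by_country),
    }

if __name__ == "__main__":
    print(json.dumps(aggregate_decisions("moderation_decisions.csv"), indent=2))
```

In practice the reporting period, decision categories, and breakdowns are prescribed by the Commission's harmonised template, so a real pipeline maps internal data onto those fixed fields rather than inventing its own.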

✅ Get your DSA Transparency Checklist!

👥🎓 The Trust & Safety Research Conference at Stanford is taking place on 25 and 26 September. Check out the agenda and purchase your tickets.

📖🔞 Age assurance can seem complex: How does it work? Why is it in place? What are the implications for personal data? Better Internet for Kids created a Guide to age assurance with clear explanations, a practical toolkit for families, and detailed resources for digital providers.

🇪🇺🧑‍🏫 Are you a researcher interested in the DSA? The European Digital Media Observatory published its report Platform Datasets, with challenges, insights, and examples for researchers under Article 40 of the DSA.

🤖🛑 Bloomberg reported that major platforms are leaning more heavily on AI for content moderation, often cutting human roles in the process. Yet, according to moderators interviewed, AI may frequently misclassify harmful material and create extra work, raising concerns that an AI-only approach could leave users more exposed to abuse, grooming, and disinformation.

📰🔻 The Take It Down Act is now law in the US, but how can platforms do more than the bare minimum? In this article published by Tech Policy, Becca Branum, the Deputy Director of CDT's Free Expression Project, explores the importance of building Nonconsensual Distribution of Intimate Imagery reporting systems that are intuitive, empathetic, and effective.

🤖❗ What Are the Most Important Issues with AI Companions? Following community roundtables and debates, All Tech is Human published a piece highlighting six categories of concern with AI companions: emotional and psychological impact; human relationships and social skills; privacy and data security; ethical and business model conflicts; credibility, trust, and transparency; and safety and user vulnerability.

📑📲 The Oversight Board's 2024 Annual Report is out, detailing that Meta acted on 74% of 317 recommendations. Notably, over 7 million users engaged with educational exercises to avoid account strikes, and the share of trusted-partner escalations answered within five days rose from 69% to 81%.


  • Measurement and Metrics for Content Moderation: The Multi-Dimensional Dynamics of Engagement and Content Removal on Facebook by Laura Edelson, Borys Kovba, Hanna Yershova, Austin Botelho, and Damon McCoy.

  • How do the effects of toxicity in competitive online video games vary by source and match outcome? by Jacob Morrier, Amine Mahmassani, and Michael Alvarez.


Tremau, 5 rue Eugène Freyssinet F, Paris, France

Unsubscribe | Manage preferences