With a wide array of T&S updates in the past few weeks, it is safe to say that April brought major developments on child safety and age verification from both regulators and industry... But there’s a lot more to dig into in this new edition of Safety Space. Here's a sneak peek:
+ In the 🇬🇧 UK, Ofcom published its new Protection of Children Codes of Practice, setting July 24th as the deadline for completing children’s risk assessments. Across the Channel, the 🇪🇺 European Commission is hiring to boost its DSA enforcement capacity. Meanwhile, on the other side of the Atlantic 🇺🇸, the Take It Down Act has been passed by Congress.
+ In the 🏭 industry, Meta rolls out AI tools to detect underage users, Google launches a trust report for Maps and upgrades its ad fraud defenses, and Discord tests facial recognition for age checks. TikTok experiments with “footnotes” for added context, Bluesky develops its own blue check, and new parental controls land on Roblox.
+ 🔵 And some updates from Tremau: we are now SOC 2 Type II compliant, and Justin has joined us as our new Head of Sales!
🇮🇪 And we're heading to Dublin for the TSPA EMEA Summit! If you are there too, get in touch with our team...
... and come join us in our panels!
+ Unpacking the DSA Audits: Takeaways for T&S Professionals - 10:45 AM with Toshali Sengupta.
+ Securing the Digital Playground: Between Protecting and Empowering Children Online - 2:20 PM with Jess Lishak (Tech Coalition), Silvia Fukuoka (Ofcom), Karen McAuley (Coimisiún na Meán), Agne Kaarlep (Tremau)
+ Balancing Regulation and Innovation: The Future of Age Assurance in the Tech Stack - 1:30 PM with Julie Dawson (Yoti), Martin Drechsler (FSM.de), Michel Murray (Information Commissioner’s Office), Pal Boza (Tremau)
+ Is the DSA Changing Platforms' Relationship with Users? Lessons Learned From Year 1 of New Transparency & Redress Regulation - 4:40 PM with Kevin Koehler (Independent Consultant), Niklas Eder (User Rights), Steve Blythe (Automattic), Richard Earley (Meta), Agne Kaarlep (Tremau)
🇬🇧👥 Ofcom established a new Online Information Advisory Committee under the OSA to provide guidance on how services should deal with disinformation and misinformation. The committee members include: Elisabeth Costa, Jeffrey Howard, Will Moy, Mark Scott, and Devika Shanker-Grandpierre.
🇺🇸👶 The Federal Trade Commission has announced an update to the Children’s Online Privacy Protection Act (COPPA) Rule, which will take effect on June 23, 2025. Companies will have until April 2026 to achieve full compliance. The updated rule introduces stricter requirements for privacy transparency: organizations must now clearly outline their data practices, including what personal information is collected, the entities with whom it is shared, and the purposes behind its collection. A central focus of the update is limiting the reach of digital advertising by restricting the flow of children’s data to third parties.
Plus, 2️⃣ key age verification updates from 🇫🇷 & 🇮🇹:
+ Facebook and Messenger will have “teen accounts” in the US, UK, Australia, and Canada. The feature has been available on Instagram since September and includes restrictions on messaging and interactions with strangers, as well as tighter controls on viewing sensitive content.
📍📃 Google Maps has published its first Content Trust & Safety Report. In 2024, nearly one billion reviews were posted on the platform; 245 million were removed, and 949,000 accounts were restricted. According to the report, most fake reviews are detected by machine learning systems that analyze account behavior, identify links to suspicious networks, and flag anomalies such as sudden spikes in five-star ratings.
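To make the anomaly-flagging idea concrete, here is a toy, purely illustrative sketch (not Google's actual system): it flags a week whose count of five-star reviews jumps far above a listing's recent baseline. The window size and z-score threshold are arbitrary assumptions.

```python
# Toy spike detector for five-star review counts (illustrative only).
# Flags a period whose count jumps well above the recent rolling baseline.
from statistics import mean, pstdev

def flag_rating_spikes(weekly_five_star_counts, window=4, z_threshold=3.0):
    """Return indices of weeks whose five-star count is a statistical outlier
    versus the preceding `window` weeks (simple z-score rule)."""
    flagged = []
    for i in range(window, len(weekly_five_star_counts)):
        baseline = weekly_five_star_counts[i - window:i]
        mu, sigma = mean(baseline), pstdev(baseline)
        current = weekly_five_star_counts[i]
        if sigma == 0:
            # Flat baseline: treat any increase as a spike.
            if current > mu:
                flagged.append(i)
        elif (current - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Example: a quiet listing that suddenly receives a burst of five-star reviews.
print(flag_rating_spikes([2, 3, 1, 2, 2, 40, 3]))  # -> [5]
```

A real system would combine many such signals (account history, network links, review text) rather than a single counter, but the spike rule captures the basic intuition.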
📲🤖 Google's latest Ads Safety Report highlighted a key new trend: AI is improving the ability to prevent fraudsters from ever showing ads to people. Over the past year, Google implemented 50 enhancements to its large language models, enabling the platform to proactively prevent abuse and speed up complex investigations.
💬🚩 Google is rolling out sensitive content warnings in its Messages app for Android to flag nudity. Powered by an on-device classifier to address privacy concerns, the feature blurs incoming images identified as explicit and prompts users with a reminder about the risks of sharing nude imagery when sending such content. The warnings are enabled by default for minors, and supervised users won’t be able to turn them off.
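As a rough illustration of this kind of on-device gating (a sketch under assumed names, not Google's implementation): a local classifier scores the image, anything above a threshold is blurred behind a warning, and the check cannot be disabled for supervised child accounts.

```python
# Illustrative-only gating logic for an on-device sensitive-content warning.
# `classify_nudity_score` stands in for a hypothetical local model;
# in this flow the image never leaves the device.
from dataclasses import dataclass

BLUR_THRESHOLD = 0.8  # assumed score above which an image is treated as explicit

@dataclass
class UserSettings:
    is_supervised_minor: bool
    warnings_enabled: bool = True

def should_blur(image_bytes: bytes, settings: UserSettings,
                classify_nudity_score) -> bool:
    """Return True if the incoming image should be blurred behind a warning."""
    # Supervised minors cannot opt out; other users may disable the feature.
    if not settings.is_supervised_minor and not settings.warnings_enabled:
        return False
    return classify_nudity_score(image_bytes) >= BLUR_THRESHOLD

# Example with a stub classifier that returns a high score.
settings = UserSettings(is_supervised_minor=True, warnings_enabled=False)
print(should_blur(b"...", settings, classify_nudity_score=lambda img: 0.95))  # -> True
```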
📝📱 TikTok is testing a new feature called “footnotes”, echoing the community notes found on X and Meta. Currently being trialed in the U.S., the feature lets users add extra context or relevant information to videos, aiming to provide more clarity around the content shared on the platform.
👶🎮 Roblox launched new tools for parental control. These allow parents to block specific experiences and people on their child’s friends list, and to see which experiences their child spends the most time in.
✅🤳 Bluesky is introducing a new layer of verification: a user-friendly blue check that will be displayed next to the names of authentic and notable accounts, as well as accounts verified by select independent "Trusted Verifiers" organizations.
🤖🧩 Mistral has launched Classifier Factory, a toolkit designed for organizations to build their own custom classifiers. It can be used across a range of applications, from content moderation and sentiment analysis to fraud detection and spam filtering.
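For a feel of the workflow, here is a minimal sketch using the mistralai Python SDK with Mistral's off-the-shelf moderation classifier as a stand-in; a custom Classifier Factory model would be referenced by its own fine-tuned model ID, and the exact endpoints for custom classifiers may differ from this assumption.

```python
# Minimal sketch: calling a Mistral-hosted classifier from Python (illustrative).
# Assumes the `mistralai` SDK is installed and MISTRAL_API_KEY is set in the environment.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.classifiers.moderate(
    model="mistral-moderation-latest",  # a custom classifier would use its own model ID
    inputs=["Example user message to screen for policy violations."],
)

# Inspect the per-category flags and scores returned for the input.
print(response)
```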
🔐🔵 We achieved SOC 2 Type II compliance - an essential step that reinforces our commitment to data security and scaling Nima, our AI-powered content moderation platform.
As our Co-founder & COO, Pal Boza, puts it, “SOC 2 compliance is key in our continuous effort to build Nima as a secure, trusted system that empowers client platforms to tackle harmful and illegal content effectively”.
🤝🥳 We welcomed Justin Samuel, our new Head of Sales, bringing his experience in Enterprise SaaS, AI, Compliance Tech, and Digital Transformation. In his new role, Justin will be driving our growth in T&S and bringing Nima, our T&S platform, to the forefront of the market.
🧒👥 The Children’s Rights Alliance is monitoring online harms and assessing the progress made by Ireland and the tech industry in protecting children. Recommendations focus on online safety initiatives that account for marginalized children, including those with disabilities, LGBTQI+ youth, and children from low-income backgrounds.
🧑💻 The Internet Watch Foundation published its annual report, highlighting that 91% of the reports assessed as criminal contained ‘self-generated’ imagery. In total, the Foundation received 424,047 reports suspected of containing child sexual abuse imagery.
🔐🧑💻 “There is an urgent need to combine existing content moderation techniques with more innovative methods, to combat evolving online threats”: The Centre for Emerging Technology and Security examined promising content moderation solutions that can help social media platforms and end-to-end encrypted (E2EE) services fulfil their new legal duties to remove illegal online content under the UK OSA.
🧒📄 Want to know more about how online services are combating online child sexual exploitation and abuse? The Tech Coalition’s 2024 report highlights its 10 new members and how the Coalition is helping 51 non-member companies strengthen their child safety programs.