‘Irresponsible failure’: Google, Meta, Snap and Microsoft slam EU over child sexual abuse law lapse
Experts warn lapse could sharply reduce reports of abuse, echoing a 58% drop during a similar legal gap in 2021
The European parliament has blocked the extension of a law that permitted big tech firms to scan for child sexual exploitation on their platforms, creating a legal gap that child safety experts say will lead to crimes going undetected.
The law, a temporary carve-out from the EU's ePrivacy directive, was put in place in 2021 to allow companies to use automated detection technologies to scan messages for harms including child sexual abuse material (CSAM), grooming and sextortion. It expired on 3 April after the EU parliament declined to extend it, amid privacy concerns from some lawmakers.
The regulatory gap has created uncertainty for big tech companies: while scanning for harms on their platforms is now illegal, they remain liable for removing any illegal content they host under a separate law, the Digital Services Act. In a joint statement posted on a Google blog, Google, Meta, Snap and Microsoft said they would continue to voluntarily scan their platforms for CSAM.
“We are disappointed by this irresponsible failure to reach an agreement to maintain established efforts to protect children online,” the statement said.
The European parliament said in a statement that it was prioritizing its work on legislation to prevent and combat child sexual abuse online, and that negotiations on a permanent legal framework were ongoing, though the body offered no timeline for agreement or implementation.
Child protection advocates had warned that allowing the legislation to lapse would probably trigger a steep fall in reports of child sexual abuse. They pointed to a similar legal gap in 2021, when reports of such material from EU-based accounts to the National Center for Missing and Exploited Children (NCMEC) fell by 58% over 18 weeks.
“When detection tools are disrupted, we lose visibility that directly impacts our ability to find and protect child sexual abuse victims,” said John Shehan, vice-president at NCMEC, a US-based organisation that acts as a clearinghouse for child abuse reports, which it forwards to relevant law enforcement agencies around the world. “When detection goes dark, the abuse doesn’t stop.”
In 2025, NCMEC received 21.3m reports from around the world that included more than 61.8m images, videos and other files suspected of being related to child abuse. About 90% of those reports related to countries outside the US.
A spokesperson for the EU parliament declined to comment on whether the legislative body had conducted any assessments to determine the consequences of the lapse of the law.
The EU’s decision to prohibit scanning will have ripple effects in other regions around the world, child safety experts said. Many internet crimes are cross-border, with perpetrators sending illegal images to people or targeting children in other countries. “Sextortionists”, who pose as romantic interests to trick people into sending intimate photographs before making blackmail attempts, may also capitalize on the law change, Shehan said.
“The offender can be anywhere in the world, but they could have unfettered access to minors in Europe now that there’s legal uncertainty around those safeguards and protections to identify when a child is being groomed,” Shehan said.
Years of tense negotiations led to lapse of vital carve-out law
For the past four years, the proposed child sexual abuse regulation has been under negotiation, with contention arising because it would obligate companies to take measures to minimise risks on their platforms, said Hannah Swirsky, head of policy and public affairs at the Internet Watch Foundation, a UK-based child safety non-profit.
Privacy advocates argue that big tech scanning messages for child abuse threatens fundamental privacy rights and data security for EU citizens, equating these measures to “chat control” that could lead to mass surveillance and false positives.
“There are claims of surveillance or infringement of privacy,” Swirsky said. “Blocking CSAM is not an invasion of privacy. Free speech does not include sexual abuse of children.”
The scanning technology uses machine learning to perform pattern detection, identifying known images and videos of abuse as well as language associated with child exploitation, and it does not store any data, said Emily Slifer, director of policy at Thorn, a non-profit whose detection technology is commonly used by companies and law enforcement.
The system works by having trained analysts review known CSAM obtained from external sources, such as reports from police, the public or investigations into websites known for hosting child abuse material. When analysts confirm that content is illegal child sexual abuse, they generate a unique digital fingerprint – known as a hash value – that identifies that exact image. Lists of hash values are then shared with platforms, which use automated systems to scan uploads and block matching content instantly, without the need for a human to view it.
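The matching step can be illustrated with a minimal, hypothetical Python sketch. Production systems typically rely on perceptual hashes such as Microsoft's PhotoDNA, which tolerate resizing and re-encoding; the plain SHA-256 digest below shows only the exact-match principle, and the hash list, file contents and function names are placeholders, not any platform's real implementation.

    import hashlib

    # Hypothetical hash list of the kind a clearinghouse might share
    # with platforms. Each entry is the SHA-256 digest of a file that
    # analysts confirmed as illegal; this value is a placeholder.
    BLOCKED_HASHES = {
        "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
    }

    def fingerprint(file_bytes: bytes) -> str:
        # Compute a digital fingerprint of an upload. Real systems use
        # perceptual hashes that survive re-encoding; SHA-256 matches
        # only byte-identical files, which keeps this sketch simple.
        return hashlib.sha256(file_bytes).hexdigest()

    def should_block(file_bytes: bytes) -> bool:
        # True if the upload matches a known-abuse fingerprint. No
        # human views the content; the platform compares hashes only.
        return fingerprint(file_bytes) in BLOCKED_HASHES

    # An upload is checked at ingest time and blocked on a match.
    upload = b"example file contents"
    print("blocked" if should_block(upload) else "allowed")

The design means a platform never needs to possess or inspect the abusive material itself: it holds only the list of fingerprints, and a match can trigger blocking and reporting automatically.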
“The technology doesn’t find babies in bathtubs and things like that. If you just think of what an image of abuse would look like versus what consensual content would look like: those are two very different pieces of material, and technology can determine those patterns between them,” Slifer said.
While the EU has blocked scanning for child abuse, it has allowed tech companies to voluntarily scan messages for terrorist content under legislation adopted in 2021, she said.
“The EU is effectively risking open doors for predators,” Swirsky said. “If the EU is serious about protecting children online, then it needs to agree on a permanent legislative framework for safeguarding children and for enabling detection.”
