Tuesday, January 13, 2026

UK Child Protections and Messaging Backdoor

Tim Hardwick:

Apple and Google will soon be “encouraged” to build nudity-detection algorithms into their software by default, as part of the UK government’s strategy to tackle violence against women and girls, reports the Financial Times.

Jon Brodkin:

If the UK gets its way, operating systems like iOS and Android would “prevent any nudity being displayed on screen unless the user has verified they are an adult through methods such as biometric checks or official ID. Child sex offenders would be required to keep such blockers enabled.” The Home Office “has initially focused on mobile devices,” but the push could be expanded to desktops, the FT said. Government officials point out that Microsoft can already scan for “inappropriate content” in Microsoft Teams, the report said.

[…]

The push for device-level blocking comes after the UK implemented the Online Safety Act, a law requiring porn platforms and social media firms to verify users’ ages before letting them view adult content. The law can’t fully prevent minors from viewing porn, as many people use VPN services to get around the UK age checks. Government officials may view device-level detection of nudity as a solution to that problem, but such systems would raise concerns about user rights and the accuracy of the nudity detection.

Dare Obasanjo:

Maybe this explains why Apple is hesitant to add age verification at the OS level if it opens the door to requests like these.

Paige Collings:

In his initial announcement, Starmer stated: “You will not be able to work in the United Kingdom if you do not have digital ID. It’s as simple as that.” Since then, the government has been forced to clarify those remarks: digital ID will be mandatory to prove the right to work, and will only take effect after the scheme’s proposed introduction in 2028, rather than retrospectively.

The government has also confirmed that digital ID will not be required for pensioners, students, and those not seeking employment, and will also not be mandatory for accessing medical services, such as visiting hospitals. But as civil society organizations are warning, it’s possible that the required use of digital ID will not end here. Once this data is collected and stored, it provides a multitude of opportunities for government agencies to expand the scenarios where they demand that you prove your identity before entering physical and digital spaces or accessing goods and services.

[…]

Digital ID systems expand the number of entities that may access personal information and consequently use it to track and surveil. The UK government has nodded to this threat. Starmer stated that the technology would “absolutely have very strong encryption” and wouldn’t be used as a surveillance tool. Moreover, junior Cabinet Office Minister Josh Simons told Parliament that “data associated with the digital ID system will be held and kept safe in secure cloud environments hosted in the United Kingdom” and that “the government will work closely with expert stakeholders to make the programme effective, secure and inclusive.”

But if digital ID is needed to verify people’s identities multiple times per day or week, ensuring end-to-end encryption is the bare minimum the government should require. Unlike sharing a National Insurance Number, a digital ID will show an array of personal information that would otherwise not be available or exchanged.

Cam Wakefield (Hacker News):

Under the Online Safety Act, Ofcom has been handed something called Section 121, which sounds like a tax loophole but is actually a legal crowbar for prying open encrypted messages.

It allows the regulator to compel any online service that lets people talk to each other (Facebook Messenger, Signal, iMessage, etc.) to install “accredited technology” to scan for terrorism or child abuse material.

The way this works is by scanning all your messages. Not just the suspicious ones. Not just the flagged ones. Every single message. On your device. Before they’re encrypted.

[…]

“We have set a date of April 2026,” [Lord Hanson] said, presumably while polishing his best ‘nothing to see here’ smile, “and we expect to act extremely speedily once we have had the report back.”
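
For readers unfamiliar with how client-side scanning squares with end-to-end encryption, the objection is to the ordering: the content is inspected on the device while it is still plaintext, and only then encrypted for transport. Here is a minimal sketch in Swift, assuming a hypothetical exact-hash scanner and a single shared symmetric key; neither reflects any real “accredited technology” or any actual messenger’s key handling.

```swift
import Foundation
import CryptoKit

// Hypothetical sketch of "scan on device, before encryption". The type
// names, the exact-hash matching, and the single shared symmetric key
// are simplifying assumptions (real proposals use perceptual hashing
// against a server-supplied database, and real messengers use
// per-recipient key agreement).

struct ClientSideScanner {
    /// SHA-256 digests of blocked content (placeholder blocklist).
    let blockedDigests: Set<Data>

    func flags(_ plaintext: Data) -> Bool {
        let digest = Data(SHA256.hash(data: plaintext))
        return blockedDigests.contains(digest)
    }
}

enum SendError: Error {
    case blockedByScanner
}

/// Scan-then-encrypt: the scanner reads the plaintext first; the key is
/// only applied afterwards, so the transport is still "end-to-end
/// encrypted" even though the content has already been inspected.
func send(_ plaintext: Data,
          using key: SymmetricKey,
          scanner: ClientSideScanner) throws -> Data {
    guard !scanner.flags(plaintext) else {
        // A deployed system might report to a server rather than fail.
        throw SendError.blockedByScanner
    }
    let sealed = try AES.GCM.seal(plaintext, using: key)
    return sealed.combined!  // nonce + ciphertext + auth tag
}

// Usage: every outgoing message passes through the scanner.
let scanner = ClientSideScanner(blockedDigests: [])
let key = SymmetricKey(size: .bits256)
if let ciphertext = try? send(Data("hello".utf8), using: key, scanner: scanner) {
    print("sent \(ciphertext.count) bytes of ciphertext")
}
```

Swapping exact hashes for perceptual hashes or an ML classifier changes only the matching step, not the ordering, which is Wakefield’s point: every message is read on the device before it is sealed.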

Cindy Harper (Hacker News):

The government’s new Online Safety Act 2023 (Priority Offenses) (Amendment) Regulations 2025, which came into force on January 8, 2026, designates “cyberflashing” and “encouraging or assisting serious self-harm” as priority offenses, categories that trigger the strictest compliance duties under the OSA.

This marks a decisive move toward preemptive censorship. Services that allow user interaction, including messaging apps, forums, and search engines, must now monitor communications at scale to ensure that prohibited content is automatically filtered or suppressed before users can even encounter it.

Previously:

2 Comments


Child protection is a pretty obvious smokescreen for using verification as a means of censorship and silencing government critics. Any smart person can see it's a bad idea, because *right now* governments (America, for example) are becoming more authoritarian.

A cautious approach would be to avoid doing anything that enables this, lest it be abused.

The fact that they're still pushing for it tells me *they want to abuse it*.


Most users will welcome this decision, because for them the problem is not any government, state, or the EU. The problem is Apple. Unfortunately, most live in a bubble and are unaware of the real world, or they desire a world of control and surveillance in which they are reduced to a puny ID with certain permissions and potential penalties.
