UK Leads Charge Against End-To-End Encryption, Calls on Tech Companies to ‘Nerd Harder’
As Privacy News Online reported, for years governments around the world have pursued a constant assault on end-to-end encryption. One of the leaders in this attempt to demonize critical privacy technology is the UK. Wired reported in April that the UK was trying to block Facebook from adding end-to-end encryption to all of its messaging platforms.
More generally, the UK is working on what was originally called the ‘Online Harms Bill’, now renamed the ‘Online Safety Bill’, which aims to regulate online content and speech, and to force digital platforms to police their users more strictly. A key goal of the bill is to make children safer online. That is obviously laudable, but one of the main ideas for achieving it is to weaken end-to-end encryption. Here the UK government has been aided by the National Society for the Prevention of Cruelty to Children (NSPCC), a charity which, as it describes itself, “has cared for children for 130 years”. Unfortunately, the NSPCC shares the view of many governments that end-to-end encryption is an obstacle to that goal. It recently released not one but two documents that implicitly seek to undermine support for strong, effective end-to-end encryption. In its discussion paper on the subject, the NSPCC calls for “a balanced settlement that reflects the full complexity of the issues”:
Our poll data shows that there is strong public support for a balanced settlement that reflects the full complexity of the issues, and that does not reduce the contours of decision-making to an unnecessary zero-sum game.
The public wants tech companies to introduce end-to-end encryption in a way that maximizes the privacy and security of vulnerable users. Indeed, where platforms can demonstrate that children’s safety will be protected, there is significant support for end-to-end encryption – a clear incentive for tech companies to invest the engineering resources needed to ensure that responses to child abuse threats can continue to function in end-to-end encrypted products.
That seems reasonable at first glance. But closer inspection reveals that it demands the impossible: end-to-end encryption that somehow allows companies – and therefore the authorities – to inspect every message sent with it. Likewise, it is impossible to “demonstrate that children’s safety will be protected” when the plan involves undermining end-to-end encryption, which itself protects children. There is even a call from the NSPCC to “nerd harder” – or, as it puts it, “for tech companies to invest the necessary engineering resources”. But as readers of this blog well know, it doesn’t work that way. Either you have true end-to-end encryption, in which case you cannot, by definition, inspect what is encrypted, or you don’t.
Unfortunately, the NSPCC is not alone in fighting end-to-end encryption in the UK. The National Crime Agency, the UK’s equivalent of the FBI, recently claimed that Facebook’s plans to introduce end-to-end encryption on its messaging platforms “could prevent the detection of up to 20 million images of child abuse every year”. Note the two misleading qualifiers: “could” is not the same as “will”, and “up to 20 million” includes much smaller numbers. It is simply alarmist.
Likewise, the London police chief recently wrote: “The current focus on encryption by many large tech companies only serves to make our job of identifying and stopping [sophisticated terrorist cells] even more difficult, if not impossible in some cases.” Another call to ‘nerd harder’ came from the UK Home Secretary, Priti Patel. She even offered money to organizations in the form of a new Safety Tech Challenge Fund:
The Safety Tech Challenge Fund will foster the development of innovative technologies that help keep children safe in end-to-end encrypted environments, without compromising user privacy.
Through the fund, the UK government will grant five organizations up to £85,000 [around $118,000] each to prototype and evaluate innovative ways to detect and address sexually explicit images or videos of children in end-to-end encrypted environments, while ensuring user privacy is respected.
Given the high stakes involved in a problem that has been recognized for years, it seems unlikely that $118,000 will be enough to produce a viable breakthrough. The only advanced technique that seems even remotely workable is homomorphic encryption, which would allow images to be analyzed without decrypting the message streams. But as the NSPCC admits in its report:
Homomorphic encryption technology is one possible way to protect the privacy of data while scanning its content, but there is debate over its ability to detect [child sexual abuse material], the robustness of its privacy measures and the extent to which it slows down communications.
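To make the idea concrete, here is a toy sketch of the Paillier cryptosystem, an additively homomorphic scheme: a party holding only ciphertexts can combine them so that the result decrypts to the sum of the plaintexts, without ever seeing those plaintexts. The parameters below are deliberately tiny and completely insecure, and adding two numbers is of course vastly simpler than scanning encrypted images – which is exactly why the NSPCC’s caveats about robustness and performance matter.

```python
# Toy demonstration of additively homomorphic encryption (Paillier scheme).
# Deliberately tiny, insecure parameters -- illustration only.
import math
import secrets

p, q = 293, 433                 # toy primes; real keys use ~1024-bit primes
n = p * q
n2 = n * n
g = n + 1                       # standard choice of generator
lam = math.lcm(p - 1, q - 1)    # Carmichael function of n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m: int) -> int:
    """Encrypt m (0 <= m < n) with fresh randomness r coprime to n."""
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts,
# without anyone decrypting the individual values along the way.
c_sum = (encrypt(12) * encrypt(30)) % n2
assert decrypt(c_sum) == 42
```

The gap between this arithmetic trick and reliably detecting abusive imagery inside an encrypted stream, at messaging scale, is the unsolved part of the problem.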
An article that appeared on the ProPublica site, attracting great interest and much outrage, provides useful context. At first, the story seemed to claim that WhatsApp had broken its end-to-end encryption to allow its moderation service to assess whether messages were abusive or illegal. In fact, WhatsApp’s 1,000 contract workers have access only to messages that users have forwarded to the company as potentially problematic. In other words, the end-to-end encryption remains intact, but those who are legitimately able to read messages can report them if they appear to break the law or the terms of service.
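The arrangement ProPublica described can be sketched in a few lines. Everything below – the toy cipher, the function names – is illustrative only and bears no relation to WhatsApp’s actual protocol; it simply shows that reporting by a legitimate recipient requires no weakening of the encryption itself, since the recipient could already read the message:

```python
# Sketch of recipient-side reporting under end-to-end encryption.
# Toy stream cipher (SHA-256 keystream) -- demonstration only, not a real protocol.
import hashlib
import secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data against a SHA-256-derived keystream (encrypts and decrypts)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

shared_key = secrets.token_bytes(32)   # known only to sender and recipient

def send(plaintext: bytes):
    """Sender encrypts; the relay server only ever sees (nonce, ciphertext)."""
    nonce = secrets.token_bytes(16)
    return nonce, keystream_xor(shared_key, nonce, plaintext)

def recipient_report(nonce: bytes, ciphertext: bytes) -> bytes:
    """Recipient decrypts -- as they always can -- and forwards the plaintext
    to moderators. The encryption between the endpoints is never touched."""
    return keystream_xor(shared_key, nonce, ciphertext)

nonce, ct = send(b"abusive message")
assert ct != b"abusive message"                       # relay cannot read it
assert recipient_report(nonce, ct) == b"abusive message"
```

The design choice worth noting: the report originates from an endpoint that already holds the plaintext, so no backdoor, key escrow, or server-side scanning is needed.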
While this is neither a perfect nor a complete solution, it at least reconciles strong encryption with the ability to identify many of those sending abusive or illegal messages. Governments and organizations like the NSPCC would do well to spend more time developing this kind of approach, rather than demanding impossible technical solutions that will never arrive.
Featured image by Gordon Leggett.