Ghostwritten

This week’s deliberate leak of 100,000 WhatsApp messages sent between the retiring MP Matt Hancock (Con-West Suffolk) and his cabinet colleagues and scientific advisers offers several lessons for the future. Hancock was the health minister during the first year of the covid-19 pandemic, but was forced to resign in June 2021, when he was caught on a security camera snogging an adviser in contravention of the social distancing rules.

The most ignored lesson relates to cybersecurity, and is simple: electronic messages are always at risk of copying and disclosure.

This leak happened to coincide with the revival of debates around the future of strong encryption in the UK. First, the pending Online Safety bill has provisions that, taken together, would undermine all encrypted communications. Simultaneously, a consultation on serious and organized crime proposes to criminalize “custom” encryption devices. At Techdirt, Tim Cushing calls this idea a “dictionary attack”, in that the government gets to define the crime at will.

The Online Safety bill is the more imminent problem; it has already passed the House of Commons and is at the committee stage in the House of Lords. The bill requires service providers to protect children by proactively removing harmful content, whether public or private, and threatens criminal liability for executives of companies that fail to comply.

Signal, which is basically the same as WhatsApp without the Facebook ownership, has already said it will leave the country if the Online Safety bill passes with the provisions undermining encryption intact.

It’s hard to see what else Signal could do. It’s not a company that has to weigh its principles against the loss of revenue. Instead, as a California non-profit, its biggest asset is the trust of its user base, and staying in a country that has outlawed private communications would kill that off at speed. In threatening to leave, Signal has company: the British secure communications company Element, which said the provisions would taint any secure communications product coming out of the UK – presumably even for its UK customers, such as the Ministry of Defence.

What the Hancock leak reminds us, however, is that encryption, even when appropriately strong and applied end-to-end, is not enough by itself to protect security. You must also be able to trust everyone in the chain to store the messages safely and respect their confidentiality. The biggest threat is careless or malicious insiders, who can undermine security in all sorts of ways. Signal (as an example) provides the ability to encrypt the message database, to disappear messages on an automated schedule, password protection, and so on. If you’re an activist in a hostile area, you may be diligent about turning all these on. But you have no way of knowing if your correspondents are just as careful.

In the case at hand, Hancock gave the messages to Isabel Oakeshott, the ghostwriter of his December 2022 book Pandemic Diaries, after requiring her to sign a non-disclosure agreement that he must have thought would protect him, if not his colleagues, from unwanted disclosures. Oakeshott, who claims she acted in the public interest, decided to give the messages to the Daily Telegraph, which is now mining them for stories.

Digression: whatever Oakeshott’s personal motives, there is certainly public interest in these messages. The tone of many quoted exchanges confirms the public perception of the elitism and fecklessness of many of those in government. More interesting is the close-up look at decision making in conditions of uncertainty, which to some, blessed with hindsight, looks like ignorance and impatience. It’s astonishing how quickly people have forgotten how much we didn’t know. As mathematician Christina Pagel told the BBC’s Newsnight, you can’t wait for more evidence when the infection rate is doubling every four days.
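For the numerically inclined, a minimal sketch (with made-up numbers, not actual case counts) of why “doubling every four days” leaves no time to wait for more evidence:

```python
# Illustrative only: hypothetical figures chosen for arithmetic clarity.
def cases_after(days, initial=1000, doubling_time=4):
    """Exponential growth: cases double every `doubling_time` days."""
    return initial * 2 ** (days / doubling_time)

# Waiting two weeks for "more evidence" multiplies cases more than elevenfold:
print(round(cases_after(0)))   # starting point: 1000
print(round(cases_after(14)))  # two weeks later: 11314
```

Whatever the starting figure, a fortnight of waiting multiplies it by 2^3.5, a factor of about eleven.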

What they didn’t know and when they didn’t know it will be an important part of piecing together what actually happened. The mathematician Kit Yates has dissected another exchange, in which Boris Johnson queries his scientific advisers about fatality rates. Yates argues that in assessing this exchange timing is everything. Had it been in early 2020, it would be understandable to confuse infection fatality rates and case fatality rates, though less so to confuse fractions (0.04) and percentages (4%). Yates pillories Johnson because in fact that exchange took place in August 2020, by which time greater knowledge should have conferred greater clarity. That said, security people might find familiar Johnson’s behavior in this exchange, where he appears to see the Financial Times as a greater authority than the scientists. Isn’t that just like every company CEO?
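The fraction-versus-percentage slip is easy to reproduce, and worth spelling out; with hypothetical numbers (not the actual figures from the Johnson exchange):

```python
# Illustrative arithmetic only -- hypothetical numbers.
infections = 1_000_000

# The same printed figure, "0.04", read two different ways:
deaths_read_as_fraction = infections * 0.04          # 0.04 as a fraction: 40,000 deaths
deaths_read_as_percent = infections * (0.04 / 100)   # 0.04 as a percentage: 400 deaths

# The two readings differ by a factor of 100 -- a policy-scale error.
print(round(deaths_read_as_fraction), round(deaths_read_as_percent))
```

Misreading one for the other changes the projected death toll by two orders of magnitude, which is why the confusion matters so much in a briefing.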

Exchanges like that are no doubt why the participants wanted the messages kept private. In a crisis, you need to be able to ask stupid questions. It would be better to have a prime minister who can do math and who sweats the details, but if that’s not what we’ve got I’d rather he at least asked for clarification.

Still, as we head into yet another round of the crypto wars, the bottom line is this: neither technology nor law prevented these messages from leaking out some 30 years early. We need the technology. We need the law on our side. But even then, your confidences are only ever as private as your correspondent(s) and their trust network(s) will allow.

Illustrations: The soon-to-be-former-MP Matt Hancock, on I’m a Celebrity.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

A world of lawsuits

In the US this week the Supreme Court heard arguments in two cases centered on Section 230, the US law that shields online platforms from liability for third-party content. In Paris, UNESCO convened Internet for Trust to bring together governments and civil society to contemplate global solutions to the persistent problems of Internet regulation. And in the business of cyberspace, in what looks like desperation to stay afloat, Twitter began barring non-paying users (that is, the 99.8% of its user base that *doesn’t* subscribe to Twitter Blue) from using two-factor authentication via SMS, and Meta announced plans for a Twitter Blue-like subscription service for its Facebook, Instagram, and WhatsApp platforms.

In other words, the above policy discussions are happening exactly at the moment when, for the first time in nearly two decades, two of the platforms whose influence everyone is most worried about may be beginning to implode. Twitter’s issues are well-known. Meta’s revenues are big enough that there’s a long way for them to fall…but the company is spending large fortunes on developing the Metaverse, which no one may want, and watching its ad sales shrink and data protection fines rise.

The SCOTUS hearings – Gonzalez v. Google, experts’ live blog, Twitter v. Taamneh – have been widely covered in detail. In most cases, writers note that trying to discern the court’s eventual ruling from the justices’ questions is about as accurate as reading tea leaves. Nonetheless, Columbia professor Tim Wu predicts that Gonzalez will lose but that Taamneh could be very close.

In Gonzalez, the parents of a 23-year-old student killed in a 2015 ISIS attack in Paris argue that YouTube should be liable for radicalizing individuals via videos found and recommended on its platform. In Taamneh, the family of a Jordanian citizen who died in a 2017 ISIS attack in Istanbul sued Twitter, Google, and Facebook for failing to control terrorist content on their sites under anti-terrorism laws. A ruling assigning liability in either case could be consequential for S230. At TechDirt, Mike Masnick has an excellent summary of the Gonzalez hearing, as well as a preview of both cases.

Taamneh, on the other hand, asks whether social media sites are “aiding and abetting” terrorism via their recommendations engines under Section 2333 of the Antiterrorism and Effective Death Penalty Act (1996). Under the Justice Against Sponsors of Terrorism Act (2016) any US national who is injured by an act of international terrorism can sue anyone who “aids and abets by knowingly providing substantial assistance” to anyone committing such an act. The case turns on how much Twitter knows about its individual users and what constitutes substantial assistance. There has been some concern, expressed in amicus briefs, that making online intermediaries liable for terrorist content will result in overzealous content moderation. Lawfare has a good summary of the cases and the amicus briefs they’ve attracted.

Contrary to what many people seem to think, while S230 allows content moderation, it’s not a law that disproportionately protects large platforms, which didn’t exist when it was enacted. As Jeff Kosseff tells Gizmodo: without liability protection a local newspaper or personal blog could not risk publishing reader comments, and Wikipedia could not function. Justice Elena Kagan has been mocked for saying the justices are “not the nine greatest experts on the Internet”, but she grasped perfectly that undermining S230 could create “a world of lawsuits”.

For the last few years, both Democrats and Republicans have called for S230 reform, but for different reasons. Democrats fret about the proliferation of misinformation; Republicans complain that they (“conservative voices”) are being censored. The discussion at the UNESCO event took a broader, global view, attempting to draft a framework for self-regulation. While it wouldn’t be binding, there’s some value in having a multi-stakeholder-agreed standard against which individual governmental proposals can be evaluated. One of the big gaps in the UK’s Online Safety bill, for example, is the failure to tackle misinformation or disinformation campaigns. Neither reforming S230 nor a framework for self-regulation will solve that problem either: over the last few years too much of the most widely-disseminated disinformation has been posted from official accounts belonging to world leaders.

One interesting aspect is how many new types of “content” have been created since S230’s passage in 1996, when the dominant web analogy was print publishing. It’s not just recommendation algorithms; are “likes” third-party content? Are the thumbnails YouTube’s algorithm selects to show each visitor on its front page to entice viewers presentation or publishing?

In his biography of S230, The Twenty-Six Words That Created the Internet, Jeff Kosseff notes that although similar provisions exist in other legislation across the world, S230 is unique in that only America privileges freedom of speech to such an extreme extent. Most other countries aim for more of a balance between freedom of expression and privacy. In 1997, it was easy to believe that S230 enabled the Internet to export the US’s First Amendment around the world like a stowaway. Today, it seems more like the first answer to an eternally-recurring debate. Despite its problems, like democracy itself, it may continue to be the least-worst option.

Illustrations: US senator and S230 co-author Ron Wyden (D-OR) in 2011 (by JS Lasica via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an archive of earlier columns back to 2001. Follow on Mastodon or Twitter.