After much wrangling and with just a few days of legislative time between the summer holidays and the party conference season, on Tuesday night the British Parliament passed the Online Safety bill, which will become law as soon as it receives royal assent (assuming they can find a pen that doesn’t leak). The government announcement brims with propagandist ecstasy, while the Open Rights Group’s statement offers the reality: Britons’ online lives will be less secure as a result. Which means everyone’s will.
Parliament – and the net.wars archive – dates the current version of this bill to 2022, and the online harms white paper on which it’s based to 2020. But it *feels* like it’s been a much longer slog; I want to say six years.
This is largely because the fight over two key elements – access to encrypted messaging and age verification – *is* that old. Age verification was enshrined in the Digital Economy Act (2017), and we reviewed the contenders to implement it in 2016. If it’s ever really implemented, age verification will make Britain the most frustrating place in the world to be online.
Fights over strong encryption have been going on for 30 years. In that time, no new mathematics has appeared to change the fact that it’s not possible to create a cryptographic hole that only “good guys” can use. Nothing will change about that; technical experts will continue to try to explain to politicians that you can have secure communications or you can have access on demand, but you can’t have both.
At the New York Times, Farhad Manjoo writes that while almost every other industry understands that the huge generation of aging Boomers is a business opportunity, outside of health care Silicon Valley is still resolutely focused on under-30s. This, even though the titans themselves age; boy-king Mark Zuckerberg is almost 40. Hey, it’s California; they want to turn back aging, not accept it.
Manjoo struggles to imagine the specific directions products might take, but I like his main point: where’s the fun? What is this idea that after 65 you’re just something to send a robot to check up on? Yes, age often brings impairments, but why not build for them? You would think that given the right affordances, virtual worlds and online games would have a lot to offer people whose lives are becoming more constrained.
It’s true that by the time you realize that ageism pervades our society you’re old enough that no one’s listening to you any more. But even younger people must struggle with many modern IT practices: the pale, grey type that pervades the web, the picklists, the hidden passwords you have to type twice… And captchas, which often display on my desktop too small to see clearly and are resistant to resizing upwards. Bots are better at captchas than humans anyway, so what *is* the point?
We’re basically back where we were 30 years ago, when the new discipline of human-computer interaction fought to convince developers that if the people who struggle to operate their products look stupid the problem is bad design. And all this is coming much more dangerously to cars; touch screens that can’t be operated by feel are Exhibit A.
But there is much that’s worse about modern cars. A few weeks ago, the Mozilla Foundation published a report reviewing the privacy of modern cars. Tl;dr: “Cars are the worst product category we have ever reviewed for privacy.”
The problems are universal across the 25 brands Mozilla researchers Jen Caltrider, Misha Rykov, and Zoë MacDonald reviewed: “Modern cars are surveillance-machines on wheels souped-up with sensors, radars, cameras, telematics, and apps that can detect everything we do inside.” Cars can collect all the data that phones and smart home devices can. But unlike phones, space is a non-issue, and unlike smart speakers, video cameras, and thermostats, cars move with you and watch where you go. Drivers, passengers, passing pedestrians…all are fodder for data collection in the new automotive industry, where heated seats and unlocking extra battery range are subscription add-ons, and the car you buy isn’t any more yours than the £6-per-hour Zipcar in the designated space around the corner.
Then there are just some really weird clauses in the companies’ privacy policies. Some collect “genetic data” (here the question that arises is not only “why?” but “how?”). Nissan says it can collect information about owners’ “sexual activity” for use in “direct marketing” or to share with marketing partners. The researchers ask, “What on earth kind of campaign are you planning, Nissan?”
Still unknown: whether the data is encrypted while held on the car; how securely it’s held; and whether the companies will resist law enforcement requests at all. We do know that car companies share and sell the masses of intimate information they collect, especially the cars’ telematics, with insurance companies.
The researchers also note that new features allow unprecedented levels of control. VW’s Car-Net, for example, allows parents – or abusers – to receive a phone alert if the car is driven outside of set hours or in or near certain locations. Ford has filed a patent on a system for punishing drivers who miss car payments.
“I got old at the right time,” a friend said in 2019. You can see his point.
Midway through this year’s gikii miniconference for pop culture-obsessed Internet lawyers, Jordan Hatcher proposed that generational differences are the key to understanding the huge gap between the Internet pioneers, who saw regulation as the enemy, and the current generation, who are generally pushing for it. While this is a bit too pat – it’s easy to think of Millennial libertarians and I’ve never thought of Boomers as against regulation, just, rationally, against bad Internet law that sticks – it’s an intriguing idea.
Hatcher, because this is gikii and no idea can be presented without a science fiction tie-in, illustrated this with 1990s movies, which spread the “DCF-84 virus” – that is, “doom cyberfuture-84”. The “84” is not chosen for Orwell but for the year William Gibson’s Neuromancer was published. Boomers – he mentioned John Perry Barlow, born 1947, and Lawrence Lessig, born 1961 – were instead infected with the “optimism virus”.
It’s not clear which 1960s movies might have seeded us with that optimism. You could certainly make the case that 1968’s 2001: A Space Odyssey ends on a hopeful note (despite an evil intelligence out to kill humans along the way), but you don’t even have to pick a different director to find dystopia: I see your 2001 and give you Dr Strangelove (1964). Even Woodstock (1970) is partly dystopian; the consciousness of the Vietnam war permeates every rain-soaked frame. But so did the belief that peace could win: so, a wash.
I tend to think that if 1990s people are more doom-laden than 1960s people it has more to do with real life. Boomers were born in a time of economic expansion and relatively affordable education and housing, and when they protested a war the government eventually listened. Millennials were born in a time when housing and education meant a lifetime of debt, and when millions of them protested a war they were ignored.
In any case, Hatcher is right about the stratification of demographic age groups. This is particularly noticeable in social media use; you can often date people’s arrival on the Internet by which communications medium they prefer. Over dinner, I commented on the nuisance of typing on a phone versus a real keyboard, and two younger people laughed at me: so much easier to type on a phone! They were among the crowd whose papers studied influencers on TikTok (Taylor Annabell, Thijs Kelder, Jacob van de Kerkhof, Haoyang Gui, and Catalina Goanta) and the privacy dangers of dating apps (Tima Otu Anwana and Paul Eberstaller), the kinds of subjects I rarely engage with because I am a creature of text, like most journalists. Email and the web feel like my native homes in a way that apps, game worlds, and video services never will. That dates me both chronologically and by my first experiences of the online world (1991).
Most years at this event there’s a new show or movie that fires many people’s imagination. Last year it was Upload with a dash of Severance. This year, real technological development overwhelmed fiction, and the star of the show was generative AI and large language models. Besides my paper with Jon Crowcroft, there was one from Marvin van Bekkum, Tim de Jonge, and Frederik Zuiderveen Borgesius that compared the science fiction risks of AI – Skynet, Roko’s basilisk, and an ordering of Asimov’s Laws that puts obeying orders above not harming humans (see XKCD, above) – to the very real risks of the “AI” we have: privacy, discrimination, and environmental damage.
Other AI papers included one by Colin Gavaghan, who asked: does it actually matter if you can’t tell whether the entity communicating with you is an AI? Is that what you really need to know? You can see his point: if you’re being scammed, the fact of the scam matters more than the nature of the perpetrator, though your feelings about it may be quite different.
A standard explanation of what put the “science” in science fiction (or the “speculative” in “speculative fiction”) used to be that the authors ask, “What if?” What if a planet had six suns whose interplay meant that darkness only came once every 1,000 years? Would the reaction really be as Ralph Waldo Emerson imagined it? (Isaac Asimov’s Nightfall). What if a new link added to the increasingly complex Boston MTA accidentally turned the system into a Mobius strip? (A Subway Named Mobius, by Armin Joseph Deutsch). And so on.
In that sense, gikii is often speculative law, thought experiments that tease out new perspectives. What if Prime Day becomes a culturally embedded religious holiday (Megan Rae Blakely)? What if the EU’s trademark system applied in the Star Trek universe (Simon Sellers)? What if, as in Max Gladstone’s Craft Sequence books, law is practical magic (Antonia Waltermann)? In the trademark example, time travel is a problem, as competing interests can travel further and further back to get the first registration. In the latter…well, I’m intrigued by the idea that a law making dumping sewage in England’s rivers illegal could physically stop it from happening without all the pesky apparatus of law enforcement and parliamentary hearings.
Waltermann concluded by suggesting that to some extent law *is* magic in our world, too. A useful reminder: be careful what law you wish for because you just may get it. Boomer!
Illustrations: Part of XKCD‘s analysis of Asimov’s Laws of Robotics.
In the latest example of corporate destruction, the Guardian reports on the disturbing trend in which streaming services like Disney and Warner Bros Discovery are deleting finished, even popular, shows for financial reasons. It’s like Douglas Adams’ rock star Hotblack Desiato spending a year dead for tax reasons.
Given that consumers’ budgets are stretched so thin that many are reevaluating the streaming services they’re paying for, you would think this would be the worst possible time to delete popular entertainments. Instead, the industry seems to be possessed by a death wish in which it’s making its offerings *less* attractive. Even worse, the promise they appeared to offer to showrunners was creative freedom and broad and permanent access to their work. The news that Disney+ is even canceling finished shows (Nautilus) shortly before their scheduled release in order to pay less *tax* should send a chill down every creator’s spine. No one wants to spend years of their life – for almost *any* amount of money – making things that wind up in the corporate equivalent of the warehouse at the end of Raiders of the Lost Ark.
Many of us were skeptical about Meta’s Oversight Board; it was easy to predict that Facebook would use it to avoid dealing with the PR fallout from controversial cases, but never relinquish control. And so it is proving.
This week, Meta overruled the Board’s recommendation of a six-month suspension of the Facebook account belonging to former Cambodian prime minister Hun Sen. At issue was a video of one of Hun Sen’s speeches, which everyone agreed incited violence against his opposition. Meta has kept the video up on the grounds of “newsworthiness”; Meta also declined to follow the Board’s recommendation to clarify its rules for public figures in “contexts in which citizens are under continuing threat of retaliatory violence from their governments”.
In the Platformer newsletter Casey Newton argues that the Board’s deliberative process is too slow to matter – it took eight months to decide this case, too late to save the election at stake or deter the political violence that has followed. Newton also concludes from the list of decisions that the Board is only “nibbling round the edges” of Meta’s policies.
A company with shareholders, a business model, and a king is never going to let an independent group make decisions that will profoundly shape its future. From Kate Klonick’s examination, we know the Board members are serious people prepared to think deeply about content moderation and its discontents. But they were always in a losing position. Now, even they must know that.
It should go without saying that anything that requires an Internet connection should be designed for connection failures, especially when the connected devices are required to operate the physical world. The downside was made clear by a 2017 incident, when lost signal meant a Tesla-owning venture capitalist couldn’t restart his car. Or the one in 2021, when a bunch of Tesla owners found their phone app couldn’t unlock their car doors. Tesla’s solution both times was to tell car owners to make sure they always had their physical car keys. Which, fine, but then why have an app at all?
Last week, Bambu 3D printers began printing unexpectedly when they got disconnected from the cloud. The software managing the queue of printer jobs lost the ability to monitor them, causing some to be restarted multiple times. Given the heat and extruded material 3D printers generate, this is dangerous for both themselves and their surroundings.
At TechRadar, Bambu’s PR acknowledges this: “It is difficult to have a cloud service 100% reliable all the time, but we should at least have designed the system more carefully to avoid such embarrassing consequences.” As TechRadar notes, if only embarrassment were the worst risk.
So, new rule: before installation, test every new “smart” device by blocking its Internet connection to see how it behaves. Of course, companies should do this themselves, but as we’ve seen, you can’t rely on that either.
Finally, in “be careful what you legislate for”, Canada is discovering the downside of C-18, which became law in June and requires the biggest platforms to pay for the Canadian news content they host. Google and Meta warned all along that they would stop hosting Canadian news rather than pay for it. Experts like law professor Michael Geist predicted that the bill would merely serve to dramatically cut traffic to news sites.
However, there are worse consequences. Prime minister Justin Trudeau complains that Meta’s news block is endangering Canadians, who can’t access or share local up-to-date information about the ongoing wildfires.
In a sensible world, people wouldn’t rely on Facebook for their news, politicians would write legislation with greater understanding, and companies like Meta would wield their power responsibly. In *this* world, we have a perfect storm.
A panel at the UK Internet Governance Forum a couple of weeks ago focused on this exact topic, and was mostly self-congratulatory. Which is when it occurred to me that the Internet may not *be* fragmented, but it *feels* fragmented. Almost every day I encounter some site I can’t reach: email goes into someone’s spam folder, the site or its content is off-limits because it’s been geofenced to conform with copyright or data protection laws, or the site mysteriously doesn’t load, with no explanation. The most likely explanation for the latter is censorship built into the Internet feed by the ISP or the establishment whose connection I’m using, but they don’t actually *say* that.
The ongoing attrition at Twitter is exacerbating this feeling, as the users I’ve followed for years continue to migrate elsewhere. At the moment, it takes accounts on several other services to keep track of everyone: definite fragmentation.
A number of companies have warned that the bill, particularly if it passes with its provisions undermining end-to-end encryption intact, will drive them out of the country. I’m not sure British politicians are taking them seriously; so often such threats are idle. But in this case, I think they’re real, not least because post-Brexit Britain carries so much less global and commercial weight, a reality some politicians are in denial about. WhatsApp, Signal, and Apple have all said openly that they will not compromise the privacy of their masses of users elsewhere to suit the UK. Wikipedia has warned that including it in the requirement to age-verify its users will force it to withdraw rather than violate its principles about collecting as little information about users as possible. The irony is that the UK government itself runs on WhatsApp.
As Ian McRae, the director of market intelligence for prospective online safety regulator Ofcom, showed in a presentation at UKIGF, Wikipedia would be just one of the estimated 150,000 sites within the scope of the bill. Ofcom is ramping up to deal with the workload, an effort the agency expects to cost £169 million between now and 2025.
In a legal opinion commissioned by the Open Rights Group, barristers at Matrix Chambers find that clause 9(2) of the bill is unlawful. This, as Thomas Macaulay explains at The Next Web, is the clause that requires platforms to proactively remove illegal or “harmful” user-generated content. In fact: prior restraint. As ORG goes on to say, there is no requirement to tell users why their content has been blocked.
Until now, the impact of most badly-formulated British legislative proposals has been sort of abstract. Data retention, for example: you know that pervasive mass surveillance is a bad thing, but most of us don’t really expect to feel the impact personally. This is different. Some of my non-UK friends will only use Signal to communicate, and I doubt a day goes by that I don’t look something up on Wikipedia. I could use a VPN for that, but if the only way to use Signal is to have a non-UK phone? I can feel those losses already.
And if people think they dislike those ubiquitous cookie banners and consent clickthroughs, wait until they have to age-verify all over the place. Worst case: this bill will be an act of self-harm that one day will be as inexplicable to future generations as Brexit.
The UK is not the only one pursuing this path. Age verification in particular is catching on. The US states of Virginia, Mississippi, Louisiana, Arkansas, Texas, Montana, and Utah have all passed legislation requiring it; Pornhub now blocks users in Mississippi and Virginia. The likelihood is that many more countries will try to copy some or all of the UK bill’s provisions, just as Australia’s law requiring the big social media platforms to negotiate with news publishers is spawning copies in Canada and California.
This is where the real threat of the “splinternet” lies. Think of requiring 150,000 websites to implement age verification and proactively police content. Many of those sites, as the law firm Mishcon de Reya writes, may not even be based in the UK.
This means that any site located outside the UK – and perhaps even some that are based here – will be asking, “Is it worth it?” For a lot of them, it won’t be. Which means that however much the Internet retains its integrity, the British user experience will be the Internet as a sea of holes.
Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).
The launch of the Fediverse-compatible Meta app Threads seems to have slightly overshadowed the European Court of Justice’s ruling, earlier in the week. This ruling deserves more attention: it undermines the basis of Meta’s targeted advertising. In noyb’s initial reaction, data protection legal bulldog Max Schrems suggests the judgment will make life difficult for not just Meta but other advertising companies.
As Alex Scroxton explains at Computer Weekly, the ruling rejects several different claims by Meta that all attempt to bypass the requirement enshrined in the General Data Protection Regulation that where there is no other legal basis for data processing, users must actively consent. Meta can’t get by with claiming that targeted advertising is a part of its service users expect, or that it’s technically necessary to provide its service.
More interesting is the fact that the original complaint was not filed by a data protection authority but by Germany’s antitrust body, which sees Meta’s our-way-or-get-lost approach to data gathering as abuse of its dominant position – and the CJEU has upheld this idea.
All this is presumably part of why Meta decided to roll out Threads in many countries but *not* the EU. In February, as a consequence of Brexit, Meta moved UK users to its US agreements. The UK’s data protection law is a clone of GDPR and will remain so until and unless the British Parliament changes it via the pending Data Protection and Digital Information bill. Still, it seems the move makes Meta ready to exploit such changes if they do occur.
Warning to people with longstanding Instagram accounts who want to try Threads: if your plan is to try and (maybe) delete, set up a new Instagram account for the purpose. Otherwise, you’ll be sad to discover that deleting your new Threads account means vaporizing your old Instagram account along with it. It’s the Hotel California method of Getting Big Fast.
Last week the Irish Council for Civil Liberties warned that a last-minute amendment to the Courts and Civil Law (Miscellaneous) bill will allow Ireland’s Data Protection Commissioner to mark any of its proceedings “confidential” and thereby bar third parties from publishing information about them. Effectively, it blocks criticism. This is a muzzle not only for the ICCL and other activists and journalists but for aforesaid bulldog Schrems, who has made a career of pushing the DPC to enforce the law it was created to enforce. He keeps winning in court, too, which I’m sure must be terribly annoying.
The Irish DPC is an essential resource for everyone in Europe because Ireland is the European home of so many of American Big Tech’s subsidiaries. So this amendment – which reportedly passed the Oireachtas (Ireland’s parliament) – is an alarming development.
Naturally, Meta and Google have warned that they will block links to Canadian news media from their services when the bill comes into force six months hence. They also intend to withdraw their ongoing programs to support the Canadian press. In response, the Canadian government has pulled its own advertising from Meta platforms Facebook and Instagram. Much hyperbolic silliness is taking place –
Pretty much everyone who is not the Canadian government thinks the bill is misconceived. Canadian publishers will lose traffic, not gain revenues, and no one will be happy. In Australia, the main beneficiary appears to be Rupert Murdoch, with whom Google signed a three-year agreement in 2021 and who is hardly the sort of independent local media some hoped would benefit. Unhappily, the state of California wants in on this game; its in-progress Journalism Preservation Act also seeks to require Big Tech to pay a “journalism usage fee”.
The result is to continue to undermine the open Internet, in which the link is fundamental to sharing information. If things aren’t being (pay)walled off, blocked for copyright/geography, or removed for corporate reasons – the latest announced casualty is the GIF hosting site Gfycat – they’re being withheld to avoid compliance requirements or withdrawn for tax reasons. None of us are better off for any of this.
Meanwhile, Watson has a new (marketing) role: analyzing the draw and providing audio and text commentary for back-court tennis matches at Wimbledon and for highlights clips. For each match, Watson also calculates the competitors’ chances of winning and the favorability of their draw. For a veteran tennis watcher, it’s unsatisfying, though: IBM offers only a black box score, and nothing to show how that number was reached. At least human commentators tell you – albeit at great, repetitive length – the basis of their reasoning.
Illustrations: IBM’s Watson, which beat two of Jeopardy‘s greatest champions in 2011.
There’s no point in saying I told you so when the people you’re saying it to got the result they intended.
At the Guardian, Peter Walker reports the Electoral Commission’s finding that at least 14,000 people were turned away from polling stations in May’s local elections because they didn’t have the right ID as required under the new voter ID law. The Commission thinks that’s a huge underestimate; 4% of people who didn’t vote said it was because of voter ID – which Walker suggests could mean 400,000 were deterred. Three-quarters of those lacked the right documents; the rest opposed the policy. The demographics of this will be studied more closely in a report due in September, but early indications are that the policy disproportionately deterred people with disabilities, people from certain ethnic groups, and people who are unemployed.
The fact that the Conservatives, who brought in this policy, lost big time in those elections doesn’t change its wrongness. But it did lead the MP Jacob Rees-Mogg (Con-North East Somerset) to admit that this was an attempt to gerrymander the vote that backfired because older voters, who are more likely to vote Conservative, also disproportionately don’t have the necessary ID.
One of the more obscure sub-industries is the business of supplying ad services to websites. One such little-known company is Criteo, which provides interactive banner ads that are generated based on the user’s browsing history and behavior using a technique known as “behavioral retargeting”. In 2018, Criteo was one of seven companies listed in a complaint Privacy International and noyb filed with three data protection authorities – the UK, Ireland, and France. In 2020, the French data protection authority, CNIL, launched an investigation, which it has now concluded by fining Criteo €40 million.
It’s good to see the legal actions and fines beginning to reach down into adtech’s underbelly. It’s also worth noting that the CNIL was willing to fine a *French* company to this extent. It makes it harder for the US tech giants to claim that the fines they’re attracting are just anti-US protectionism.
Also this week, the US Federal Trade Commission announced it’s suing Amazon, claiming the company enrolled millions of US consumers into its Prime subscription service through deceptive design and sabotaged their efforts to cancel.
“Amazon used manipulative, coercive, or deceptive user-interface designs known as ‘dark patterns’ to trick consumers into enrolling in automatically-renewing Prime subscriptions,” the FTC writes.
It has long been no secret that the secret behind AI is human labor. In 2019, Mary L. Gray and Siddharth Suri documented this in their book Ghost Work. Platform workers label images and other content, annotate text, and solve CAPTCHAs to help train AI models.
At MIT Technology Review, Rhiannon Williams reports that platform workers are using ChatGPT to speed up their work and earn more. A study (PDF) by a team of researchers from the Swiss Federal Institute of Technology found that between 33% and 46% of the 44 workers they tested with a request to summarize 16 extracts from medical research papers used AI models to complete the task.
It’s hard not to feel a little gleeful that today’s “AI” is already eating itself via a closed feedback loop. It’s not good news for platform workers, though, because the most likely consequence will be increased monitoring to force them to show their work.
But this is yet another case in which computer people could have learned from their own history. In 2008, researchers at Google published a paper suggesting that Google search data could be used to spot flu outbreaks. Sick people searching for information about their symptoms could provide real-time warnings ten days earlier than the Centers for Disease Control could.
This actually worked, some of the time. However, as Kaiser Fung reported at Harvard Business Review in 2014, as early as 2009 Google Flu Trends missed the swine flu pandemic; in 2012, researchers found that it had overestimated the prevalence of flu for 100 out of the previous 108 weeks. More data is not necessarily better, Fung concluded.
In 2013, as David Lazer and Ryan Kennedy reported for Wired in 2015 in discussing their investigation into the failure of this idea, GFT missed by 140% (without explaining what that means). Lazer and Kennedy find that Google’s algorithm was vulnerable to poisoning by unrelated seasonal search terms and search terms that were correlated purely by chance, and failed to take into account changing user behavior, as when Google introduced autosuggest and added health-related search terms. The “availability” cognitive bias also played a role: when flu is in the news, searches go up whether or not people are sick.
While the parallels aren’t exact, large language modelers could have drawn the lesson that users can poison their models. ChatGPT’s arrival for widespread use will inevitably thin out the proportion of text that is human-written – and taint the well from which LLMs drink. Everyone imagines the next generation’s increased power. But it’s equally possible that the next generation will degrade as the percentage of AI-generated data rises.
Illustrations: Drunk parrot seen in a Putney garden (by Simon Bisson).
This week, the Online Safety Bill reached the House of Lords, which will consider 300 amendments. There are lots of problems with this bill, but the one that continues to have the most campaigning focus is the age-old threat to require access to end-to-end encrypted messaging services.
At his blog, security consultant Alec Muffett predicts the bill will fail in implementation if it passes. For one thing, he cites the argument made by Richard Allan, Baron Allan of Hallam, that the UK government wants the power to order decryption but will likely only ever use it as a threat to force the technology companies to provide other useful data. Meanwhile, the technology companies have pushed back with an open letter saying they will withdraw their encrypted products from the UK market rather than weaken them.
In addition, Muffett believes the legally required secrecy when a service provider is issued with a Technical Capability Notice to provide access to communications, which was devised for the legacy telecommunications world, is impossible in today’s world of computers and smartphones. Secrecy is no longer possible, given the many researchers and hackers who make it their job to study changes to apps, and who would surely notice and publicize new decryption capabilities. The government will be left with the choice of alienating the public or failing to deliver its stated objectives.
Meanwhile, this week Ed Caesar reports at The New Yorker on law enforcement’s successful efforts to penetrate communications networks protected by Encrochat and Sky ECC. It’s a reminder that there are other choices besides opening up an entire nation’s communications to attack.
This week also saw the disappointing damp-squib settlement of the lawsuit brought by Dominion Voting Systems against Fox News. Disappointing, because it leaves Fox and its hosts free to go on wreaking daily havoc across America by selling their audience rage-enhanced lies without even an apology. The payment that Fox has agreed to – $787 million – sounds like a lot, but a) the company can afford it given the size of its cash pile, and b) most of it will likely be covered by insurance.
If Fox’s major source of revenue were advertising, these defamation cases – still to come is a similar case brought by Smartmatic – might make their mark by alienating advertisers, as has been happening with Twitter. But it’s not; instead, Fox is supported by the fees cable companies pay to carry the channel. Even subscribers who never watch it are paying monthly for Fox News to go on fomenting discord and spreading disinformation. And Fox is seeking a raise to $3 per subscriber, which would mean more than $1.8 billion a year just from affiliate revenue.
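The affiliate arithmetic is easy to back out (my own back-of-envelope, not a figure from any of the reports): at $3 per subscriber per month, $1.8 billion a year implies roughly 50 million subscribing households.

```python
# Back-of-envelope check of the affiliate-fee arithmetic: at $3 per
# subscriber per month, how many cable subscribers does it take to
# produce $1.8 billion a year?
fee_per_month = 3
annual_revenue = 1_800_000_000
subscribers = annual_revenue / (fee_per_month * 12)
print(f"{subscribers:,.0f} subscribers")  # 50,000,000 subscribers
```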
Meanwhile, an era is ending: Netflix will mail out its last rental DVD in September. As Chris Stokel-Walker writes at Wired, the result will be to shrink the range of content available by tens of thousands of titles because the streaming library is a fraction of the size of the rental library.
This reality seems backwards. Surely streaming services ought to have the most complete libraries. But licensing and lockups mean that Netflix can only host for streaming what content owners decree it may, whereas with the mail rental service, once Netflix had paid the commercial rental rate to buy a DVD, the disc could stay in the catalogue until it wore out.
The upshot is yet another data point that makes pirate services more attractive: no ads, easy access to the widest range of content, and no licensing deals to get in the way.
In all the professions people have been suggesting are threatened by large language model-based text generation – journalism, in particular – no one to date has listed fraudulent spiritualist mediums. And yet…
The family of Michael Schumacher is preparing legal action against the German weekly Die Aktuelle for publishing an interview with the seven-time Formula 1 champion. Schumacher has been out of the public eye since suffering a brain injury while skiing in 2013. The “interview” is wholly fictitious, the quotes created by prompting an “AI” chat bot.
Given my history as a skeptic, my instinctive reaction was to flash on articles in which mediums produced supposed quotes from dead people, all of which tended to be anodyne representations bereft of personality. Dressing this up in the trappings of “AI” makes such fakery no less reprehensible.
An article in the Washington Post examines Google’s C4 data set scraped from 15 million websites and used to train several of the highest profile large language models. The Post has provided a search engine, which tells us that my own pelicancrossing.net, which was first set up in 1996, has contributed 160,000 words or phrases (“tokens”), or 0.0001% of the total. The obvious implication is that LLM-generated fake interviews with famous people can draw on things they’ve actually said in the past, mixing falsity and truth into a wasteland that will be difficult to parse.
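Taken at face value, those two figures pin down the scale of the corpus: if one site’s 160,000 tokens amount to 0.0001% of C4, the whole training set holds roughly 160 billion tokens (my arithmetic, not a number from the Post).

```python
# Back-of-envelope: if one site's 160,000 tokens represent 0.0001% of
# the C4 data set, how many tokens does the whole corpus contain?
site_tokens = 160_000
share = 0.0001 / 100           # 0.0001% expressed as a fraction
total_tokens = site_tokens / share
print(f"{total_tokens:,.0f}")  # 160,000,000,000
```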
Illustrations: The House of Lords in 2011 (via Wikimedia).
So as previously discussed here three years ago and two years ago, on March 24 the US District Court for the Southern District of New York found that the Internet Archive’s controlled digital lending fails copyright law. Half of my social media feed on this subject filled immediately with people warning that publishers want to kill libraries and this judgment is a dangerous step limiting access to information; the other half is going “They’re stealing from authors. Copyright!” Both of these things can be true. And incomplete.
To recap: in 2006 the Internet Archive set up the Open Library to offer access to digitized books under “controlled digital lending”. The system allows each book to be “out” on “loan” to only one person at a time, with waiting lists for popular titles. In a white paper, lawyers David R. Hansen and Kyle K. Courtney call this “format shifting” and say that because the system replicates library lending it is fair use. Also germane: the Archive points to a 2007 California decision that it is in fact a library. Other countries may beg to differ.
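The one-copy-at-a-time rule is simple enough to sketch in a few lines of code. This is a hypothetical illustration of the lending logic described above, not the Open Library’s actual implementation; all names (`CdlTitle`, `borrow`, `return_copy`) are mine.

```python
# Hypothetical sketch of a "controlled digital lending" check: at most one
# digital loan per owned physical copy, with a waiting list for popular
# titles. Illustrative only; not the Open Library's real system.
from collections import deque

class CdlTitle:
    def __init__(self, copies_owned: int):
        self.copies_owned = copies_owned   # physical copies the library holds
        self.loans = set()                 # patrons with the ebook "out"
        self.waitlist = deque()            # patrons waiting for a copy

    def borrow(self, patron: str) -> bool:
        """Lend if a copy is free; otherwise queue the patron."""
        if len(self.loans) < self.copies_owned:
            self.loans.add(patron)
            return True
        self.waitlist.append(patron)
        return False

    def return_copy(self, patron: str) -> None:
        """On return, pass the freed copy to the next patron in line."""
        self.loans.discard(patron)
        if self.waitlist and len(self.loans) < self.copies_owned:
            self.loans.add(self.waitlist.popleft())

book = CdlTitle(copies_owned=1)
print(book.borrow("alice"))  # True: the single copy goes out
print(book.borrow("bob"))    # False: bob joins the waiting list
book.return_copy("alice")
print("bob" in book.loans)   # True: bob gets the copy on return
```

The National Emergency Library, in these terms, simply dropped the `len(self.loans) < self.copies_owned` check.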
When public libraries closed at the beginning of the covid-19 pandemic, the Internet Archive announced the National Emergency Library, which suspended the one-copy-at-a-time rule and scrubbed the waiting lists so anyone could borrow any book at any time. The resulting publicity was the first time many people had heard of the Open Library, although authors had already complained. Hachette Book Group, Penguin Random House, HarperCollins, and Wiley filed suit. Shortly afterwards, the Archive shut down the National Emergency Library. The Open Library continues, and the Archive will appeal the judge’s ruling.
At Vice, Claire Woodstock lays out some of the economics of library ebook licenses, which eat up budgets but leave libraries vulnerable and empty-shelved when a service is withdrawn. She also notes that the Internet Archive digitizes physical copies it buys or receives as donations, and does not pay for ebook licenses.
Brief digression back to 1996, when Pamela Samuelson warned in Wired of the coming copyright battles. Many of her key points have since been enshrined into law, such as the ban on circumventing copy protection; others, such as requiring Internet Service Providers to prevent users from uploading copyrighted material, remain in play today. Number three on her copyright maximalists’ wish list: eliminating first-sale rights for digitally transmitted documents. First sale is the doctrine that enables libraries to lend books.
It is therefore entirely believable that commercial publishers see every library loan as a missed sale. Partly for that reason, many countries outside the US have a public lending right that pays authors royalties on library loans. The Internet Archive doesn’t pay those, either.
At her blog, librarian and consultant Karen Coyle, who has thought for decades about the future of libraries, takes three postings to consider the case. First, she offers a backgrounder, agreeing that the Archive’s losing on appeal could bring consequences for other libraries’ digital lending. In the second, she teases out the differences between academic/research libraries and public libraries and between research and reading. While journals and research materials are generally available in electronic format, centuries of books are not, and scanned books (like those the Archive offers) are a poor reading experience compared to modern publisher-created ebooks. These distinctions are crucial to her third posting, which traces the origins of controlled digital lending.
As initially conceived by Michelle M. Wu in a 2011 paper for Law Library Journal, controlled digital lending was a suggestion that law libraries could, either singly or in groups, buy a hard copy for their holdings and then circulate a digitized copy, similar to an Inter-Library Loan. Law libraries serve limited communities, and their comparatively modest holdings have a known but limited market.
By contrast, the Archive gives global access to millions of books it has scanned. In court, it argued that the availability of popular commercial books on its site has not harmed publishers’ revenues. The judge disagreed: the “alleged benefits” of access could not outweigh the market harm to the four publishers who brought the suit. This view entirely devalues the societal role libraries play, and Coyle, like many others, is dismayed that the judge saw the case purely in terms of its effect on the commercial market.
The question I’m left with is this: is the Open Library a library or a disruptor? If these were businesses, it would obviously be the latter: it avoids many of the costs of local competitors, and asks forgiveness not permission. As things are, it seems to be both: it’s a library for users, but a disruptor to some publishers, some authors, and potentially the world’s libraries. The judge’s ruling captures none of this nuance.
Illustrations: 19th century rendering of the Great Library of Alexandria (via Wikimedia).
“You have to register at home, where your parents live,” said the clerk at the Board of Elections office.
I was 18, and registering to vote for the first time. It was 1972.
“I don’t live there,” I said. “I live here.” “Here” was Ithaca, NY, a town that, I learned later, was hyper-conscious that college students – Cornell, Ithaca College – outnumbered local residents. They didn’t want us interlopers overwhelming their preferences.
We had a couple more back-and-forths like this, and then she picked up the phone and called the state authorities in Albany for an official ruling. I knew – or thought I knew – that the law was on my side.
It was. I registered. I voted.
In about a month, the UK will hold local elections. For the first time, anyone presenting themselves to vote at the polls will be required to show an ID card with a photograph. This is a policy purely imported from American Republicans, and it has no basis in necessity. The Electoral Commission, in recommending its introduction, admitted that the issue was public perception. The big issues with respect to elections are around dark money and the processes by which candidates are chosen.
For 49 days in the fall of 2022, Liz Truss served as prime minister; she was chosen by 81,326 Tory party members. Out of the country’s roughly 68 million people, only 141,725 (out of an estimated 172,000 party members) voted in that contest because, since the Conservatives had decisively won the 2019 election, they were just electing a new leader. Rishi Sunak was voted in by 202 MPs.
The government’s proximate excuse for bringing in voter ID is the fraud-riddled May 2014 mayoral election in the London borough of Tower Hamlets. Four local residents risked their own money to challenge the outcome, and in 2015 won an Election Court ruling voiding the election and barring the cheating winner from standing for public office for five years. Their complaints included vote-rigging, false statements made by the winning candidate about his rival, bribery, and religious influence.
The High Court of Justice’s judgment in the case says: “…in practice, where electoral malpractice is established, particularly in the field of vote-rigging, it is very rare indeed to find members of the general public engaging in DIY vote-rigging on behalf of a candidate. Generally speaking, if there is widespread personation or false registration or misuse of postal votes, it will have been organised by the candidate or by someone who is, in law, his agent.”
Surely a more logical response to the Tower Hamlets case would be to make it easier – or at least quicker – for individuals to challenge election results and examine ways to ensure better behavior by *candidates*, not voters.
The judgment also notes that personation – assuming someone else’s identity in order to vote – was far more of a risk when fewer people qualified to vote. There followed a long period when it was too labor-intensive for too little reward; you need a lot of impersonators to change the result. In recent years, however, postal voting has made it viable again; in two wards of a 2008 Birmingham election Labour candidates committed 15 types of fraud involving postal ballots. The election in those two wards was re-run.
In his book Security Engineering, Cambridge professor Ross Anderson notes that the likelihood that expanded use of postal ballots would open the way for vote-buying and intimidation was predicted even as first Margaret Thatcher and then Tony Blair pursued the policy. The main point is clear: the big problem is postal ballots, and you can’t solve it by requiring voter ID from those who vote in person. It’s the wrong threat model. As Anderson observes, “…it’s typically the incumbent who tweaks the laws, buys the voting machines, and creates as many advantages for their own side, small and large, as the local political culture will tolerate.”
This was all maddening enough – and then they published the list of acceptable forms of ID. Tl;dr: the list blatantly skews in favor of older and richer people, who are presumed to be more likely to vote Conservative. Passports, driving licenses, and 60+ travel passes are all acceptable. Student ID cards and travel cards and passes are not. The government says they are not secure enough, a bit like saying a lock on the door is pointless because it’s not a burglar alarm.
There is a scheme for issuing free voter cards; applications must be in by April 25. People can also vote by post or by proxy without ID. And there are third parties pushing paid ID cards, too. But what it comes down to is next month a bunch of people are going to go to vote and will be barred. And this from the same people who wanted online voting to “increase access”.
First, people want meaningful access. They want usability. They want not to be scammed, manipulated, lied to, exploited, or cheated.
It’s unlikely that any of the ongoing debates in either the US or UK will deliver any of those.
First and foremost, this week concluded two frustrating years in which the US Senate failed to confirm the appointment of Public Knowledge co-founder and EFF board member Gigi Sohn to the Federal Communications Commission. In her withdrawal statement, Sohn blamed a smear campaign by “legions of cable and media industry lobbyists, their bought-and-paid-for surrogates, and dark money political groups with bottomless pockets”.
Whether you agree or not, the result remains that for the last two years and for the foreseeable future the FCC will remain deadlocked and problems such as the US’s lack of competition and patchy broadband provision will remain unsolved.
Meanwhile, US politicians continue obsessing about whether and how to abort-retry-fail Section 230, that pesky 26-word law that relieves Internet hosts of liability for third-party content. This week it was the turn of the Senate Judiciary Committee. In its hearing, the Internet Society’s Andrew Sullivan stood out for trying to get across to lawmakers that S230 wasn’t – couldn’t have been – intended as protectionism for the technology giants because they did not exist when the law was passed. It’s fair to say that S230 helped allow the growth of *some* Internet companies – those that host user-generated content. That means all the social media sites as well as web boards and blogs and Google’s search engine and Amazon’s reviews, but neither Apple nor Netflix makes its living that way. Attacking the technology giants is a popular pastime just now, but throwing out S230 without due attention to the unexpected collateral damage will just make them bigger.
Also on the US political mind is a proposed ban on TikTok. It’s hard to think of a move that would more quickly alienate young people. Plus, it fails to get at the root problem. If the fear is that TikTok gathers data on Americans and sends it home to China for use in designing manipulative programs…well, why single out TikTok when it lives in a forest of US companies doing the same kind of thing? As Karl Bode writes at TechDirt, if you really want to mitigate that threat, rein in the whole forest. Otherwise, if China really wants that data it can buy it on the open market.
Meanwhile, in the UK, as noted last week, opposition continues to increase to the clauses in the Online Safety bill proposing to undermine end-to-end encryption by requiring platforms to proactively scan private messages. This week, WhatsApp said it would withdraw its app from the UK rather than comply. However important the UK market is, it can’t possibly be big enough for Meta to risk fines of 4% of global revenues and criminal sanctions for executives. The really dumb thing is that everyone within the government uses WhatsApp because of its convenience and security, and we all know it. Or do they think they’ll have special access denied the rest of the population?
The Open Rights Group and 25 other civil society organizations have written a letter (PDF) laying out their objections to the government’s proposed data protection reforms, noting that the bill, in line with other recent legislation that weakens civil rights, undermines oversight and corporate accountability, lessens individuals’ rights, and weakens the independence of the Information Commissioner’s Office. “Co-designed with businesses from the start” is how the government describes the bill. But data protection law was not supposed to be designed for business – or, as Peter Geoghegan says at the London Review of Books, to aid SLAPP suits; it is supposed to protect our human rights in the face of state and corporate power. As the cryptography pioneer Whit Diffie said in 2019, “The problem isn’t privacy; it’s corporate malfeasance.”
The most depressing thing about all of these discussions is that the public interest is the loser in all of them. It makes no sense to focus on TikTok when US companies are just as aggressive in exploiting users’ data. It makes no sense to focus solely on the technology giants when the point of S230 was to protect small businesses, non-profits, and hobbyists. And it makes no sense to undermine the security afforded by end-to-end encryption when it’s essential for protecting the vulnerable people the Online Safety bill is supposed to help. In a survey, EDRi finds that compromising secure messaging is highly unpopular with young people, who clearly understand the risks to political activism and gender identity exploration.
One of the most disturbing aspects of our politics in this century so far is the widening gap between what people want, need, and know and the things politicians obsess about. We’re seeing this reflected in Internet policy, and it’s not helpful.
Illustrations: Andrew Sullivan, president of the Internet Society, testifying in front of the Senate Judiciary Committee.