Gather ye lawsuits while ye may

Most of us howled with laughter this week when the news broke that Elon Musk is suing companies for refusing to advertise on his exTwitter platform. To be precise, Musk is suing the World Federation of Advertisers, Unilever, Mars, CVS, and Ørsted in a Texas court.

How could Musk, who styles himself a “free speech absolutist”, possibly force companies to advertise on his site? This is pure First Amendment stuff: both the right to free speech (or to remain silent) and freedom of assembly. It adds to the nuttiness of it all that last November Musk was telling advertisers to “go fuck yourselves” if they threatened him with a boycott. Now he’s mad because they responded in kind.

Does the richest man in the world even need advertisers to finance his toy?

At Techdirt, Mike Masnick catalogues the “so much stupid here”.

The WFA initiative that offends Musk is the Global Alliance for Responsible Media, which develops guidelines for content moderation – things like a standard definition for “hate speech” to help sites operate consistent and transparent policies and reassure advertisers that their logos don’t appear next to horrors like the livestreamed shooting in Christchurch, New Zealand. GARM’s site says: membership is voluntary, following its guidelines is voluntary, it does not provide a rating service, and it is apolitical.

Pre-Musk, Twitter was a member. After Musk took over, he pulled exTwitter out of it – but rejoined a month ago. Now, Musk claims that refusing to advertise on his site might be a criminal matter under RICO. So he’s suing himself? Blink.

Enter US Republicans, who are convinced that content moderation exists only to punish conservative speech. On July 10, the House Judiciary Committee, under the leadership of Jim Jordan (R-OH), released an interim report on its ongoing investigation of GARM.

The report says GARM appears to “have anti-democratic views of fundamental American freedoms” and likens its work to restraint of trade. Among specific examples, it says GARM recommended that its members stop advertising on exTwitter, threatened Spotify when podcaster Joe Rogan told his massive audience that young, healthy people don’t need to be vaccinated against covid, and considered blocking news sites such as Fox News, Breitbart, and The Daily Wire. In addition, the report says, GARM advised its members to use fact-checking services like NewsGuard and the Global Disinformation Index “which disproportionately label right-of-center news sites as so-called misinformation”. Therefore, the report concludes, GARM’s work is “likely illegal under the antitrust laws”.

I don’t know what a court would have made of that argument – for one thing, GARM can’t force anyone to follow its guidelines. But now we’ll never know. Two days after Musk filed suit, the WFA announced it’s shuttering GARM immediately because it can’t afford to defend the lawsuit and keep operating even though it believes it’s complied with competition rules. Such is the role of bullies in our public life.

I suppose Musk can hope that advertisers decide it’s cheaper to buy space on his site than to fight the lawsuit?

But it’s not really a laughing matter. GARM is just one of a number of initiatives that’s come under attack as we head into the final three months of campaigning before the US presidential election. In June, Renee DiResta, author of the new book Invisible Rulers, announced that her contract as the research manager of the Stanford Internet Observatory was not being renewed. Founding director Alex Stamos was already gone. Stanford has said the Observatory will continue under new leadership, but no details have been published. The Washington Post says conspiracy theorists have called DiResta and Stamos part of a government-private censorship consortium.

Meanwhile, one of the Observatory’s projects, a joint effort with the University of Washington called the Election Integrity Partnership, has announced, in response to various lawsuits and attacks, that it will not work on the 2024 or future elections. At the same time, Meta is shutting down CrowdTangle next week, removing a research tool that journalists and academics use to study content on Facebook and Instagram. While CrowdTangle will be replaced with Meta Content Library, access will be limited to academics and non-profits, and those who’ve seen it say it’s missing useful data that was available through CrowdTangle.

The concern isn’t the future of any single initiative; it’s the pattern of these things winking out. As work like DiResta’s has shown, the flow of funds financing online political speech (including advertising) is dangerously opaque. We need access and transparency for those who study it, and in real time, not years after the event.

In this, as in so much else, the US continues to clash with the EU, which in December accused Musk’s site of breaching its rules with respect to disinformation, transparency, and extreme content. Last month, it formally charged the site with violating the Digital Services Act, for which Musk could be liable for a fine of up to 6% of exTwitter’s global revenue. Among the EU’s complaints is the lack of a searchable and reliable advertisement repository – again, an important element of the transparency we need. The site’s handling of disinformation and calls to violence during the current UK riots may be added to the investigation.

Musk will be suing *us*, next.

Illustrations: A cartoon caricature of Christina Rossetti by her brother Dante Gabriel Rossetti, 1862, showing her having a tantrum after reading The Times’ review of her poetry (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Hostages

If you grew up with the slow but predictable schedule of American elections, the abruptness with which a British prime minister can dissolve Parliament and hit the campaign trail is startling. Among the pieces of legislation that fell by the wayside this time is the Data Protection and Digital Information bill, which had reached the House of Lords for scrutiny. The bill had many problems. It proposed to give the Department for Work and Pensions the right to inspect the bank accounts and financial assets of anyone receiving any government benefits, and it would have undermined aspects of the adequacy agreement that allows UK companies to exchange data with businesses in the EU.

Less famously, it also included the legislative underpinnings for a trust framework for digital verification. On Monday, at a UCL conference on crime science, Sandra Peaston, director of research and development at the fraud prevention organization Cifas, outlined how all this is intended to work and asked some pertinent questions. Among them: whether the new regulator will have enough teeth; whether the certification process is strong enough for (for example) mortgage lenders; and how we know how good the relevant algorithm is at identifying deepfakes.

Overall, I think we should be extremely grateful this bill wasn’t rushed through. Quite apart from the digital rights aspects, the framework for digital identity really needs to be right; there’s just too much risk in getting it wrong.

***

At Bloomberg, Mark Gurman reports that Apple’s arrangement with OpenAI to integrate ChatGPT into the iPhone, iPad, and Mac does not involve Apple paying any money. Instead, Gurman cites unidentified sources to the effect that “Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments.”

We’ve come across this kind of claim before in arguments between telcos and Internet companies like Netflix or between cable companies and rights holders. The underlying question is who brings more value to the arrangement, or who owns the audience. I can’t help feeling suspicious that this will not end well for users. It generally doesn’t.

***

Microsoft is on a roll. First there was the Recall debacle. Now come accusations by a former employee that it ignored a reported security flaw in order to win a large government contract, as Renee Dudley and Doris Burke report at ProPublica. Result: the Russian SolarWinds cyberattack on numerous US government departments and agencies, including the National Nuclear Security Administration.

This sounds like a variant of Cory Doctorow’s enshittification at the enterprise level (see also: Boeing). They don’t have to be monopolies: these organizations’ evolving culture has let business managers override safety and security engineers. This is how Challenger blew up in 1986.

Boeing is too big and too lacking in competition to be allowed to fail entirely; it will have to find a way back. Microsoft has a lot of customer lock-in. Is it too big to fail?

***

I can’t help feeling a little sad at the news that Raspberry Pi has had an IPO. I see no reason why it shouldn’t be successful as a commercial enterprise, but its values will inevitably change over time. CEO Eben Upton swears they won’t, but he won’t be CEO forever, as even he admits. But: Raspberry Pi could become the “unicorn” Americans keep saying Europe doesn’t have.

***

At that same UCL event, I finally heard someone say something positive about AI – for a meaning of “AI” that *isn’t* chatbots. Sarah Lawson, the university’s chief information security officer, said that “AI and machine learning have really changed the game” when it comes to detecting email spam, which remains the biggest vector for attacks. Dealing with the 2% that evades the filters is still a big job, as it leaves 6,000 emails a week hitting people’s inboxes – but she’ll take it. We really need to be more specific when we say “AI” about what kind of system we mean; success at spam filtering has nothing to say about getting accurate information out of a large language model.
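For scale, here’s the back-of-envelope arithmetic (mine, not Lawson’s; only the 2% and 6,000 figures come from her talk):

```python
# Derive the implied weekly email volume from the two figures quoted above.
evaded_per_week = 6_000   # spam that slips through, per Lawson
evasion_rate = 0.02       # the 2% that evades the filters

total_per_week = evaded_per_week / evasion_rate
print(f"implied total: {total_per_week:,.0f} emails/week")                    # 300,000
print(f"filtered out:  {total_per_week - evaded_per_week:,.0f} emails/week")  # 294,000
```

In other words, the filters are quietly absorbing roughly 294,000 of some 300,000 incoming emails a week.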

***

Finally, I was highly amused this week when long-time security guy Nick Selby posted on Mastodon about a long-forgotten incident from 1999 in which I disparaged the sort of technology Apple announced this week that’s supposed to organize your life for you – tell you when it’s time to leave for things based on the traffic, juggle meetings and children’s violin recitals, that sort of thing. Selby felt I was ahead of my time because “it was stupid then and is stupid now because even if it works the cost is insane and the benefit really, really dodgy”.

One of the long-running divides in computing is between the folks who want computers to behave predictably and those who want computers to learn from our behavior what’s wanted and do that without intervention. Right now, the latter is in the ascendant. Few of us seem to want the “AI features” being foisted on us. But only a small percentage of mainstream users turn off defaults (a friend was recently surprised to learn you can use the history menu to reopen a closed browser tab). So: soon those “AI features” will be everywhere, pointlessly and extravagantly consuming energy, water, and human patience. How you use information technology used to be a choice. Now, it feels like we’re hostages.

Illustrations: Raspberry Pi: the little computer that could (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Admiring the problem

In one sense, the EU’s barely dry AI Act and the other complex legislation – the Digital Markets Act, Digital Services Act, GDPR, and so on – is a triumph. Flawed it may be, but it’s a genuine attempt to protect citizens’ human rights against a technology that is being birthed with numerous trigger warnings. The AI-with-everything program at this year’s Computers, Privacy, and Data Protection conference reflected that sense of accomplishment – but also the frustration that comes with knowing that all legislation is flawed, all technology companies try to game the system, and gaps will widen.

CPDP has had these moments before: new legislation always comes with a large dollop of frustration over the opportunities that were missed and the knowledge that newer technologies are already rushing forwards. AI, and the AI Act, more or less swallowed this year’s conference as people considered what it says, how it will play internationally, and the necessary details of implementation and enforcement. Two years ago at this event, inadequate enforcement of GDPR was a big topic.

The most interesting future gaps that emerged this year: monopoly power, quantum sensing, and spatial computing.

For at least 20 years we’ve been hearing about quantum computing’s potential threat to public key encryption – that day of doom has been ten years away as long as I can remember, just as the Singularity is always 30 years away. In the panel on quantum sensing, Chris Hoofnagle argued that, as he and Simson Garfinkel recently wrote at Lawfare and in their new book, quantum cryptanalysis is overhyped as a threat (although there are many opportunities for quantum computing in chemistry and materials science). However, quantum sensing is here now, works (because qubits are fragile), and is cheap. There is plenty of privacy threat here to go around: quantum sensing will benefit entirely different classes of intelligence, particularly remote, undetectable surveillance.

Hoofnagle and Garfinkel are calling this MASINT, for measurement and signature intelligence, and believe that it will become very difficult to hide things, even at a national level. In Hoofnagle’s example, a quantum sensor-equipped drone could fly over the homes of parolees to scan for guns.

Quantum sensing and spatial computing have this in common: they both enable unprecedented passive data collection. VR headsets, for example, collect all sorts of biomechanical data that can be mined more easily for personal information than people expect.

Barring change, all that data will be collected by today’s already-powerful entities.

The deeper level on which all this legislation fails particularly exercised Cristina Caffarra, co-founder of the Centre for Economic Policy Research. In the panel on AI and monopoly, she argued that all this legislation is basically nibbling around the edges: none of it touches the real, fundamental problem, the power being amassed by the handful of companies that own the infrastructure.

“It’s economics 101. You can have as much downstream competition as you like but you will never disperse the power upstream.” The reports and other material generated by government agencies like the UK’s Competition and Markets Authority are, she says, just “admiring the problem”.

A day earlier, the Novi Sad professor Vladan Joler had already pointed out the fundamental problem: at the dawn of the Internet anyone could start with nothing and build something; what we’re calling “AI” requires billions in investment, so comes pre-monopolized. Many people dismiss Europe for not having its own homegrown Big Tech, but that overlooks open technologies: the Raspberry Pi, Linux, and the web itself, which all have European origins.

In 2010, the now-departing MP Robert Halfon (Con-Harlow) said at an event on reining in technology companies that only a company the size of Google – not even a government – could create Street View. Legend has it that open source geeks heard that as a challenge, and so we have OpenStreetMap. Caffarra’s fiery anger raises the question: at what point do the infrastructure providers become so entrenched that they could choke off an open source competitor at birth? Caffarra wants to build a digital public interest infrastructure using the gaps where Big Tech doesn’t yet have that control.

The Dutch GroenLinks MEP Kim van Sparrentak offered an explanation for why the AI Act doesn’t address market concentration: “They still dream of a European champion who will rule the world.” An analogy springs to mind: people who vote for tax cuts for billionaires because one day that might be *them*. Meanwhile, the UK’s Competition and Markets Authority finds nothing to investigate in Microsoft’s partnership with the French AI startup Mistral.

Van Sparrentak thinks one way out is through public procurement; adopt goals of privacy and sustainability, and support European companies. It makes sense; as the AI Now Institute’s Amba Kak noted, at the moment almost everything anyone does digitally has to go through the systems of at least one Big Tech company.

As Sebastiano Toffaletti, head of the secretariat of the European SME Alliance, put it, “Even if you had all the money in the world, these guys still have more data than you. If you don’t and can’t solve it, you won’t have anyone to challenge these companies.”

Illustrations: Vladan Joler shows Anatomy of an AI System, a map he devised with Kate Crawford of the human labor, data, and planetary resources that are extracted to make “AI”.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Deja news

At the first event organized by the University of West London group Women Into Cybersecurity, a questioner asked how the debates around the Internet have changed since I wrote the original 1997 book net.wars.

Not much, I said. Some chapters have dated, but the main topics are constants: censorship, freedom of speech, child safety, copyright, access to information, digital divide, privacy, hacking, cybersecurity, and always, always, *always* access to encryption. Around 2010, there was a major change when the technology platforms became big enough to protect their users and business models by opposing government intrusion. That year Google launched the first version of its annual transparency report, for example. More recently, there’s been another shift: these companies have engorged to the point where they need not care much about their users or fear regulatory fines – the stage Ed Zitron calls the rot economy and Cory Doctorow dubs enshittification.

This is the landscape against which we’re gearing up for (yet) another round of recursion. April 25 saw the passage of amendments to the UK’s Investigatory Powers Act (2016). These are particularly charmless, as they expand the circumstances under which law enforcement can demand access to Internet Connection Records, allow the government to require “exceptional lawful access” (read: backdoored encryption), and require technology companies to get permission before issuing security updates. As Mark Nottingham blogs, no one should have this much power. In any event, the amendments reanimate bulk data surveillance and backdoored encryption.

Also winding through Parliament is the Data Protection and Digital Information bill. The IPA amendments threaten national security by demanding the power to weaken protective measures; the data bill threatens to undermine the adequacy decision under which the UK’s data protection law is deemed to meet the requirements of the EU’s General Data Protection Regulation. Experts have already warned that the bill puts that adequacy at risk. If this government proceeds, as it gives every indication of doing, the next, presumably Labour, government may find itself awash in an economic catastrophe as British businesses become persona-non-data to their European counterparts.

The Open Rights Group warns that the data bill makes it easier for government, private companies, and political organizations to exploit our personal data while weakening subject access rights, accountability, and other safeguards. ORG is particularly concerned about the impact on elections, as the bill expands the range of actors who are allowed to process personal data revealing political opinions on a new “democratic engagement activities” basis.

If that weren’t enough, another amendment also gives the Department for Work and Pensions the power to monitor all bank accounts that receive payments, including the state pension – to reduce overpayments and other types of fraud, of course – along with any bank account connected to those accounts, such as those belonging to landlords, carers, parents, and partners. At Computer Weekly, Bill Goodwin suggests that the upshot could be to deter landlords from renting to anyone receiving state benefits or entitlements. The idea is that banks will use criteria we can’t access to flag up accounts for the DWP to inspect more closely, and over the mass of 20 million accounts there will be plenty of mistakes to go around. Safe prediction: there will be horror stories of people denied benefits without warning.

And in the EU… TechCrunch reports that the European Commission (always more surveillance-happy and less human rights-friendly than the European Parliament) is still pursuing its proposal to require messaging platforms to scan private communications for child sexual abuse material. Let’s do the math of truly large numbers: billions of messages, even a teeny-tiny percentage of inaccuracy, literally millions of false positives! On Thursday, a group of scientists and researchers sent an open letter pointing out exactly this. Automated detection technologies perform poorly, innocent images may occur in clusters (as when a parent sends photos to a doctor), and such a scheme requires weakening encryption; in any case, it would be better to focus on eliminating child abuse (taking CSAM along with it).
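To make that math concrete, here’s a sketch with illustrative numbers; the volume and error rate are my assumptions, not figures from the open letter:

```python
# Base-rate arithmetic for scanning private messages at scale.
messages_per_day = 10_000_000_000   # "billions of messages" - assumed volume
false_positive_rate = 0.0001        # 0.01%, a generously "teeny-tiny" error rate

false_alarms = messages_per_day * false_positive_rate
print(f"{false_alarms:,.0f} innocent messages flagged per day")  # 1,000,000
```

Even at an error rate far better than automated classifiers actually achieve, that’s a million innocent messages flagged every day, each one a private conversation opened to inspection.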

Finally, age verification, which has been pending in the UK since at least 2016, is becoming a worldwide obsession. At least eight US states and the EU have laws mandating age checks, and the Age Verification Providers Association is pushing to make the Internet “age-aware persistently”. Last month, the BSI convened a global summit to kick off the work of developing a worldwide standard. These moves are the latest push against online privacy; age checks will be applied to *everyone*, and while they could be designed to respect privacy and anonymity, the most likely outcome is that they won’t be. In 2022, the French data protection regulator, CNIL, found that current age verification methods are both intrusive and easily circumvented. In the US, Casey Newton is watching a Texas case about access to online pornography and age verification that threatens to challenge First Amendment precedent in the Supreme Court.

Because the debates are so familiar – the arguments rarely change – it’s easy to overlook how profoundly all this could change the Internet. An age-aware Internet where all web use is identified and encrypted messaging services have shut down rather than compromise their users and every action is suspicious until judged harmless…those are the stakes.

Illustrations: Angel sensibly smashes the ring that makes vampires impervious (in Angel, “In the Dark” (S01e03)).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Selective enforcement

This week, as a rider to the 21st Century Peace Through Strength Act, which provides funding for defense in Ukraine, Israel, and Taiwan, the US Congress passed provisions for banning the distribution of TikTok if owner ByteDance has not divested it within 270 days. President Joe Biden signed it into law on Wednesday, and, as Mike Masnick says at Techdirt, ByteDance’s lawsuit is expected imminently, largely on First Amendment grounds. The ACLU agrees. Similar arguments won when ByteDance challenged a 2023 Montana law.

For context: Pew Research says TikTok is the fifth-most popular social media service in the US. An estimated 150 million Americans – and 62% of 18-29-year-olds – use it.

The ban may not be a slam-dunk to fail in court. US law, including the constitution, contains many restrictions on foreign influence, from requiring registration for those acting as agents to requiring presidents to have been born US citizens. Until 2017, foreigners were barred from owning US broadcast networks.

So it seems to this non-lawyer as though a lot hinges on how the court defines TikTok and what precedents apply. This is the kind of debate that goes back to the dawn of the Internet: is a privately-owned service built of user-generated content more like a town square, a broadcaster, a publisher, or a local pub? “Broadcast”, whether over the air or via cable, implies being assigned a channel on a limited resource; this clearly doesn’t apply to apps and services carried over the presumably-infinite Internet. Publishing implies editorial control, which social media lacks. A local pub might be closest: privately owned, it’s where people go to connect with each other. “Congress shall make no law…abridging the freedom of speech”…but does that cover denying access to one “place” where speech takes place when there are many other options?

TikTok is already banned in Pakistan, Nepal, and Afghanistan, and also India, where it is one of 500 apps that have been banned since 2020. ByteDance will argue that the ban hurts US creators who use TikTok to build businesses. But as NPR reports, in India YouTube and Instagram rolled out short video features to fill the gap for hyperlocal content that the loss of TikTok opened up, and four years on creators have adapted to other outlets.

It will be more interesting if ByteDance claims the company itself has free speech rights. In a country where commercial companies and other organizations are deemed to have “free speech” rights entitling them to donate as much money as they want to political causes (as per the Supreme Court’s ruling in Citizens United v. Federal Election Commission), that might make a reasonable argument.

On the other hand, there is no question that this legislation is full of double standards. If another country sought to ban any of the US-based social media, American outrage would be deafening. If the issue is protecting the privacy of Americans against rampant data collection, then, as Free Press argues, pass a privacy law that will protect Americans from *every* service, not just this one. The claim that the ban is to protect national security is weakened by the fact that the Chinese government, like apparently everyone else, can buy data on US citizens even if it’s blocked from collecting it directly from ByteDance.

Similarly, if the issue is the belief that social media inevitably causes harm to teenagers, as author and NYU professor Jonathan Haidt insists in his new book, then again, why only pick on TikTok? Experts who have really studied this terrain, such as danah boyd and others, insist that Haidt is oversimplifying and pushing parents to deny their children access to technologies whose influence is largely positive. I’m inclined to agree; between growing economic hardship, expanding wars, and increasing climate disasters young people have more important things to be anxious about than social media. In any case, where’s the evidence that TikTok is a bigger source of harm than any other social medium?

Among digital rights activists, the most purely emotional argument against the TikTok ban revolves around the original idea of the Internet as an open network. Banning access to a service in one country (especially the country that did the most to promote the Internet as a vector for free speech and democratic values) is, in this view, a dangerous step toward the government control John Perry Barlow famously rejected in 1996. And yet, to increasing indifference, no-go signs are all over the Internet. *Six* years after GDPR came into force, Europeans are still blocked from many US media sites that can’t be bothered to comply with it. Many other media links don’t work because of copyright restrictions, and on and on.

The final double standard is this: a big element in the TikTok ban is the fear that the Chinese government, via its control over companies hosted there, will have access to intimate personal information about Americans. Yet for more than 20 years this has been the reality for non-Americans using US technology services outside the US: their data is subject to NSA surveillance. This, and the lack of redress for non-Americans, is what Max Schrems’ legal cases have been about. Do as we say, not as we do?

Illustrations: TikTok CEO Shou Zi Chew, at the European Commission in 2024 (by Lukasz Kobus at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Irrevocable

One of the biggest advances in computing in my lifetime is the “Undo” button. Younger people will have no idea of this, but at one time if you accidentally deleted the piece you’d spent hours typing into your computer, it was just…gone forever.

This week, UK media reported on what seems to be an unusual but not unique case: a solicitor accidentally opened the wrong client’s divorce case on her computer screen and went on to apply for a final decree for the couple concerned. The court granted the divorce in a standard, automated 21 minutes, even though the specified couple had not yet agreed on a financial settlement. Despite acknowledging the error, the court now refuses to overturn the decree. UK lawyers of my acquaintance say that this obvious unfairness may be because granting the final decree sets in motion other processes that are difficult to reverse.

That triggers a memory of the time I accidentally clicked on “cancel” instead of “check in” on a flight reservation, and casually, routinely, clicked again to confirm. I then watched in horror as the airline website canceled the flight. The undo button in this case was to phone customer service. Minutes later, they reinstated the reservation and thereafter I checked in without incident. Undone!

Until the next day, when I arrived in the US and my name wasn’t on the manifest. The one time I couldn’t find my boarding pass… After a not-long wait that seemed endless in a secondary holding area (which I used to text people to tell them where I was, just in case), I explained the rogue cancellation and was let go. Whew! (And yes, I know: citizen, white, female privilege.)

“Ease of use” should include making it hard to make irrecoverable mistakes. And maybe a grace period before automated processes cascade.

The Guardian quotes family court division head Sir Andrew McFarlane explaining that the solicitor’s error was not easy to make: “Like many similar online processes, an operator may only get to the final screen where the final click of the mouse is made after traveling through a series of earlier screens.” Huh? If you think you have opened the right case, then those are the screens you would expect to see. Why wouldn’t you go ahead?

At the Law Gazette, John Hyde reports that the well-known law firm in question, Vardag, is backing the young lawyer who made the error, describing it as a “slip up with the drop down menu” on “the new divorce portal”, noting that similar errors had happened “a few times” and felt like a design error.

“Design errors” can do a lot of damage. Take paying a business or person via online banking. In the UK, until recently, you entered account name, number, and sort code, and confirmed to send. If you made a mistake, tough. If the account information was sent by a scammer instead of the recipient you thought, tough. It was only in 2020 that most banks began participating in “Confirmation of payee”, which verifies the account with the receiving bank and checks with you that the name is correct. In 2020, Which? estimated that confirming payee could have saved £320 million in bank transfer fraud since 2017.
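The check itself is conceptually trivial, which makes the decade-long wait all the odder. Here is a minimal sketch of the idea (hypothetical code; the real scheme runs over Pay.UK’s inter-bank APIs, and `lookup_account_name` below is a stand-in for that query, not a real function):

```python
# Hypothetical sketch of a Confirmation of Payee check at the sending bank.
from difflib import SequenceMatcher

def check_payee(entered_name, sort_code, account_number, lookup_account_name):
    """Compare the name the payer typed against the receiving bank's record."""
    registered = lookup_account_name(sort_code, account_number)
    if registered is None:
        return "ACCOUNT NOT FOUND: do not send"
    similarity = SequenceMatcher(None, entered_name.lower(), registered.lower()).ratio()
    if similarity > 0.95:
        return "MATCH: safe to proceed"
    if similarity > 0.70:
        return f"PARTIAL MATCH: did you mean '{registered}'?"
    return "NO MATCH: likely mistake or scam"

# Stubbed example: the receiving bank says the account belongs to "John Smith".
print(check_payee("J Smith", "12-34-56", "12345678", lambda sc, an: "John Smith"))
# -> PARTIAL MATCH: did you mean 'John Smith'?
```

The partial-match tier matters: the point is not to block payments but to make the payer pause before money leaves irrevocably.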

Similarly, while many more important factors caused the Horizon scandal, software design played its part: subpostmasters could not review past transactions as they could on paper.

Many computerized processes are blocked unless precursor requirements have been completed and checked for compliance. A legally binding system similarly ought to incorporate checks to ensure that all necessary steps have been completed.

Arguably, software design is failing users. In ecommerce, user-hostile design takes the form of deceptive, or “dark”, patterns: user interfaces built deliberately to manipulate users into buying or spending more than they intended. The clutter that makes Amazon unusable directs shoppers to its house brands.

User interface design is where I began writing about computers circa 1990. Windows 3 was new, and the industry was just discovering that continued growth depended on reaching past those who *liked* software to be difficult. I vividly recall being told by a usability person at then-market leader Lotus about the first time her company’s programmers watched ordinary people using their software. First one fails to complete task. “Well, that’s a stupid person.” Second one. “Well, that’s a stupid person, too.” Third one. “Where do you find these people?” But after watching a couple more, they got it.

In the law firm’s case, the designers likely said, “This system is just for expert users”. True, but what they’re expert in is law, not software. Hopefully the software will now be redesigned to reflect the rule that it should be as easy as possible to do the work but as hard as possible to make unrecoverable mistakes (the tolerance principle). It’s a simple idea that goes all the way back to Donald Norman’s classic 1988 book The Design of Everyday Things.

At a guess, if today’s “AI” automation systems become part of standard office work, making mistakes will become easier rather than harder, partly because automation makes systems more inscrutable. In addition, the systems being digitized are increasingly complex, with more significant consequences reaching deep into people’s lives, and intended to serve the commissioning corporations’ short-term desires. It will not be paranoid to believe the world is stacked against us.

Illustrations: Cary Grant and Rosalind Russell as temporarily divorced newspapermen in His Girl Friday (1940).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Borderlines

Think back to the year 2000. New York’s World Trade Center still stood. Personal digital assistants were a niche market. There were no smartphones (the iPhone arrived in 2007) or tablets (the iPad took until 2010). Social media was nascent; Facebook first opened in 2004. The Good Friday agreement was just two years old, and for many in Britain “terrorists” were still “Irish”. *That* was when the UK passed the Terrorism Act (2000).

Usually when someone says the law can’t keep up with technological change they mean that technology can preempt regulation at speed. What the documentary Phantom Parrot shows, however, is that technological change can profoundly alter the consequences of laws already on the books. The film’s worked example is Schedule 7 of the 2000 Terrorism Act, which empowers police to stop, question, search, and detain people passing through the UK’s borders. They do not need prior authority or suspicion, but may only stop and question people for the purpose of determining whether the individual may be or have been concerned in the commission, preparation, or instigation of acts of terrorism.

Today this law means that anyone arriving at the UK border may be compelled to unlock access to data charting their entire lives. The Hansard record of the debate on the bill shows clearly that lawmakers foresaw problems: the classification of protesters as terrorists, the uselessness of fighting terrorism by imprisoning the innocent (Jeremy Corbyn), the reversal of the presumption of innocence. But they could not foresee how far-reaching the powers the bill granted would become.

The film’s framing story begins in November 2016, when Muhammed Rabbani arrived at London’s Heathrow Airport from Doha and was stopped and questioned by police under Schedule 7. They took his phone and laptop and asked for his passwords. He refused to supply them. On previous occasions, when he had similarly refused, they’d let him go. This time, he was arrested. Under Schedule 7, the penalty for such a refusal can be up to three months in jail.

Rabbani is managing director of CAGE International, a human rights organization that began by focusing on prisoners seized under the war on terror and expanded its mission to cover “confronting other rule of law abuses taking place under UK counter-terrorism strategy”. Rabbani’s refusal to disclose his passwords was, he said later, because he was carrying 30,000 confidential documents relating to a client’s case. A lawyer can claim client confidentiality; an NGO cannot. In 2018, the appeals court ruled the password demands were lawful.

In September 2017, Rabbani was convicted. He was given a 12-month conditional discharge and ordered to pay £620 in costs. As Rabbani says in the film, “The law made me a terrorist.” No one suspected him of being a terrorist or placing anyone in danger, but the judge made clear she had no choice under the law, and so he was convicted of a terrorism offense nonetheless. On appeal in 2018, his conviction was upheld. We see him collect his returned devices – five years on from his original detention.

Britain is not the only country that regards him with suspicion. Citing his conviction, in 2023 France banned him, and, he claims, Poland deported him.

Unsurprisingly, CAGE is on the first list of groups that may be dubbed “extremist” under the new definition of extremism released last week by communities secretary Michael Gove. The direct consequence of this designation is a ban on participation in public life – chiefly, meetings with central and local government. The expansion of the meaning of “extremist”, however, is alarming activists on all sides.

Director Kate Stonehill tells the story of Rabbani’s detention partly through interviews and partly through a reenactment using wireframe-style graphics and a synthesized voice that reads out questions and answers from the interview transcripts. A cello of doom provides background ominance. Laced through this narrative are others. A retired law enforcement officer teaches a class to use extraction and analysis tools, in which we see how extensive the information available to them really is. Ali Al-Marri and his lawyer review his six years of solitary detention as an enemy combatant in Charleston, South Carolina. Lastly, Stonehill calls on Ryan Gallagher’s reporting, which exposed the titular Phantom Parrot, the program to exploit the data retained under Schedule 7. There are no records of how many downloads have been taken.

The retired law enforcement officer’s class is practically satire. While saying that he himself doesn’t want to be tracked for safety reasons, he tells students to grab all the data they can when they have the opportunity. They are in Texas: “Consent’s not even a problem.” Start thinking outside of the box, he tells them.

What the film does not stress is this: rights are largely suspended at all borders. In 2022, the UK extended Schedule 7 powers to include migrants and refugees arriving in boats.

The movie’s future is bleak. At the Chaos Computer Congress, a speaker warns that gait recognition, eye movement detection, speech analysis (accents, emotion), and other types of analysis will be much harder to escape and enable watchers to do far more with the ever-vaster stores of data collected from and about each of us.

“These powers are capable of being misused,” said Douglas Hogg in the 1999 Commons debate. “Most powers that are capable of being misused will be misused.” The bill passed 210-1.

Illustrations: Still shot from the wireframe reenactment of Rabbani’s questioning in Phantom Parrot.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Competitive instincts

This week – Wednesday, March 6 – saw the EU’s Digital Markets Act come into force. As The Verge reminds us, the law is intended to give users more choice and control by forcing technology’s six biggest “gatekeepers” to embrace interoperability and avoid preferencing their own offerings across 22 specified services. The six: Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft. Alphabet’s covered list is the longest: advertising, app store, search engine, maps, and shopping, plus Android, Chrome, and YouTube. For Apple, it’s the app store, operating system, and web browser. Meta’s list includes Facebook, WhatsApp, and Instagram, plus Messenger, Ads, and Facebook Marketplace. Amazon: third-party marketplace and advertising business. Microsoft: Windows and internal features. ByteDance just has TikTok.

The point is to enable greater competition by making it easier for us to pick a different web browser, uninstall unwanted features (like Cortana), or refuse the collection and use of data to target us with personalized ads. Some companies are haggling. Meta, for example, is trying to get Messenger and Marketplace off the list, while Apple has managed to get iMessage removed. More notably, though, the changes Apple is making to support third-party app stores have been widely criticized as undermining any hope of success for independents.

Americans visiting Europe are routinely astonished at the number of cookie consent banners that pop up as they browse the web. Comments on Mastodon this week have reminded us that those banners were the websites’ own churlish choice: they implemented the 2009 Cookie Directive and 2018 General Data Protection Regulation in user-hostile ways. It remains to be seen how grown-up the technology companies will be in this new round of legal constraints. Punishing users won’t get the EU law changed.

***

The last couple of weeks have seen a few significant outages among Internet services. Two weeks ago, AT&T’s wireless service went down for many hours across the US after a failed software update. On Tuesday, while millions of Americans were voting in the presidential primaries, it was Meta’s turn, when a “technical issue” took out both Facebook and Instagram (and with the latter, Threads) for a couple of hours. Concurrently but separately, users of Ad Manager had trouble logging in at Google, and users of Microsoft Teams and exTwitter also reported some problems. Ironically, Meta’s outage could have been fixed faster if the engineers trying to fix it hadn’t had trouble gaining remote access to the servers they needed to repair (they couldn’t get into the physical building either, because their passes didn’t work).

Outages like these should serve as reminders not to put all your login eggs in one virtual container. If you use Facebook to log into other sites, besides the visibility you’re giving Meta into your activities elsewhere, those sites will be inaccessible any time Facebook goes down. In the case of AT&T, one reason this outage was so disturbing – the FCC is formally investigating it – is that the company has applied to get rid of its landlines in California. While lots of people no longer have landlines, they’re important in rural areas where cell service can be spotty, some services such as home alarm systems and other equipment depend on them, and they function in emergencies when electric power fails.

But they should also remind us that the infrastructure we’re deprecating in favor of “modern” Internet stuff was more robust than the new systems we’re increasingly relying on. A home with smart devices that cannot function without an uninterrupted Internet connection is far more fragile, with more points of failure, than one without them, just as a paper map still works when your phone is dead. At The Verge, Jennifer Pattison Tuohy tests a bunch of smart kitchen appliances, including a faucet you can operate via Alexa or Google voice assistants. As in digital microwave ovens, telling the faucet the exact temperature and flow rate you want…seems unnecessarily detailed. “Connect with your water like never before,” the faucet manufacturer’s website says. Given the direction of travel of many companies today, I don’t want new points of failure between me and water.

***

It has – already! – been three years since Australia’s News Media Bargaining Code led to Facebook and Google signing three-year deals that have primarily benefited Rupert Murdoch’s News Corporation, owner of most of Australia’s press. A week ago, Meta announced it will not renew the agreement. At The Conversation, Rod Sims, who chaired the commission that formulated the law, argues it’s time to force Meta into the arbitration the code created. At ABC Science, however, James Purtill counters that the often “toxic” relationship between Facebook and news publishers means that forcing the issue won’t solve the core problem of how to pay for news, since advertising has found elsewheres it would rather be. (Separately, in Europe, 32 media organizations covering 17 countries have filed a €2.1 billion lawsuit against Google, matching a similar one filed last year in the UK, alleging that the company abused its dominant position to deprive them of advertising revenue.)

Purtill predicts, I think correctly, that attempting to force Meta to pay up will instead lead it to ban news on Facebook, as it did in Canada following the passage of a similar law. Facebook needed news once; it doesn’t now. But societies do. Suddenly, I’m glad to pay the BBC’s license fee.

Illustrations: Red deer (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

To tell the truth

It was toward the end of Craig Wright’s cross-examination on Wednesday when, for the first time in many days, he was lost for words. Wright is in court because the non-profit Crypto Open Patent Alliance seeks a ruling that he is not, as he claims, bitcoin inventor Satoshi Nakamoto, who was last unambiguously heard from in 2011.

Over the preceding days, Wright had repeatedly insisted “I am the real Satoshi” and disputed the forensic analysis – anachronistic fonts, metadata, time stamps – that pronounced his proffered proofs forgeries. He was consistently truculent, verbose, and dismissive of everyone’s expertise but his own and of everyone’s degrees except the ones he holds. For example: “Meiklejohn has not studied cryptography in any depth,” he said of Sarah Meiklejohn, the now-professor who as a student in 2013 showed that bitcoin transactions are traceable. In a favorite moment, Jonathan Hough, KC, who did most of the cross-examination, interrupted a diatribe about the failings of the press with, “Moving on from your expertise on journalism, Dr Wright…”

Participants in a drinking game based on his saying “That is not correct” would be dead of alcohol poisoning. In between, he insisted several times that he never wanted to be outed as Satoshi, and wishes that everyone would “leave me alone and let me invent”. Any money he is awarded in court he will give to charities; he wants nothing for himself.

But at the moment we began with, he was visibly stumped. The question, regarding a variable on a GitHub page: “Do you know what unsigned means?”

Wright: “Basically, an unsigned variable…it’s not an integer with…it’s larger. I’m not sure how to say it.”

Lawyer: “Try.”

Wright: “How I’d describe it, I’m not quite sure. I’m not good with trying to do things like this.” He could explain it easily in writing… (Transcription by Norbert on exTwitter.)

The lawyer explained it thusly: an unsigned variable cannot be a negative number.

“I understand that, but would I have thought of saying it in such a simple way? No.”

Experience as a journalist teaches you that the better you understand something the more simply and easily you can explain it. Wright’s inability to answer blew the inadequately bolted door plug out of his world-expert persona. Everything until then could be contested: the stomped hard drive, the emails he wrote, or didn’t write, or wrote only one sentence of, the allegations that he had doctored old documents to make it look like he had been thinking about bitcoin before the publication of Satoshi’s foundational 2008 paper. But there’s no disguising lack of basic knowledge. “Should have been easy,” says a security professor (tenured, chaired) friend.
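For the record, the answer he couldn’t produce fits in two lines. A quick illustration using Python’s ctypes, standing in for the C-style types on the GitHub page at issue:

```python
import ctypes

# The same bit pattern, read through a signed and an unsigned 32-bit type:
print(ctypes.c_int32(-1).value)    # -1
print(ctypes.c_uint32(-1).value)   # 4294967295 - unsigned cannot go negative
```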

Normally, cryptography removes ambiguity. This is especially true of public key cryptography and its complementary pair of public and private keys. Being able to decrypt something with a well-attested public key is clear proof that it was encrypted with the complementary private key. Contrariwise, if a specific private key decrypts it, you know that key’s owner is the intended recipient. In both cases, as a bonus, you get proof that the text has not been altered since its encryption. It *ought* to be simple for Wright to support his claim by using Satoshi’s private keys. If he can’t do that, he must present a reason and rely on weaker alternatives.
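What such a proof would look like is routine. Here is a sketch using Ed25519 signatures from the pyca/cryptography library; bitcoin itself uses ECDSA over the secp256k1 curve, so this illustrates the principle rather than the actual mechanics Wright would need:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Only the private key's holder can produce a valid signature over a message...
private_key = ed25519.Ed25519PrivateKey.generate()
message = b"A message only the real Satoshi could sign."
signature = private_key.sign(message)

# ...and anyone holding the well-attested public key can check it.
public_key = private_key.public_key()
try:
    public_key.verify(signature, message)
    print("valid: the signer holds the private key")
except InvalidSignature:
    print("invalid: wrong key or altered message")
```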

Courts of law, on the other hand, operate on the balance of probabilities. They don’t remove ambiguity; they study it. Wright’s case is therefore a cultural clash, with far-reaching consequences. COPA is complaining that Wright’s repeated intellectual property lawsuits against developers working on bitcoin projects are expensive in both money and time. Soon after the unsigned variable exchange, the lawyer asked Wright what he will do if the court rules against him. “Move on to patents,” Wright said. He claims thousands of patents relating to bitcoin and the blockchain, and a brief glance at Google Patents shows many filings, some granted.

However this case comes out, therefore, it seems likely Wright will continue to try to control bitcoin. Wright insists that bitcoin isn’t meant to be “digital gold”, but that its true purpose is to facilitate micropayments. I haven’t “studied bitcoin in any depth” (as he might say), but as far as I can tell it’s far too slow, too resource-intensive, and too volatile to be used that way. COPA argues, I think correctly, that it’s the opposite of the world enshrined in Satoshi’s original paper; its whole point was to use cryptography to create the blockchain as a publicly attested, open, shared database that could eliminate central authorities such as banks.
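Some rough numbers behind that skepticism, using commonly cited ballpark figures rather than measurements:

```python
# Back-of-envelope bitcoin throughput.
seconds_per_block = 600   # one block roughly every ten minutes
txs_per_block = 2_500     # rough average for ~1-2MB blocks (assumption)

print(f"~{txs_per_block / seconds_per_block:.1f} transactions/second network-wide")
```

A network-wide ceiling in the single digits of transactions per second, before you even count fees and confirmation delays, is a poor fit for micropayments.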

In the Agatha Christie version of this tale, most likely Wright would be an imposter, an early hanger-on who took advantage of the gap formed by Satoshi’s disappearance and the deaths of other significant candidates. Dorothy Sayers would have Lord Peter Wimsey display unexpected mathematical brilliance to improve on Satoshi’s work, find him, and persuade him to turn over his keys and documents to king and country. Sir Arthur Conan Doyle would have both Moriarty and Sherlock Holmes on the trail. Holmes would get there first and send him into protection to ensure Moriarty couldn’t take criminal advantage. And then the whole thing would be hushed up in the public interest.

The case continues.

Illustrations: The cryptographic code from “The Dancing Men”, by Sir Arthur Conan Doyle (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Nefarious

Torrentfreak is reporting that OCLC, owner of the WorldCat database of bibliographic records, is suing the “shadow library” search engine Anna’s Archive. The claim: that Anna’s Archive hacked into WorldCat, copied 2.2TB of records, and posted them publicly.

Shadow libraries are the text version of “pirate” sites. The best-known is probably Sci-Hub, which provides free access to hundreds of thousands of articles from (typically expensive) scientific journals. Others such as Library Genesis and sites on the dark web offer ebooks. Anna’s Archive indexes as many of these collections as it can find; it was set up in November 2022, shortly after the web domains belonging to the then-largest of these book libraries, Z-Library, were seized by the US Department of Justice. Z-Library has since been rebuilt on the dark web, though it remains under attack by publishers and law enforcement.

Anna’s Archive also includes some links to the unquestionably legal and long-running Project Gutenberg, which publishes titles in the public domain in a wide variety of formats.

The OCLC-Anna’s Archive case has a number of familiar elements that are variants of long-running themes, open versus gatekept being the most prominent. Like many such sites (post-Napster), Anna’s Archive does not host files itself. That’s no protection from the law; authorities in various countries have nonetheless blocked or seized the domains belonging to such sites. But OCLC is not a publisher or rights holder, although it takes large swipes at Anna’s Archive for lawlessness and copyright infringement. Instead, it says Anna’s Archive hacked WorldCat, violating its terms and conditions, disrupting its business arrangements, and costing it $1.4 million and 10,000 employee hours in system remediation. It also complains that Anna’s Archive has posted the data in the aggregate for public download, and is “actively encouraging nefarious use of the data”. Other than the use of “nefarious”, there seems little to dispute about either claim; Anna’s Archive published the details in an October 2023 blog posting.

Anna’s Archive describes this process as “preserving” the world’s books for public access. OCLC describes it as “tortious interference” with its business. It wants the court to issue injunctive relief to make the scraping and use of the data stop, compensatory damages in excess of $75,000, punitive damages, costs, and whatever else the court sees fit. The sole named defendant is a US citizen, María A. Matienzo, thought to be resident near Seattle. If the identification and location are correct, that’s a high-risk situation to be in.

In the blog posting, Anna’s Archive writes that its initial goal was to answer the question of what percentage of the world’s published books are held in shadow libraries and create a to-do list of gaps to fill. To answer these questions, they began by scraping ISBNdb, the database of publications with ISBNs, which only came into use in 1970. When the overlap with the Internet Archive’s Open Library and the seized Z-Library was less than they hoped, they turned to WorldCat. At that point, they openly say that security flaws in the fortuitously redesigned WorldCat website allowed them to grab more or less the comprehensive set of records. While scraping can be legal, exploiting security flaws to gain unauthorized access to a computer system likely violates the widely criticized Computer Fraud and Abuse Act (1986), which could be a felony. OCLC has, however, brought a civil case.

Anna’s Archive also searches the Internet Archive’s Open Library, founded in 2006. In 2009, co-creator Aaron Swartz told me that he believed the creation of Open Library pushed OCLC into opening up greater public access to the basic tier of its bibliographic data. The Open Library currently has its own legal troubles; it lost in court in August 2023 after Hachette sued it for copyright infringement. The Internet Archive is appealing; in the meantime it is required to remove, on request of any member of the Association of American Publishers, any book commercially available in electronic format.

OCLC began life as the Ohio College Library Center; its WorldCat database is a collaboration between it and its member libraries to create a shared database of bibliographic records and enable online cataloguing. The last time I wrote about it, in 2009, critics were complaining that libraries in general were failing to bring book data onto the open web. It has gotten a lot better in the years since, and many local libraries are now searchable online and enable their card holders to borrow from their holdings of ebooks over the web.

The fact that it’s now often possible to borrow ebooks from libraries should mean there’s less reason to use unauthorized sites. Nonetheless, these still appeal: they have the largest catalogues, the most convenient access, DRM-free files, and no time limits, so you can read them at your leisure using the full-featured reader you prefer.

In my 2009 piece, an OCLC spokesperson fretted about “over-exploitation” and worried that there would be no good way to maintain or update countless unknown scattered pockets of data – seemingly a solvable problem.

OCLC and its member libraries are all non-profit organizations ultimately funded by taxpayers. The data they collect has one overriding purpose: to facilitate public access to libraries’ holdings by showing who holds what books in which editions. What are “nefarious” uses? Arguably, the data they collect should be public by right. But that’s not the question the courts will decide.

Illustrations: The New York Public Library, built 1911 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.