Intents and purposes

One of the basic principles of data protection law is the requirement for consent for change of use. For example, giving a site a mobile number for two-factor authentication doesn’t entitle it to sell that number to a telemarketing company. Providing a home address to enable package delivery doesn’t also invite ads trying to manipulate my vote in an election. Governments, too, are subject to data protection law, but they have more scope than most to carve out – or simply take – exceptions for themselves.

And so to the UK’s Department for Work and Pensions, whose mission in life is supposed to be to provide people with the financial support the state has promised them, whether that’s welfare or state pensions – overall, about 23 million people. Schools Week reports that Jen Persson at Defend Digital Me has discovered that the DWP has a secret deal with the Department for Education granting it access to the National Pupil Database for the purpose of finding benefit fraud.

“Who knows their family’s personal confidential records are in the haystack used to find the fraudulent needle?” Persson asks.

Every part of this is a mess. First of all, it turns schools into hostile environments for those already at greatest risk. Second, as we saw as long ago as 2010, parents and children have little choice about the data schools collect and keep. The breadth and depth of this data has been expanding long enough to burn out the UK’s first campaigner on children’s privacy rights (Terri Dowty, with Action for Rights of Children), and keep the second (Persson) fully occupied for some years now.

Persson told Schools Week that more than 15 million of the people on the NPD have long since left school. That sounds right; the database was created in 2002, five years into Tony Blair’s database-loving Labour government. In the 2009 report Database State, written under the aegis of the Foundation for Information Policy Research, Ross Anderson, Terri Dowty, Philip Inglesant, William Heath, and Angela Sasse surveyed 46 government databases. They found that a quarter of them were “almost certainly illegal” under human rights or data protection law, and noted that Britain was increasingly centralizing all such data.

“The emphasis on data capture, form-filling, mechanical assessment and profiling damages professional responsibility and alienates the citizen from the state. Over two-thirds of the population no longer trust the government with their personal data,” they wrote then.

The report was published while Blair’s government was trying to implement the ID card enshrined in the 2006 ID Cards Act. The latest in a long string of such proposals since ID cards were withdrawn after the end of World War II, it was ultimately squelched when David Cameron’s coalition government took office in 2010. The act was repealed in 2011.

These bits of history are relevant for three reasons: 1) there is no reason to believe that the Labour government everyone expects will win office in the next nine months will be any less keen on dataveillance; 2) tackling benefit fraud was what they claimed they wanted the ID card for in 2006; 3) you really don’t need an ID *card* if you have biometrics and ubiquitous, permanent access online to a comprehensive government database. This was obvious even in 2006, and now we’re seeing it in action.

Dowty often warned that children were used as experimental subjects on which British governments sharpened the policies they intended to expand to the rest of the population. And so it is proving: the use of education data to look for benefit fraud is the opening act for the provision in the Data Protection and Digital Information bill empowering the DWP to demand account data from banks and other financial institutions, again to reduce benefit fraud.

The current government writes, “The new proposals would allow regular checks to be carried out on the bank accounts held by benefit claimants to spot increases in their savings which push them over the benefit eligibility threshold, or when people send [sic] more time overseas than the benefit rules allow for.” The Information Commissioner’s Office has called the measure disproportionate, and says it does not provide sufficient safeguards.

Big Brother Watch, which is campaigning against this proposal, argues that it reverses the fundamental principle of the presumption of innocence. All pervasive “monitoring” does that; you are continuously a suspect except at the specific points where you’ve been checked and found innocent.

In a commercial context, we’d call the coercion implicit in repurposing data given under compulsion bait and switch. We’d also bear in mind the Guardian’s recent exposé: the DWP has been demanding back huge sums of money from carers who’ve made minor mistakes in reporting their income. As BBW also wrote, even a tiny false positive rate will give the DWP hundreds of thousands of innocent people to harass.

Thirty years ago, when I was first learning about the dangers of rampant data collection, it occurred to me that the only way you can ensure that data can’t be leaked, exploited, or used maliciously is to not collect it in the first place. This isn’t a choice anyone can make now. But there are alternatives that reverse the trend toward centralization that Anderson et al. identified in 2009.

Illustrations: Haystacks at a Moldovan village (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Deja news

At the first event organized by the University of West London group Women Into Cybersecurity, a questioner asked how the debates around the Internet have changed since I wrote the original 1997 book net.wars.

Not much, I said. Some chapters have dated, but the main topics are constants: censorship, freedom of speech, child safety, copyright, access to information, digital divide, privacy, hacking, cybersecurity, and always, always, *always* access to encryption. Around 2010, there was a major change when the technology platforms became big enough to protect their users and business models by opposing government intrusion. That year Google launched the first version of its annual transparency report, for example. More recently, there’s been another shift: these companies have engorged to the point where they need not care much about their users or fear regulatory fines – the stage Ed Zitron calls the rot economy and Cory Doctorow dubs enshittification.

This is the landscape against which we’re gearing up for (yet) another round of recursion. April 25 saw the passage of amendments to the UK’s Investigatory Powers Act (2016). These are particularly charmless, as they expand the circumstances under which law enforcement can demand access to Internet Connection Records, allow the government to require “exceptional lawful access” (read: backdoored encryption), and require technology companies to get permission before issuing security updates. As Mark Nottingham blogs, no one should have this much power. In any event, the amendments reanimate bulk data surveillance and backdoored encryption.

Also winding through Parliament is the Data Protection and Digital Information bill. The IPA amendments threaten national security by demanding the power to weaken protective measures; the data bill threatens to undermine the adequacy decision under which the UK’s data protection law is deemed to meet the requirements of the EU’s General Data Protection Regulation. Experts have already warned that adequacy is at risk. If this government proceeds, as it gives every indication of doing, the next, presumably Labour, government may find itself awash in an economic catastrophe as British businesses become persona-non-data to their European counterparts.

The Open Rights Group warns that the data bill makes it easier for government, private companies, and political organizations to exploit our personal data while weakening subject access rights, accountability, and other safeguards. ORG is particularly concerned about the impact on elections, as the bill expands the range of actors who are allowed to process personal data revealing political opinions on a new “democratic engagement activities” basis.

If that weren’t enough, another amendment also gives the Department for Work and Pensions the power to monitor all bank accounts that receive payments, including the state pension – to reduce overpayments and other types of fraud, of course. It would also cover any bank account connected to those accounts, such as those of landlords, carers, parents, and partners. At Computer Weekly, Bill Goodwin suggests that the upshot could be to deter landlords from renting to anyone receiving state benefits or entitlements. The idea is that banks will use criteria we can’t access to flag up accounts for the DWP to inspect more closely, and over the mass of 20 million accounts there will be plenty of mistakes to go around. Safe prediction: there will be horror stories of people denied benefits without warning.

And in the EU… TechCrunch reports that the European Commission (always more surveillance-happy and less human rights-friendly than the European Parliament) is still pursuing its proposal to require messaging platforms to scan private communications for child sexual abuse material. Let’s do the math of truly large numbers: billions of messages, even a teeny-tiny error rate, literally millions of false positives! On Thursday, a group of scientists and researchers sent an open letter pointing out exactly this. Automated detection technologies perform poorly, innocent images may occur in clusters (as when a parent sends photos to a doctor), and such a scheme requires weakening encryption; in any case, it would be better to focus on eliminating child abuse (taking CSAM along with it).
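
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch; the message volume and error rate are illustrative assumptions on my part, not figures from the Commission’s proposal or the researchers’ letter.

    # Back-of-the-envelope estimate of false positives from scanning every message.
    # Both numbers below are illustrative assumptions, not official figures.
    messages_per_day = 10_000_000_000   # assume roughly 10 billion private messages a day in the EU
    false_positive_rate = 0.001         # assume a generously optimistic 0.1% error rate

    flagged_innocent = messages_per_day * false_positive_rate
    print(f"{flagged_innocent:,.0f} innocent messages flagged per day")
    # prints: 10,000,000 innocent messages flagged per day

Even with those charitable assumptions, that is ten million innocent messages flagged every single day; each one is a person, a family photo, or a medical consultation queued for human review.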

Finally, age verification, which has been pending in the UK since at least 2016, is becoming a worldwide obsession. At least eight US states and the EU have laws mandating age checks, and the Age Verification Providers Association is pushing to make the Internet “age-aware persistently”. Last month, the BSI convened a global summit to kick off the work of developing a worldwide standard. These moves are the latest push against online privacy; age checks will be applied to *everyone*, and while they could be designed to respect privacy and anonymity, the most likely outcome is that they won’t be. In 2022, the French data protection regulator, CNIL, found that current age verification methods are both intrusive and easily circumvented. In the US, Casey Newton is watching a Texas case about access to online pornography and age verification that threatens to challenge First Amendment precedent in the Supreme Court.

Because the debates are so familiar – the arguments rarely change – it’s easy to overlook how profoundly all this could change the Internet. An age-aware Internet where all web use is identified, where encrypted messaging services have shut down rather than compromise their users, and where every action is suspicious until judged harmless…those are the stakes.

Illustrations: Angel sensibly smashes the ring that makes vampires impervious (in Angel, “In the Dark” (S01e03)).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Selective enforcement

This week, as a rider to the 21st Century Peace Through Strength Act, which provides funding for defense in Ukraine, Israel, and Taiwan, the US Congress passed provisions for banning the distribution of TikTok if owner ByteDance has not divested it within 270 days. President Joe Biden signed it into law on Wednesday, and, as Mike Masnick says at Techdirt, ByteDance’s lawsuit is expected imminently, largely on First Amendment grounds. The ACLU agrees. Similar arguments won when ByteDance challenged a 2023 Montana law.

For context: Pew Research says TikTok is the fifth-most popular social media service in the US. An estimated 150 million Americans – and 62% of 18-29-year-olds – use it.

The ban may not be a slam-dunk to fail in court. US law, including the constitution, contains many restrictions on foreign influence, from requiring registration for those acting as agents to requiring presidents to have been born US citizens. Until 2017, foreigners were barred from owning US broadcast networks.

So it seems to this non-lawyer as though a lot hinges on how the court defines TikTok and what precedents apply. This is the kind of debate that goes back to the dawn of the Internet: is a privately-owned service built of user-generated content more like a town square, a broadcaster, a publisher, or a local pub? “Broadcast”, whether over the air or via cable, implies being assigned a channel on a limited resource; this clearly doesn’t apply to apps and services carried over the presumably-infinite Internet. Publishing implies editorial control, which social media lacks. A local pub might be closest: privately owned, it’s where people go to connect with each other. “Congress shall make no law…abridging the freedom of speech”…but does that cover denying access to one “place” where speech takes place when there are many other options?

TikTok is already banned in Pakistan, Nepal, and Afghanistan, and also India, where it is one of 500 apps that have been banned since 2020. ByteDance will argue that the ban hurts US creators who use TikTok to build businesses. But as NPR reports, in India YouTube and Instagram rolled out short video features to fill the gap for hyperlocal content that the loss of TikTok opened up, and four years on creators have adapted to other outlets.

It will be more interesting if ByteDance claims the company itself has free speech rights. In a country where commercial companies and other organizations are deemed to have “free speech” rights entitling them to donate as much money as they want to political causes (as per the Supreme Court’s ruling in Citizens United v. Federal Election Commission), that might make a reasonable argument.

On the other hand, there is no question that this legislation is full of double standards. If another country sought to ban any of the US-based social media, American outrage would be deafening. If the issue is protecting the privacy of Americans against rampant data collection, then, as Free Press argues, pass a privacy law that will protect Americans from *every* service, not just this one. The claim that the ban is to protect national security is weakened by the fact that the Chinese government, like apparently everyone else, can buy data on US citizens even if it’s blocked from collecting it directly from ByteDance.

Similarly, if the issue is the belief that social media inevitably causes harm to teenagers, as author and NYU professor Jonathan Haidt insists in his new book, then again, why only pick on TikTok? Experts who have really studied this terrain, such as Danah Boyd and others, insist that Haidt is oversimplifying and pushing parents to deny their children access to technologies whose influence is largely positive. I’m inclined to agree; between growing economic hardship, expanding wars, and increasing climate disasters young people have more important things to be anxious about than social media. In any case, where’s the evidence that TikTok is a bigger source of harm than any other social medium?

Among digital rights activists, the most purely emotional argument against the TikTok ban revolves around the original idea of the Internet as an open network. Banning access to a service in one country (especially the country that did the most to promote the Internet as a vector for free speech and democratic values) is, in this view, a dangerous step toward the government control John Perry Barlow famously rejected in 1996. And yet, to increasing indifference, no-go signs are all over the Internet. *Six* years after GDPR came into force, Europeans are still blocked from many US media sites that can’t be bothered to comply with it. Many other media links don’t work because of copyright restrictions, and on and on.

The final double standard is this: a big element in the TikTok ban is the fear that the Chinese government, via its control over companies hosted there, will have access to intimate personal information about Americans. Yet for more than 20 years this has been the reality for non-Americans using US technology services outside the US: their data is subject to NSA surveillance. This, and the lack of redress for non-Americans, is what Max Schrems’ legal cases have been about. Do as we say, not as we do?

Illustrations: TikTok CEO Shou Zi Chew, at the European Commission in 2024 (by Lukasz Kobus at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Borderlines

Think back to the year 2000. New York’s World Trade Center still stood. Personal digital assistants were a niche market. There were no smartphones (the iPhone arrived in 2007) or tablets (the iPad took until 2010). Social media was nascent; Facebook first opened in 2004. The Good Friday agreement was just two years old, and for many in Britain “terrorists” were still “Irish”. *That* was when the UK passed the Terrorism Act (2000).

Usually when someone says the law can’t keep up with technological change they mean that technology can preempt regulation at speed. What the documentary Phantom Parrot shows, however, is that technological change can profoundly alter the consequences of laws already on the books. The film’s worked example is Schedule 7 of the 2000 Terrorism Act, which empowers police to stop, question, search, and detain people passing through the UK’s borders. They do not need prior authority or suspicion, but may only stop and question people for the purpose of determining whether the individual may be or have been concerned in the commission, preparation, or instigation of acts of terrorism.

Today this law means that anyone arriving at the UK border may be compelled to unlock access to data charting their entire lives. The Hansard record of the debate on the bill shows clearly that lawmakers foresaw problems: the classification of protesters as terrorists, the uselessness of fighting terrorism by imprisoning the innocent (Jeremy Corbyn), the reversal of the presumption of innocence. But they could not foresee how far-reaching the powers the bill granted would become.

The film’s framing story begins in November 2016, when Muhammed Rabbani arrived at London’s Heathrow Airport from Doha and was stopped and questioned by police under Schedule 7. They took his phone and laptop and asked for his passwords. He refused to supply them. On previous occasions, when he had similarly refused, they’d let him go. This time, he was arrested. Under Schedule 7, the penalty for such a refusal can be up to three months in jail.

Rabbani is managing director of CAGE International, a human rights organization that began by focusing on prisoners seized under the war on terror and expanded its mission to cover “confronting other rule of law abuses taking place under UK counter-terrorism strategy”. Rabbani’s refusal to disclose his passwords was, he said later, because he was carrying 30,000 confidential documents relating to a client’s case. A lawyer can claim client confidentiality, but an NGO cannot. In 2018, the appeals court ruled the password demands were lawful.

In September 2017, Rabbani was convicted. He was given a 12-month conditional discharge and ordered to pay £620 in costs. As Rabbani says in the film, “The law made me a terrorist.” No one suspected him of being a terrorist or of placing anyone in danger, but the judge made clear she had no choice under the law, and so he was nonetheless convicted of a terrorism offense. On appeal in 2018, his conviction was upheld. We see him collect his returned devices – five years on from his original detention.

Britain is not the only country that regards him with suspicion. Citing his conviction, in 2023 France banned him, and, he claims, Poland deported him.

Unsurprisingly, CAGE is on the first list of groups that may be dubbed “extremist” under the new definition of extremism released last week by communities secretary Michael Gove. The direct consequence of this designation is a ban on participation in public life – chiefly, meetings with central and local government. The expansion of the meaning of “extremist”, however, is alarming activists on all sides.

Director Kate Stonehill tells the story of Rabbani’s detention partly through interviews and partly through a reenactment using wireframe-style graphics and a synthesized voice that reads out questions and answers from the interview transcripts. A cello of doom provides background ominance. Laced through this narrative are others. A retired law enforcement officer teaches a class in using extraction and analysis tools, in which we see how extensive the information available to them really is. Ali Al-Marri and his lawyer review his six years of solitary detention as an enemy combatant in Charleston, South Carolina. Lastly, Stonehill calls on Ryan Gallagher’s reporting, which exposed the titular Phantom Parrot, the program to exploit the data retained under Schedule 7. There are no records of how many downloads have been taken.

The retired law enforcement officer’s class is practically satire. While saying that he himself doesn’t want to be tracked for safety reasons, he tells students to grab all the data they can when they have the opportunity. They are in Texas: “Consent’s not even a problem.” Start thinking outside of the box, he tells them.

What the film does not stress is this: rights are largely suspended at all borders. In 2022, the UK extended Schedule 7 powers to include migrants and refugees arriving in boats.

The movie’s future is bleak. At the Chaos Communication Congress, a speaker warns that gait recognition, eye movement detection, speech analysis (accents, emotion), and other types of analysis will be much harder to escape and will enable watchers to do far more with the ever-vaster stores of data collected from and about each of us.

“These powers are capable of being misused,” said Douglas Hogg in the 1999 Commons debate. “Most powers that are capable of being misused will be misused.” The bill passed 210-1.

Illustrations: Still shot from the wireframe reenactment of Rabbani’s questioning in Phantom Parrot.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The bridge

Seven months ago, Mastodon was fretting about Meta’s newly-launched Threads. The issue: Threads, which was built on top of Instagram’s user database, had said it complied with the ActivityPub protocol, which allows Mastodon servers (“instances”) to federate with any other service that also uses that protocol. The threat that Threads would become interoperable and that potentially millions of Threads users would swamp Mastodon, ignoring its existing social norms and culture, created an existential dilemma: to federate or not to federate?

Today, Threads’ integration is still just a plan.

Instead, it seems the first disruptive arrival looks set to be Bluesky, created by a team backed by Twitter co-founder Jack Dorsey, with the connection facilitated by a third party. Bluesky wrote a new open source protocol, AT, so the proposal isn’t federation with Mastodon but a bridge, as Amanda Silberling reports at TechCrunch. According to Silberling’s numbers, year-old Bluesky stands at 4.8 million users to Mastodon’s 8.7 million. Anyone familiar with the history of AOL’s gateway to Usenet will tell you that’s big enough to disrupt existing social norms. The AOL exercise became known as Eternal September (because every September Usenet had to ingest a new generation of incoming university freshmen; AOL’s arrival made the influx permanent).

There are two key differences, however. First, a third of those Bluesky users are new to that system, having joined only last week, when the service opened fully to the public. They will bring challenges to the culture Bluesky has so far developed. Second, AOL’s gateway was unidirectional: AOLers could read and post to Usenet newsgroups, but Usenet posters could not read anything on AOL without paying for access. The Bluesky-Mastodon bridge is planned to be bidirectional, so anything posted publicly on one service would be accessible to both – or to outsiders using BridgyFed to connect via website feeds.

I haven’t spent a lot of time on Bluesky, but it’s clear it and Mastodon have different cultures. Friends who spend more time there say Bluesky has a “weirdness” they like and is less “scoldy” than Mastodon, where long-time users tended to school incoming ex-Twitter users in 2022 on their mistakes. That makes sense, when you consider that Mastodon has had time since its 2016 founding to develop an existing culture that newcomers are joining, whereas Bluesky was a closed beta until last week, and its users to date were the ones defining its culture for the future. The newcomers of the past week may have a very different experience.

Even if they don’t, there’s a fundamental economic difference that no technology can bridge: Mastodon is a non-profit cooperative endeavor, while Bluesky has venture capital funding, although the list of investors is not the usual suspects. Social media users have often been burned by corporate business decisions. It’s therefore easy to believe that the $8 million in seed funding will lead inevitably to user data exploitation, no matter what they say now about being determined to find a different and more sustainable business model based on selling ancillary services. Even if that strategy works, later owners or the dictates of shareholders may demand higher profits via a pivot to advertising, just as the Netflix and Amazon Prime streaming services are doing now.

Designing any software involves making rules for how it will operate and setting defaults. Here’s where the project hit trouble: should it be opt-out, so that users who don’t want their posts to be visible outside their home system have to specifically turn it off, or opt-in, so that users who want their posts published far and wide have to turn it on? BridgyFed’s creator, Ryan Barrett, chose opt-out. It was immediately divisive: privacy versus openness.
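
For readers unfamiliar with the distinction, here is a minimal sketch of what the two defaults amount to; the setting name and code are hypothetical illustrations on my part, not BridgyFed’s actual implementation.

    # Hypothetical illustration of opt-out versus opt-in defaults; not BridgyFed's real code.
    OPT_OUT_DEFAULT = {"bridge_my_posts": True}    # bridged unless the user switches it off
    OPT_IN_DEFAULT = {"bridge_my_posts": False}    # private unless the user switches it on

    def is_bridged(user_settings: dict, defaults: dict) -> bool:
        # A post crosses the bridge only if the effective setting allows it.
        return user_settings.get("bridge_my_posts", defaults["bridge_my_posts"])

With the first default, a user’s silence means publication; with the second, silence means privacy. That is precisely what the dispute was about.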

Silberling reports that Barrett has fashioned a solution, giving users warning pop-ups and a chance to decline if someone from another service tries to follow them, and is thinking more carefully about the risks to safety his bridge might bring.

That’s great, but the next guy may not be so willing to reconsider. As we’ve observed before, there is no way to restrict the use of open protocols without closing them and putting them under centralized control – which is the opposite of the federated, decentralized systems Mastodon and Bluesky were created to build.

In a federated system anything one person can open another can close. Individual admins will decide for their users how their instances will operate. Those who don’t like their choice will be told they can port their accounts to one whose policies they prefer. That’s true, but unsatisfying as an answer. As the “Fediverse” grows, it must accommodate millions of mainstream users for whom moving servers is too complicated.

The key point, however, is that the illusion of control Mastodon seemed to offer is being punctured. Usenet users could have warned them: from its creation in 1979, users believed their postings were readable for a few weeks before expiring and being expunged. Then, in 1995, Steve Madere created the Deja News archive from scattered collections. Overnight, those “ephemeral” postings became permanent and searchable – and even more so, after 2001, when Google bought the archive (see groups.google.com).

The upshot: privacy in public networks is only ever illusory. Assume you have no control over anything you post, no matter how cozy and personal the network seems. As we’ve said before, the privacy-in-public afforded by the physical world has no online counterpart.

Illustrations: A mastodon by Heinrich Harder (public domain, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Relativity

“Status: closed,” the website read. It gave the time as 10:30 p.m.

Except it wasn’t. It was 5:30 p.m., and the store was very much open. The website, instead of consulting the time zone of the store – I mean, of the store’s particular branch whose hours and address I had looked up – was taking the time from my laptop. Which I hadn’t bothered to switch to the US east coast from Britain because I can subtract five hours in my head and why bother?
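
The fix is conceptually simple: compute “open or closed” in the store’s own time zone rather than the visitor’s. Here is a minimal sketch of the idea, where the branch’s time zone and opening hours are assumptions for illustration only.

    # Decide open/closed in the store's time zone, not the visitor's.
    # The time zone and hours below are illustrative assumptions.
    from datetime import datetime, time, timezone
    from zoneinfo import ZoneInfo

    STORE_TZ = ZoneInfo("America/New_York")   # the branch's time zone, not the laptop's
    OPENS, CLOSES = time(9, 0), time(21, 0)

    def store_is_open(now: datetime) -> bool:
        local = now.astimezone(STORE_TZ)      # convert whatever clock we have to store-local time
        return OPENS <= local.time() < CLOSES

    print(store_is_open(datetime.now(timezone.utc)))

Taking the visitor’s clock instead is exactly how a branch that is open at 5:30 p.m. local time gets reported as closed at 10:30 p.m. somewhere else.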

Years ago, I remember writing a rant (which I now cannot find) about the “myness” of modern computers: My Computer, My Documents. My account. And so on, like a demented two-year-old who needed to learn to share. The notion that the time on my laptop determined whether or not the store was open had something of the same feel: the computational universe I inhabit is designed to revolve around me, and any dispute with reality is someone else’s problem.

Modern social media have hardened this approach. I say “modern” because back in the days of bulletin board systems, information services, and Usenet, postings were time- and date-stamped with when they were sent, and a time zone was specified. Now, every post is labelled “2m” or “30s” or “1d”, so the actual date and time are hidden behind their relationship to “now”. It’s like those maps that rotate along with you so wherever you’re pointed physically is at the top. I guess it works for some people, but I find it disorienting; instead of the map orienting itself to me, I want to orient myself to the map. This seems to me my proper (infinitesimal) place in the universe.

All of this leads up to the revival of software agents. This was a Big Idea in the late 1990s/early 2000s, when it was commonplace to think that the era of having to make appointments and book train tickets was almost over. Instead, software agents configured with your preferences would do the negotiating for you. Discussions of this sort of thing died away as the technology never arrived. Generative AI has brought this idea back, at least to some extent, particularly in the financial area, where smart contracts can be used to set rules and then run automatically. I think only people who never have to worry about being able to afford anything will like this. But they may be the only ones the “market” cares about.

Somewhere during the time when software agents were originally mooted, I happened to sit at a conference dinner with the University of Maryland human-computer interaction expert Ben Shneiderman. There are, he said, two distinct schools of thought in software. In one, software is meant to adapt to the human using it – think of predictive text and smartphones as an example. In the other, software is consistent, and while using it may be repetitive, you always know that x command or action will produce y result. If I remember correctly, both Shneiderman and I were of the “want consistency” school.

Philosophically, though, these twin approaches have something in common with seeing the universe as if the sun went around the earth as against the earth going around the sun. The first of those makes our planet and, by extension, us far more important in the universe than we really are. The second cuts us down to size. No surprise, then, if the techbros who build these things, like the Catholic church in Galileo’s day, prefer the former.

***

Politico has started the year by warning that the UK is seeking to expand its surveillance regime even further by amending the 2016 Investigatory Powers Act. Unnoticed in the run-up to Christmas, the industry body techUK sent a letter to “express our concerns”. The short version: the bill expands the definition of “telecommunications operator” to include non-UK providers when operating outside the UK; allows the Home Office to require companies to seek permission before making changes to a privately and uniquely specified list of services; and the government wants to whip it through Parliament as fast as possible.

No, no, Politico reports the Home Office told the House of Lords, it supports innovation and isn’t threatening encryption. These are minor technical changes. But: “public safety”. With the ink barely dry on the Online Safety Act, here we go again.

***

As data breaches go, the one recently reported by 23andMe is alarming. By using passwords exposed in previous breaches (“credential stuffing”) to break into 14,000 accounts, attackers gained access to 6.9 million account profiles. The reason is reminiscent of the Cambridge Analytica scandal, where access to a few hundred thousand Facebook accounts was leveraged to obtain the data of millions: people turned on “DNA Relatives” to allow themselves to be found by those searching for genetic relatives. The company, which afterwards turned on a requirement for two-factor authentication, is fending off dozens of lawsuits by blaming the users for reusing passwords. According to Gizmodo, the legal messiness is considerable, as the company recently changed its terms and conditions to make arbitration more difficult and litigation almost impossible.
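
The leverage involved is worth spelling out; this is nothing more than simple division on the figures reported above.

    # Amplification in the 23andMe breach: each directly compromised account
    # exposed, on average, hundreds of relatives' profiles via DNA Relatives.
    breached_accounts = 14_000
    exposed_profiles = 6_900_000
    print(round(exposed_profiles / breached_accounts))   # roughly 493 profiles per breached account

Every reused password, in other words, unlocked the genetic connections of nearly 500 other people who had done nothing wrong.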

There’s nothing good to say about a data breach like this or a company that handles such sensitive data with such disdain. But it’s yet one more reason why putting yourself at the center of the universe is bad hoodoo.

Illustrations: DNA strands (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

A surveillance state of mind

“Do computers automatically favor authoritarianism?” a friend asked recently. Or, are they fundamentally anti-democratic?

Certainly, at the beginning, many thought that both the Internet and personal computers (think, for example, of Apple’s famed Super Bowl ad, “1984”) – would favor democratic ideals by embedding values such as openness, transparency, and collaborative policy-making in their design. Universal access to information and to networks of distribution was always going to have downsides, but on balance was going to be a Good Thing (actually, I still believe this). So, my friend was asking, were those hopes always fundamentally absurd, or were the problems of disinformation and widespread installation of surveillance technology always inevitable for reasons inherent in the technology itself?

Computers, like all technology, are what we make them. But one fundamental characteristic does seem to me unavoidable: they upend the distribution of data-related costs. In the physical world, more data always involved more expense: storing it required space, and copying or transmitting it took time, ink, paper, and personnel. In the computer world, more data is only marginally more expensive, and what costs remain have kept falling for 70 years. For most purposes, more digital data incurs minimal costs. The expenses of digital data only kick in when you curate it: selection and curation take time and personnel. So the easiest path with computer data is always to keep it. In that sense, computers inevitably favor surveillance.

The marketers at the companies that collect this data try to argue that collection is a public *good* because it enables them to offer personalized services that benefit us. Underneath, of course, there are too many economic incentives for them not to “share” – that is, sell – it onward, creating an ecosystem that sends our data careening all over the place, and where “personalization” becomes “surveillance” and then, potentially, “malveillance”, which is definitely not in our interests.

At a 2011 workshop on data abuse, participants noted that the mantra of the day was “the data is there, we might as well use it”. At the time, there was a definite push from the industry to move from curbing data collection to regulating its use instead. But this is the problem: data is tempting. This week has provided a good example of just how tempting, in the form of a provision in the UK’s criminal justice bill that will allow police to use the database of driver’s license photos for facial recognition searches. “A permanent police lineup,” privacy campaigners are calling it.

As long ago as 1996, the essayist and former software engineer Ellen Ullman called out this sort of temptation, describing it as a system “infecting” its owner. Data tempts those with access to it to ask questions they couldn’t ask before. In many cases that’s good. Data enables Patrick Ball’s Human Rights Data Analysis Group to establish “who did what to whom” in cases of human rights abuse. But, on the downside, in Ullman’s example it undermines the trust between a secretary and her boss, who realizes he can use the system to monitor her work, despite prior decades of trust. In the UK police example, the downside is tempting the authorities to combine the country’s extensive network of CCTV images and the largest database of photographs of UK residents. “Crime scene investigations,” say police and ministers. “Chill protests,” the rest of us predict. In a story I’m writing for the successor to the Cybersalon anthology Twenty-Two Ideas About the Future, I imagined a future in which police have the power and technology to compel every camera in the country to join a national network they control. When it fails to solve an important crime of the day, they successfully argue it’s because the network’s availability was too limited.

The emphasis on personalization as a selling point for surveillance – if you turn it off you’ll get irrelevant ads! – reminds me that studies of astrology starting in 1949 have found that people’s rating of their horoscopes varies directly with how personalized they perceive them to be. The horoscope they are told has been drawn up just for them by an astrologer gets much higher ratings than the horoscope they are told is generally true of people with their sun sign – even when it’s the *same* horoscope.

Personalization is the carrot businesses use to get us to feed our data into their business models; their privacy policies dictate the terms. Governments can simply compel disclosure as a requirement for a benefit we’re seeking – like the photo required to get a driver’s license, passport, or travel pass. Or, under greater duress, to apply for or await a decision about asylum, or try to cross a border.

“There is no surveillance state,” then-Home Secretary Theresa May said in 2014. No, but if you put all the pieces in place, a future government of a malveillance state of mind can turn it on at will.

So, going back to my friend’s question. Yes, of course we can build the technology so that it favors democratic values instead of surveillance. But because of that fundamental characteristic that makes creating and retaining data the default and the business incentives currently exploiting the results, it requires effort and thought. It is easier to surveil. Malveillance, however, requires power and a trust-no-one state of mind. That’s hard to design out.

Illustrations: The CCTV camera at 22 Portobello Road, where George Orwell lived circa 1927.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

New phone, who dis?

So I got a new phone. What makes the experience remarkable is that the old phone was a Samsung Galaxy Note 4, which, if Wikipedia is correct, was released in 2014. So the phone was at least eight, probably nine, years old. When you update incrementally, like a man who gets his hair cut once a week, it’s hard to see any difference. When you leapfrog numerous generations of updates, it’s like seeing the man who’s had his first haircut in a year: it’s a shock.

The tl;dr: most of what I don’t like about the switch is because of Google.

There were several reasons why I waited so long. It was a good enough phone and it had a very good camera for its time; I finessed the lack of security updates by not using the phone for functions where it mattered. Also, I didn’t want to give up the disappearing headphone jack, home button, or, especially, user-replaceable battery. The last of those is why I could keep the phone for so long, and it was the biggest deal-breaker.

For that reason, I’ve known for years that the Note’s eventual replacement would likely be a Fairphone, a Dutch outfit that is doing its best to produce sustainable phones. It’s repairable and user-upgradable (it takes one screwdriver to replace a cracked screen or the camera), and changing the battery takes a second. I had to compromise on the headphone jack, which requires a USB-C dongle. Not having the home button is hard to get used to; I used it constantly. It turns out, though, that it’s even harder to get used to not having the soft button on the bottom left that used to show me recently used apps so I could quickly switch back to the thing I was using a few minutes ago. But that…is software.

The biggest and most noticeable change between Android 6 (the Note 4 got its last software update in 2017) and Android 13 (last week) is the assumptions both Android chief Google and the providers of other apps make about what users want. On the Note 4, I had a quick-access button to turn the wifi on and off. Except for the occasional call over Signal, I saw no reason to keep it on to drain the battery unnecessarily. Today, that same switch is buried several layers deep in settings with apparently no way to move it into the list of quick-access functions. That’s just one example. But no accommodation for my personal quirks can change the sense of being bullied into giving away more data and control than I’d like.

Giving in to Google does, however, mean an easy transfer of your old phone’s contents to your new phone (if transferring the external SD card isn’t enough).

Too late I remembered the name Murena – a company that equips Fairphones with de-Googlified Android. As David Pierce writes at The Verge, that requires a huge effort. Murena has built replacements for the standard Google apps, a cloud system for email, calendars, and productivity software. Even so, Pierce writes, apps hit the limit: despite Murena’s effort to preserve user anonymity, it’s just not possible to download them without interacting with Google, especially when payment is required. And who wants to run their phone without third-party apps? Not even me (although I note that many of those I use can still be sideloaded).

The reality is I would have preferred to wait even longer to make the change. I was pushed by the fact that several times recently the Note has complained that it can’t download email because it’s running out of storage space (which is why I would prefer to store everything on an external SD card, but: not an option for email and apps). And on a recent trip to the US, there were numerous occasions where the phone simply didn’t work, even though there shouldn’t be any black spots in places like Boston and San Francisco. A friend suggested that in all likelihood older frequency bands were being turned off, while the newer ones were probably ones the Note couldn’t use. I had forgotten that 5G, which I last thought about in 2018, had been arriving. So: new phone. Resentfully.

This kind of forced wastefulness is one of the things Donald Norman talks about in his new book, Design for a Better World. To some extent, the book is a mea culpa: after decades of writing about how to design things better to benefit us as individuals, Norman has recognized the necessity to rethink and replace human-centered design with humanity-centered design. Sustainability is part of that.

Everything around us is driven by design choices. Building unrepairable phones is a choice, and a destructive one, given the amount of rare materials used inside that wind up in landfills instead of in new phones or some other application. The Guardian’s review of the latest Fairphone asks, “Could this be the first phone to last ten years?” I certainly hope so, but if something takes it down before then it will be an externality like switched-off bands, the end of software updates, or a bank’s decision to require customers to use an app for two-factor authentication and then update it so older phones can’t run it. These are, as Norman writes, complex systems in which the incentives are all misplaced. And so: new phone. Largely unnecessarily.

Illustrations: Personally owned 1970s AT&T phone.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The documented life

For various reasons, this week I asked my GP for printed verification of my latest covid booster. They handed me what appears to be a printout of the entire history of my interactions with the practice back to 1997.

I have to say, reading it was a shock. I expected them to have kept records of tests ordered and the results. I didn’t think about them keeping everything I said on the website’s triage form, which they ask you to use when requesting an appointment, treatment, or whatever. Nor did I expect notes beginning “Pt dropped in to ask…”

The record doesn’t, however, show all details of all conversations I’ve had with everyone in the practice. It notes medical interactions, like a conversation in which I was advised about various vaccinations. It doesn’t mention that on first acquaintance with the GP to whom I’m assigned I asked her about her attitudes toward medical privacy and alternative treatments such as acupuncture. “Are you interviewing me?” she asked. A little bit, yes.

There are also bits that are wrong or outdated.

I think if you wanted a way to make the privacy case, showing people what’s in modern medical records would go a long way. That said, one of the key problems in current approaches to the issues surrounding mass data collection is that everything is siloed in people’s minds. It’s rare for individuals to look at a medical record and connect it to the habit of mind that continues to produce Google, Meta, Amazon, and an ecosystem of data brokers that keeps getting bigger no matter how many data protection laws we pass. Medical records hit a nerve in an intimate way that purchase histories mostly don’t. Getting the broad mainstream to see the overall picture, where everything connects into giant, highly detailed dossiers on all of us, is hard.

And it shouldn’t be. Because it should be obvious by now that what used to be considered a paranoid view has a lot of reality. Governments aren’t highly motivated to curb commercial companies’ data collection because that all represents data that can be subpoenaed without the risk of exciting a public debate or having to justify a budget. In the abstract, I don’t care that much who knows what about me. Seeing the data on a printout, though, invites imagining a hostile stranger reading it. Today, that potentially hostile stranger is just some other branch of the NHS, probably someone looking for clues in providing me with medical care. Five or twenty years from now…who knows?

More to the point, who knows what people will think is normal? Thirty years ago, “normal” meant being horrified at the idea of cameras watching everywhere. It meant fingerprints were only taken from criminal suspects. And, to be fair, it meant that governments could intercept people’s phone calls by making a deal with just one legacy giant telephone company (but a lot of people didn’t fully realize that). Today’s kids are growing up thinking of constantly being tracked as normal. I’d like to think that we’re reaching a turning point where what Big Tech and other monopolists have tried to convince us is normal is thoroughly rejected. It’s been a long wait.

I think the real shock in looking at records like this is seeing yourself through someone else’s notes. This is very like the moment in the documentary Erasing David, when the David of the title gets his phone book-sized records from a variety of companies. “What was I angry about in November 2006?” he muses, staring at the note of a moment he had long forgotten but the company hadn’t. I was relieved to see there were no such comments. On the other hand, also missing were a couple of things I distinctly remember asking them to write down.

But don’t get me wrong: I am grateful that someone is keeping these notes besides me. I have medical records! For the first 40 years of my life, doctors routinely refused to show patients any of their medical records. Even when I was leaving the US to move overseas in 1981, my then-doctor refused to give me copies, saying, “There’s nothing there that would be any use to you.” I took that to mean there were things he didn’t want me to see. Or he didn’t want to take the trouble to read through and see that there weren’t. So I have no record of early vaccinations or anything else from those years. At some point I made another attempt and was told the records had been destroyed after seven years. Given that background, the insouciance with which the receptionist printed off a dozen pages of my history and handed it over was a stunning advance in patient rights.

For the last 30-plus years, therefore, I’ve kept my own notes. There isn’t, after checking, anything in the official record that I don’t have. There may, of course, be other notes they don’t share with patients.

Whether for purposes malign (surveillance, control) or benign (service), undocumented lives are increasingly rare. In an ideal world, there’d be a way for me and the medical practice to collaborate to reconcile discrepancies and rectify omissions. The notion of patients controlling their own data is still far from acceptance. That requires a whole new level of trust.

Illustrations: Asclepius, god of medicine, exhibited in the Museum of Epidaurus Theatre (Michael F. Mehnert via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The grown-ups

In an article this week in the Guardian, Adrian Chiles asks what decisions today’s parents are making that their kids will someday look back on in horror the way we look back on things from our childhood. Probably his best example is riding in cars without seatbelts (which I’m glad to say I survived). In contrast to his suggestion, I don’t actually think tomorrow’s parents will look back and think they shouldn’t have had smartphones, though it’s certainly true that last year a current parent MP (whose name I’ve lost) gave an impassioned speech opposing the UK’s just-passed Online Safety Act in which she said she had taken risks on the Internet as a teenager that she wouldn’t want her kids to take now.

Some of that, though, is that times change consequences. I knew plenty of teens who smoked marijuana in the 1970s. I knew no one whose parents found them severely ill from overdoing it. Last week, the parent of a current 15-year-old told me he’d found exactly that. His kid had made the classic error (see several 2010s sitcoms) of not understanding how slowly gummies act. Fortunately, marijuana won’t kill you, as the parent found to his great relief after some frenzied online searching. Even in 1972, it was known that consuming marijuana by ingestion (for example, in brownies) made it take effect more slowly. But the marijuana itself, by all accounts, was less potent. It was, in that sense, safer (although: illegal, with all the risks that involves).

The usual excuse for disturbing levels of past parental risk-taking is “We didn’t know any better”. A lot of times that’s true. When today’s parents of teenagers were 12 no one had smartphones; when today’s parents were teens their parents had grown up without Internet access at home; when my parents were teens they didn’t have TV. New risks arrive with every generation, and each new risk requires time to understand the consequences of getting it wrong.

That is, however, no excuse for some of the decisions adults are making about systems that affect all of us. Also this week and also at the Guardian, Akiko Hart, interim director of Liberty, writes scathingly about government plans to expand the use of live facial recognition to track shoplifters. Under Project Pegasus, shops will use technology provided by Facewatch.

I first encountered Facewatch ten years ago at a conference on biometrics. Even then the company was already talking about “cloud-based crime reporting” in order to deter low-level crime. And even then there were questions about fairness. For how long would shoplifters remain on a list of people to watch closely? What redress was there going to be if the system got it wrong? Facewatch’s attitude seemed to be simply that what the company was doing wasn’t illegal because its customer companies were sharing information across their own branches. What Hart is describing, however, is much worse: a state-backed automated system that will see ten major retailers upload their CCTV images for matching against police databases. Policing minister Chris Philp hopes to expand this into a national shoplifting database including the UK’s 45 million passport photos. Hart suggests instead tackling poverty.

Quite apart from how awful all that is, what I’m interested in here is the increased embedding in public life of technology we already know is flawed and discriminatory. Since 2013, myriad investigations have found the algorithms that power facial recognition to have been trained on unrepresentative databases that make them increasingly inaccurate as the subjects diverge from “white male”.

There are endless examples of misidentification leading to false arrests. Last month, a man who was pulled over on the road in Georgia filed a lawsuit after being arrested and held for several days for a crime committed in Louisiana, a state he had never even visited.

In 2021, in a story I’d missed, the University of Illinois at Urbana-Champaign announced it would discontinue using Proctorio, remote proctoring software that monitors students for cheating. The issue: the software frequently fails to recognize non-white faces. In a retail store, this might mean being followed until you leave. In an exam situation, this may mean being accused of cheating and having your results thrown out. A few months later, at Vice, Todd Feathers reported that a student researcher had studied the algorithm Proctorio was using and found its facial detection model failed to recognize black faces more than half the time. Late last year, the Dutch Institute of Human Rights found that using Proctorio could be discriminatory.

The point really isn’t this specific software or these specific cases. The point is more that we have a technology that we know is discriminatory and that we know would still violate human rights if it were accurate…and yet it keeps getting more and more deeply embedded in public systems. None of these systems are transparent enough to tell us what facial identification model they use, or publish benchmarks and test results.

So much of what net.wars is about is avoiding bad technology law that sticks. In this case, it’s bad technology that is becoming embedded in systems that one day will have to be ripped out, and we are entirely ignoring the risks. On that day, our children and their children will look at us, and say, “What were you thinking? You did know better.”

Illustrations: The CCTV camera on George Orwell’s house at 22 Portobello Road, London.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.