The one hundred

Among the highlights of this week’s hearings of the Covid Inquiry were comments made by Helen MacNamara, who was the deputy cabinet secretary during the relevant time, about the effect of the lack of diversity. The absence of women in the room, she said, led to a “lack of thought” about a range of issues, including dealing with childcare during lockdowns, the difficulties encountered by female medical staff in trying to find personal protective equipment that fit, and the danger lockdowns would inevitably pose when victims of domestic abuse were confined with their abusers. Also missing was anyone who could have identified issues for ethnic minorities, disabled people, and other communities. Even the necessity of continuing free school lunches was lost on the wealthy white men in charge, none of whom were ever poor enough to need them. Instead, MacNamara said, they spent “a disproportionate amount” of their time fretting about football, hunting, fishing, and shooting.

MacNamara’s revelations explain a lot. Of course a group with so little imagination about or insight into other people’s lives would leave huge, gaping holes. Arrogance would ensure they never saw those as failures.

I was listening to this while reading posts on Mastodon complaining that this week’s much-vaunted AI Safety Summit was filled with government representatives and techbros, but weak on human rights and civil society. I don’t see any privacy organizations on the guest list, for example, and only the largest technology platforms need apply. Granted, the limit of 100 meant there wasn’t room for everyone. But these are all choices seemingly designed to make the summit look as important as possible.

From this distance, it’s hard to get excited about a bunch of bigwigs getting together to alarm us about a technology that, as even the UK government itself admits, may never – and most likely will never – happen. In the event, they focused on a glut of disinformation and disruption to democratic polls. Lots of people are thinking about the first of these, and the second needs local solutions. Many technology and policy experts are advocating openness and transparency in AI regulation.

Me, I’d rather they’d given some thought to how to make “AI” (any definition) sustainable, given the massive resources today’s math-and-statistics systems demand. And I would strongly favor a joint resolution to stop using these systems for surveillance and eliminate predictive systems that pretend to be able to spot potential criminals in advance or decide who is deserving of benefits, admission into retail stores, or parole. But this summit wasn’t about *us*.

***

A Mastodon post reminded me that November 2 – yesterday – was the 35th anniversary of the Morris Worm and therefore the 35th anniversary of the day I first heard of the Internet. Anniversaries don’t matter much, but any history of the Internet would include this now largely forgotten (or never-known) event.

Morris’s goals were pretty anodyne by today’s standards. He wanted, per Wikipedia, to highlight flaws in some computer systems. Instead, the worm replicated out of control, paralyzing parts of this obscure network linking university and corporate research institutions, whose work ground to a halt. It put the Internet on the front pages for the first time.

Morris became the first person to be convicted of a felony under the brand-new Computer Fraud and Abuse Act (1986); that didn’t stop him from becoming a tenured professor at MIT in 2006. The heroes of the day were the unsung people who worked hard to disable the worm and restore full functionality. But it’s the worm we remember.

It was another three years before I got online myself, in 1991, and two or three more years after that before I got direct Internet access via the now-defunct Demon Internet. Everyone has a different idea of when the Internet began, usually based on when they got online. For many of us, it was November 2, 1988, the day when the world learned how important this technology they had never heard of had already become.

***

This week also saw the first anniversary of Twitter’s takeover. Despite a variety of technical glitches and numerous user-hostile decisions, the site has not collapsed. Many people I used to follow are either gone or posting very little. Even though I’m not experiencing the increased abuse and disinformation I see widely reported, there’s diminishing reward for checking in.

There’s still little consensus on a replacement. About half of my Twitter list have settled in on Mastodon. Another third or so are populating Bluesky. I hear some are finding Threads useful, but until it has a desktop client I’m out (and maybe even then, given its ownership). A key issue, however, is that uncertainty about which site will survive (or “win”) leads many people to post the same thing on multiple services – and you don’t dare skip one, just in case.

For both philosophical and practical reasons, I’m hoping more people will get comfortable on Mastodon. Any corporate-owned system will merely replicate the situation in which we become hostages to business interests who have as little interest in our welfare as Boris Johnson did according to MacNamara and other witnesses. Mastodon is not a safe harbor from horrible human behavior, but with no ads and no algorithm determining what you see, at least the system isn’t designed to profit from it.

Illustrations: Former deputy cabinet secretary Helen MacNamara testifying at the Covid Inquiry.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Planned incompatibility

My first portable music player was a monoaural Sony cassette player a little bigger than a deck of cards. I think it was intended for office use as a dictation machine, but I hauled it to folk clubs and recorded the songs I liked, and used it to listen to music while in transit. Circa 1977, I was the only one on most planes.

At the time, each portable device had its own charger with its own electrical specification and plug type. Some manufacturers saw this as an opportunity, and released so-called “universal” chargers that came with an array of the most common plugs and user-adjustable settings so you could match the original amps and volts. Sony reacted by ensuring that each new generation had a new plug that wasn’t included on the universal chargers… which would then copy it… which would push Sony to come up with yet another new plug. And so on. All in the name of consumer safety, of course.

Sony’s modern equivalents (which of course include Sony itself) don’t need to invent new plugs because more sophisticated methods are available. They can instead insert a computer chip that the main device checks to ensure the part is “genuine”. If the check fails, as it might if you’ve bought your replacement part from a Chinese seller on eBay, the device refuses to let the new part function. This is how Hewlett-Packard has ensured that its inkjet printers won’t work with third-party cartridges; it’s one way that Apple has hobbled third-party repair services; and it’s how, as this week’s news tells us, the PS5 will check its optional disc drives.

Except the PS5 has a twist: in order to authenticate the drive the PS5 has to use an Internet connection to contact Sony’s server. I suppose it’s better than John Deere farm equipment, which, Cory Doctorow writes in his new book, The Internet Con: How to Seize the Means of Computation, requires a technician to drive out to a remote farm and type in a code before the new part will work while the farmer waits impatiently. But not by much, if you’re stuck somewhere offline.
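For the technically curious, the logic of this kind of part-pairing check can be sketched in a few lines. This is purely illustrative – every name and detail below is invented, and no real console works exactly this way – but it captures the two gatekeeping steps: a challenge-response check against a secret baked into “genuine” parts, plus the PS5-style requirement that even a passing part be confirmed online before it is enabled.

```python
import hmac
import hashlib
import os

# Hypothetical secret burned into every "genuine" part's chip at the factory.
VENDOR_KEY = b"secret-burned-into-genuine-parts"

def part_response(challenge: bytes, part_key: bytes) -> bytes:
    """What the chip in a replacement part computes when challenged."""
    return hmac.new(part_key, challenge, hashlib.sha256).digest()

def device_accepts_part(part_key: bytes, online: bool) -> bool:
    """The main device's gatekeeping logic (invented for illustration)."""
    challenge = os.urandom(16)
    response = part_response(challenge, part_key)
    expected = hmac.new(VENDOR_KEY, challenge, hashlib.sha256).digest()
    if not hmac.compare_digest(response, expected):
        return False  # third-party part: refuse to let it function
    # The twist: even a part that passes locally must be confirmed
    # by the vendor's server, so no connection means no disc drive.
    return online
```

A genuine part offline is refused just as surely as a clone online – which is exactly the failure mode that strands the farmer (or gamer) with no connection.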

“It’s likely that this is a security measure in order to ensure that the disc drive is a legitimate one and not a third party,” Video Gamer speculates. Checking the “legitimacy” of an optional add-on is not what I’d call “security”; in general it’s purely for the purpose of making it hard for customers to buy third-party add-ons (a goal the article does nod at later). Like other forms of digital rights management, the nuisance all accrues to the customer and the benefits, such as they are, accrue only to the manufacturer.

As Doctorow writes, part-pairing, as this practice is known, originated with cars (for this reason, it’s also often known as “VIN locking”, from vehicle identification number), brought in to reduce the motivation to steal cars in order to strip them and sell their parts (which *is* security). The technology sector has embraced and extended this to bolster the Gillette business model: sell inkjet printers cheap and charge higher-than-champagne prices for ink. Apple, Doctorow writes, has used this approach to block repairs in order to sustain new phone sales – good for Apple, but wasteful for the environment and expensive for us. The most appalling of his examples, though, are wheelchairs, which are “VIN-locked and can’t be serviced by a local repair shop”, and medical devices. Making on-location repairs impossible in these cases is evil.

The PS5, though, compounds part-pairing by requiring an Internet connection, a trend that really needs not to catch on. As hundreds of Tesla drivers discovered the hard way during an app server outage, it’s risky to presume those connections will always be there when you need them. Over the last couple of decades, we’ve come to accept that software is not a purchase but a subscription service subject to license. Now, hardware is going the same way, as seemed logical from the late-1990s moment when MIT’s Neil Gershenfeld proposed Things That Think. Back then, I imagined the idea applying to everyday household items, not devices that keep our bodies functioning. This oncoming future is truly dangerous, as Andrea Matwyshyn has been pointing out.

For Doctorow, the solution is to mandate and enforce interoperability, alongside other measures such as antitrust law and the right-to-repair laws that are appearing in many jurisdictions (and which companies like Apple and John Deere have historically opposed). Requiring interoperability would force companies to enable – or at least not to hinder – third-party repairs.

But more than that is going to be needed if we are to avoid a future in which every piece of our personal infrastructures is turned into a subscription service. At The Register, Richard Speed reminds us that Microsoft will end support for Windows 10 in 2025, potentially leaving 400 million PCs stranded. We have seen this before.

I’m not sure anyone in government circles is really thinking about the implications for an aging population. My generation still owns things; you can’t delete my library of paper books or charge me for each reread. But today’s younger generation, for whom everything is a rental…what will they do at retirement age, when income drops but nothing gets cheaper in a world where everything stops working the minute you stop paying? If we don’t force change now, this will be their future.

Illustrations: A John Deere tractor.


The documented life

For various reasons, this week I asked my GP for printed verification of my latest covid booster. They handed me what appears to be a printout of the entire history of my interactions with the practice back to 1997.

I have to say, reading it was a shock. I expected them to have kept records of tests ordered and the results. I didn’t think about them keeping everything I said on the website’s triage form, which they ask you to use when requesting an appointment, treatment, or whatever. Nor did I expect notes beginning “Pt dropped in to ask…”

The record doesn’t, however, show all details of all conversations I’ve had with everyone in the practice. It records medical interactions, such as a conversation in which I was advised about various vaccinations. It doesn’t mention that on first acquaintance with the GP to whom I’m assigned I asked her about her attitudes toward medical privacy and alternative treatments such as acupuncture. “Are you interviewing me?” she asked. A little bit, yes.

There are also bits that are wrong or outdated.

I think if you wanted a way to make the privacy case, showing people what’s in modern medical records would go a long way. That said, one of the key problems in current approaches to the issues surrounding mass data collection is that everything is siloed in people’s minds. It’s rare for individuals to look at a medical record and connect it to the habit of mind that continues to produce Google, Meta, Amazon, and an ecosystem of data brokers that keeps getting bigger no matter how many data protection laws we pass. Medical records hit a nerve in an intimate way that purchase histories mostly don’t. Getting the broad mainstream to see the overall picture, where everything connects into giant, highly detailed dossiers on all of us, is hard.

And it shouldn’t be. Because it should be obvious by now that what used to be considered a paranoid view has a lot of reality. Governments aren’t highly motivated to curb commercial companies’ data collection because that all represents data that can be subpoenaed without the risk of exciting a public debate or having to justify a budget. In the abstract, I don’t care that much who knows what about me. Seeing the data on a printout, though, invites imagining a hostile stranger reading it. Today, that potentially hostile stranger is just some other branch of the NHS, probably someone looking for clues in providing me with medical care. Five or twenty years from now…who knows?

More to the point, who knows what people will think is normal? Thirty years ago, “normal” meant being horrified at the idea of cameras watching everywhere. It meant fingerprints were only taken from criminal suspects. And, to be fair, it meant that governments could intercept people’s phone calls by making a deal with just one legacy giant telephone company (but a lot of people didn’t fully realize that). Today’s kids are growing up thinking of constantly being tracked as normal. I’d like to think that we’re reaching a turning point where what Big Tech and other monopolists have tried to convince us is normal is thoroughly rejected. It’s been a long wait.

I think the real shock in looking at records like this is seeing yourself through someone else’s notes. This is very like the moment in the documentary Erasing David, when the David of the title gets his phone book-sized records from a variety of companies. “What was I angry about on November 2006?” he muses, staring at the note of a moment he had long forgotten but the company hadn’t. I was relieved to see there were no such comments. On the other hand, also missing were a couple of things I distinctly remember asking them to write down.

But don’t get me wrong: I am grateful that someone is keeping these notes besides me. I have medical records! For the first 40 years of my life, doctors routinely refused to show patients any of their medical records. Even when I was leaving the US to move overseas in 1981, my then-doctor refused to give me copies, saying, “There’s nothing there that would be any use to you.” I took that to mean there were things he didn’t want me to see. Or he didn’t want to take the trouble to read through and see that there weren’t. So I have no record of early vaccinations or anything else from those years. At some point I made another attempt and was told the records had been destroyed after seven years. Given that background, the insouciance with which the receptionist printed off a dozen pages of my history and handed it over was a stunning advance in patient rights.

For the last 30-plus years, therefore, I’ve kept my own notes. There isn’t, after checking, anything in the official record that I don’t have. There may, of course, be other notes they don’t share with patients.

Whether for purposes malign (surveillance, control) or benign (service), undocumented lives are increasingly rare. In an ideal world, there’d be a way for me and the medical practice to collaborate to reconcile discrepancies and rectify omissions. The notion of patients controlling their own data is still far from acceptance. That requires a whole new level of trust.

Illustrations: Asclepius, god of medicine, exhibited in the Museum of Epidaurus Theatre (Michael F. Mehnert via Wikimedia).


Surveillance machines on wheels

After much wrangling and with just a few days of legislative time between the summer holidays and the party conference season, on Tuesday night the British Parliament passed the Online Safety bill, which will become law as soon as it gets royally signed (assuming they can find a pen that doesn’t leak). The government announcement brims with propagandist ecstasy, while the Open Rights Group’s statement offers the reality: Britons’ online lives will be less secure as a result. Which means everyone’s will.

Parliament – and the net.wars archive – dates the current version of this bill to 2022, and the online harms white paper on which it’s based to 2020. But it *feels* like it’s been a much longer slog; I want to say six years.

This is largely because the fight over two key elements – access to encrypted messaging and age verification – *is* that old. Age verification was enshrined in the Digital Economy Act (2017), and we reviewed the contenders to implement it in 2016. If it’s ever really implemented, age verification will make Britain the most frustrating place in the world to be online.

Fights over strong encryption have been going on for 30 years. In that time, no new mathematics has appeared to change the fact that it’s not possible to create a cryptographic hole that only “good guys” can use. Nothing will change about that; technical experts will continue to try to explain to politicians that you can have secure communications or you can have access on demand, but you can’t have both.

***

At the New York Times, Farhad Manjoo writes that while almost every other industry understands that the huge generation of aging Boomers is a business opportunity, outside of health care Silicon Valley is still resolutely focused on under-30s. This, even though the titans themselves age; boy-king Mark Zuckerberg is almost 40. Hey, it’s California; they want to turn back aging, not accept it.

Manjoo struggles to imagine the specific directions products might take, but I like his main point: where’s the fun? What is this idea that after 65 you’re just something to send a robot to check up on? Yes, age often brings impairments, but why not build for them? You would think that given the right affordances, virtual worlds and online games would have a lot to offer people whose lives are becoming more constrained.

It’s true that by the time you realize that ageism pervades our society you’re old enough that no one’s listening to you any more. But even younger people must struggle with many modern IT practices: the pale, grey type that pervades the web, the picklists, the hidden passwords you have to type twice… And captchas, which often display on my desktop too small to see clearly and are resistant to resizing upwards. Bots are better at captchas than humans anyway, so what *is* the point?

We’re basically back where we were 30 years ago, when the new discipline of human-computer interaction fought to convince developers that if the people who struggle to operate their products look stupid the problem is bad design. And all this is coming much more dangerously to cars; touch screens that can’t be operated by feel are Exhibit A.

***

But there is much that’s worse about modern cars. A few weeks ago, the Mozilla Foundation published a report reviewing the privacy of modern cars. Tl;dr: “Cars are the worst product category we have ever reviewed for privacy.”

The problems are universal across the 25 brands Mozilla researchers Jen Caltrider, Misha Rykov, and Zoë MacDonald reviewed: “Modern cars are surveillance-machines on wheels souped-up with sensors, radars, cameras, telematics, and apps that can detect everything we do inside.” Cars can collect all the data that phones and smart home devices can. But unlike phones, space is a non-issue, and unlike smart speakers, video cameras, and thermostats, cars move with you and watch where you go. Drivers, passengers, passing pedestrians…all are fodder for data collection in the new automotive industry, where heated seats and unlocking extra battery range are subscription add-ons, and the car you buy isn’t any more yours than the £6-per-hour Zipcar in the designated space around the corner.

Then there are just some really weird clauses in the companies’ privacy policies. Some collect “genetic data” (here the question that arises is not only “why?” but “how?”). Nissan says it can collect information about owners’ “sexual activity” for use in “direct marketing” or to share with marketing partners. The researchers ask, “What on earth kind of campaign are you planning, Nissan?”

Still unknown: whether the data is encrypted while held on the car; how securely it’s held; and whether the companies will resist law enforcement requests at all. We do know that car companies share and sell the masses of intimate information they collect, especially the cars’ telematics, with insurance companies.

The researchers also note that new features allow unprecedented levels of control. VW’s Car-Net, for example, allows parents – or abusers – to receive a phone alert if the car is driven outside of set hours or in or near certain locations. Ford has filed a patent on a system for punishing drivers who miss car payments.

“I got old at the right time,” a friend said in 2019. You can see his point.

Illustrations: Artist Dominic Wilcox‘s imagined driverless sleeper car of the future, as seen at the Science Museum in 2019.


Doom cyberfuture

Midway through this year’s gikii miniconference for pop culture-obsessed Internet lawyers, Jordan Hatcher proposed that generational differences are the key to understanding the huge gap between the Internet pioneers, who saw regulation as the enemy, and the current generation, who are generally pushing for it. While this is a bit too pat – it’s easy to think of Millennial libertarians and I’ve never thought of Boomers as against regulation, just, rationally, against bad Internet law that sticks – it’s an intriguing idea.

Hatcher, because this is gikii and no idea can be presented without a science fiction tie-in, illustrated this with 1990s movies, which spread the “DCF-84 virus” – that is, “doom cyberfuture-84”. The “84” is not chosen for Orwell but for the year William Gibson’s Neuromancer was published. Boomers – he mentioned John Perry Barlow, born 1947, and Lawrence Lessig, born 1961 – were instead infected with the “optimism virus”.

It’s not clear which 1960s movies might have seeded us with that optimism. You could certainly make the case that 1968’s 2001: A Space Odyssey ends on a hopeful note (despite an evil intelligence out to kill humans along the way), but you don’t even have to pick a different director to find dystopia: I see your 2001 and give you Dr Strangelove (1964). Even Woodstock (1970) is partly dystopian; the consciousness of the Vietnam war permeates every rain-soaked frame. But so did the belief that peace could win: so, a wash.

For younger people’s pessimism, Hatcher cited 1995’s Johnny Mnemonic (based on a Gibson short story) and Strange Days.

I tend to think that if 1990s people are more doom-laden than 1960s people it has more to do with real life. Boomers were born in a time of economic expansion, relatively affordable education and housing, and when they protested a war the government eventually listened. Millennials were born in a time when housing and education meant a lifetime of debt, and when millions of them protested a war they were ignored.

In any case, Hatcher is right about the stratification of demographic age groups. This is particularly noticeable in social media use; you can often date people’s arrival on the Internet by which communications medium they prefer. Over dinner, I commented on the nuisance of typing on a phone versus a real keyboard, and two younger people laughed at me: so much easier to type on a phone! They were among the crowd whose papers studied influencers on TikTok (Taylor Annabell, Thijs Kelder, Jacob van de Kerkhof, Haoyang Gui, and Catalina Goanta) and the privacy dangers of dating apps (Tima Otu Anwana and Paul Eberstaller), the kinds of subjects I rarely engage with because I am a creature of text, like most journalists. Email and the web feel like my native homes in a way that apps, game worlds, and video services never will. That dates me both chronologically and by my first experiences of the online world (1991).

Most years at this event there’s a new show or movie that fires many people’s imagination. Last year it was Upload with a dash of Severance. This year, real technological development overwhelmed fiction, and the star of the show was generative AI and large language models. Besides my paper with Jon Crowcroft, there was one from Marvin van Bekkum, Tim de Jonge, and Frederik Zuiderveen Borgesius that compared the science fiction risks of AI – Skynet, Roko’s basilisk, and an ordering of Asimov’s Laws that puts obeying orders above not harming humans (see XKCD, above) – to the very real risks of the “AI” we have: privacy, discrimination, and environmental damage.

Other AI papers included one by Colin Gavaghan, who asked if it actually matters if you can’t tell whether the entity that’s communicating with you is an AI? Is that what you really need to know? You can see his point: if you’re being scammed, the fact of the scam matters more than the nature of the perpetrator, though your feelings about it may be quite different.

A standard explanation of what put the “science” in science fiction (or the “speculative” in “speculative fiction”) used to be that the authors ask, “What if?” What if a planet had six suns whose interplay meant that darkness only came once every 1,000 years? Would the reaction really be as Ralph Waldo Emerson imagined it? (Isaac Asimov’s Nightfall). What if a new link added to the increasingly complex Boston MTA accidentally turned the system into a Mobius strip? (A Subway Named Mobius, by Armin Joseph Deutsch). And so on.

In that sense, gikii is often speculative law, thought experiments that tease out new perspectives. What if Prime Day becomes a culturally embedded religious holiday (Megan Rae Blakely)? What if the EU’s trademark system applied in the Star Trek universe (Simon Sellers)? What if, as in Max Gladstone’s Craft Sequence books, law is practical magic (Antonia Waltermann)? In the trademark example, time travel is a problem, as competing interests can travel further and further back to get the first registration. In the latter…well, I’m intrigued by the idea that a law making dumping sewage in England’s rivers illegal could physically stop it from happening without all the pesky apparatus of law enforcement and parliamentary hearings.

Waltermann concluded by suggesting that to some extent law *is* magic in our world, too. A useful reminder: be careful what law you wish for because you just may get it. Boomer!

Illustrations: Part of XKCD‘s analysis of Asimov’s Laws of Robotics.


Cryptocurrency winter

There is nowhere in the world, Brett Scott says in his recent book, Cloudmoney, that supermarkets price oatmeal in bitcoin. Even in El Salvador, where bitcoin became legal tender in 2021, what appear to be bitcoin prices are just the underlying dollar price refracted through bitcoin’s volatile exchange rate.

Fifteen years ago, when bitcoin was invented, its adherents thought by now it would be a mainstream currency instead of a niche highly speculative instrument of financial destruction and facilitator of crime. Five years ago, the serious money people thought it important enough to consider fighting back with central bank digital currencies (CBDCs).

In 2019, Facebook announced Libra, a consortium-backed cryptocurrency that would enable payments on its platform, apparently to match China’s social media messaging system WeChat, which is used by 1 billion users monthly. By 2021, when Facebook’s holding company renamed itself Meta, Libra had become “Diem”. In January 2022 Diem was sold to Silvergate Bank, which announced in February 2023 it would wind down and liquidate its assets, a casualty of the FTX collapse.

As Dave Birch writes in his 2020 book, The Currency Cold War, it was around the time of Facebook’s announcement that central banks began exploring CBDCs. According to the Atlantic Council’s tracker, 114 countries are exploring CBDCs, and 11 have launched one. Two – Ecuador and Senegal – have canceled theirs. Plans are inactive in 15 more.

The tracker marks the EU, US, and UK as in development. The EU is quietly considering the digital euro. In the US, in March 2022 president Joe Biden issued an executive order including instructions to research a digital dollar. In the UK the Bank of England has an open consultation on the digital pound (closes June 7). It will not make a decision until at least 2025 after completing technical development of proofs of concept and the necessary architecture. The earliest we’d see a digital pound is around 2030.

But first: the BoE needs a business case. In 2021, the House of Lords issued a report (PDF) calling the digital pound a “solution in search of a problem” and concluding, “We have yet to hear a convincing case for why the UK needs a retail CBDC.” Note “retail”. Wholesale, for use only between financial institutions, may have clearer benefits.

Some of the imagined benefits of CBDCs are familiar: better financial inclusion, innovation, lowered costs, and improved efficiency. Others are more arcane: replicating the role of cash to anchor the monetary system in a digital economy. That’s perhaps the strongest argument, in that today’s non-cash payment options are commercial products but cash is public infrastructure. Birch suggests that the digital pound could allow individuals to hold accounts at the BoE. These would be as risk-free as cash and potentially open to those underserved by the banking system.

Many of these benefits will be lost on most of us. People who already have bank accounts or modern financial apps are unlikely to care about a direct account with the BoE, especially if, as Birch suggests, one “innovation” they might allow is negative interest rates. More important, what is the difference between pounds as numbers in cyberspace and pounds as fancier numbers in cyberspace? For most of us, our national currencies are already digital, even if we sometimes convert some of it into physical notes and coins. The big difference – and part of what they’re fighting over – is who owns the transaction data.

At Rest of World, Temitayo Lawal recounts the experience in Nigeria, the first African country to adopt a CBDC. Launched 18 months ago, the eNaira has been tried by only 0.5% of the population and used for just 1.4 million transactions. Among the reasons Lawal finds: the eNaira doesn’t have the flexibility or sophistication of independent cryptocurrencies; younger Nigerians see little advantage over the apps they were already using; 30 million Nigerians (about 13% of the population) lack Internet access; and most people don’t want to entrust their financial information to their government. By comparison, during that time Nigerians traded $1.16 billion in bitcoin on the peer-to-peer platform Paxful.

Many of these factors play out the same way elsewhere. From 2014 to 2018, Ecuador operated Dinero Electrónico, a mobile payment system that allowed direct transfer of US dollars and aimed to promote financial inclusion. In a 2020 paper, researchers found DE never reached critical mass because it didn’t offer enough incentive for adoption, was opposed by the commercial banks, and lacked a sufficient supporting ecosystem for cashing in and out. In China, which launched its CBDC in August 2020, the e-CNY is rarely used because, the Economist reports, Alipay and WeChat work well enough that retailers don’t see the need to accept it. The Bahamian sand dollar has gained little traction. Denmark and Japan have dropped the idea entirely, as has Finland, although it supports the idea of a digital euro.

The good news, such as it is, is that by the time Western countries are ready to make a decision either some country will have found a successful formula that can be adapted, or everyone who’s tried it will have failed and the thing can be shelved until it’s time to rediscover it. That still leaves the problem that Scott warns of: a cashless society will give Big Tech and Big Finance huge power over us. We do need an alternative.

Illustrations: Bank of England facade.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

Appropriate privacy

At a workshop this week, one of the organizers posed a question that included the term “appropriate”. As in: “lawful access while maintaining appropriate user privacy”. We were there to think about approaches that could deliver better privacy and security over the next decade, with privacy defined as “the embedding of encryption or anonymization in software or devices”.

I had to ask: What work is “appropriate” doing in that sentence?

I had to ask because last weekend’s royal show was accompanied by preemptive arrests well before events began – at 7:30 AM. Most of the arrested were anti-monarchy protesters armed with luggage straps and placards, climate change protesters whose T-shirts said “Just Stop Oil”, and volunteers for the Night Stars on suspicion that the rape whistles they hand out to vulnerable women might be used to disrupt the parading horses. All of these had coordinated with the Metropolitan Police in advance or actually worked with them…which made no difference. All were held for many hours. Since then, the news has broken that an actual monarchist was arrested, DNA-sampled, fingerprinted, and held for 13 hours just for standing *near* some protesters.

It didn’t help the look of the thing that several days before the Big Show, the Met tweeted a warning: “Our tolerance for any disruption, whether through protest or otherwise, will be low.”

The arrests were facilitated by the Public Order Act, passed just days before with the goal of curbing “disruptive” protests. Among the now-banned practices is “locking on” – that is, locking oneself to a physical structure – a tactic the suffragettes used, among many others, in campaigning for women’s right to vote. Because that right is now so thoroughly accepted, we tend to forget how radical and militant the suffragettes had to be to get their point across and how brutal the response was. A century from now, the mainstream may look back and marvel at the treatment meted out to climate change activists. We all know they’re *right*, whether or not we like their tactics.

Since the big event, the House of Lords has published its report on current legislation. The government is seeking to expand the Public Order Act even further by lowering the bar for “serious disruption” from “significant” and “prolonged” to “more than minor”, and may include the cumulative impact of repeated protests in the same area. The House of Lords is unimpressed by these amendments via secondary legislation, first because of their nature, and second because they were rejected during the scrutiny of the original bill, which itself is only days old. Secondary legislation gets looked at less closely; the Lords suggest that using this route to bring back rejected provisions “raises possible constitutional issues”. All very polite, for accusing the government of abusing the system.

In the background, we’re into the fourth decade of the same argument between governments and technical experts over encryption. Technical experts by and large take the view that opening a hole for law enforcement access to encrypted content fatally compromises security; law enforcement by and large longs for the old days when they could implement a wiretap with a single phone call to a major national telephone company. One of the technical experts present at the workshop phrased all this gently by explaining that providing access enlarges the attack surface, and the security of such a system will always be weaker because there are more “moving parts”. Adding complexity always makes security harder.

This is, of course, a live issue because of the Online Safety bill, a sprawling mess of 262 pages that includes a requirement to scan public and private messaging for child sexual abuse material, whether or not the communications are encrypted.

None of this is the fault of the workshop we began with, which is part of a genuine attempt to find a way forward on a contentious topic, and whose organizers didn’t have any of this in mind when they chose their words. But hearing “appropriate” in that way at that particular moment raised flags: you can justify anything if the level of disruption that’s allowed to trigger action is vague and you’re allowed to use “on suspicion of” indiscriminately as an excuse. “Police can do what they want to us now,” George Monbiot writes at the Guardian of the impact of the bill.

Lost in the upset about the arrests was the Met’s decision to scan the crowds with live facial recognition. It’s impossible to overstate the impact of this technology. There will be no more recurring debates about ID cards because our faces will do the job. Nothing has been said about how the Met used it on the day, whether its use led to arrests (or on what grounds), or what the Met plans to do with the collected data. The police – and many private actors – have certainly inhaled the Silicon Valley ethos of “ask forgiveness, not permission”.

In this direction of travel, many things we have taken for granted as rights become privileges that can be withdrawn at will, and what used to be public spaces open to all become restricted like an airport or a small grocery store in Whitley Bay. This is the sliding scale in which “appropriate user privacy” may be defined.

Illustrations: Protesters at the coronation (by Alisdair Hickson at Wikimedia).


The privacy price of food insecurity

One of the great unsolved questions continues to be: what is my data worth? Context is always needed: worth to whom, under what circumstances, for what purpose? Still, supermarkets may give us a clue.

At Novara Media, Jake Hurfurt, who runs investigations for Big Brother Watch, has been studying supermarket loyalty cards. He finds that, increasingly, special offers that used to be open to any passing customer are now available only to loyalty card holders.

Tesco now and Sainsbury’s soon, he says, “are turning the cost-of-living crisis into a cost-of-privacy crisis”.

Neat phrasing, but I’d say it differently: these retailers are taking advantage of the cost-of-living crisis to extort desperate people into giving up their data. The average value of the discounts might – for now – give a clue to the value supermarkets place on that data.

But not for long, since the pattern going forward is a predictable one of monopoly power: as the remaining supermarkets follow suit, as smaller independent shops thin out under the weight of rising fuel bills and shrinking margins, and as people have fewer choices, the savings from the loyalty card-only special offers will shrink. Not so much that they won’t be worth having, but it seems obvious the discounts – if “generous” is the word – will be more generous in the sign-up phase than once the supermarkets have achieved customer lock-in.

The question few shoppers are in a position to answer while they’re trying to lower the cost of filling their shopping carts is what the companies do with the data they collect. BBW took the time to analyze Tesco’s and Sainsbury’s privacy policies, and found that besides identity data they collect detailed purchase histories as well as bank account and payment information…which they share with “retail partners, media partners, and service providers”. In Tesco’s case, these include Facebook, Google, and, for those who subscribe to them, Virgin Media and Sky. Hyper-targeted personal ads right there on your screen!

All that sounds creepy enough. But consider what could well come next. Also this week, a cross-party group of 50 MPs and peers, co-signed by BBW, Privacy International, and Liberty, wrote to Frasers Group deploring that company’s use of live facial recognition in its stores, which include Sports Direct and the department store chain House of Fraser. Frasers Group’s purpose, like that of the retailers and pub chains that were trialing the technology a decade ago, is effectively to keep out people suspected of shoplifting and bad behavior. Note that’s “suspected”, not “convicted”.

What happens as these different privacy invasions start to combine?

A store equipped with your personal shopping history and financial identity plus live facial recognition cameras knows the instant you walk in who you are, what you like to buy, and how valuable a customer you are. Such a system, equipped with some sort of scoring, could make very fine judgments. Such as: this customer is suspected of stealing another customer’s handbag, but they’re highly profitable to us, so we’ll let that go. Or: this customer isn’t suspected of anything much, but they look scruffy and although they browse they never buy anything – eject! Or even: this journalist wrote a story attacking our company; show them the most expensive personalized prices. One US entertainment company is already using live facial recognition to bar entry to its venues to anyone who works for any law firm involved in litigation against it. Britain’s data protection laws should protect us against that sort of abuse, but will they survive the upcoming bonfire of retained EU law?

And, of course, what starts with relatively anodyne product advertising becomes a whole lot more sinister when it starts getting applied to politics, voter manipulation and segmentation, and “pre-crime” systems.

Add the possibility of technology that allows retailers to display personalized pricing in-store, just as an online retailer can in the privacy of your own browser. Could we get to a scenario where a retailer, able to link your real-world identity and purchasing power to your online and offline movements, could perform a detailed calculation of what you’d be willing to pay for a particular item? What would surge pricing for the last remaining stock of the year’s hottest toy on Christmas Eve look like?

This idea allows me to imagine shopping partnerships, in which members compare the prices they’re each offered and the partner quoted the cheapest price buys that item for the whole group. In this dystopian future, I imagine such gambits would be banned.

Most of this won’t affect people rich enough to grandly refuse to sign up for loyalty cards, and none of it will affect people rich and eccentric enough to source everything from local, independent shops – and, if they’re allowed, pay cash.

Four years ago, Jaron Lanier toured with the proposal that we should be paid for contributing to commercial social media sites. The problem with this idea was and is that payment creates a perverse incentive for users to violate their own privacy even more than they do already, and that fair payment can’t be calculated when the consequences of disclosure are perforce unknown.

The supermarket situation is no different. People need food security and affordability. They should not have to pay for that with their privacy.

Illustrations: London supermarket checkout, 2006 (via Wikimedia).


Ex libris

So, as previously discussed here three years ago and two years ago, on March 24 the US District Court for the Southern District of New York found that the Internet Archive’s controlled digital lending infringes copyright. Half of my social media feed on this subject filled immediately with people warning that publishers want to kill libraries and this judgment is a dangerous step limiting access to information; the other half is going “They’re stealing from authors. Copyright!” Both of these things can be true. And incomplete.

To recap: in 2006 the Internet Archive set up the Open Library to offer access to digitized books under “controlled digital lending”. The system allows each book to be “out” on “loan” to only one person at a time, with waiting lists for popular titles. In a white paper, lawyers David R. Hansen and Kyle K. Courtney call this “format shifting” and say that because the system replicates library lending it is fair use. Also germane: the Archive points to a 2007 California decision that it is in fact a library. Other countries may beg to differ.
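The one-copy-one-borrower rule is simple enough to sketch in a few lines of code. This is a toy illustration of the lending logic described above, not the Open Library's actual implementation; the class and method names are my own invention:

```python
from collections import deque

class LendableBook:
    """Toy model of controlled digital lending: a single digitized copy
    can be "out" to only one borrower at a time, with a FIFO waiting
    list for popular titles (illustrative only; not the Archive's code)."""

    def __init__(self, title):
        self.title = title
        self.borrower = None     # who currently has the one copy on loan
        self.waitlist = deque()  # first-come, first-served queue

    def borrow(self, user):
        """Lend the copy if it's free; otherwise queue the user."""
        if self.borrower is None:
            self.borrower = user
            return True
        if user not in self.waitlist:
            self.waitlist.append(user)
        return False

    def give_back(self):
        """On return, the copy passes to the next person in line, if any."""
        self.borrower = self.waitlist.popleft() if self.waitlist else None
```

The National Emergency Library, in effect, suspended this constraint: every call to `borrow()` succeeded immediately, and the waiting lists vanished, which is what turned a library-like practice into the publishers' court case.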

When public libraries closed at the beginning of the covid19 pandemic, the Internet Archive announced the National Emergency Library, which suspended the one-copy-at-a-time rule and scrubbed the waiting lists so anyone could borrow any book at any time. The resulting publicity was the first time many people had heard of the Open Library, although authors had already complained. Hachette Book Group, Penguin Random House, HarperCollins, and Wiley filed suit. Shortly afterwards, the Archive shut down the National Emergency Library. The Open Library continues, and the Archive will appeal the judge’s ruling.

On the they’re-killing-the-libraries side: Mike Masnick and Fight for the Future. At Walled Culture, Glyn Moody argues that sharing ebooks helps sell paid copies. Many authors agree with the publishers that their living is at risk; a group of exceptions, including Neil Gaiman, Naomi Klein, and Cory Doctorow, has published an open letter defending the Archive.

At Vice, Claire Woodstock lays out some of the economics of library ebook licenses, which eat up budgets but leave libraries vulnerable and empty-shelved when a service is withdrawn. She also notes that the Internet Archive digitizes physical copies it buys or receives as donations, and does not pay for ebook licenses.

Brief digression back to 1996, when Pamela Samuelson warned of the coming copyright battles in Wired. Many of its key points have since been enshrined into law, such as the ban on circumventing copy protection; others, such as requiring Internet Service Providers to prevent users from uploading copyrighted material, remain in play today. Number three on her copyright maximalists’ wish list: eliminating first-sale rights for digitally transmitted documents. This is the doctrine that enables libraries to lend books.

It is therefore entirely plausible that commercial publishers see every library loan as a missed sale. Outside the US, many countries have a public lending right that pays royalties on loans for exactly that sort of reason. The Internet Archive doesn’t pay those, either.

The Archive surely isn’t facing the headwinds public libraries are. In the UK, years of austerity have shrunk library budgets and therefore their numbers and opening hours. In the US, libraries are fighting against book bans; in Missouri, the Republican-controlled legislature voted to defund the state’s libraries entirely, apparently in retaliation.

At her blog, librarian and consultant Karen Coyle, who has thought for decades about the future of libraries, takes three postings to consider the case. First, she offers a backgrounder, agreeing that the Archive’s losing on appeal could bring consequences for other libraries’ digital lending. In the second, she teases out the differences between academic/research libraries and public libraries and between research and reading. While journals and research materials are generally available in electronic format, centuries of books are not, and scanned books (like those the Archive offers) are a poor reading experience compared to modern publisher-created ebooks. These distinctions are crucial to her third posting, which traces the origins of controlled digital lending.

As initially conceived by Michelle M. Wu in a 2011 paper for Law Library Journal, controlled digital lending was a suggestion that law libraries could, either singly or in groups, buy a hard copy for their holdings and then circulate a digitized copy, similar to an Inter-Library Loan. Law libraries serve limited communities, and their comparatively modest holdings have a known but limited market.

By contrast, the Archive gives global access to millions of books it has scanned. In court, it argued that the availability of popular commercial books on its site has not harmed publishers’ revenues. The judge disagreed: the “alleged benefits” of access could not outweigh the market harm to the four publishers who brought the suit. This view entirely devalues the societal role libraries play, and Coyle, like many others, is dismayed that the judge saw the case purely in terms of its effect on the commercial market.

The question I’m left with is this: is the Open Library a library or a disruptor? If these were businesses, it would obviously be the latter: it avoids many of the costs of local competitors, and asks forgiveness not permission. As things are, it seems to be both: it’s a library for users, but a disruptor to some publishers, some authors, and potentially the world’s libraries. The judge’s ruling captures none of this nuance.

Illustrations: 19th century rendering of the Great Library of Alexandria (via Wikimedia).


Unclear and unpresent dangers

Monthly computer magazines used to fret that their news pages would be out of date by the time the new issue reached readers. This week in AI, a blog posting is out of date before you hit send.

This – Friday – morning, the Italian data protection authority, Il Garante, ordered ChatGPT to stop processing the data of Italian users until it complies with the General Data Protection Regulation. Il Garante’s objections, per Apple’s translation, posted by Ian Brown: ChatGPT provides no legal basis for collecting and processing the massive store of personal data used to train the model, and it fails to filter out users under 13.

This may be the best possible answer to the complaint I’d been writing below.

On Wednesday, the Future of Life Institute published an open letter calling for a six-month pause on developing systems more powerful than OpenAI’s current state of the art, GPT-4. Apart from Elon Musk, Steve Wozniak, and Skype co-founder Jaan Tallinn, most of the signatories are unfamiliar names to most of us, though the companies and institutions they represent aren’t – Pinterest, the MIT Center for Artificial Intelligence, UC Santa Cruz, Ripple, ABN-Amro Bank. Almost immediately, there was a dispute over the validity of the signatures.

My first reaction was on the order of: huh? The signatories are largely people who are inventing this stuff. They don’t have to issue a call. They can just *stop*, work to constrain the negative impacts of the services they provide, and lead by example. Or isn’t that sufficiently performative?

A second reaction: what about all those AI ethics teams that Silicon Valley companies are disbanding? Just in the last few weeks, these teams have been axed or cut at Microsoft and Twitch; Twitter of course ditched such fripperies last November in Musk’s inaugural wave of cost-cutting. The letter does not call to reinstate these.

The problem, as familiar critics such as Emily Bender pointed out almost immediately, is that the threats the letter focuses on are distant not-even-thunder. As she went on to say in a Twitter thread, the artificial general intelligence of the Singularitarian’s rapture is nowhere in sight. By focusing on distant threats – longtermism – we ignore the real and present problems whose roots are being continuously more deeply embedded into the new-building infrastructure: exploited workers, culturally appropriated data, lack of transparency around the models and algorithms used to build these systems….basically, all the ways they impinge upon human rights.

This isn’t the first time such a letter has been written and circulated. In 2015, Stephen Hawking, Musk, and about 150 others similarly warned of the dangers of the rise of “superintelligences”. Just a year later, in 2016, ProPublica investigated the algorithm behind COMPAS, a risk-scoring criminal justice system in use in US courts in several states. Under Julia Angwin’s scrutiny, the algorithm failed at both accuracy and fairness; it was heavily racially biased. *That*, not some distant fantasy, was the real threat to society.

“Threat” is the key issue here. This is, at heart, a letter about a security issue, and solutions to security issues are – or should be – responses to threat models. What is *this* threat model, and what level of resources to counter it does it justify?

Today, I’m far more worried by the release onto public roads of Teslas running Full Self Drive helmed by drivers with an inflated sense of the technology’s reliability than I am about all of human work being wiped away any time soon. This matters because, as Jessie Singer, author of There Are No Accidents, keeps reminding us, what we call “accidents” are the results of policy decisions. If we ignore the problems we are presently building in favor of fretting about a projected fantasy future, that, too, is a policy decision, and the collateral damage is not an accident. Can’t we do both? I imagine people saying. Yes. But only if we *do* both.

In a talk this week for a French international research group, Edwards discussed the EU’s AI Act. This effort began well before today’s generative tools exploded into public consciousness, and isn’t likely to conclude before 2024. It is, therefore, much more focused on the kinds of risks attached to public sector scandals like COMPAS and those documented in Cathy O’Neil’s 2016 book Weapons of Math Destruction, which laid bare the problems with algorithmic scoring with little to tether it to reality.

With or without a moratorium, what will “AI” look like in 2024? It has changed out of recognition just since the last draft text was published. Prediction from this biological supremacist: it still won’t be sentient.

All this said, as Edwards noted, even if the letter’s proposal is self-serving, a moratorium on development is not necessarily a bad idea. It’s just that if the risk is long-term and existential, what will six months do? If the real risk is the hidden continued centralization of data and power, then those six months could be genuinely destructive. So far, it seems like its major function is as a distraction. Resist.

Illustrations: IBM’s Watson, which beat two of Jeopardy‘s greatest champions in 2011. It has since failed to transform health care.
