Surveillance machines on wheels

After much wrangling, and with just a few days of legislative time between the summer holidays and the party conference season, on Tuesday night the British Parliament passed the Online Safety bill, which will become law as soon as it receives royal assent (assuming they can find a pen that doesn’t leak). The government announcement brims with propagandist ecstasy, while the Open Rights Group’s statement offers the reality: Britons’ online lives will be less secure as a result. Which means everyone’s will.

Parliament – and the net.wars archive – dates the current version of this bill to 2022, and the online harms white paper on which it’s based to 2020. But it *feels* like it’s been a much longer slog; I want to say six years.

This is largely because the fight over two key elements – access to encrypted messaging and age verification – *is* that old. Age verification was enshrined in the Digital Economy Act (2017), and we reviewed the contenders to implement it in 2016. If it’s ever really implemented, age verification will make Britain the most frustrating place in the world to be online.

Fights over strong encryption have been going on for 30 years. In that time, no new mathematics has appeared to change the fact that it’s not possible to create a cryptographic hole that only “good guys” can use. Nothing will change about that; technical experts will continue to try to explain to politicians that you can have secure communications or you can have access on demand, but you can’t have both.

***

At the New York Times, Farhad Manjoo writes that while almost every other industry understands that the huge generation of aging Boomers is a business opportunity, outside of health care Silicon Valley is still resolutely focused on under-30s. This, even though the titans themselves age; boy-king Mark Zuckerberg is almost 40. Hey, it’s California; they want to turn back aging, not accept it.

Manjoo struggles to imagine the specific directions products might take, but I like his main point: where’s the fun? What is this idea that after 65 you’re just something to send a robot to check up on? Yes, age often brings impairments, but why not build for them? You would think that given the right affordances, virtual worlds and online games would have a lot to offer people whose lives are becoming more constrained.

It’s true that by the time you realize that ageism pervades our society you’re old enough that no one’s listening to you any more. But even younger people must struggle with many modern IT practices: the pale, grey type that pervades the web, the picklists, the hidden passwords you have to type twice… And captchas, which often display on my desktop too small to see clearly and are resistant to resizing upwards. Bots are better at captchas than humans anyway, so what *is* the point?

We’re basically back where we were 30 years ago, when the new discipline of human-computer interaction fought to convince developers that when people struggle to operate their products, the problem is bad design, not stupid users. And all this is coming much more dangerously to cars; touch screens that can’t be operated by feel are Exhibit A.

***

But there is much that’s worse about modern cars. A few weeks ago, the Mozilla Foundation published a report reviewing the privacy of modern cars. Tl;dr: “Cars are the worst product category we have ever reviewed for privacy.”

The problems are universal across the 25 brands Mozilla researchers Jen Caltrider, Misha Rykov, and Zoë MacDonald reviewed: “Modern cars are surveillance-machines on wheels souped-up with sensors, radars, cameras, telematics, and apps that can detect everything we do inside.” Cars can collect all the data that phones and smart home devices can. But unlike phones, space is a non-issue, and unlike smart speakers, video cameras, and thermostats, cars move with you and watch where you go. Drivers, passengers, passing pedestrians…all are fodder for data collection in the new automotive industry, where heated seats and unlocking extra battery range are subscription add-ons, and the car you buy isn’t any more yours than the £6-per-hour Zipcar in the designated space around the corner.

Then there are just some really weird clauses in the companies’ privacy policies. Some collect “genetic data” (here the question that arises is not only “why?” but “how?”). Nissan says it can collect information about owners’ “sexual activity” for use in “direct marketing” or to share with marketing partners. The researchers ask, “What on earth kind of campaign are you planning, Nissan?”

Still unknown: whether the data is encrypted while held in the car; how securely it’s held; and whether the companies will resist law enforcement requests at all. We do know that car companies share and sell the masses of intimate information they collect, especially the cars’ telematics, with insurance companies.

The researchers also note that new features allow unprecedented levels of control. VW’s Car-Net, for example, allows parents – or abusers – to receive a phone alert if the car is driven outside of set hours or in or near certain locations. Ford has filed a patent on a system for punishing drivers who miss car payments.

“I got old at the right time,” a friend said in 2019. You can see his point.

Illustrations: Artist Dominic Wilcox‘s imagined driverless sleeper car of the future, as seen at the Science Museum in 2019.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Doom cyberfuture

Midway through this year’s gikii miniconference for pop culture-obsessed Internet lawyers, Jordan Hatcher proposed that generational differences are the key to understanding the huge gap between the Internet pioneers, who saw regulation as the enemy, and the current generation, who are generally pushing for it. While this is a bit too pat – it’s easy to think of Millennial libertarians and I’ve never thought of Boomers as against regulation, just, rationally, against bad Internet law that sticks – it’s an intriguing idea.

Hatcher, because this is gikii and no idea can be presented without a science fiction tie-in, illustrated this with 1990s movies, which spread the “DCF-84 virus” – that is, “doom cyberfuture-84”. The “84” is not chosen for Orwell but for the year William Gibson’s Neuromancer was published. Boomers – he mentioned John Perry Barlow, born 1947, and Lawrence Lessig, born 1961 – were instead infected with the “optimism virus”.

It’s not clear which 1960s movies might have seeded us with that optimism. You could certainly make the case that 1968’s 2001: A Space Odyssey ends on a hopeful note (despite an evil intelligence out to kill humans along the way), but you don’t even have to pick a different director to find dystopia: I see your 2001 and give you Dr Strangelove (1964). Even Woodstock (1970) is partly dystopian; the consciousness of the Vietnam war permeates every rain-soaked frame. But so does the belief that peace could win: so, call it a wash.

For younger people’s pessimism, Hatcher cited 1995’s Johnny Mnemonic (based on a Gibson short story) and Strange Days.

I tend to think that if 1990s people are more doom-laden than 1960s people it has more to do with real life. Boomers were born in a time of economic expansion and relatively affordable education and housing, and when they protested a war the government eventually listened. Millennials were born in a time when housing and education meant a lifetime of debt, and when millions of them protested a war they were ignored.

In any case, Hatcher is right about the stratification of demographic age groups. This is particularly noticeable in social media use; you can often date people’s arrival on the Internet by which communications medium they prefer. Over dinner, I commented on the nuisance of typing on a phone versus a real keyboard, and two younger people laughed at me: so much easier to type on a phone! They were among the crowd whose papers studied influencers on TikTok (Taylor Annabell, Thijs Kelder, Jacob van de Kerkhof, Haoyang Gui, and Catalina Goanta) and the privacy dangers of dating apps (Tima Otu Anwana and Paul Eberstaller), the kinds of subjects I rarely engage with because I am a creature of text, like most journalists. Email and the web feel like my native homes in a way that apps, game worlds, and video services never will. That dates me both chronologically and by my first experiences of the online world (1991).

Most years at this event there’s a new show or movie that fires many people’s imagination. Last year it was Upload with a dash of Severance. This year, real technological development overwhelmed fiction, and the star of the show was generative AI and large language models. Besides my paper with Jon Crowcroft, there was one from Marvin van Bekkum, Tim de Jonge, and Frederik Zuiderveen Borgesius that compared the science fiction risks of AI – Skynet, Roko’s basilisk, and an ordering of Asimov’s Laws that puts obeying orders above not harming humans (see XKCD, above) – to the very real risks of the “AI” we have: privacy, discrimination, and environmental damage.

Other AI papers included one by Colin Gavaghan, who asked whether it actually matters if you can’t tell that the entity communicating with you is an AI. Is that what you really need to know? You can see his point: if you’re being scammed, the fact of the scam matters more than the nature of the perpetrator, though your feelings about it may be quite different.

A standard explanation of what put the “science” in science fiction (or the “speculative” in “speculative fiction”) used to be that the authors ask, “What if?” What if a planet had six suns whose interplay meant that darkness only came once every 1,000 years? Would the reaction really be as Ralph Waldo Emerson imagined it? (Isaac Asimov’s Nightfall). What if a new link added to the increasingly complex Boston MTA accidentally turned the system into a Mobius strip (A Subway Named Mobius, by Armin Joseph Deutsch)? And so on.

In that sense, gikii is often speculative law, thought experiments that tease out new perspectives. What if Prime Day becomes a culturally embedded religious holiday (Megan Rae Blakely)? What if the EU’s trademark system applied in the Star Trek universe (Simon Sellers)? What if, as in Max Gladstone’s Craft Sequence books, law is practical magic (Antonia Waltermann)? In the trademark example, time travel is a problem, as competing interests can travel further and further back to get the first registration. In the latter…well, I’m intrigued by the idea that a law making dumping sewage in England’s rivers illegal could physically stop it from happening without all the pesky apparatus of law enforcement and parliamentary hearings.

Waltermann concluded by suggesting that to some extent law *is* magic in our world, too. A useful reminder: be careful what law you wish for because you just may get it. Boomer!

Illustrations: Part of XKCD‘s analysis of Asimov’s Laws of Robotics.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast.

Cryptocurrency winter

There is nowhere in the world, Brett Scott says in his recent book, Cloudmoney, that supermarkets price oatmeal in bitcoin. Even in El Salvador, where bitcoin became legal tender in 2021, what appear to be bitcoin prices are just the underlying dollar price refracted through bitcoin’s volatile exchange rate.
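To make that refraction concrete, here is a minimal sketch (with made-up numbers, not El Salvador’s actual prices or exchange rates) of how a bitcoin price tag is really just the dollar price divided by whatever the exchange rate happens to be at that moment:

    # Hypothetical illustration: the sticker price stays anchored in dollars;
    # only the bitcoin figure moves as the exchange rate swings.
    oatmeal_usd = 3.50                         # the "real" price, set in dollars
    btc_usd_rates = [27_000, 30_000, 24_000]   # invented exchange-rate snapshots

    for rate in btc_usd_rates:
        displayed_btc = oatmeal_usd / rate
        print(f"At ${rate:,}/BTC the shelf shows {displayed_btc:.8f} BTC "
              f"(still ${oatmeal_usd:.2f})")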

Fifteen years ago, when bitcoin was invented, its adherents thought by now it would be a mainstream currency instead of a niche highly speculative instrument of financial destruction and facilitator of crime. Five years ago, the serious money people thought it important enough to consider fighting back with central bank digital currencies (CBDCs).

In 2019, Facebook announced Libra, a consortium-backed cryptocurrency that would enable payments on its platform, apparently to match China’s social media messaging system WeChat, which is used by 1 billion users monthly. By 2021, when Facebook’s holding company renamed itself Meta, Libra had become “Diem”. In January 2022 Diem was sold to Silvergate Bank, which announced in February 2023 it would wind down and liquidate its assets, a casualty of the FTX collapse.

As Dave Birch writes in his 2020 book, The Currency Cold War, it was around the time of Facebook’s announcement that central banks began exploring CBDCs. According to the Atlantic Council’s tracker, 114 countries are exploring CBDCs, and 11 have launched one. Two – Ecuador and Senegal – have canceled theirs. Plans are inactive in 15 more.

The tracker marks the EU, US, and UK as in development. The EU is quietly considering the digital euro. In the US, in March 2022 president Joe Biden issued an executive order including instructions to research a digital dollar. In the UK the Bank of England has an open consultation on the digital pound (closes June 7). It will not make a decision until at least 2025 after completing technical development of proofs of concept and the necessary architecture. The earliest we’d see a digital pound is around 2030.

But first: the BoE needs a business case. In 2021, the House of Lords issued a report (PDF) calling the digital pound a “solution in search of a problem” and concluding, “We have yet to hear a convincing case for why the UK needs a retail CBDC.” Note “retail”. Wholesale, for use only between financial institutions, may have clearer benefits.

Some of the imagined benefits of CBDCs are familiar: better financial inclusion, innovation, lowered costs, and improved efficiency. Others are more arcane: replicating the role of cash to anchor the monetary system in a digital economy. That’s perhaps the strongest argument, in that today’s non-cash payment options are commercial products but cash is public infrastructure. Birch suggests that the digital pound could allow individuals to hold accounts at the BoE. These would be as risk-free as cash and potentially open to those underserved by the banking system.

Many of these benefits will be lost on most of us. People who already have bank accounts or modern financial apps are unlikely to care about a direct account with the BoE, especially if, as Birch suggests, one “innovation” they might allow is negative interest rates. More important, what is the difference between pounds as numbers in cyberspace and pounds as fancier numbers in cyberspace? For most of us, our national currencies are already digital, even if we sometimes convert some of it into physical notes and coins. The big difference – and part of what they’re fighting over – is who owns the transaction data.

At Rest of World, Temitayo Lawal recounts the experience in Nigeria, the first African country to adopt a CBDC. Launched 18 months ago, the eNaira has been tried by only 0.5% of the population and used for just 1.4 million transactions. Among the reasons Lawal finds: Nigeria’s eNaira doesn’t have the flexibility or sophistication of independent cryptocurrencies, younger Nigerians see little advantage to the eNaira over the apps they were already using, 30 million Nigerians (about 13% of the population) lack Internet access, and most people don’t want to entrust their financial information to their government. By comparison, during that time Nigerians traded $1.16 billion in bitcoin on the peer-to-peer platform Paxful.

Many of these factors play out the same way elsewhere. From 2014 to 2018, Ecuador operated Dinero Electrónico, a mobile payment system that allowed direct transfer of US dollars and aimed to promote financial inclusion. In a 2020 paper, researchers found DE never reached critical mass because it didn’t offer enough incentive for adoption, was opposed by the commercial banks, and lacked a sufficient supporting ecosystem for cashing in and out. In China, which launched its CBDC in August 2020, the e-CNY is rarely used because, the Economist reports, Alipay and WeChat work well enough that retailers don’t see the need to accept it. The Bahamian sand dollar has gained little traction. Denmark and Japan have dropped the idea entirely, as has Finland, although it supports the idea of a digital euro.

The good news, such as it is, is that by the time Western countries are ready to make a decision either some country will have found a successful formula that can be adapted, or everyone who’s tried it will have failed and the thing can be shelved until it’s time to rediscover it. That still leaves the problem that Scott warns of: a cashless society will give Big Tech and Big Finance huge power over us. We do need an alternative.

Illustrations: Bank of England facade.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Twitter.

Appropriate privacy

At a workshop this week, one of the organizers posed a question that included the term “appropriate”. As in: “lawful access while maintaining appropriate user privacy”. We were there to think about approaches that could deliver better privacy and security over the next decade, with privacy defined as “the embedding of encryption or anonymization in software or devices”.

I had to ask: What work is “appropriate” doing in that sentence?

I had to ask because last weekend’s royal show was accompanied by preemptive arrests well before events began – at 7:30 AM. Most of the arrested were anti-monarchy protesters armed with luggage straps and placards, climate change protesters whose T-shirts said “Just Stop Oil”, and volunteers for the Night Stars on suspicion that the rape whistles they hand out to vulnerable women might be used to disrupt the parading horses. All of these had coordinated with the Metropolitan Police in advance or actually worked with them…which made no difference. All were held for many hours. Since then, the news has broken that an actual monarchist was arrested, DNA-sampled, fingerprinted, and held for 13 hours just for standing *near* some protesters.

It didn’t help the look of the thing that several days before the Big Show, the Met tweeted a warning that: “Our tolerance for any disruption, whether through protest or otherwise, will be low.”

The arrests were facilitated by the last-minute passage of the Public Order Act just days before, with the goal of curbing “disruptive” protests. Among the now-banned practices is “locking on” – that is, locking oneself to a physical structure, a tactic the suffragettes used, among many others, in campaigning for women’s right to vote. Because that right is now so thoroughly accepted, we tend to forget how radical and militant the suffragettes had to be to get their point across and how brutal the response was. A century from now, the mainstream may look back and marvel at the treatment meted out to climate change activists. We all know they’re *right*, whether or not we like their tactics.

Since the big event, the House of Lords has published its report on current legislation. The government is seeking to expand the Public Order Act even further by lowering the bar for “serious disruption” from “significant” and “prolonged” to “more than minor”, and may include the cumulative impact of repeated protests in the same area. The House of Lords is unimpressed by these attempts to amend via secondary legislation, first because of their nature, and second because they were rejected during the scrutiny of the original bill, which itself is only days old. Secondary legislation gets looked at less closely; the Lords suggest that using this route to bring back rejected provisions “raises possible constitutional issues”. All very polite, for accusing the government of abusing the system.

In the background, we’re into the fourth decade of the same argument between governments and technical experts over encryption. Technical experts by and large take the view that opening a hole for law enforcement access to encrypted content fatally compromises security; law enforcement by and large longs for the old days when they could implement a wiretap with a single phone call to a major national telephone company. One of the technical experts present at the workshop phrased all this gently by explaining that providing access enlarges the attack surface, and the security of such a system will always be weaker because there are more “moving parts”. Adding complexity always makes security harder.

This is, of course, a live issue because of the Online Safety bill, a sprawling mess of 262 pages that includes a requirement to scan public and private messaging for child sexual abuse material, whether or not the communications are encrypted.

None of this is the fault of the workshop we began with, which is part of a genuine attempt to find a way forward on a contentious topic, and whose organizers didn’t have any of this in mind when they chose their words. But hearing “appropriate” in that way at that particular moment raised flags: you can justify anything if the level of disruption that’s allowed to trigger action is vague and you’re allowed to use “on suspicion of” indiscriminately as an excuse. “Police can do what they want to us now,” George Monbiot writes at the Guardian of the impact of the bill.

Lost in the upset about the arrests was the Met’s decision to scan the crowds with live facial recognition. It’s impossible to overstate the impact of this technology. There will be no more recurring debates about ID cards because our faces will do the job. Nothing has been said about how the Met used it on the day, whether its use led to arrests (or on what grounds), or what the Met plans to do with the collected data. The police – and many private actors – have certainly inhaled the Silicon Valley ethos of “ask forgiveness, not permission”.

In this direction of travel, many things we have taken for granted as rights become privileges that can be withdrawn at will, and what used to be public spaces open to all become restricted like an airport or a small grocery store in Whitley Bay. This is the sliding scale in which “appropriate user privacy” may be defined.

Illustrations: Protesters at the coronation (by Alisdair Hickson, at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Twitter.

The privacy price of food insecurity

One of the great unsolved questions continues to be: what is my data worth? Context is always needed: worth to whom, under what circumstances, for what purpose? Still, supermarkets may give us a clue.

At Novara Media, Jake Hurfurt, who runs investigations for Big Brother Watch, has been studying supermarket loyalty cards. He finds that increasingly only loyalty card holders have access to special offers, which used to be open to any passing customer.

Tesco now and Sainsbury’s soon, he says, “are turning the cost-of-living crisis into a cost-of-privacy crisis.”

Neat phrasing, but I’d say it differently: these retailers are taking advantage of the cost-of-living crisis to extort desperate people into giving up their data. The average value of the discounts might – for now – give a clue to the value supermarkets place on it.

But not for long, since the pattern going forward is a predictable one of monopoly power: as the remaining supermarkets follow suit and smaller independent shops thin out under the weight of rising fuel bills and shrinking margins, and people have fewer choices, the savings from the loyalty card-only special offers will shrink. Not so much that they won’t be worth having, but it seems obvious they’ll be more generous with the discounts – if “generous” is the word – in the sign-up phase than they will once they’ve achieved customer lock-in.

The question few shoppers are in a position to answer while they’re trying to lower the cost of filling their shopping carts is what the companies do with the data they collect. BBW took the time to analyze Tesco’s and Sainsbury’s privacy policies, and found that besides identity data they collect detailed purchase histories as well as bank accounts and payment information…which they share with “retail partners, media partners, and service providers”. In Tesco’s case, these include Facebook, Google, and, for those who subscribe to them, Virgin Media and Sky. Hyper-targeted personal ads right there on your screen!

All that sounds creepy enough. But consider what could well come next. Also this week, a cross-party group of 50 MPs and peers, in a letter cosigned by BBW, Privacy International, and Liberty, wrote to Frasers Group deploring that company’s use of live facial recognition in its stores, which include Sports Direct and the department store chain House of Fraser. Frasers Group’s purpose, like that of the retailers and pub chains that were trialing such systems a decade ago, is effectively to keep out people suspected of shoplifting and bad behavior. Note that’s “suspected”, not “convicted”.

What happens as these different privacy invasions start to combine?

A store equipped with your personal shopping history and financial identity plus live facial recognition cameras knows the instant you walk in who you are, what you like to buy, and how valuable a customer you are. Such a system, equipped with some sort of scoring, could make very fine judgments. Such as: this customer is suspected of stealing another customer’s handbag, but they’re highly profitable to us, so we’ll let that go. Or: this customer isn’t suspected of anything much but they look scruffy and although they browse they never buy anything – eject! Or even: this journalist wrote a story attacking our company. Show them the most expensive personalized prices. One US entertainment company is already using live facial recognition to bar entry to its venues to anyone who works for any law firm involved in litigation against it. Britain’s data protection laws should protect us against that sort of abuse, but will they survive the upcoming bonfire of retained EU law?

And, of course, what starts with relatively anodyne product advertising becomes a whole lot more sinister when it starts getting applied to politics, voter manipulation and segmentation, and “pre-crime” systems.

Add the possibilities of technology that allows retailers to display personalized pricing in-store, just as an online retailer can do in the privacy of your own browser. Could we get to a scenario where a retailer, able to link your real-world identity and purchasing power to your online and offline movements, could perform a detailed calculation of what you’d be willing to pay for a particular item? What would surge pricing for the last remaining stock of the year’s hottest toy on Christmas Eve look like?

This idea allows me to imagine shopping partnerships, where the members compare prices and the partner with the cheapest prices buys that item for the whole group. In this dystopian future, I imagine such gambits would be banned.

Most of this won’t affect people rich enough to grandly refuse to sign up for loyalty cards, and none of it will affect people rich and eccentric enough to source everything from local, independent shops – and, if they’re allowed, pay cash.

Four years ago, Jaron Lanier toured with the proposal that we should be paid for contributing to commercial social media sites. The problem with this idea was and is that payment creates a perverse incentive for users to violate their own privacy even more than they do already, and that fair payment can’t be calculated when the consequences of disclosure are perforce unknown.

The supermarket situation is no different. People need food security and affordability. They should not have to pay for that with their privacy.

Illustrations: London supermarket checkout, 2006 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Twitter.

Ex libris

So as previously discussed here three years ago and two years ago, on March 24 the US District Court for the Southern District of New York found that the Internet Archive’s controlled digital lending infringes copyright. Half of my social media feed on this subject filled immediately with people warning that publishers want to kill libraries and that this judgment is a dangerous step limiting access to information; the other half went, “They’re stealing from authors. Copyright!” Both of these things can be true. And incomplete.

To recap: in 2006 the Internet Archive set up the Open Library to offer access to digitized books under “controlled digital lending”. The system allows each book to be “out” on “loan” to only one person at a time, with waiting lists for popular titles. In a white paper, lawyers David R. Hansen and Kyle K. Courtney call this “format shifting” and say that because the system replicates library lending it is fair use. Also germane: the Archive points to a 2007 California decision that it is in fact a library. Other countries may beg to differ.
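As a rough sketch of what “controlled” means here (my own illustration, not the Open Library’s actual code), the lending rule is simple bookkeeping: one digital loan circulates per physical copy owned, and everyone else joins a queue:

    # Illustrative only: one-loan-per-owned-copy lending with a waiting list,
    # mimicking the controlled digital lending model described above.
    from collections import deque

    class LendableBook:
        def __init__(self, title, copies_owned):
            self.title = title
            self.copies_owned = copies_owned   # physical copies the library holds
            self.checked_out = set()           # users currently borrowing
            self.waitlist = deque()            # everyone else queues

        def borrow(self, user):
            if len(self.checked_out) < self.copies_owned:
                self.checked_out.add(user)
                return f"{user} has borrowed '{self.title}'"
            self.waitlist.append(user)
            return f"'{self.title}' is out; {user} is #{len(self.waitlist)} in line"

        def return_copy(self, user):
            self.checked_out.discard(user)
            if self.waitlist:                  # pass the freed loan to the next in line
                self.checked_out.add(self.waitlist.popleft())

    book = LendableBook("Neuromancer", copies_owned=1)
    print(book.borrow("alice"))   # borrowed
    print(book.borrow("bob"))     # waitlisted
    book.return_copy("alice")     # bob now has the loan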

When public libraries closed at the beginning of the covid19 pandemic, the Internet Archive announced the National Emergency Library, which suspended the one-copy-at-a-time rule and scrubbed the waiting lists so anyone could borrow any book at any time. The resulting publicity was the first time many people had heard of the Open Library, although authors had already complained. Hachette Book Group, Penguin Random House, HarperCollins, and Wiley filed suit. Shortly afterwards, the Archive shut down the National Emergency Library. The Open Library continues, and the Archive will appeal the judge’s ruling.

On the they’re-killing-the-libraries side: Mike Masnick and Fight for the Future. At Walled Culture, Glyn Moody argues that sharing ebooks helps sell paid copies. Many authors agree with the publishers that their living is at risk; a group of exceptions, including Neil Gaiman, Naomi Klein, and Cory Doctorow, has published an open letter defending the Archive.

At Vice, Claire Woodstock lays out some of the economics of library ebook licenses, which eat up budgets but leave libraries vulnerable and empty-shelved when a service is withdrawn. She also notes that the Internet Archive digitizes physical copies it buys or receives as donations, and does not pay for ebook licenses.

Brief digression back to 1996, when Pamela Samuelson warned of the coming copyright battles in Wired. Many of its key points have since been enshrined in law, such as the ban on circumventing copy protection; others, such as requiring Internet Service Providers to prevent users from uploading copyrighted material, remain in play today. Number three on her copyright maximalists’ wish list: eliminating first-sale rights for digitally transmitted documents. This is the doctrine that enables libraries to lend books.

It is therefore entirely believable that commercial publishers believe that every library loan is a missed sale. Outside the US, many countries have a public lending right that pays royalties on loans for that sort of reason. The Internet Archive doesn’t pay those, either.

It surely isn’t facing the headwinds public libraries are. In the UK, years of austerity have shrunk library budgets and therefore their numbers and opening hours. In the US, libraries are fighting against book bans; in Missouri, the Republican-controlled legislature voted to defund the state’s libraries entirely, apparently in retaliation.

At her blog, librarian and consultant Karen Coyle, who has thought for decades about the future of libraries, takes three postings to consider the case. First, she offers a backgrounder, agreeing that the Archive’s losing on appeal could bring consequences for other libraries’ digital lending. In the second, she teases out the differences between academic/research libraries and public libraries and between research and reading. While journals and research materials are generally available in electronic format, centuries of books are not, and scanned books (like those the Archive offers) are a poor reading experience compared to modern publisher-created ebooks. These distinctions are crucial to her third posting, which traces the origins of controlled digital lending.

As initially conceived by Michelle M. Wu in a 2011 paper for Law Library Journal, controlled digital lending was a suggestion that law libraries could, either singly or in groups, buy a hard copy for their holdings and then circulate a digitized copy, similar to an Inter-Library Loan. Law libraries serve limited communities, and their comparatively modest holdings have a known but limited market.

By contrast, the Archive gives global access to millions of books it has scanned. In court, it argued that the availability of popular commercial books on its site has not harmed publishers’ revenues. The judge disagreed: the “alleged benefits” of access could not outweigh the market harm to the four publishers who brought the suit. This view entirely devalues the societal role libraries play, and Coyle, like many others, is dismayed that the judge saw the case purely in terms of its effect on the commercial market.

The question I’m left with is this: is the Open Library a library or a disruptor? If these were businesses, it would obviously be the latter: it avoids many of the costs of local competitors, and asks forgiveness not permission. As things are, it seems to be both: it’s a library for users, but a disruptor to some publishers, some authors, and potentially the world’s libraries. The judge’s ruling captures none of this nuance.

Illustrations: 19th century rendering of the Great Library of Alexandria (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Twitter.

Unclear and unpresent dangers

Monthly computer magazines used to fret that their news pages would be out of date by the time the new issue reached readers. This week in AI, a blog posting is out of date before you hit send.

This – Friday – morning, the Italian data protection authority, Il Garante, has ordered ChatGPT to stop processing the data of Italian users until it complies with the General Data Protection Regulation. Il Garante’s objections, per Apple’s translation, posted by Ian Brown: ChatGPT provides no legal basis for collecting and processing the massive store of personal data used to train the model, and it fails to filter out users under 13.

This may be the best possible answer to the complaint I’d been writing below.

On Wednesday, the Future of Life Institute published an open letter calling for a six-month pause on developing systems more powerful than OpenAI’s current state of the art, GPT-4. Apart from Elon Musk, Steve Wozniak, and Skype co-founder Jaan Tallinn, most of the signatories are unfamiliar names to most of us, though the companies and institutions they represent aren’t – Pinterest, the MIT Center for Artificial Intelligence, UC Santa Cruz, Ripple, ABN-Amro Bank. Almost immediately, there was a dispute over the validity of the signatures.

My first reaction was on the order of: huh? The signatories are largely people who are inventing this stuff. They don’t have to issue a call. They can just *stop*, work to constrain the negative impacts of the services they provide, and lead by example. Or isn’t that sufficiently performative?

A second reaction: what about all those AI ethics teams that Silicon Valley companies are disbanding? Just in the last few weeks, these teams have been axed or cut at Microsoft and Twitch; Twitter of course ditched such fripperies last November in Musk’s inaugural wave of cost-cutting. The letter does not call to reinstate these.

The problem, as familiar critics such as Emily Bender pointed out almost immediately, is that the threats the letter focuses on are distant not-even-thunder. As she went on to say in a Twitter thread, the artificial general intelligence of the Singularitarians’ rapture is nowhere in sight. By focusing on distant threats – longtermism – we ignore the real and present problems whose roots are being embedded ever more deeply into the infrastructure now being built: exploited workers, culturally appropriated data, lack of transparency around the models and algorithms used to build these systems…basically, all the ways they impinge upon human rights.

This isn’t the first time such a letter has been written and circulated. In 2015, Stephen Hawking, Musk, and about 150 others similarly warned of the dangers of the rise of “superintelligences”. Just a year later, in 2016, Pro Publica investigated the algorithm behind COMPAS, a risk-scoring criminal justice system in use in US courts in several states. Under Julia Angwin‘s scrutiny, the algorithm failed at both accuracy and fairness; it was heavily racially biased. *That*, not some distant fantasy, was the real threat to society.

“Threat” is the key issue here. This is, at heart, a letter about a security issue, and solutions to security issues are – or should be – responses to threat models. What is *this* threat model, and what level of resources to counter it does it justify?

Today, I’m far more worried by the release onto public roads of Teslas running Full Self Drive helmed by drivers with an inflated sense of the technology’s reliability than I am about all of human work being wiped away any time soon. This matters because, as Jessie Singer, author of There Are No Accidents, keeps reminding us, what we call “accidents” are the results of policy decisions. If we ignore the problems we are presently building in favor of fretting about a projected fantasy future, that, too, is a policy decision, and the collateral damage is not an accident. Can’t we do both? I imagine people saying. Yes. But only if we *do* both.

In a talk this week for a French international research group, Edwards discussed the EU’s AI Act. This effort began well before today’s generative tools exploded into public consciousness, and isn’t likely to conclude before 2024. It is, therefore, much more focused on the kinds of risks attached to public sector scandals like COMPAS and those documented in Cathy O’Neil’s 2017 book Weapons of Math Destruction, which laid bare the problems with algorithmic scoring with little to tether it to reality.

With or without a moratorium, what will “AI” look like in 2024? It has changed out of recognition just since the last draft text was published. Prediction from this biological supremacist: it still won’t be sentient.

All this said, as Edwards noted, even if the letter’s proposal is self-serving, a moratorium on development is not necessarily a bad idea. It’s just that if the risk is long-term and existential, what will six months do? If the real risk is the hidden continued centralization of data and power, then those six months could be genuinely destructive. So far, it seems like its major function is as a distraction. Resist.

Illustrations: IBM’s Watson, which beat two of Jeopardy‘s greatest champions in 2011. It has since failed to transform health care.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Twitter.

Memex 2.0

As language models get cheaper, it’s dawned on me what kind of “AI” I’d like to have: a fully personalized chat bot that has been trained on my 30-plus years of output plus all the material I’ve read, watched, listened to, and taken notes on all these years. A clone of my brain, basically, with more complete and accurate memory updated alongside my own. Then I could discuss with it: what’s interesting to write about for this week’s net.wars?

I was thinking of what’s happened with voice synthesis. In 2011, it took the Scottish company Cereproc months to build a text-to-speech synthesizer from recordings of Roger Ebert’s voice. Today, voice synthesizers are all over the place – not personalized like Ebert’s, but able to read a set text plausibly enough to scare voice actors.

I was also thinking of the Stochastic Parrots paper, whose first anniversary was celebrated last week by authors Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell. An important part of the paper advocates for smaller, better-curated language models: more is not always better. I can’t find a stream for the event, but here’s the reading list collected during the proceedings. There’s lots I’d rather eliminate from my personal assistant. Eliminating unwanted options upfront has long been a widespread Internet failure, from shopping sites (“never show me pet items”) to news sites (“never show me fashion trends”). But that sort of selective display is more difficult and expensive than including everything and offering only inclusion filters.

A computational linguistics expert tells me that we’re an unknown amount of time away from my dream of the wg-bot. Probably, if such a thing becomes possible it will be based on someone’s large language model and fine-tuned with my stuff. Not sure I entirely like this idea; it means the model will be trained on stuff I haven’t chosen or vetted and whose source material is unknown, unless we get a grip on forcing disclosure or the proposed BLOOM academic open source language model takes over the world.
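For the mechanics, here is a minimal sketch of what “fine-tuned with my stuff” might look like, assuming an off-the-shelf open model and the Hugging Face libraries; the file name, base model, and settings are all hypothetical stand-ins, not a real wg-bot recipe:

    # Sketch only: fine-tune a small open causal language model on a personal
    # archive ("columns.txt", one document per line) to approximate the wg-bot.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "gpt2"  # stand-in for "someone's large language model"
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)

    dataset = load_dataset("text", data_files={"train": "columns.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="wg-bot", num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

Even in this toy form, the curation problem is visible: whatever went into the base model comes along for the ride.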

I want to say that one advantage to training a chatbot on your own output is you don’t have to worry so much about copyright. However, the reality is that most working writers have sold all rights to most of their work to large publishers, which means that such a system is a new version of digital cholera. In my own case, by the time I’d been in this business for 15 years, more than half of the publications I’d written for were defunct. I was lucky enough to retain at least non-exclusive rights to my most interesting work, but after so many closures and sales I couldn’t begin to guess – or even know how to find out – who owns the rights to the rest of it. The question is moot in any case: unless I choose to put those group reviews of Lotus 1-2-3 books back online, probably no one else will, and if I do no one will care.

On Mastodon, Edwards raised the specter of the upcoming new! improved! version of the copyright wars launched by the arrival of the Internet: “The real generative AI copyright wars aren’t going to be these tiny skirmishes over artists and Stability AI. Its going to be a war that puts filesharing 2.0 and the link tax rolled into one in the shade.” Edwards is referring to this case, in which artists are demanding billions from the company behind the Stable Diffusion engine.

Edwards went on to cite a Wall Street Journal piece that discusses publishers’ alarmed response to what they perceive as new threats to their business. First: that the large piles of data used to train generative “AI” models are appropriated without compensation. This is the steroid-fueled analogue to the link tax, under which search engines in Australia pay newspapers (primarily the Murdoch press) for including them in news search results. A similar proposal is pending in Canada.

The second is that users, satisfied with the answers they receive from these souped-up search services, will no longer bother to visit the sources – especially since few, most notably Google, seem inclined to offer citations to back up any of the things they say.

The third is outright plagiarism without credit by the chatbot’s output, which is already happening.

The fourth point of contention is whether the results of generative AI should be themselves subject to copyright. So far, the consensus appears to be no, when it comes to artwork. But some publishers who have begun using generative chatbots to create “content” no doubt claim copyright in the results. It might make more sense to copyright the *prompt*. (And some bright corporate non-soul may yet try.)

At Walled Culture, Glyn Moody discovers that the EU has unexpectedly done something right by requiring positive opt-in to copyright protection against text and data mining. I’d like to see this as a ray of hope for avoiding the worst copyright conflicts, but given the transatlantic rhetoric around privacy laws and data flows, it seems much more likely to incite another trade conflict.

It now dawns on me that the system I outlined in the first paragraph is in fact Vannevar Bush’s Memex. Not the web, which was never sufficiently curated, but this, primed full of personal intellectual history. The “AI” represents those thousands of curating secretaries he thought the future would hold. As if.

Illustrations: Stable Diffusion rendering of “stochastic parrots”, as prompted by Jon Crowcroft.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Twitter.

Performing intelligence

“Oh, great,” I thought when news broke of the release of GPT-4. “Higher-quality deception.”

Most of the Internet disagreed; having gone mad only a few weeks ago over ChatGPT, everyone’s now agog over this latest model. It passed all these tests!

One exception was the journalist Paris Marx, who commented on Twitter: “It’s so funny to me that the AI people think it’s impressive when their programs pass a test after being trained on all the answers.”

Agreed. It’s also so funny to me that they call that “AI” and don’t like it when researchers like computational linguist Emily Bender call it a “stochastic parrot”. At Marx’s Tech Won’t Save Us podcast, Goldsmiths professor Dan McQuillan, author of Resisting AI: An Anti-fascist Approach to Artificial Intelligence, calls it a “bullshit engine” whose developers’ sole goal is plausibility – plausibility that, as Bender has said, allows us imaginative humans to think we detect a mind behind it, and the result is to risk devaluing humans.

Let’s walk back to an earlier type of system that has been widely deployed: benefits scoring systems. A couple of weeks ago, Lighthouse Reports and Wired magazine teamed up on an investigation of these systems, calling them “suspicion machines”.

Their work focuses on the welfare benefits system in use in Rotterdam between 2017 and 2021, which uses 315 variables to risk-score benefits recipients according to the likelihood that their claims are fraudulent. In detailed, worked case analyses, they find systemic discrimination: you lose points for being female, for being female and having children (males aren’t asked about children), for being non-white, and for ethnicity (knowing Dutch is a requirement for welfare recipients). Other variables include missing meetings, age, and “lacks organizing skills”, which was just one of 54 variables based on case workers’ subjective assessments. Any comment a caseworker adds translates to a 1 added to the risk score, even if it’s positive. The top-scoring 10% are flagged for further investigation.
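To see how blunt this kind of scoring is, here is a toy sketch of the mechanism as described above (a handful of invented variables and weights, nothing like the real 315-variable Rotterdam model): attributes and caseworker comments each nudge a number upward, and the top slice gets investigated:

    # Toy illustration with invented weights; not the Rotterdam algorithm.
    # Each variable adds to the score; every caseworker comment adds 1,
    # positive or not; the top-scoring decile is flagged for investigation.
    def risk_score(person):
        score = 0.0
        if person["female"]:
            score += 0.3
        if person["female"] and person["children"] > 0:
            score += 0.3
        if not person["speaks_dutch_well"]:
            score += 0.2
        score += 0.2 * person["missed_meetings"]
        score += 1.0 * len(person["caseworker_comments"])  # "very cooperative" still counts
        return score

    claimants = [
        {"name": "A", "female": True, "children": 2, "speaks_dutch_well": False,
         "missed_meetings": 1, "caseworker_comments": ["very cooperative"]},
        {"name": "B", "female": False, "children": 0, "speaks_dutch_well": True,
         "missed_meetings": 0, "caseworker_comments": []},
    ]

    ranked = sorted(claimants, key=risk_score, reverse=True)
    flagged = ranked[:max(1, len(ranked) // 10)]   # the "top 10%"
    print([p["name"] for p in flagged])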

This is the system that Accenture, the city’s technology partner on the early versions, said at its unveiling in 2018 was an “ethical solution” and promised “unbiased citizen outcomes”. Instead, Wired says, the algorithm “fails the city’s own test of fairness”.

The project’s point wasn’t to pick on Rotterdam; of the dozens of cities they contacted it just happened to be the only one that was willing to share the code behind the algorithm, along with the list of variables, prior evaluations, and the data scientists’ handbook. It even – after being threatened with court action under freedom of information laws – shared the mathematical model itself.

The overall conclusion: the system was so inaccurate it was little better than random sampling “according to some metrics”.

What strikes me, aside from the details of this design, is the initial choice of scoring benefits recipients for risk of fraud. Why not score them for risk of missing out on help they’re entitled to? The UK government’s figures on benefits fraud indicate that in 2021-2022 overpayment (including error as well as fraud) amounted to 4% of total expenditure, and *underpayment* to 1.2%. Underpayment is a lot less, but it’s still substantial (£2.6 billion). Yes, I know, the point of the scoring system is to save money, but the point of the *benefits* system is to help people who need it. The suspicion was always there, but the technology has altered the balance.

This was the point the writer Ellen Ullman noted in her 1997 book Close to the Machine: the hard-edged nature of these systems, and their ability to surveil people in new ways, “infect” their owners with suspicion even of people they’ve long trusted, even when the system itself was intended to be helpful. On a societal scale, these “suspicion machines” embed increased division in our infrastructure; in his book, McQuillan warns us to watch for “functionality that contributes to violent separations of ‘us and them’.”

Along those lines, it’s disturbing that OpenAI, the owner of ChatGPT and GPT-4 (and several other generative AI gewgaws), has now decided to keep secret the details of its large language models. That is, we have no sight into what data was used in training, what software and hardware methods were used, or how energy-intensive it is. If there’s a machine loose in the world’s computer systems pretending to be human, shouldn’t we understand how it works? It would help with damping down imagining we see a mind in there.

The company’s argument appears to be that because these models could become harmful it’s bad to publish how they work because then bad actors will use them to create harm. In the cybersecurity field we call this “security by obscurity” and there is a general consensus that it does not work as a protection.

In a lengthy article at New York magazine, Elizabeth Weil quotes Daniel Dennett’s assessment of these machines: “counterfeit people” that should be seen as the same sort of danger to our system as counterfeit money. Bender suggests that rather than trying to make fake people we should be focusing more on making tools to help people.

The thing that makes me tie those scoring systems to the large language models behind GPT is that in both cases it’s all about mining our shared cultural history, with all its flaws and misjudgments, in response to a prompt and pretending the results have meaning and create new knowledge. And *that’s* what’s being embedded into the world’s infrastructure. Have we learned nothing from Clever Hans?

Illustrations: Clever Hans, performing in Leipzig in 1912 (by Karl Krall, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Twitter.

Re-centralizing

But first, a housekeeping update. Net.wars has moved – to a new address and new blogging software. For details, see here. If you read net.wars via RSS, adjust your feed to https://netwars.pelicancrossing.net. Past posts’ old URLs will continue to work, as will the archive index page, which lists every net.wars column back to November 2001. And because of the move: comments are now open for the first time in probably about ten years. I will also shortly set up a mailing list for those who would rather get net.wars by email.

***

This week the Ada Lovelace Institute held a panel discussion of ethics for researchers in AI. Arguably, not a moment too soon.

At Noema magazine, Timnit Gebru writes, as Mary L Gray and Siddharth Suri have previously, that what today passes for “AI” and “machine learning” is, underneath, the work of millions of poorly-paid marginalized workers who add labels, evaluate content, and provide verification. At Wired, Gebru adds that their efforts are ultimately directed by a handful of Silicon Valley billionaires whose interests are far from what’s good for the rest of us. That would be the “rest of us” who are being used, willingly or not, knowingly or not, as experimental research subjects.

Two weeks ago, for example, a company called Koko ran an experiment offering chatbot-written/human-overseen mental health counseling without informing the 4,000 people who sought help via the “Koko Cares” Discord server. In a Twitter thread, company co-founder Rob Morris said those users rated the bot’s responses highly until they found out a bot had written them.

People can build relationships with anything, including chatbots, as was proved in 1966 with the release of the experimental chatbot therapist Eliza. People found Eliza’s responses comforting even though they knew it was a bot. Here, however, informed consent processes seem to have been ignored. Morris’s response, when widely criticized for the unethical nature of this little experiment, was to say it was exempt from informed consent requirements because helpers could opt whether or not to use the chatbot’s responses and Koko had no plan to publish the results.

One would like it to be obvious that *publication* is not the biggest threat to vulnerable people in search of help. One would also like modern technology CEOs to have learned the right lesson from prior incidents such as Facebook’s 2012 experiment to study users’ moods when it manipulated their newsfeeds. Facebook COO Sheryl Sandberg apologized for *how the experiment was communicated*, but not for doing it. At the time, we thought that logic suggested that such companies would continue to do the research but without publishing the results. Though isn’t tweeting publication?

It seems clear that scale is part of the problem here, like the old saying, one death is a tragedy; a million deaths are a statistic. Even the most sociopathic chatbot owner is unlikely to enlist an experimental chatbot to respond to a friend or family member in distress. But once a screen intervenes, the thousands of humans on the other side are just a pile of user IDs; that’s part of how we get so much online abuse. For those with unlimited control over the system we must all look like ants. And who wouldn’t experiment on ants?

In that sense, the efforts of the Ada Lovelace panel to sketch out the diligence researchers should follow are welcome. But the reality of human nature is that it will always be possible to find someone unscrupulous to do unethical research – and the reality of business nature is not to care much about research ethics if the resulting technology will generate profits. Listening to all those earnest, worried researchers left me writing this comment: MBAs need ethics. MBAs, government officials, and anyone else who is in charge of how new technologies are used and whose decisions affect the lives of the people those technologies are imposed upon.

This seemed even more true a day later, at the annual activists’ gathering Privacy Camp. In a panel on the proliferation of surveillance technology at the borders, speakers noted that every new technology that could be turned to helping migrants is instead being weaponized against them. The Border Violence Monitoring Network has collected thousands of such testimonies.

The especially relevant bit came when Hope Barker, a senior policy analyst with BVMN, noted this problem with the forthcoming AI Act: accountability is aimed at developers and researchers, not users.

Granted, technology that’s aborted in the lab isn’t available for abuse. But no technology stays the same after leaving the lab; it gets adapted, altered, updated, merged with other technologies, and turned to uses the researchers never imagined – as Wendy Hall noted in moderating the Ada Lovelace panel. And if we have learned anything from the last 20 years it is that over time technology services enshittify, to borrow Cory Doctorow’s term in a rant which covers the degradation of the services offered by Amazon, Facebook, and soon, he predicts, TikTok.

The systems we call “AI” today have this in common with those services: they are centralized. They are technologies that re-advantage large organizations and governments because they require amounts of data and computing power that are beyond the capabilities of small organizations and individuals to acquire. We can only rent them or be forced to use them. The ur-evil AI, HAL in Stanley Kubrick’s 2001: A Space Odyssey, taught us to fear an autonomous rogue. But the biggest danger with “AIs” of the type we are seeing today, which are being put into decision making and law enforcement, is not the technology, nor the people who invented it, but the expanding desires of its controller.

Illustrations: HAL, in 2001.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns back to November 2001. Comment here, or follow on Twitter.