The master switch

In his 2010 book, The Master Switch, Columbia law professor Tim Wu quotes the television news pioneer Fred W. Friendly, who wrote in a 1970 article for Saturday Review that the question that precedes any question of the First Amendment and free speech is “who has exclusive control of the master switch”. In his 1967 memoir, Due to Circumstances Beyond Our Control, Friendly tells numerous stories that illustrate the point, beginning with his resignation from the presidency of CBS News after the network insisted on showing a rerun of I Love Lucy rather than carrying live the first Senate hearings on US involvement in Vietnam.

This is the switch that Amazon founder Jeff Bezos flipped this week when he blocked the editorial board of the Washington Post, which he owns, from endorsing Kamala Harris and Tim Walz in the US presidential election. At that point, every fear people had in 2013, when Bezos paid $250 million to save the struggling 76-time Pulitzer Prize-winning paper famed for breaking Watergate, came true. Bezos, like William Randolph Hearst, Rupert Murdoch, and others before him, exerted his ownership control. (See also the late, great film critic Roger Ebert on the day Rupert Murdoch took over the Chicago Sun-Times.)

If you think of the Washington Post as just a business, as opposed to a public service institution, you can see why Bezos preferred to hedge his bets. But, as former Post journalist Dan Froomkin noted in February 2023, ten years after the sale, the newspaper had reverted to its immediately pre-Bezos state, laying off staff and losing money. Froomkin warned then that Bezos’ newly-installed “lickspittle” publisher, editor, and editorial page editor lacked vision, and suggested Bezos turn the paper into a non-profit, give it an endowment, and leave it alone.

By October 2023, Froomkin was arguing that the Post had blown it by failing to cover the decade’s most important story, the threat to the US’s democratic system posed by “the increasingly demented and authoritarian Republican Party”. As of yesterday, more than 250,000 subscribers had canceled, literally decimating its subscriber base – though the loss, as Jason Koebler writes at 404 Media, is barely a rounding error in Bezos’ wealth.

Almost simultaneously, a similar story was playing out 3,000 miles across the country at the LA Times. There, owner Patrick Soon-Shiong overrode the paper’s editorial board’s intention to endorse Harris/Walz. Several board members have since resigned, along with editorials editor Mariel Garza.

At Columbia Journalism Review, Jeff Jarvis uses Timothy Snyder’s term “anticipatory obedience” to describe these situations.

On his Mea Culpa podcast, former Trump legal fixer Michael Cohen has frequently issued a hard-to-believe warning that if Trump is elected he will assemble the country’s billionaires and take full control of their assets, Putin-style. As unAmerican as that sounds, Cohen has been improbably right before; in 2019 Congressional testimony he famously predicted that Trump would never allow a peaceful transition of power. If Trump wins and proves Cohen correct, anticipatory obedience won’t save Bezos or any other billionaire.

The Internet was supposed to provide an escape from this sort of control (in the 1990s, pundits feared The Drudge Report!). Into this context, several bits of social media news also dropped. Bluesky announced $15 million in venture capital funding and a user base of 13 million. Reddit announced its first-ever profit, apparently solely due to the deals the 19-year-old service signed giving Google and OpenAI access to user postings, and to its use of AI to translate users’ posts into multiple languages. Finally, the owner of botsin.space, the Mastodon server that allows users to run bots, is shutting it down, ending new account signups and shifting to read-only by December. He blames unsustainably increasing costs as the user base and postings continue to grow.

Even though Bluesky is incorporated as a public benefit LLC, the acceptance of venture capital gives pause: venture capital always looks for a lucrative exit rather than value for users. Reddit served tens of millions of users for 19 years without ever making any money; it’s only profitable now because AI developers want its data.

Bluesky’s board includes the notable free speech advocate Mike Masnick, of Techdirt, who this week blasted the Washington Post’s decision in scathing terms. Masnick’s paper proposing to promote free speech by developing protocols rather than platforms serves as a sort of founding document for Bluesky. Platforms centralize user data and share it back out again; protocols are standards anyone can use to write compliant software that enables new connections. Think proprietary (Apple) versus open source (Linux, email, the web).
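To make the distinction concrete, here is a minimal sketch of what a protocol buys you: any program can ask any ActivityPub-speaking server (Mastodon included) for an account’s public profile, with no platform API key and no gatekeeper. This is my illustration, not Masnick’s, and the account URL is only an example.

```python
# Minimal sketch: "protocols, not platforms" in practice.
# Any software can talk to any ActivityPub server; no API key,
# no platform gatekeeper. (The actor URL below is illustrative.)
import json
import urllib.request

url = "https://mastodon.social/users/Mastodon"  # example ActivityPub actor URL
req = urllib.request.Request(
    url,
    headers={"Accept": "application/activity+json"},  # ask for the protocol form
)
with urllib.request.urlopen(req) as resp:
    actor = json.load(resp)

# The same standard fields come back from *any* compliant server.
print(actor.get("preferredUsername"), actor.get("outbox"))
```

Swap in any other compliant server’s URL and the same code still works; that interchangeability is the whole point.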

The point is this: platforms either start with or create billionaire owners; protocols allow participation by both large and small players. That still leaves the long-term problem of how to make such services sustainable. Koebler writes of the hard work of going independent, but notes that the combination of new technology and the elimination of layers of management and corporate executives makes it vastly cheaper than before. Bluesky so far has no advertising, but plans to offer higher-level features by subscription, still implying a centralized structure. Mastodon instances survive on user donations and volunteer administrators. Its developers should focus on making instances much easier and more efficient to run: democratize the master switch.

Illustrations: Charles Foster Kane (Orson Welles) in his newsroom in the 1941 film Citizen Kane (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Follow the business models

In a market that enabled the rational actions of economists’ fantasies, consumers would be able to communicate their preferences for “smart” or “dumb” objects by exercising purchasing power. Instead, everything from TVs and vacuum cleaners to cars is sprouting Internet connections and rampant data collection.

I would love to believe we will grow out of this phase as the risks of this approach become clearer, but I doubt it, because business models will increasingly insist on post-sale money, which never existed in the analog market. Subscriptions to specialized features and embedded ads seem likely to take over everything. Essentially, software can change the business model governing any object’s manufacture into Gillette’s famous gambit: sell the razors cheap, and make the real money selling razor blades. See also in particular printer cartridges. It’s going to be everywhere, and we’re all going to hate it.

***

My consciousness of the old ways is heightened at the moment because I spent last weekend participating in a couple of folk music concerts around my old home town, Ithaca, NY. Everyone played acoustic instruments and sang old songs to celebrate 58 years of the longest-running folk music radio show in North America. Some of us hadn’t really met for nearly 50 years. We all look older, but everyone sounded great.

A couple of friends there operate a “rock shop” outside their house. There’s no website, there’s no mobile app, just a table and some stone wall with bits of rock and other findings for people to take away if they like. It began as an attempt to give away their own small collection, but it seems the clearing space aspect hasn’t worked. Instead, people keep bringing them rocks to give away – in one case, a tray of carefully laid-out arrowheads. I made off with a perfect, peach-colored conch shell. As I left, they were taking down the rock shop to make way for fantastical Halloween decorations to entertain the neighborhood kids.

Except for a brief period in the 1960s, playing folk music has never been lucrative. However, it’s still harder now: teens buy CDs to ensure they can keep their favorite music, and older people buy CDs because they still play their old collections. But you can’t even *give* a 45-year-old a CD, because they have no way to play it. At the concert, Mike Agranoff highlighted musicians’ need for support in an ecosystem that now pays them just $0.014 (his number) for streaming a track.
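For scale, a quick back-of-envelope calculation using Agranoff’s figure (the $1,000 target is my arbitrary illustration, not his):

```python
# What $0.014 per stream means in practice (Agranoff's figure;
# the monthly target is purely illustrative).
per_stream = 0.014                         # dollars paid per play
monthly_target = 1000.00                   # a modest gross to aim for
print(round(monthly_target / per_stream))  # 71429 streams, every month
```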

***

With both Halloween and the US election scarily imminent, the government the UK elected in July finally got down to its legislative program this week.

Data protection reform is back in the form of the Data Use and Access Bill, Lindsay Clark reports at The Register; the bill is intended to improve efficiency in the NHS, the police force, and businesses. It will involve making changes to the UK’s implementation of the EU’s General Data Protection Regulation, and care will be needed to avoid putting the UK’s adequacy decision at risk. At the Open Rights Group, Mariano della Santi warns that the bill weakens citizens’ protection against automated decision making. At medConfidential, Sam Smith details the lack of safeguards for patient data.

At Computer Weekly, Bill Goodwin and Sebastian Klovig Skelton outline the main provisions and hopes: improve patient care, free up police time to spend on protecting the public, save money.

‘Twas ever thus. Every computer system is always commissioned to save money and improve efficiency – they say this one will save 140,000 hours a year of NHS staff time! Every new computer system always also brings unexpected costs in time and money, and messy stages of implementation and adaptation during which everything becomes *less* efficient. There are always hidden costs – in this case, likely the difficulties of curating data and remediating historical bias. An easy prediction: these will be non-trivial.

***

Also pending is the draft United Nations Convention Against Cybercrime; the goal is to get it through the General Assembly by the end of this year.

Human Rights Watch writes that 29 civil society organizations have written to the EU and member states asking them to vote against the treaty’s adoption and consider alternative approaches that would safeguard human rights. The EFF is encouraging all states to vote no.

Internet historians will recall that there is already a convention on cybercrime, sometimes called the Budapest Convention. Drawn up in 2001 by the Council of Europe and in force since 2004, it was signed by 70 countries and ratified by 68. The new treaty, which has been drafted by a much broader range of countries, including Russia and China, is meant to be consistent with that older agreement. The hope, however, is that it will achieve the global acceptance its predecessor did not, in part because of that broader participation in drafting it.

However, opponents are concerned that the treaty is vague, fails to limit its application to crimes that can only be committed via a computer, and lacks safeguards. It’s understandable that law enforcement, faced with the kinds of complex attacks on computer systems we see today, wants its path to international cooperation eased. But, as EFF writes, that eased cooperation should not extend to “serious crimes” whose definition and punishment are left up to individual countries.

Illustrations: Halloween display seen near Mechanicsburg, PA.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Review: The Web We Weave

The Web We Weave
By Jeff Jarvis
Basic Books
ISBN: 9781541604124

Sometime in the very early 1990s, someone came up to me at a conference and told me I should read the work of Robert McChesney. When I followed the instruction, I found a history of how radio and TV started as educational media and wound up commercially controlled. Ever since, this is the lens through which I’ve watched the Internet develop: how do we keep the Internet from following that same path? If all you look at is the last 30 years of web development, you might think we can’t.

A similar mission animates retired CUNY professor Jeff Jarvis in his latest book, The Web We Weave. In it, among other things, he advocates reanimating the open web by reviving the blogs many abandoned when Twitter came along and by embracing other forms of citizen media. Phenomena such as disinformation, misinformation, and other harms attributed to social media, he writes, have precursors in earlier moral panics: novels, comic books, radio, TV, all were once new media whose evils older generations fretted about. (For my parents, it was comic books, which they completely banned while ignoring the hours of TV I watched.) With that past in mind, much of today’s online harms regulation leaves him skeptical.

As a media professor, Jarvis is interested in the broad sweep of history, setting social media into the context that began with the invention of the printing press. That has its benefits when it comes to later chapters where he’s making policy recommendations on what to regulate and how. Jarvis is emphatically a free-speech advocate.

Among his recommendations are those such advocates typically support: users should be empowered, educated, and taught to take responsibility, and we should develop business models that support good speech. Regulation, he writes, should include the following elements: transparency, accountability, disclosure, redress, and behavior rather than content.

On the other hand, Jarvis is emphatically not a technical or design expert, and therefore has little to say about the impact on user behavior of technical design decisions. Some things we know are constants. For example, the willingness of (fully identified) online communicators to attack each other was noted as long ago as the 1980s, when Sara Kiesler studied the first corporate mailing lists.

Others, however, are not. Those developing Mastodon, for example, deliberately chose not to implement the ability to quote and comment on a post because they believed that feature fostered abuse and pile-ons. Similarly, Lawrence Lessig pointed out in 1999 in Code and Other Laws of Cyberspace (PDF) that you couldn’t foment a revolution using AOL chatrooms because they had a limit of 23 simultaneous users.

Understanding the impact of technical decisions requires experience, experimentation, and, above all, time. If you doubt this, read Mike Masnick’s series at Techdirt on Elon Musk’s takeover and destruction of Twitter. His changes to the verification system alone have undermined the ability to understand who’s posting and decide how trustworthy their information is.

Jarvis goes on to suggest we should rediscover human scale and mutual obligation, both of which proved crucial as the covid pandemic progressed. The money will always favor mass scale. But we don’t have to go that way.

Choice

The first year it occurred to me that a key consideration in voting for the US president was the future composition of the Supreme Court was 1980: Reagan versus Carter. Reagan appointed the first female justice, Sandra Day O’Connor – and then gave us Antonin Scalia and Anthony Kennedy. Only O’Connor was appointed during Reagan’s first term, the one that woulda, shoulda, coulda been Carter’s second.

Watching American TV shows and movies from the 1980s and 1990s is increasingly sad. In some – notably Murphy Brown – a pregnant female character wrestles with deciding what to do. Even when not pregnant, those characters live inside the confidence of knowing they have choice.

At the time, Murphy (Candice Bergen) was pilloried for choosing single motherhood. (“Does [Dan Quayle] know she’s fictional?” a different sitcom asked, after the then-vice president criticized her “lifestyle”.) Had Murphy opted instead for an abortion, I imagine she’d have been just as vilified rather than seen as acting “responsibly”. In US TV history, it may only be on Maude in 1972 that an American lead character, Maude (Bea Arthur), is shown actually going through with an abortion. Even in 2015, in an edgy comedy like You’re the Worst, that choice is given to the sidekick. It’s now impossible to watch any of those scenes without feeling the loss of agency.

In the news, pro-choice activists warned that overturning Roe v. Wade would bring deaths, and so it has – but not in the same way as in the illegal-abortion 1950s, when termination itself could be dangerous. Instead, women are dying because their health needs fall in the middle of a spectrum that has purely elective abortion at one end and purely involuntary miscarriage at the other. These are not distinguishable *physically*, but they can be made into evil versus blameless morality tales (though watch that miscarrying mother, maybe she did something).

Even those who still have a choice may struggle to access it. Only one doctor performs abortions in Mississippi; he also works in Alabama and Tennessee.

So this time women are dying or suffering from lack of care because doctors can’t be sure what they are allowed to do under laws written by people with shockingly limited medical knowledge.

Such was the case of Amber Thurman, a 28-year-old medical assistant from Georgia who died of septic shock after fetal tissue was incompletely expelled following a medication abortion, which she’d had to travel hundreds of miles to North Carolina to get. It’s a very rare complication, and her life could probably have been saved by prompt action – but the hospital had no policy in place for septic abortions under Georgia’s then-new law. There have been many more awful stories since – many not deaths but fraught survivals of avoidable complications.

If anti-abortion activists are serious about their desire to save the life of every unborn child, there are real and constructive things they can do. They could start by requiring hospitals to provide obstetrics units and pushing states to improve provision for women’s health. According to March of Dimes, 5.5 million American women are caught in the one-third of US counties it calls “maternity deserts”. Most affected are those in North Dakota, South Dakota, Alaska, Oklahoma, and Nebraska. In Texas, which banned abortion after six weeks in 2021 and now prohibits it except to save the mother’s life, maternal mortality rose 56% between 2019 and 2022. Half of Texas counties, Stephania Taladrid reported at The New Yorker in January, have no specialists in women’s health.

“Pro-life” could also mean pushing to support families. American parents have less access to parental leave than their counterparts in other developed countries. Or they could fight to redress other problems, like the high rate of Black maternal mortality.

Instead, the most likely response to the news that abortion rates have actually gone up in the US since the Dobbs decision is efforts to increase surveillance, criminalization, and restriction. In 2022, I imagined how this might play out in a cashless society, where linked systems could prevent a pregnant woman from paying for anything that might help her obtain an abortion: travel, drugs, even unhealthy foods.

This week, at The Intercept, Debbie Nathan reports on a case in which a police sniffer dog flagged an envelope that, opened under warrant, proved to contain abortion pills. It’s not clear, she writes, whether the sniffer dogs actually detect misoprostol and mifepristone, or traces of contraband drugs, or are just responding to an already-suspicious handler’s subtle cues, like Clever Hans. Using the US Postal Service’s database of images of envelopes, inspectors were able to identify other parcels from the same source, and their recipients. A hostile administration could press for – in fact, Republican vice-presidential candidate JD Vance has already demanded – renewed enforcement of the not-dead-only-sleeping Comstock Act (1873), which criminalizes importing and mailing items “intended for producing abortion, or for any indecent or immoral use”.

There are so many other vital issues at stake in this election, but this one is personal. I spent my 20s traveling freely across the US to play folk music. Imagine that with today’s technology and states that see every woman of child-bearing age as a suspected criminal.

Illustrations: Murphy Brown (Candice Bergen) with baby son Avery (Haley Joel Osment).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

A hole is a hole

We told you so.

By “we” I mean thousands of privacy advocates, human rights activists, technical experts, and information security journalists.

By “so”, I mean: we all said repeatedly over decades that there is no such thing as a magic hole that only “good guys” can use. If you build a supposedly secure system but put in a hole to give the “authorities” access to communications, that hole can and will be exploited by “bad guys” you didn’t want spying on you.

The particular hole Chinese hackers used to spy on the US is the Communications Assistance for Law Enforcement Act (1994). CALEA mandates that telecommunications providers design their equipment so that they can wiretap any customer if law enforcement presents a warrant. At Techcrunch, Zack Whittaker recaps much of the history, tracing technology giants’ new emphasis on end-to-end encryption to the 2013 Snowden revelations of the government’s spying on US citizens.

The mid-1990s were a time of profound change for telecommunications: the Internet was arriving, exchanges were converting from analog to digital, and deregulation was providing new competition for legacy telcos. In those pre-broadband years, hundreds of ISPs offered dial-up Internet access. Law enforcement could no longer just call up a single central office to place a wiretap. When CALEA was introduced, critics were clear and prolific; for an in-depth history see Susan Landau’s and Whit Diffie’s book, Privacy on the Line (originally published 1998, second edition 2007). The net.wars archive includes a compilation of years of related arguments, and at Techdirt, Mike Masnick reviews the decades of law enforcement insistence that they need access to encrypted text. “Lawful access” is the latest term of art.

In the immediate post-9/11 shock, some of those who had insisted on the 1990s version of today’s “lawful access” (key escrow) took the opportunity to tell their opponents (us) that the attacks proved we’d been wrong. One such was the just-departed Jack Straw, home secretary from 1997 to (June) 2001, who blamed BBC Radio Four and “…large parts of the industry, backed by some people who I think will now recognise they were very naive in retrospect”. That comment sparked the first net.wars column. We could now say, “Right back atcha.”

Whatever you call an encryption backdoor, building a hole into communications security was, is, and will always be a dangerous idea, as the Dutch government recently told the EU. Now, we have hard evidence.

***

The time is long gone when people used to be snobbish about Internet addresses (see net.wars-the-book, chapter three). Most of us are therefore unlikely to have thought much about the geekishly popular “.io”. It could be a new-fangled generic top-level domain – but it’s not. We have been reading linguistic meaning into what is in fact a country code. Which is all fine and good, except that the country it belongs to is the Chagos Islands, also known as the British Indian Ocean Territory, which I had never heard of until the British government announced recently that it will hand the islands back to Mauritius (instead of asking the Chagos Islanders what they want…). Gareth Edwards made the connection: when that transfer happens, .io will cease to exist (h/t Charles Arthur’s The Overspill).

Edwards goes on to discuss the messy history of orphaned country code domains: Yugoslavia and the Soviet Union. As a result, ICANN, the naming authority, now has strict rules mandating termination in such cases. This time, there’s a lot at stake: .io is a favorite among gamers, crypto companies, and many others, some of them substantial businesses. Perhaps a solution – such as setting .io up anew as a gTLD with its domains intact – will be created. But meantime, it’s worth noting that the widely used .tv (Tuvalu), .fm (Federated States of Micronesia), and .ai (Anguilla) are *also* country code domains.
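A trivial sketch of the point, using only the country codes named above (the mapping and helper are mine, purely illustrative):

```python
# The "brandable" endings people read meaning into are country codes.
CC_TLDS = {
    "io": "British Indian Ocean Territory (Chagos Islands)",
    "tv": "Tuvalu",
    "fm": "Federated States of Micronesia",
    "ai": "Anguilla",
}

def country_behind(hostname: str) -> str:
    tld = hostname.rsplit(".", 1)[-1].lower()  # take the last dotted label
    return CC_TLDS.get(tld, "not one of the examples above")

print(country_behind("example.io"))  # a domain tied to a vanishing territory
```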

***

The story of what’s going on with Automattic, the owner of the blogging platform WordPress.com, and WP Engine, which provides hosting and other services for businesses using WordPress, is hella confusing. It’s also worrying: WordPress, which is open source content management software overseen by the WordPress Foundation, powers a little over 40% of the Internet’s top ten million websites and more than 60% of sites whose content management system is known (including this one).

At Heise Online, Kornelius Kindermann offers one of the clearer explanations: Automattic, whose CEO, Matt Mullenweg, is also a director of the WordPress Foundation and a co-creator of the software, wants WP Engine, which has been taken over by the investment company Silver Lake, to pay “trademark royalties” of 8% to the WordPress Foundation to support the software. WP Engine doesn’t wanna. Kindermann estimates the sum involved at $35 million. Since the news of all that broke, 159 employees have announced they are leaving Automattic.

The more important point is that, like the users of the encrypted services governments want to compromise, the owners of .io domains, or, ultimately, the Chagos Islanders themselves, WP Engine’s customers, some of them businesses worth millions, are hostages of uncertainty surrounding the decisions of others. Open source software is supposed to give users greater control. But as always, complexity brings experts and financial opportunities, and once there’s money everyone wants some of it.

Illustrations: View of the Chagos Archipelago taken during ISS Expedition 60 (NASA, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Review: Supremacy

Supremacy: AI, ChatGPT, and the Race That Will Change the World
By Parmy Olson
Macmillan Business
ISBN: 978-1035038220

One of the most famous books about the process of writing software is Frederick Brooks’ The Mythical Man-Month. The essay that gives the book its title makes the point that you cannot speed up the process by throwing more and more people at it. The more people you have, the more they all have to communicate with each other, and the number of communication paths grows quadratically: n people have n(n−1)/2 pairwise channels. Think of it this way: 500 people can’t read a book faster than five people can.

Brooks’ warning immediately springs to mind when Parmy Olson reports, late in her new book, Supremacy, that Microsoft CEO Satya Nadella was furious to discover that Microsoft’s 5,000 direct employees working on AI lagged well behind the rapid advances being made by the fewer than 200 people working at OpenAI. Some things just aren’t improved by parallel processing.
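Brooks’ arithmetic makes the anecdote vivid. A sketch, with the formula from Brooks and the headcounts from Olson’s account:

```python
# Brooks: n people have n*(n-1)/2 pairwise communication channels.
def channels(n: int) -> int:
    return n * (n - 1) // 2

print(channels(5))      # 10
print(channels(200))    # 19900 -- roughly OpenAI's team in the anecdote
print(channels(5_000))  # 12497500 -- Microsoft's AI staff, per Olson
```

A 25-fold difference in headcount is a 600-fold difference in coordination overhead.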

The story Olson tells is a sad one: two guys, both eager to develop an artificial general intelligence in order to save, or at least help, humanity, who both wind up working for large commercial companies whose primary interests are to 1) make money and 2) outcompete the other guy. For Demis Hassabis, the company was Google, which bought his DeepMind startup in 2014. For Sam Altman, founder of OpenAI, it was Microsoft. Which fits: Hassabis’s approach to “solving AI” was to let machines teach themselves by playing games, hoping to drive scientific discovery; Altman sought to solve real-world problems and bring everyone wealth. Too late for Olson’s book, Hassabis has achieved enough of his dream to have been one of three awarded the 2024 Nobel Prize in chemistry for using AI to predict how proteins will fold.

For both, the reason was the same: the resources needed to work in AI – data, computing power, and high-priced personnel – are too expensive for traditional startup venture capital funding or for academia. (Cue Vladan Joler, at this year’s Computers, Privacy, and Data Protection conference, noting that AI is arriving “pre-monopolized”.) As Olson tells the story, both tried repeatedly to keep the companies they founded independent. Yet both have wound up beholden to the companies whose money they took, apparently believing, like many geek founders with more IQ points than sense, that they would not have to give up control.

In comparing and contrasting the two founders, Olson shows where many of today’s problems came from. Allying themselves with Big Tech meant giving up on transparency. The ethicists who are calling out these companies over real and present harms caused by the tools they’ve built, such as bias, discrimination, and exploitation of workers performing tasks like labeling data, have 1% or less of the funding of those pushing safety for superintelligences that may never exist.

Olson does a good job of explaining the technical advances that led to the breakthroughs of recent years, as well as the business and staff realities of their different paths. A key point she pulls out is the extent to which both Google and Microsoft have become the kind of risk-averse, slow-moving, sclerotic company they despised when they were small, nimble newcomers.

Different paths, but ultimately, their story is the same: they fought the money, and the money won.

Blown

“This is a public place. Everyone has the right to be left in peace,” Jane (Vanessa Redgrave) tells Thomas (David Hemmings), whom she’s just spotted photographing her with her lover in the 1966 film Blow-Up, by Michelangelo Antonioni. The movie, set in London, proceeds as a mystery in which Thomas’s only tangible evidence is a grainy, blown-up shot of a blob that may be a murdered body.

Today, Thomas would probably be wielding a latest-model smartphone instead of a single-lens reflex film camera. He would not bother to hide behind a tree. And Jane would probably never notice, much less challenge Thomas to explain his clearly-not-illegal, though creepy, behavior. Phones and cameras are everywhere. If you want to meet a lover and be sure no one’s photographing you, you don’t go to a public park, even one as empty as the film finds Maryon Park. Today’s 20-somethings grew up with that reality, and learned early to agree that some gatherings are no-photography zones.

Even in the 1960s individuals had cameras, but taking high-quality images at a distance was the province of a small minority of experts; Antonioni’s photographer was a professional with his own darkroom and enlarging equipment. The first CCTV cameras went up in the 1960s; their proliferation became a public policy issue in the 1980s, and was propagandized as “for your safety” without much thought in the post-9/11 2000s. In the late 2010s, CCTV surveillance became democratized: my neighbor’s Ring camera means no one can leave an anonymous gift on their doorstep – or (without my consent) mine.

I suspect one reason we became largely complacent about ubiquitous cameras is that the images mostly remained unidentifiable, or at least unidentified. Facial recognition – especially the live variant police seem to feel they have the right to set up at will – is changing all that. Which all leads to this week, when Joseph Cox at 404 Media reports ($) (and Ars Technica summarizes) that two Harvard students have mashed up a pair of unremarkable $300 Meta Ray-Bans with the reverse image search service Pimeyes and a large language model to produce I-XRAY, an app that identifies in near-real time most of the people they pass on the street, including their name, home address, and phone number.

The students – AnhPhu Nguyen and Caine Ardayfio – are smart enough to realize the implications, imagining for Cox the scenario of a random male spotting a young woman and following her home. This news is breaking the same week that the San Francisco Standard and others are reporting that two men in San Francisco stood in front of a driverless Waymo taxi to block it from proceeding while demanding that the female passenger inside give them her phone number (we used to give such males the local phone number for time and temperature).

Nguyen and Ardayfio aren’t releasing the code they’ve written, but what two people can do, others with fewer ethics can recreate independently, as 30 years of Black Hat and Def Con have proved. This is a new level of democratized surveillance. Today, giant databases like Clearview AI are largely only accessible to governments and law enforcement. But the data in them has been scraped from the web, like LLMs’ training data, and merged with commercial sources.

This latest prospective threat to privacy has been created by the marriage of three technologies that were developed separately by different actors without regard to one another and, more important, without imagining how one might magnify the privacy risks of the others. A connected car with cameras could also run I-XRAY.

The San Francisco story is a good argument against allowing cars on the roads without steering wheels, pedals, and other controls or *something* to allow a passenger to take charge to protect their own safety. In Manhattan cars waiting at certain traffic lights often used to be approached by people who would wash the windshield and demand payment. Experienced drivers knew to hang back at red lights so they could roll forward past the oncoming would-be washer. How would you do this in a driverless car with no controls?

We’ve long known that people will prank autonomous cars. Coverage focused on the safety of the *cars* and the people and vehicles surrounding them, not the passengers. Calling a remote technical support line for help is never going to get a good enough response.

What ties these two cases together – besides (potentially) providing new ways to harass women – is the collision between new technologies and human nature. Plus, the merger of three decades’ worth of piled-up data and software that can make things happen in the physical world.

Arguably, we should have seen this coming, but the manufacturers of new technology have never been good at predicting what weird things their users will find to do with it. This mattered less when the worst outcome was using spreadsheet software to write letters. Today, that sort of imaginative failure is happening at scale in software that controls physical objects and penetrates the physical world. The risks are vastly greater and far more unsettling. It’s not that we can’t see the forest for the trees; it’s that we can’t see the potential for trees to aggregate into a forest.

Illustrations: Jane (Vanessa Redgrave) and her lover, being photographed by Thomas (David Hemmings) in Michelangelo Antonioni’s 1966 film, Blow-Up.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Pass the password

So this week I had to provide a new password for an online banking account. This is always fraught, even if you use a password manager. Half the time whatever you pick fails the site’s (often hidden) requirements – you didn’t stand on your head and spit a nickel, or they don’t allow llamas, or some other damn thing. This time, the specified failures startled me: it was too long, and special characters and spaces were not allowed. This is their *new* site, created in 2022.

They would have done well to wait for the just-released proposed password guidelines from the US National Institute of Standards and Technology. Most of these rules should have been standard long ago – like only requiring users to change passwords if there’s a breach. We can, however, hope they will be adopted now and set a consistent standard. To date, the fact that everyone has a different set of restrictions means that no secure strategy is universally valid – neither the strong passwords software generates nor the three random words the UK’s National Cyber Security Centre has long recommended.

The banking site we began with fails at least four of the nine proposed new rules: the minimum password length (six) is too short – it should be eight, preferably 15; all Unicode characters and printing ASCII characters should be acceptable, including spaces; the maximum length should be at least 64 characters (the site says 16); and there should be no composition rules mandating upper and lower case, numerals, and so on (which the site has). At least the site’s rules mean it won’t invisibly truncate the password, which leaves users unable to guess how much of what they typed was actually recorded. On the plus side, a little surprisingly, the site did require me to write my own password hint question. The fact that most sites use the same limited set of questions opens the way for reused answers across the web, effectively undoing the good of not reusing passwords in the first place.
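A sketch of how simple acceptance checking becomes under the proposed rules as summarized above (my illustration, not NIST’s reference text): length is the only hard requirement, and composition rules are deliberately absent.

```python
# Password acceptance per the proposed NIST rules as summarized above:
# length bounds only; no forced character classes; spaces and all
# printable characters allowed. (Illustrative sketch, not NIST's text.)
MIN_LEN = 8    # proposed minimum; 15 is preferred
MAX_LEN = 64   # sites should accept at least this many characters

def acceptable(password: str) -> bool:
    # Deliberately no composition rules and no banned characters.
    return MIN_LEN <= len(password) <= MAX_LEN

print(acceptable("correct horse battery staple"))  # True -- spaces are fine
print(acceptable("short"))                         # False -- under 8 characters
```

Note that the spaces-included passphrase sails through here; the banking site above would reject it.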

This is another of those cases where the future was better in the past: for at least 20 years passwords have been about to be superseded by…something – smart cards, dongles, biometrics, picklists and typing patterns, images, and lately, passkeys and authenticator apps. All have problems that limit their reach. Single sign-ons are still offered by Facebook, Google, and others, but the privacy risks are (I hope) widely understood. The “this is the next password” crowd have always underestimated the complexity of replacing deeply embedded habits. We all hate passwords – but they are simple to understand, and they work across multiple devices. And we’re used to them. Encryption didn’t succeed with mainstream users until it became invisibly embedded inside popular services. I’ll guess that password replacements will only succeed if they are similarly invisible. Most are too complicated.

In all these years, despite some improvements, passwords have only gotten more frustrating. Most rules are still devised with desktop/laptop computers in mind – and yield passwords that are impossible to type on phones. No one takes a broad view of the many contexts in which we might have to enter passwords. Outmoded threat models are everywhere. Decades after the cybercafe era, sites, operating systems, and other software are still coded as if shoulder surfing were an important threat model. So we end up with sillinesses like struggling to type masked wifi passwords while sitting in coffee shops where they’re prominently displayed, and masks for one-time codes sent for two-factor authentication. And then you fire up a text editor to check your typing, and copy and paste…

Meanwhile, the number of passwords we have to manage continues to escalate. In a recent extended Mastodon thread, digital services expert Jonathan Kamens documented his adventures updating the email addresses attached to 1,200-plus accounts. He explained: using a password manager makes it easy to create and forget new accounts by the hundred.

In a 2017 presentation, Columbia professor Steve Bellovin provided a rethink of all this, much of it drawn from his 2016 book Thinking Security. Most of what we think we know about good password security, he argues, is based on outmoded threat models and assumptions – outmoded because the computing world has changed, but also because attackers have adapted to those security practices. Even the focus on creating *strong* passwords is outdated: password strength is entirely irrelevant to modern phishing attacks, to compromised servers and client hosts, and to attackers who can intercept communications or control a machine at either end (for example, via a keylogger). A lot depends on whether you’re a high-risk individual likely to be targeted personally by a well-resourced attacker, a political target, or, like most of us, a random target for ordinary criminals.

The most important thing, Bellovin says, is not to reuse passwords, so that if a site is cracked the damage is contained to that one site. Securing the email address used to reset passwords is also crucial. To that end, it can be helpful to use a separate and unpublished email address for the most sensitive sites and services – anything to do with money or health, for example.

So NIST’s new rules make some real sense, and if organizations can be persuaded to adopt them consistently all our lives might be a little bit less fraught. NIST is accepting comments through October 7.

Illustrations: XKCD on password strength.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

This perfect day

To anyone remembering the excitement over DNA testing just a few years ago, this week’s news about 23andMe comes as a surprise. At CNN, Allison Morrow reports that all seven independent board members have resigned to protest CEO Anne Wojcicki’s plan to take the company private by buying up all the shares she doesn’t already own at 40 cents each (yesterday’s closing price was $0.3301). The board wanted her to find a buyer offering a better price.

In January, Rolfe Winkler reported at the Wall Street Journal ($) that 23andMe is likely to run out of cash by next year. Its market cap has dropped from $6 billion to under $200 million. He and Morrow catalogue the company’s problems: it’s never made a profit nor had a sustainable business model.

The reasons are fairly simple: few repeat customers. With DNA testing, as Winkler writes, “Customers only need to take the test once, and few test-takers get life-altering health results.” 23andMe’s mooted revolution in health care turned out instead to be a fad. Now, the company is pivoting to sell subscriptions to weight loss drugs.

This strikes me as an extraordinarily dangerous moment: the struggling company’s sole unique asset is a pile of more than 10 million DNA samples whose owners have agreed they can be used for research. Many were alarmed when, in December 2023, hackers broke into 1.7 million accounts and gained access to 6.9 million customer profiles. The company said the hacked data did not include DNA records but did include family trees and other links. We don’t think of 23andMe as a social network. But the same affordances that enabled Cambridge Analytica to leverage a relatively small number of user profiles into a mass of data derived from a much larger number of their Friends worked on 23andMe. Given the way genetics works, this risk should have been obvious.

In 2004, the year of Facebook’s birth, the Australian privacy campaigner Roger Clarke warned in Very Black “Little Black Books” that social networks had no business model other than to abuse their users’ data. 23andMe’s terms and conditions promise to protect user privacy. But in a sale what happens to the data?

The same might be asked about the data that would accrue from Oracle co-founder Larry Ellison’s surveillance-embracing proposals this week. Inevitably, commentators invoked George Orwell’s 1984. At Business Insider, Kenneth Niemeyer was first to report: “[Ellison] said AI will usher in a new era of surveillance that he gleefully said will ensure ‘citizens will be on their best behavior.'”

The all-AI-surveillance all-the-time idea could only be embraced “gleefully” by someone who doesn’t believe it will affect him.

Niemeyer:

“Ellison said AI would be used in the future to constantly watch and analyze vast surveillance systems, like security cameras, police body cameras, doorbell cameras, and vehicle dashboard cameras.

“We’re going to have supervision,” Ellison said. “Every police officer is going to be supervised at all times, and if there’s a problem, AI will report that problem and report it to the appropriate person. Citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on.”

Ellison is twenty-six years behind science fiction author David Brin, who proposed radical transparency in his 1998 non-fiction outing, The Transparent Society. But Brin saw reciprocity as an essential feature, believing it would protect privacy by making surveillance visible. Ellison is claiming that *inscrutable* surveillance will guarantee good behavior.

At 404 Media, Jason Koebler debunks Ellison point by point. Research and other evidence show that securing schools is unlikely to make them safer; body cameras don’t appear to improve police behavior; and all the technologies Ellison talks about have problems with accuracy and false positives. Indeed, the mayor of Chicago wants to end the city’s contract with ShotSpotter (now SoundThinking), saying it’s expensive and doesn’t cut crime; some research says it slows police 911 response. Also worth noting is Simon Spichak’s report at Brain Facts that humans using AI tools make worse decisions. So…not a good idea for police.

More disturbing is Koebler’s main point: most of the technology Ellison calls “future” is already here and failing to lower crime rates or solve its causes – while being very expensive. Ellison is already out of date.

The book Ellison’s fantasy evokes for me is the less-known This Perfect Day, by Ira Levin, written in 1970. The novel’s world is run by a massive computer (“Unicomp”) that decides all aspects of individuals’ lives: their job, spouse, how many children they can have. Enforcing all this are human counselors and permanent electronic bracelets individuals touch to ubiquitous scanners for permission.

Homogeneity rules: everyone is mixed race, there are only four boys’ and girls’ names, they eat “totalcakes”, drink cokes, wear identical clothing. For the rest, regularly administered drugs keep everyone healthy and docile. “Fight” is an abominable curse word. The controlled world over which Unicomp presides is therefore almost entirely benign: there is no war, crime, and little disease. It rains only at night.

Naturally, the novel’s hero rebels, joins a group of outcasts (“the Incurables”), and finds his way to the secret underground luxury bunker where a few “Programmers” help Unicomp’s inventor, Wei Li Chun, run the world to his specification. So to me, Ellison’s plan is all about installing himself as world ruler. Which, I mean, who could object except other billionaires?

Illustrations: The CCTV camera on George Orwell’s Portobello Road house.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The brittle state

We’re now almost a year on from Rishi Sunak’s AI Summit, which failed to accomplish any of its most likely goals: cement his position as the UK’s prime minister; establish the UK as a world leader in AI fearmongering; or get him the new life in Silicon Valley some commentators seemed to think he wanted.

Arguably, however, it has fed the belief that computer systems are “intelligent” – that is, that they understand what they’re calculating. The chatbots based on large language models make that worse because, as James Boyle cleverly wrote, for the first time in human history, “sentences do not imply sentience”. Mix in paranoia over the state of the world, and you get some truly terrifying systems being put into situations where they can catastrophically damage people’s lives. We should know better by now.

The Open Rights Group (I’m still on its advisory council) is campaigning against the Home Office’s planned eVisa scheme. In the previouslies: between 1948 and 1971, people from Caribbean countries, many of whom had fought for Britain in World War II, were encouraged to help the UK rebuild the economy post-war. They are known as the “Windrush generation” after the first ship that brought them. As Commonwealth citizens, they didn’t need visas or documentation; they and their children had the automatic right to live and work here.

Until 1973, when the law changed; later arrivals needed visas. The snag was that the earlier arrivals had no idea they had any reason to worry…until the day they discovered, when challenged, that they had no way to prove they were living here legally. That day came in 2017, when Theresa May (who this week joined the House of Lords) was prime minister and her hostile environment policy was in full force. Intended to push illegal immigrants to go home, it moves the “border” deep into British life by requiring landlords, banks, and others to conduct status checks. The result was that some of the Windrush group – again, legal residents – were refused medical care, denied housing, or deported.

When Brexit became real, millions of Europeans resident in the UK were shoved into the same position: arrived legally, needing no documentation, but in future required to prove their status. This time, the UK issued them documents confirming their status as permanently settled.

Until December 31, 2024, when all those documents with no expiration date will abruptly expire because the Home Office has a new system that is entirely online. As ORG and the3million explain it, come January 1, 2025, about 4 million people will need online accounts to access the new system, which generates a code to give the bank or landlord temporary access to their status. The new system will apparently hit a variety of databases in real time to perform live checks.

Now, I get that the UK government doesn’t want anyone to be in the country for one second longer than they’re entitled to. But we don’t even have to say, “What could possibly go wrong?” because we already *know* what *has* gone wrong for the Windrush generation. Anyone who has to prove their status off the cuff in time-sensitive situations really needs proof they can show when the system fails.

A proposal like this can only come from an irrational belief in the perfection – or at least, perfectibility – of computer systems. It assumes that Internet connections won’t be interrupted, that databases will return accurate information, and that everyone involved will have the necessary devices and digital literacy to operate it. Even without ORG’s and the3million’s analysis, these are bonkers things to believe – and they are made worse by a helpline that is only available during the UK work day.

There is a lot of this kind of credulity about, most of it connected with “AI”. AP News reports that US police departments are beginning to use chatbots to generate crime reports based on the audio from their body cams. And, says Ars Technica, the US state of Nevada will let AI decide unemployment benefit claims, potentially producing denials that can’t be undone by a court. BrainFacts reports that decision makers using “AI” systems are prone to automation bias – that is, they trust the machine to be right. Of course, that’s just good job security: you won’t be fired for following the machine, but you might for overriding it.

The underlying risk with all these systems, as a security expert might say, is complexity: the more complex a system is, the more susceptible it is to inexplicable failures. There is very little to go wrong with a piece of paper that plainly states your status, for values of “paper” including paper, QR codes downloaded to phones, or PDFs saved to a desktop/laptop. Much can go wrong with the system underlying that “paper”, but, crucially, when a static confirmation is saved offline, managing that underlying complexity can take place when the need is not urgent.
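To illustrate what an offline-checkable “paper” could look like technically, here is a minimal sketch using a digital signature: the issuer signs a status statement once, and anyone holding the issuer’s public key can verify it later with no network and no live database. Everything here – the names, the fields, the use of the pip-installable cryptography library – is my illustration, not the Home Office’s actual eVisa design, and a real scheme would also need key distribution and revocation.

```python
# Minimal sketch: an offline-verifiable status "paper".
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()   # held by the issuing authority
statement = b"holder=Jane Doe; status=settled; issued=2024-12-01"  # illustrative fields
signature = issuer_key.sign(statement)      # signed once, at issue time

# Later, a landlord or bank verifies with only the issuer's public key:
# no Internet connection, no real-time database lookup.
issuer_public = issuer_key.public_key()
issuer_public.verify(signature, statement)  # raises InvalidSignature if tampered with
print("status confirmed offline")
```

The point of the design is exactly the column’s: verification still works when the central system is down, because nothing about the check is live.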

It ought to go without saying that computer systems with a profound impact on people’s lives should be backed up by redundant systems that can be used when they fail. Yet the world the powers that be apparently want to build is one that underlines their power to cause enormous stress for everyone else. Systems like eVisas are as brittle as just-in-time supply chains. And we saw what happens to those during the emergency phase of the covid pandemic.

Illustrations: Empty supermarket shelves in March 2020 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.