Beware the duck

Once upon a time, “convergence” was a buzzword. That was back in the days when audio was on stereo systems, television was on a TV, and “communications” happened on phones that weren’t computers. The word has disappeared back into its former usage pattern, but it could easily be revived to describe what’s happening to content as humans dive into using generative tools.

Said another way: roughly this time last year, the annual technology/law/pop culture conference Gikii was awash in (generative) AI. That bubble is deflating, but in the experiments that nonetheless continue, a new topic more worthy of attention is emerging: artificial content. It’s striking because what happens at this gathering, which mines all types of popular culture for cues for serious ideas, is often a good guide to what’s coming next in futurelaw.

That no one dared guess which of Zachary Cooper’s pair of near-identical audio clips was AI-generated, and which human-performed, was only a starting point. One had more static? Cooper’s main point: “If you can’t tell which clip is real, then you can’t decide which one gets copyright.” Right, because only human creations are eligible (although fake bands can still scam Spotify).

Cooper’s brief, wild tour of the “generative music underground” included using AI tools to create songs whose content is at odds with their genre, whole generated albums built by a human producer making thousands of tiny choices, and the new genre “gencore”, which exploits the qualities of generated sound (Cher and Autotune on steroids). Voice cloning, instrument cloning, audio production plugins, “throw in a bass and some drums”….

Ultimately, Cooper said, “The use of generative AI reveals nothing about the creative relationship to work; it destabilizes the international market by having different authorship thresholds; and there’s no means of auditing any of it.” Instead of uselessly trying to enforce different rights predicated on the use or non-use of a specific set of technologies, he said, we should tackle directly the challenges new modes of production pose to copyright. Precursor: the battles over sampling.

Soon afterwards, Michael Veale was showing us Civitai, an Idaho-based site offering open source generative AI tools, including fine-tuned models. “Civitai exists to democratize AI media creation,” the site explains. “Everything has a valid legal purpose,” Veale said, but the way capabilities can be retrained and chained together to create composites makes it hard to tell which tools, if any, should be taken down, even for creators (see also the puzzlement as Redditors try to work this out). Even environmental regulation can’t help, as one attendee suggested: unlike large language models, these smaller, fine-tuned models (as Jon Crowcroft and I surmised last year would be the future) are efficient; they can run on a phone.

Even without adding artificial content, there is always an inherent conflict when digital meets an analog spectrum. This is why, Andy Phippen said, the threshold of 18 for buying alcohol and cigarettes turns into a real threshold of 25 at retail checkouts. Both software and humans fail at determining who is over or under 18, and retailers fear liability. Online age verification as promoted in the Online Safety Act will not work.

If these blurred lines strain the limits of current legal approaches, others expose gaps in the law. Andrea Matwyshyn, for example, has been studying parallels I’ve also noticed between early 20th century company towns and today’s tech behemoths’ anti-union, surveillance-happy working practices. As a result, she believes that regulatory authorities need to start considering closely the impact of data aggregation when companies merge, and to look for company town-like dynamics.

Andelka Phillips parodied the overreach of app contracts by imagining the EULA attached to “ThoughtReader app”. A sample clause: “ThoughtReader may turn on its service at any time. By accepting this agreement, you are deemed to accept all monitoring of your thoughts.” Well, OK, then. (I also had a go at this here, 19 years ago.)

Emily Roach toured the history of fan fiction and the law to end up at Archive of Our Own, a “fan-created, fan-run, nonprofit, noncommercial archive for transformative fanworks, like fanfiction, fanart, fan videos, and podfic”, the idea being to ensure that the work fans pour their hearts into has a permanent home where it can’t be arbitrarily deleted by corporate owners. The rules are strict: not so much as a “buy me a coffee” tip link that could lead to a court-acceptable claim of commercial use.

History, the science fiction writer Charles Stross has said, is the science fiction writer’s secret weapon. Also at Gikii: Miranda Mowbray unearthed the 18th century “Digesting Duck” automaton built by Jacques de Vaucanson. It was a marvel that appeared to ingest grain and defecate waste and that in its day inspired much speculation about the boundaries between real and mechanical life. Like the amazing ancient Greek automata before it, it was, of course, a purely mechanical fake – it stored the grain in a small compartment and released pellets from a different compartment – but today’s humans confused into thinking that sentences mean sentience could relate.

Illustrations: One onlooker’s rendering of his (incorrect) idea of the interior of Jacques de Vaucanson’s Digesting Duck (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

A three-hour tour

It should be easy for the UK’s Competition and Markets Authority to shut down the proposed merger of Vodafone and Three, two of the UK’s four major mobile network providers. Remaining as competition post-merger would be EE (owned by BT) and Virgin Media O2 (owned by the Spanish company Telefónica and the US-listed company Liberty Global).

The trade union Unite is correctly calling the likely consequences: higher prices, fewer choices, job losses, and poorer customer service. In response, Vodafone and Three are dangling a shiny object of temptation: investment in building a 5G network.

Well, hogwash. I would say “Don’t do this” even if I weren’t a Three customer (who left Vodafone years ago). Let them agree to collaborate on building a shared network and compete on quality and services, but not merge. Look at the US broadband market, where prices are high, speeds are low, and frustrated consumers rarely have more than one option, and take heed.

***

It’s a relief to see some sanity arriving around generative AI. As a glance at the archives will show, I’ve never been a fan; last year Jon Crowcroft and I predicted the eventual demise of large language models due to model collapse. Now, David Gray Widder and Mar Hicks warn in a paper that although the generative AI bubble is deflating, its damage will persist: “…carbon can’t be put back in the ground, workers continue to need to fend off AI’s disciplining effects, and the poisonous effect on our information commons will be hard to undo.”

This week offers worked examples. Re disinformation, at The Verge Sarah Jeong describes the change in our relationship with photographs arriving with new smartphones’ ability to fake realistic images. At The Register, Dan Robinson reports that data centers and AI are causing a substantial rise in water use in the US state of Virginia.

As evidence of the deflating bubble, Widder and Hicks cite the recent Goldman Sachs report arguing that generative AI is unlikely ever to pay back its investment.

And yet: to exploit generative AI, companies and governments are reversing or delaying programs to lower carbon emissions. Also alarmingly, Widder and Hicks wonder if generative AI was always meant to fail and its promoters’ only real goals were to scoop up profits and use the inevitability narrative to make generative AI a vector for embedding infrastructural dependencies (for example, on cloud computing).

That outcome doesn’t have to have been a plan – or a conspiracy – just as robber barons don’t actually need to conspire in order to serve each other’s interests. It could just as well be a circumstances-led pivot. But companies that have put money into generative AI will want to scrounge whatever return they can get. So the idea that we will be left with infrastructure that’s a poor fit for our actual needs is a disturbing – and entirely possible – outcome.

***

It’s fascinating – and an example of how you never know where new technologies will lead – to learn that people are using DNA testing to prove they qualify for citizenship in other countries such as Ireland, where a single grandparent will get you in. In some cases, such as the children of unmarried Irish women who were transported to England, this use of DNA testing rights historic wrongs. For others, it opens new opportunities such as the right to live in the EU. Unfortunately, it’s easy to imagine that in countries where citizenship by birthright is a talking point for the right wing this type of DNA testing could be mooted as a requirement. I’d like to think that rounding up babies for deportation is beyond even the most bigoted authoritarians, but…

***

The controversial British technology entrepreneur Mike Lynch has died a billionaire’s death; his superyacht sank in a tornado off the coast of Sicily. I interviewed him for Salon in 2000, when he was newly Britain’s first software billionaire. It was the first time I heard of the theorem developed by Thomas Bayes, an 18th century minister and mathematician, which is now everywhere, and for a long time afterwards I wasn’t certain I’d correctly understood his comments about perception and philosophy. This was exacerbated by early experience with his software in 1996, when it was still a consumer desktop search product fronted by an annoying cartoon dog – I thought it unusably slow compared to pre-Google search engines. By 2000, Autonomy had pivoted to enterprise software, which seemed a better fit.

In 2011, Sharon Bertsch McGrayne’s book, The Theory That Would Not Die, explained things more clearly. That year, Lynch hit a business peak by selling Autonomy to Hewlett-Packard for $11 billion. A year later, he left HP and set up Invoke Capital to invest in companies with fundamental technology ideas that scale.

Soon afterwards, HP wrote down $8.8 billion and accused Lynch of accounting fraud. The last 12 years of his life were spent in courtrooms: first a UK civil case, decided for HP in 2022, which Lynch was appealing, then a fight against extradition, and finally a criminal trial in the US, where former Autonomy CFO Sushovan Hussein had already been sent to jail for five years. Lynch’s fatal yacht trip was to celebrate his acquittal.

Illustrations: A Customs and Border Protection scientist reads a DNA profile to determine the origin of a commodity (via Wikimedia).

The fear factor

Be careful what you allow the authorities to do to people you despise, because one day those same tools will be turned against you.

In the last few weeks, the shocking stabbing of three young girls at a dance class in Southport became the spark that ignited riots across the UK by people who apparently believed social media theories that the 17-year-old boy responsible was Muslim, a migrant, or a terrorist. With the boy a week from his 18th birthday, the courts ruled police could release his name in order to make clear he was not Muslim and was born in Wales. It failed to stop the riots.

Police and the courts have acted quickly; almost 800 people have been arrested, 350 have been charged, and hundreds are in custody. In a moving development, on a night when more than 100 riots were predicted, tens of thousands of ordinary citizens thronged city streets and formed protective human chains around refugee centers in order to block the extremists. The riots have quieted down, but police are still busy arresting newly-identified suspects. And the inevitable question is being asked: what do we do next to keep the streets safe and calm?

London mayor Sadiq Khan quickly called for a review of the Online Safety Act, saying he doesn’t believe it’s fit for purpose. Cabinet minister Nick Thomas-Symonds (Labour-Torfaen) has suggested the month-old government could change the law.

Meanwhile, prime minister Keir Starmer favours a wider rollout of live facial recognition to track thugs and prevent them from traveling to places where they plan to cause social unrest, copying systems the police use to prevent football hooligans from even boarding trains to matches. This proposal is startling because, before standing for Parliament, Starmer was a human rights lawyer. One could reasonably expect him to know that facial recognition systems have a notorious history of inaccuracy due to biases baked into their algorithms via training data, and that in the UK there is no regulatory framework to provide oversight. Silkie Carlo, the director of Big Brother Watch, immediately called the proposal “alarming” and “ineffective”, warning that it turns people into “walking ID cards”.

As the former head of Liberty, Shami Chakrabarti used to say when ID cards were last proposed, moves like these fundamentally change the relationship between the citizen and the state. Such a profound change deserves more thought than a reflex fear reaction in a crisis. As Ciaran Thapar argues at the Guardian, today’s violence has many causes, beginning with the decay of public services for youth and mental health, and it’s those causes that need to be addressed. Thapar invokes his memories of how his community overcame the “open, violent racism” of the 1980s Thatcher years in making his recommendations.

Much of the discussion of the riots has blamed social media for propagating hate speech and disinformation, along with calls for rethinking the Online Safety Act. This is also frustrating. First of all, the OSA, which was passed in 2023, isn’t even fully implemented yet. When last seen, Ofcom, the regulator designated to enforce it, was in the throes of recruiting people by the dozen, working out what sites will be in scope (about 150,000, they said), and developing guidelines. Until we see the shape of the regulation in practice, it’s too early to say the act needs expansion.

Second, hate speech and incitement to violence are already illegal under other UK laws. Just this week, a woman was jailed for 15 months for a comment to a Facebook group with 5,100 members that advocated violence against mosques and the people inside them. The OSA was not needed to prosecute her.

And third, while Elon Musk and Mark Zuckerberg definitely deserve to have anger thrown their way, focusing solely on the ills of social media makes no sense given the decades that right-wing newspapers have spent sowing division and hatred. Even before Musk, Twitter often acted as a democratization of the kind of angry, hate-filled coverage long seen in the Daily Mail (and others). These are the wedges that created the divisions that malicious actors can now exploit by disseminating disinformation, a process systematically explained by Renee DiResta in her new book, Invisible Rulers.

The FBI’s investigation of the January 6, 2021 insurrection at the US Capitol provides a good exemplar for how modern investigations can exploit new technologies. Law enforcement applied facial recognition to CCTV footage and massive databases, and studied social media feeds, location data and cellphone tracking, and other data. As Charlie Warzel and Stuart A. Thompson wrote at the New York Times in 2021, even though most of us agree with the goal of catching and punishing insurrectionists and rioters, the data “remains vulnerable to use and abuse” against protests of other types – such as this year’s pro-Palestinian encampments.

The same argument applies in the UK. Few want violence in the streets. But the unilateral imposition of live facial recognition, among other tracking technologies, can’t be allowed. There must be limits and safeguards. ID cards issued in wartime could be withdrawn when peace came; surveillance technologies, once put in place, tend to become permanent.

Illustrations: The CCTV camera at 22 Portobello Road, where George Orwell once lived.

Gather ye lawsuits while ye may

Most of us howled with laughter this week when the news broke that Elon Musk is suing companies for refusing to advertise on his exTwitter platform. To be precise, Musk is suing the World Federation of Advertisers, Unilever, Mars, CVS, and Ørsted in a Texas court.

How could Musk, who styles himself a “free speech absolutist”, possibly force companies to advertise on his site? This is pure First Amendment stuff: both the right to free speech (or to remain silent) and freedom of assembly. It adds to the nuttiness of it all that last November Musk was telling advertisers to “go fuck yourselves” if they threatened him with a boycott. Now he’s mad because they responded in kind.

Does the richest man in the world even need advertisers to finance his toy?

At Techdirt, Mike Masnick catalogues the “so much stupid here”.

The WFA initiative that offends Musk is the Global Alliance for Responsible Media, which develops guidelines for content moderation – things like a standard definition for “hate speech” to help sites operate consistent and transparent policies and reassure advertisers that their logos don’t appear next to horrors like the livestreamed shooting in Christchurch, New Zealand. GARM’s site says: membership is voluntary, following its guidelines is voluntary, it does not provide a rating service, and it is apolitical.

Pre-Musk, Twitter was a member. After Musk took over, he pulled exTwitter out of it – but rejoined a month ago. Now, Musk claims that refusing to advertise on his site might be a criminal matter under RICO. So he’s suing himself? Blink.

Enter US Republicans, who are convinced that content moderation exists only to punish conservative speech. On July 10, the House Judiciary Committee, under the leadership of Jim Jordan (R-OH), released an interim report on its ongoing investigation of GARM.

The report says GARM appears to “have anti-democratic views of fundamental American freedoms” and likens its work to restraint of trade. Among specific examples, it says GARM recommended that its members stop advertising on exTwitter, threatened Spotify when podcaster Joe Rogan told his massive audience that young, healthy people don’t need to be vaccinated against covid, and considered blocking news sites such as Fox News, Breitbart, and The Daily Wire. In addition, the report says, GARM advised its members to use fact-checking services like NewsGuard and the Global Disinformation Index, “which disproportionately label right-of-center news sites as so-called misinformation”. Therefore, the report concludes, GARM’s work is “likely illegal under the antitrust laws”.

I don’t know what a court would have made of that argument – for one thing, GARM can’t force anyone to follow its guidelines. But now we’ll never know. Two days after Musk filed suit, the WFA announced it’s shuttering GARM immediately because it can’t afford to defend the lawsuit and keep operating even though it believes it’s complied with competition rules. Such is the role of bullies in our public life.

I suppose Musk can hope that advertisers decide it’s cheaper to buy space on his site than to fight the lawsuit?

But it’s not really a laughing matter. GARM is just one of a number of initiatives that have come under attack as we head into the final three months of campaigning before the US presidential election. In June, Renée DiResta, author of the new book Invisible Rulers, announced that her contract as the research manager of the Stanford Internet Observatory was not being renewed. Founding director Alex Stamos was already gone. Stanford has said the Observatory will continue under new leadership, but no details have been published. The Washington Post says conspiracy theorists have called DiResta and Stamos part of a government-private censorship consortium.

Meanwhile, one of the Observatory’s projects, a joint effort with the University of Washington called the Election Integrity Partnership, has announced, in response to various lawsuits and attacks, that it will not work on the 2024 or future elections. At the same time, Meta is shutting down CrowdTangle next week, removing a research tool that journalists and academics use to study content on Facebook and Instagram. While CrowdTangle will be replaced with Meta Content Library, access will be limited to academics and non-profits, and those who’ve seen it say it’s missing useful data that was available through CrowdTangle.

The concern isn’t the future of any single initiative; it’s the pattern of these things winking out. As work like DiResta’s has shown, the flow of funds financing online political speech (including advertising) is dangerously opaque. We need access and transparency for those who study it, and in real time, not years after the event.

In this, as in so much else, the US continues to clash with the EU, which in December accused exTwitter of breaching its rules with respect to disinformation, transparency, and extreme content. Last month, it formally charged Musk’s site with violating the Digital Services Act, for which Musk could be liable for a fine of up to 6% of exTwitter’s global revenue. Among the EU’s complaints is the lack of a searchable and reliable advertisement repository – again, an important element of the transparency we need. The site’s handling of disinformation and calls to violence during the current UK riots may be added to the investigation.

Musk will be suing *us*, next.

Illustrations: A cartoon caricature of Christina Rossetti by her brother Dante Gabriel Rossetti 1862, showing her having a tantrum after reading The Times’ review of her poetry (via Wikimedia).

Review: Invisible Rulers

Invisible Rulers: The People Who Turn Lies Into Reality
by Renée DiResta
Hachette
ISBN: 978-1-5417-0337-7

For the last week, while violence has erupted in British cities, commentators have asked, among other things: what has social media contributed to the inflammation? Often, the focus lands on specific famous people such as Elon Musk, who told exTwitter that the UK is heading for civil war (which basically shows he knows nothing about the UK).

It’s a particularly apt moment to read Renée DiResta’s new book, Invisible Rulers: The People Who Turn Lies Into Reality. Until June, DiResta was the technical director of the Stanford Internet Observatory, which studies misinformation and disinformation online.

In her book, DiResta, like James Ball in The Other Pandemic and Naomi Klein in Doppelganger, traces how misinformation and disinformation propagate online. Where Ball examined his subject from the inside out (having spent his teenaged years on 4chan) and Klein worked from the outside in, DiResta’s study is structural. How do crowds work? What makes propaganda successful? Who drives engagement? What turns online engagement into real world violence?

One reason these questions are difficult to answer is the lack of transparency regarding the money flowing to influencers, who may have audiences in the millions. The trust they build with their communities on one subject, like gardening or tennis statistics, extends to other topics when they stray. Someone making how-to knitting videos one day expresses concern about their community’s response to a new virus, finds engagement, and, eventually, through algorithmic boosting, greater profit in sticking to that topic instead. The result, she writes, is “bespoke realities” that are shaped by recommendation engines and emerge from competition among state actors, terrorists, ideologues, activists, and ordinary people. Then add generative AI: “We can now mass-produce unreality.”

DiResta’s work on this began in 2014, when she was checking vaccination rates in the preschools she was looking at for her one-year-old son, in the light of rising rates of whooping cough in California. Why, she wondered, were there all these anti-vaccine groups on Facebook, and what went on in them? When she joined to find out, she discovered a nest of evangelists promoting lies to little opposition, a common pattern she calls “asymmetry of passion”. The campaign group she helped found succeeded in getting a change in the law, but she also saw that the future lay in online battlegrounds shaping public opinion. When she presented her discoveries to the Centers for Disease Control, however, they dismissed it as “just some people online”. This insouciance would, as she documents in a later chapter, come back to bite it during the covid emergency, when the mechanisms already built whirred into action to discredit science and its institutions.

Asymmetry of passion makes those holding extreme opinions seem more numerous than they are. The addition of boosting algorithms and “charismatic leaders” such as Musk or Robert F. Kennedy, Jr (your mileage may vary) adds to this effect. DiResta does a good job of showing how shifts within groups – anti-vaxx groups that also fear chemtrails and embrace flat earth, flat earth groups that shift to QAnon – lead eventually from “asking questions” to “take action”. See also today’s UK.

Like most of us, DiResta is less clear on potential solutions. She gives some thought to the idea of prebunking, but more to requiring transparency: from platforms around content moderation decisions, from influencers around their payment for commercial and political speech, and from governments around their engagement with social media platforms. She also recommends giving users better tools and introducing some friction to force a little more thought before posting.

The Observatory’s future is unclear, as several other key staff have left; Stanford told The Verge in June that the Observatory would continue under new leadership. It is just one of several election integrity monitors whose future is cloudy; in March Facebook announced it would shut down research tool CrowdTangle on August 14. DiResta’s book is an important part of its legacy.

Twenty comedians walk into a bar…

The Internet was, famously, created to withstand a bomb outage. In 1998 Matt Blaze and Steve Bellovin said it, in 2002 it was still true, and it remains true today, after 50 years of development: there are more efficient ways to kill the Internet than dropping a bomb.

Take today. The cybersecurity company CrowdStrike pushed out a buggy update, and half the world is down. Airports, businesses, the NHS appointment booking system, supermarkets, the UK’s train companies, retailers…all showing the Blue Screen of Death. Can we say “central points of failure”? Because there are two: CrowdStrike, whose cybersecurity software is widespread, and Microsoft, whose Windows operating system is everywhere.

Note this hasn’t killed the *Internet*. It’s temporarily killed many systems *connected to* the Internet. But if you’re stuck in an airport where nothing’s working and confronted with a sign that says “Cash only” when you only have cards…well, at least you can go online to read the news.

The fix will be slow, because it involves starting the computer in safe mode and manually deleting files. Like Y2K remediation, one computer at a time.

***

Speaking of things that don’t work, three bits from the generative AI bubble. First, last week Goldman Sachs issued a scathing report on generative AI that concluded it is unlikely to ever repay the trillion-odd dollars companies are spending on it, while its energy demands could outstrip available supply. Conclusion: generative AI is a bubble that could nonetheless take a long time to burst.

Second, at 404 Media Emanuel Maiberg reads a report from the Tony Blair Institute that estimates that 40% of tasks performed by public sector workers could be partially automated. Blair himself compares generative AI to the industrial revolution. This comparison is more accurate than he may realize, since the industrial revolution brought climate change, and generative AI pours accelerant on it.

TBI’s estimate conflicts with that provided to Goldman by MIT economist Daron Acemoglu, who believes that AI will impact at most 4.6% of tasks in the next ten years. The source of TBI’s estimate? ChatGPT itself. It’s learned self-promotion from parsing our output?

Finally, in a study presented at ACM FAccT, four DeepMind researchers interviewed 20 comedians who do live shows and use AI, after having them participate in workshops using large language models to help write jokes. “Most participants felt the LLMs did not succeed as a creativity support tool, by producing bland and biased comedy tropes, akin to ‘cruise ship comedy material from the 1950s, but a bit less racist’.” Last year, Julie Seabaugh at the LA Times interviewed 13 professional comedians and got similar responses. Ahmed Ahmed compared AI-generated comedy to eating processed foods and said that, crucially, it “lacks timing”.

***

Blair, who spent his 1997-2007 premiership pushing ID cards into law, has also been trying to revive this long-held obsession. Two days after Keir Starmer took office, Blair published a letter in the Sunday Times calling for their return. As has been true throughout the history of ID cards (PDF), every new revival presents them as a solution to a different problem. Blair’s 2024 reason is to control immigration (and keep the far-right Reform party at bay). Previously: prevent benefit fraud, combat terrorism, streamline access to health, education, and other government services (“the entitlement card”), prevent health tourism.

Starmer promptly shot Blair down: “not part of the government’s plans”. This week Alan West, a Home Office minister from 2007 to 2010 under Gordon Brown, followed up with a letter to the Guardian calling for ID cards because they would “enhance national security in the areas of terrorism, immigration and policing; facilitate access to online government services for the less well-off; help to stop identity theft; and facilitate international travel”.

Neither Blair (born 1953) nor West (born 1948) seems to realize how old and out of touch they sound. Even back then, the “card” was an obvious decoy. Given pervasive online access, a handheld reader, and the database, anyone’s identity could be checked anywhere at any time with no “card” required.

To sound modern they should call for institutionalizing live facial recognition, which is *already happening* by police fiat. Or for sprinkling some AI bubble on their ID database.

Databases and giant IT projects that failed – like the Post Office scandal – that was the 1990s way! We’ve moved on, even if they haven’t.

***

For anyone who is not a deposed Conservative, Britain this week is like waking up sequentially from a series of nightmares. Yesterday, Keir Starmer definitively ruled out leaving the European Convention on Human Rights – Starmer’s background as a human rights lawyer to the fore. It’s a relief to hear after 14 years of Tory ministers – David Cameron, Boris Johnson, Suella Braverman, Liz Truss, Rishi Sunak – whining that human rights law gets in the way of their hearts’ desires. Like: building a DNA database, deporting refugees or sending them to Rwanda, a plan to turn back migrants in boats at sea.

Principles have to be supported in law, though; under the last government’s Public Order Act 2023, which curbs “disruptive protest”, five Just Stop Oil protesters were jailed yesterday for four and five years. Still, for that brief moment it was all The Brotherhood of Man.

Illustrations: Windows’ Blue Screen of Death (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Outbound

As the world and all knows by now, the UK is celebrating this year’s American Independence Day by staging a general election. The preliminaries are mercifully short by US standards, in that the period between the day it was called and the day the winners will be announced is only about six weeks. I thought the announcement would bring more sense of relief than it did. Instead, these six weeks seem interminable for two reasons: first, the long, long wait for the results, and second, the dominant driver for votes is largely negative – voting against, rather than voting for.

Labour, which the polls suggest will win by a lot, is best served by saying and doing as little as possible, lest a gaffe damage its prospects. The Conservatives seem to be just trying not to look as hopeless as they feel. The only party with much exuberance is the far-right upstart Reform, which measures success in terms of whether it gets a larger share of the vote than the Conservatives and whether Nigel Farage wins a Parliamentary seat on his eighth try. And then there are the Greens, who are at least motivated by genuine passion for their cause, and whose only MP is retiring this year. For them, sadly, success would be replacing her.

Particularly odd is the continuation of the trend visible in recent years for British right-wingers to adopt the rhetoric and campaigning style of the current crop of US Republicans. This week, they’ve been spinning the idea that Labour may win a dangerous “supermajority”. “Supermajority” has meaning in the US, where the balance of powers – presidency, House of Representatives, Senate – can all go in one party’s direction. It has no meaning in the UK, where Parliament is sovereign. All it means is Labour could wind up with a Parliamentary majority so large that they can pass any legislation they want. But this has been the Conservatives’ exact situation for the last five years, ever since the 2019 general election gave Boris Johnson a majority of 86. We should probably be grateful they largely wasted the opportunity squabbling among themselves.

This week saw the launch, day by day, of each party manifesto in turn. At one time, this would have led to extensive analysis and comparisons. This year, what discussion there is focuses on costs: whose platform commits to the most unfunded spending, and therefore who will raise taxes the most? Yet my very strong sense is that few among the electorate are focused on taxes; we’d all rather have public services that work and an end to the cost-of-living crisis. You have to be quite wealthy before private health care offers better value than paying taxes. But here may lie the explanation for both this and the weird Republican-ness of 2024 right-wing UK rhetoric: they’re playing to the same wealthy donors.

In this context, it’s not surprising that there’s not much coverage of what little the manifestos have to say about digital rights or the Internet. The exception is Computer Weekly, which finds the Conservatives promising more of the same and Labour offering a digital infrastructure plan, which includes building data centers and easing various business regulations but no plan to reintroduce the just-abandoned Data Protection and Digital Information bill.

In the manifesto itself: “Labour will build on the Online Safety Act, bringing forward provisions as quickly as possible, and explore further measures to keep everyone safe online, particularly when using social media. We will also give coroners more powers to access information held by technology companies after a child’s death.” The latter is a reference to recent cases such as that of 14-year-old Molly Russell, whose parents fought for five years to gain access to her Instagram account after her death.

Elsewhere, the manifesto also says, “Too often we see families falling through the cracks of public services. Labour will improve data sharing across services, with a single unique identifier, to better support children and families.”

“A single unique identifier” brings a kind of PTSD flashback: the last Labour government, in power from 1997 to 2010, largely built the centralized database state, and was obsessed with national ID cards, which were finally killed by David Cameron’s incoming coalition government. At the time, one of the purported benefits was streamlining government interaction. So I’m suspicious: this number could easily be backed by biometrics, checked on the spot anywhere via phone apps, and grow into…?

In terms of digital technologies, the LibDems mostly talk about health care, mandating interoperability for NHS systems and improving both care and efficiency. That can only be assessed once the details are known. Also of interest: the LibDems’ proposed anti-SLAPP law, which is increasingly needed.

The LibDems also commit to advocate for a “Digital Bill of Rights”. I’m not sure it’s worth the trouble: “digital rights” as a set of civil liberties separate from human rights is antiquated, and many aspects are already enshrined in data protection, competition, and other law. In 2019, under the influence of then-deputy leader Tom Watson, this was a Labour policy. The LibDems are unlikely to have any power, but they lead in my area.

I wish the manifestos mattered and that we could have a sensible public debate about what technology policy should look like and what the priorities should be. But in a climate where everyone votes to get one lot out, the real battle begins on July 5, when we find out what kind of bargain we’ve made.

Illustrations: Polling station in Canonbury, London, in 2019 (via Wikimedia).


Last year’s news

It was tempting to skip wrapping up 2023, because at first glance large language models seemed so thoroughly dominant (and boring to revisit), but bringing the net.wars archive list up to date showed a different story. To be fair, this is partly personal bias: from the beginning LLMs seemed fated to collapse under the weight of their own poisoning; AI Magazine predicted such an outcome as early as June.

LLMs did, however, seem to accelerate public consciousness of three long-running causes of concern: privacy and big data; corporate cooption of public and private resources; and antitrust enforcement. That acceleration may be LLMs’ more important long-term effect. In the short term, the justifiably bigger concern is their propensity to spread disinformation and misinformation in the coming year’s many significant elections.

Enforcement of data protection laws has been slowly ramping up in any case, and the fines just keep getting bigger, culminating in May’s fine against Meta for €1.2 billion. Given that fines, no matter how large, seem insignificant compared to the big technology companies’ revenues, the more important trend is issuing constraints on how they do business. That May fine came with an order to stop sending EU citizens’ data to the US. Meta responded in October by announcing a subscription tier for European Facebook users: €160 a year will buy freedom from ads. Freedom from Facebook remains free.

But Facebook is almost 20 years old; it had years in which to grow without facing serious regulation. By contrast, ChatGPT, which OpenAI launched just over a year ago, has already faced investigation by the US Federal Trade Commission and been banned temporarily by the Italian data protection authority (it was reinstated a month later with conditions). It’s also facing more than a dozen lawsuits claiming copyright infringement; the most recent of these was filed just this week by the New York Times. It has settled one of these suits by forming a partnership with Axel Springer.

It all suggests a lessening tolerance for “ask forgiveness, not permission”. As another example, Clearview AI has spent most of the four years since Kashmir Hill alerted the world to its existence facing regulatory bans and fines, and public disquiet over the rampant spread of live facial recognition continues to grow. Add in the continuing degradation of exTwitter, the increasing number of friends who say they’re dropping out of social media generally, and the revival of US antitrust actions with the FTC’s suit against Amazon, and it feels like change is gathering.

It would be a logical time, for an odd reason: each of the last few decades as seen through published books has had a distinctive focus with respect to information technology. I discovered this recently when, for various reasons, I reorganized my hundreds of books on net.wars-type subjects dating back to the 1980s. How they’re ordered matters: I need to be able to find things quickly when I want them. In 1990, a friend’s suggestion of categorizing by topic seemed logical: copyright, privacy, security, online community, robots, digital rights, policy… The categories quickly broke down and cross-pollinated. In rebuilding the library, what to replace it with?

The exercise, which led to alphabetizing by author’s name within decade of publication, revealed that each of the last few decades has been distinctive enough that it’s remarkably easy to correctly identify a book’s decade without turning to the copyright page to check. The 1980s and 1990s were about exploration and explanation. Hype led us into the 2000s, which were quieter in publishing terms, though marked by bursts of business books that spanned the dot-com boom, bust, and renewal. The 2010s brought social media, content moderation, and big data, and a new set of technologies to hype, such as 3D printing and nanotechnology (about which we hear nothing now). The 2020s, it’s too soon to tell…but safe to say disinformation, AI, and robots are dominating these early years.

The 2020s books to date are trying to understand how to rein in the worst effects of Big Tech: online abuse, cryptocurrency fraud, disinformation, the loss of control as even physical devices turn into manufacturer-controlled subscription services, and, as predicted in 2018 by Christian Wolmar, the ongoing failure of autonomous vehicles to take over the world as projected just ten years ago.

While Teslas are not autonomous, the company’s Silicon Valley ethos has always made them seem more like information technology than cars. Bad idea, as Reuters reports; its investigation found a persistent pattern of mishaps such as part failures and wheels falling off – and an equally persistent pattern of the company blaming the customer, even when the car was brand new. If we don’t want shoddy goods and data invasion with everything to be our future, fighting back is essential. In 2032, I hope looking back will show that story.

The good news going into 2024 is, as the Center for the Public Domain at Duke University, Public Domain Review, and Cory Doctorow write, the bumper crop of works entering the public domain: sound recordings (for the first time in 40 years), DH Lawrence’s Lady Chatterley’s Lover, Agatha Christie’s The Mystery of the Blue Train, Ben Hecht and Charles MacArthur’s play The Front Page, and the first iteration of Mickey Mouse. Happy new year.

Illustrations: Promotional still from the 1928 production of The Front Page, which enters the public domain on January 1, 2024 (via Wikimedia).


Surveillance machines on wheels

After much wrangling and with just a few days of legislative time between the summer holidays and the party conference season, on Tuesday night the British Parliament passed the Online Safety bill, which will become law as soon as it gets royally signed (assuming they can find a pen that doesn’t leak). The government announcement brims with propagandist ecstasy, while the Open Rights Group’s statement offers the reality: Britons’ online lives will be less secure as a result. Which means everyone’s will.

Parliament – and the net.wars archive – dates the current version of this bill to 2022, and the online harms white paper on which it’s based to 2020. But it *feels* like it’s been a much longer slog; I want to say six years.

This is largely because the fight over two key elements – access to encrypted messaging and age verification – *is* that old. Age verification was enshrined in the Digital Economy Act (2017), and we reviewed the contenders to implement it in 2016. If it’s ever really implemented, age verification will make Britain the most frustrating place in the world to be online.

Fights over strong encryption have been going on for 30 years. In that time, no new mathematics has appeared to change the fact that it’s not possible to create a cryptographic hole that only “good guys” can use. Nothing will change about that; technical experts will continue to try to explain to politicians that you can have secure communications or you can have access on demand, but you can’t have both.

***

At the New York Times, Farhad Manjoo writes that while almost every other industry understands that the huge generation of aging Boomers is a business opportunity, outside of health care Silicon Valley is still resolutely focused on under-30s. This, even though the titans themselves age; boy-king Mark Zuckerberg is almost 40. Hey, it’s California; they want to turn back aging, not accept it.

Manjoo struggles to imagine the specific directions products might take, but I like his main point: where’s the fun? What is this idea that after 65 you’re just something to send a robot to check up on? Yes, age often brings impairments, but why not build for them? You would think that given the right affordances, virtual worlds and online games would have a lot to offer people whose lives are becoming more constrained.

It’s true that by the time you realize that ageism pervades our society you’re old enough that no one’s listening to you any more. But even younger people must struggle with many modern IT practices: the pale, grey type that pervades the web, the picklists, the hidden passwords you have to type twice… And captchas, which often display on my desktop too small to see clearly and are resistant to resizing upwards. Bots are better at captchas than humans anyway, so what *is* the point?

We’re basically back where we were 30 years ago, when the new discipline of human-computer interaction fought to convince developers that if the people who struggle to operate their products look stupid the problem is bad design. And all this is coming much more dangerously to cars; touch screens that can’t be operated by feel are Exhibit A.

***

But there is much that’s worse about modern cars. A few weeks ago, the Mozilla Foundation published a report reviewing the privacy of modern cars. Tl;dr: “Cars are the worst product category we have ever reviewed for privacy.”

The problems are universal across the 25 brands Mozilla researchers Jen Caltrider, Misha Rykov, and Zoë MacDonald reviewed: “Modern cars are surveillance-machines on wheels souped-up with sensors, radars, cameras, telematics, and apps that can detect everything we do inside.” Cars can collect all the data that phones and smart home devices can. But unlike phones, space is a non-issue, and unlike smart speakers, video cameras, and thermostats, cars move with you and watch where you go. Drivers, passengers, passing pedestrians…all are fodder for data collection in the new automotive industry, where heated seats and unlocking extra battery range are subscription add-ons, and the car you buy isn’t any more yours than the £6-per-hour Zipcar in the designated space around the corner.

Then there are just some really weird clauses in the companies’ privacy policies. Some collect “genetic data” (here the question that arises is not only “why?” but “how?”). Nissan says it can collect information about owners’ “sexual activity” for use in “direct marketing” or to share with marketing partners. The researchers ask, “What on earth kind of campaign are you planning, Nissan?”

Still unknown: whether the data is encrypted while held on the car; how securely it’s held; and whether the companies will resist law enforcement requests at all. We do know that car companies share and sell the masses of intimate information they collect – especially the cars’ telematics – with insurance companies.

The researchers also note that new features allow unprecedented levels of control. VW’s Car-Net, for example, allows parents – or abusers – to receive a phone alert if the car is driven outside of set hours or in or near certain locations. Ford has filed a patent on a system for punishing drivers who miss car payments.

“I got old at the right time,” a friend said in 2019. You can see his point.

Illustrations: Artist Dominic Wilcox‘s imagined driverless sleeper car of the future, as seen at the Science Museum in 2019.


Five seconds

Careful observers posted to Hacker News this week – and the Washington Post reported – that the X formerly known as Twitter (XFKAT?) appeared to be deliberately introducing a delay in loading links to sites the owner is known to dislike or views as competitors. These would be things like the New York Times and selected other news organizations, and rival social media and publishing services like Facebook, Instagram, Bluesky, and Substack.

The 4.8 seconds users clocked doesn’t sound like much until you remember, as the Post does, that a 2016 Google study found that 53% of mobile users will abandon a website that takes longer than three seconds to load. Not sure whether desktop users are more or less patient, but it’s generally agreed that delay is the enemy.

The mechanism by which XFKAT was able to do this is its built-in link shortener, t.co, through which it routes all the links users post. You can see this for yourself if you right-click on a posted link and copy the results. You can only find the original link by letting the t.co links resolve and copying the real link out of the browser address bar after the page has loaded.
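That manual routine – load the shortened link, wait for the page, copy the real URL out of the address bar – is easy to automate. As a rough sketch using only Python’s standard library (the t.co-style address in the comment is hypothetical), you can let the redirect chain play out and keep whatever URL it ends at:

```python
# Resolve a shortened link by following its HTTP redirects.
# A minimal sketch; not specific to any one shortening service.
import urllib.request


def resolve(short_url: str) -> str:
    """Return the final URL a shortener redirects to."""
    # HEAD avoids downloading the page body; urllib follows
    # redirects automatically, so the URL attached to the final
    # response is the real destination.
    req = urllib.request.Request(short_url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return resp.url


# Example (hypothetical short link):
# resolve("https://t.co/abc123")
```

Some servers refuse HEAD requests; in that case a plain GET follows the same redirects, at the cost of fetching the page body.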

Whether or not the company was deliberately delaying these connections, the fact is that it *can* – as can Meta’s platforms and many others. This in itself is a problem; essentially it’s a failure of network neutrality. This is the principle that a telecoms company should treat all traffic equally, and it is the basis of the egalitarian nature of the Internet. Regulatory insistence on network neutrality is why you can run a voice over Internet Protocol connection over broadband supplied by a telco or telco-owned ISP even though the services are competitors. Social media platforms are not subject to these rules, but the delaying links story suggests maybe they should be once they reach a certain size.

Link shorteners have faded into the landscape these days, but they were controversial for years after the first such service – TinyURL – was launched in 2002 (per Wikipedia). Critics cited several main issues: privacy, persistence, and obscurity. The latter refers to users’ inability to know where their clicks are taking them; I feel strongly about this myself. The privacy issue is that the link shorteners-in-the-middle are in a position to collect traffic data and exploit it (bad actors could also divert links from their intended destination). The ability to collect that data and chart “impact” is, of course, one reason shorteners were widely adopted by media sites of all types.

The persistence issue is that intermediating links in this way creates one or more central points of failure. When the link shortener’s server goes down for any reason – failed Internet connection, technical fault, bankrupt owner company – the URL the shortener encodes becomes unreachable, even if the page itself is available as normal. You can’t go directly to the page, or even locate a cached copy at the Internet Archive, without the original URL.

Nonetheless, shortened links are still widely used, for the same reasons why they were invented. Many URLs are very long and complicated. In print publications, they are visually overwhelming, and unwieldy to copy into a web address bar; they are near-impossible to proofread in footnotes and citations. They’re even worse to read out on broadcast media. Shortened links solve all that. No longer germane is the 140-character limit Twitter had in its early years; because the URL counted toward that maximum, short was crucial. Since then, the character count has gotten bigger, and URLs aren’t included in the count any more.

If you do online research of any kind you have probably long since internalized the routine of loading the linked content and saving the actual URL rather than the shortened version. This turns out to be one of the benefits of moving to Mastodon: the link you get is the link you see.

So to network neutrality. Logically, its equivalent for social media services ought to include the principle that users can post whatever content or links they choose (law and regulation permitting), whether that’s reposted TikTok videos, a list of my IDs on other systems, or a link to a blog advocating that all social media companies be forced to become public utilities. Most have in fact operated that way until now, infected just enough with the early Internet ethos of openness. Changing that unwritten social contract is very bad news, even though no one believed XFKAT’s CEO when he insisted he was a champion of free speech and called his newly acquired site the “town square”.

If that’s what we want social media platforms to be, someone’s going to have to force them, especially if they begin shrinking and their owners start to feel the chill wind of an existential threat. You could even – though no one is, to the best of my knowledge – make the argument that swapping in a site-created shortened URL is a violation of the spirit of data protection legislation. After all, no one posts links on a social media site with the view that their tastes in content should be collected, analyzed, and used to target ads. Librarians have long been stalwarts in resisting pressure to disclose what their patrons read and access. In the move online in general, and to corporate social media in particular, we have utterly lost sight of the principle of the right to our own thoughts.

Illustrations: The New York City public library in 2006.
