Follow the business models

In a market that enabled the rational actions of economists’ fantasies, consumers would be able to communicate their preferences for “smart” or “dumb” objects by exercising purchasing power. Instead, everything from TVs and vacuum cleaners to cars is sprouting Internet connections and rampant data collection.

I would love to believe we will grow out of this phase as the risks of this approach continue to become clearer, but I doubt it, because business models will increasingly insist on the post-sale money, which never existed in the analog market. Subscriptions to specialized features and embedded ads seem likely to take over everything. Essentially, software can change the business model governing any object’s manufacture into Gillette’s famous gambit: sell the razors cheap, and make the real money selling razor blades. See also, in particular, printer cartridges. It’s going to be everywhere, and we’re all going to hate it.

***

My consciousness of the old ways is heightened at the moment because I spent last weekend participating in a couple of folk music concerts around my old home town, Ithaca, NY. Everyone played acoustic instruments and sang old songs to celebrate 58 years of the longest-running folk music radio show in North America. Some of us hadn’t really met for nearly 50 years. We all look older, but everyone sounded great.

A couple of friends there operate a “rock shop” outside their house. There’s no website, there’s no mobile app, just a table and some stone wall with bits of rock and other findings for people to take away if they like. It began as an attempt to give away their own small collection, but it seems the clearing space aspect hasn’t worked. Instead, people keep bringing them rocks to give away – in one case, a tray of carefully laid-out arrowheads. I made off with a perfect, peach-colored conch shell. As I left, they were taking down the rock shop to make way for fantastical Halloween decorations to entertain the neighborhood kids.

Except for a brief period in the 1960s, playing folk music has never been lucrative. However, it’s still harder now: teens buy CDs to ensure they can keep their favorite music, and older people buy CDs because they still play their old collections. But you can’t even *give* a 45-year-old a CD, because they have no way to play it. At the concert, Mike Agranoff highlighted musicians’ need for support in an ecosystem that now pays them just $0.014 (his number) for streaming a track.
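To put that rate in perspective, a quick back-of-the-envelope calculation helps; this is a sketch using Agranoff’s per-stream figure, and the income targets are my own illustrative choices, not numbers from the concert:

```python
# Back-of-the-envelope arithmetic: how many streams a musician needs
# at $0.014 per play to gross a given amount. The per-stream rate is
# Agranoff's figure; the dollar targets are illustrative.
import math

PER_STREAM = 0.014  # dollars paid per streamed track

def streams_needed(target_dollars: float) -> int:
    """Whole number of streams required to gross the target amount."""
    return math.ceil(target_dollars / PER_STREAM)

for target in (100, 1_000, 30_000):
    print(f"${target:>6,} gross requires {streams_needed(target):>9,} streams")
```

Roughly 71,000 streams just to gross $1,000, and over two million to gross a modest $30,000 – which is the point about needing support.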

***

With both Halloween and the US election scarily imminent, the government the UK elected in July finally got down to its legislative program this week.

Data protection reform is back in the form of the Data Use and Access Bill, Lindsay Clark reports at The Register, saying the bill is intended to improve efficiency in the NHS, the police force, and businesses. It will involve making changes to the UK’s implementation of the EU’s General Data Protection Regulation; care is needed to avoid putting the UK’s adequacy decision at risk. At the Open Rights Group, Mariano delli Santi warns that the bill weakens citizens’ protection against automated decision making. At medConfidential, Sam Smith details the lack of safeguards for patient data.

At Computer Weekly, Bill Goodwin and Sebastian Klovig Skelton outline the main provisions and hopes: improve patient care, free up police time to spend more protecting the public, save money.

‘Twas ever thus. Every computer system is always commissioned to save money and improve efficiency – they say this one will save 140,000 hours of NHS staff time a year! Every new computer system also always brings unexpected costs in time and money, and messy stages of implementation and adaptation during which everything becomes *less* efficient. There are always hidden costs – in this case, likely the difficulties of curating data and remediating historical bias. An easy prediction: these will be non-trivial.

***

Also pending is the draft United Nations Convention Against Cybercrime; the goal is to get it through the General Assembly by the end of this year.

Human Rights Watch writes that 29 civil society organizations have written to the EU and member states asking them to vote against the treaty’s adoption and consider alternative approaches that would safeguard human rights. The EFF is encouraging all states to vote no.

Internet historians will recall that there is already a convention on cybercrime, sometimes called the Budapest Convention. Drawn up in 2001 by the Council of Europe to come into force in 2004, it was signed by 70 countries and ratified by 68. The new treaty has been drafted by a much broader range of countries, including Russia and China, and is meant to be consistent with that older agreement. However, the hope is that it will achieve the global acceptance its predecessor did not, in part because of that broader base of support.

However, opponents are concerned that the treaty is vague, fails to limit its application to crimes that can only be committed via a computer, and lacks safeguards. It’s understandable that law enforcement, faced with the kinds of complex attacks on computer systems we see today, wants its path to international cooperation eased. But, as EFF writes, that eased cooperation should not extend to “serious crimes” whose definition and punishment are left up to individual countries.

Illustrations: Halloween display seen near Mechanicsburg, PA.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Review: Invisible Rulers

Invisible Rulers: The People Who Turn Lies Into Reality
by Renée DiResta
Hachette
ISBN: 978-1-5417-0337-7

For the last week, while violence has erupted in British cities, commentators have asked, among other things: what has social media contributed to the inflammation? Often, the focus lands on specific famous people, such as Elon Musk, who posted on exTwitter that the UK is heading for civil war (which basically shows he knows nothing about the UK).

It’s a particularly apt moment to read Renée DiResta‘s new book, Invisible Rulers: The People Who Turn Lies Into Reality. Until June, DiResta was the technical director of the Stanford Internet Observatory, which studies misinformation and disinformation online.

In her book, DiResta, like James Ball in The Other Pandemic and Naomi Klein in Doppelganger, traces how misinformation and disinformation propagate online. Where Ball examined his subject from the inside out (having spent his teenaged years on 4chan) and Klein from the outside in, DiResta’s study is structural. How do crowds work? What makes propaganda successful? Who drives engagement? What turns online engagement into real-world violence?

One reason these questions are difficult to answer is the lack of transparency regarding the money flowing to influencers, who may have audiences in the millions. The trust they build with their communities on one subject, like gardening or tennis statistics, extends to other topics when they stray. Someone making how-to knitting videos one day expresses concern about their community’s response to a new virus, finds engagement, and, eventually, through algorithmic boosting, greater profit in sticking to that topic instead. The result, she writes, is “bespoke realities” that are shaped by recommendation engines and emerge from competition among state actors, terrorists, ideologues, activists, and ordinary people. Then add generative AI: “We can now mass-produce unreality.”

DiResta’s work on this began in 2014, when she was checking vaccination rates in the preschools she was looking at for her year-old son in the light of rising rates of whooping cough in California. Why, she wondered, were there all these anti-vaccine groups on Facebook, and what went on in them? When she joined to find out, she discovered a nest of evangelists promoting lies to little opposition, a common pattern she calls “asymmetry of passion”. The campaign group she helped found succeeded in getting a change in the law, but she also saw that the future lay in online battlegrounds shaping public opinion. When she presented her discoveries to the Centers for Disease Control, however, they dismissed it as “just some people online”. This insouciance would, as she documents in a later chapter, come back to bite during the covid emergency, when the mechanisms already built whirred into action to discredit science and its institutions.

Asymmetry of passion makes those holding extreme opinions seem more numerous than they are. The addition of boosting algorithms and “charismatic leaders” such as Musk or Robert F. Kennedy, Jr (your mileage may vary) adds to this effect. DiResta does a good job of showing how shifts within groups – anti-vaxx groups that also fear chemtrails and embrace flat earth, flat earth groups that shift to QAnon – lead eventually from “asking questions” to “take action”. See also today’s UK.

Like most of us, DiResta is less clear on potential solutions. She gives some thought to the idea of prebunking, but more to requiring transparency: of platforms around content moderation decisions, of influencers around their payment for commercial and political speech, and of governments around their engagement with social media platforms. She also recommends giving users better tools and introducing some friction to force a little more thought before posting.

The Observatory’s future is unclear, as several other key staff have left; Stanford told The Verge in June that the Observatory would continue under new leadership. It is just one of several election integrity monitors whose future is cloudy; in March Facebook announced it would shut down research tool CrowdTangle on August 14. DiResta’s book is an important part of its legacy.

Outbound

As the world and all knows by now, the UK is celebrating this year’s American Independence Day by staging a general election. The preliminaries are mercifully short by US standards, in that the period between the day the election was called and the day the winners will be announced is only about six weeks. I thought the announcement would bring more sense of relief than it did. Instead, these six weeks seem interminable, for two reasons: first, the long, long wait for the results, and second, the dominant driver for votes is largely negative – voting against, rather than voting for.

Labour, which is in polling position to win by a lot, is best served by saying and doing as little as possible, lest a gaffe damage its prospects. The Conservatives seem to be just trying not to look as hopeless as they feel. The only party with much exuberance is the far-right upstart Reform, which measures success in terms of whether it gets a larger share of the vote than the Conservatives and whether Nigel Farage wins a Parliamentary seat on his eighth try. And the Greens, who are at least motivated by genuine passion for their cause, and whose only MP is retiring this year. For them, sadly, success would be replacing her.

Particularly odd is the continuation of the trend visible in recent years for British right-wingers to adopt the rhetoric and campaigning style of the current crop of US Republicans. This week, they’ve been spinning the idea that Labour may win a dangerous “supermajority”. “Supermajority” has meaning in the US, where the balance of powers – presidency, House of Representatives, Senate – can all go in one party’s direction. It has no meaning in the UK, where Parliament is sovereign. All it means is Labour could wind up with a Parliamentary majority so large that they can pass any legislation they want. But this has been the Conservatives’ exact situation for the last five years, ever since the 2019 general election gave Boris Johnson a majority of 86. We should probably be grateful they largely wasted the opportunity squabbling among themselves.

This week saw the launch, day by day, of each party manifesto in turn. At one time, this would have led to extensive analysis and comparisons. This year, what discussion there is focuses on costs: whose platform commits to the most unfunded spending, and therefore who will raise taxes the most? Yet my very strong sense is that few among the electorate are focused on taxes; we’d all rather have public services that work and an end to the cost-of-living crisis. You have to be quite wealthy before private health care offers better value than paying taxes. But here may lie the explanation for both this and the weird Republican-ness of 2024 right-wing UK rhetoric: they’re playing to the same wealthy donors.

In this context, it’s not surprising that there’s not much coverage of what little the manifestos have to say about digital rights or the Internet. The exception is Computer Weekly, which finds the Conservatives promising more of the same and Labour offering a digital infrastructure plan that includes building data centers and easing various business regulations, but does not reintroduce the just-abandoned Data Protection and Digital Information bill.

In the manifesto itself: “Labour will build on the Online Safety Act, bringing forward provisions as quickly as possible, and explore further measures to keep everyone safe online, particularly when using social media. We will also give coroners more powers to access information held by technology companies after a child’s death.” The latter is a reference to recent cases such as that of 14-year-old Molly Russell, whose parents fought for five years to gain access to her Instagram account after her death.

Elsewhere, the manifesto also says, “Too often we see families falling through the cracks of public services. Labour will improve data sharing across services, with a single unique identifier, to better support children and families.”

“A single unique identifier” brings a kind of PTSD flashback: the last Labour government, in power from 1997 to 2010, largely built the centralized database state, and was obsessed with national ID cards, which were finally killed by David Cameron’s incoming coalition government. At the time, one of the purported benefits was streamlining government interaction. So I’m suspicious: this number could easily be backed by biometrics, checked via phone apps on the spot, anywhere, and grow into…?

In terms of digital technologies, the LibDems mostly talk about health care, mandating interoperability for NHS systems and improving both care and efficiency. That can only be assessed if the detail is known. Also of interest: the LibDems’ proposed anti-SLAPP law, increasingly needed.

The LibDems also commit to advocate for a “Digital Bill of Rights”. I’m not sure it’s worth the trouble: “digital rights” as a set of civil liberties separate from human rights is antiquated, and many aspects are already enshrined in data protection, competition, and other law. In 2019, under the influence of then-deputy leader Tom Watson, this was a Labour policy. The LibDems are unlikely to have any power, but they lead in my area.

I wish the manifestos mattered and that we could have a sensible public debate about what technology policy should look like and what the priorities should be. But in a climate where everyone votes to get one lot out, the real battle begins on July 5, when we find out what kind of bargain we’ve made.

Illustrations: Polling station in Canonbury, London, in 2019 (via Wikimedia).


Last year’s news

It was tempting to skip wrapping up 2023, because at first glance large language models seemed so thoroughly dominant (and boring to revisit), but bringing the net.wars archive list up to date showed a different story. To be fair, this is partly personal bias: from the beginning LLMs seemed fated to collapse under the weight of their own poisoning; AI Magazine predicted such an outcome as early as June.

LLMs did, however, seem to accelerate public consciousness of three long-running causes of concern: privacy and big data; corporate cooption of public and private resources; and antitrust enforcement. That acceleration may be LLMs’ more important long-term effect. In the short term, the justifiably bigger concern is their propensity to spread disinformation and misinformation in the coming year’s many significant elections.

Enforcement of data protection laws has been slowly ramping up in any case, and the fines just keep getting bigger, culminating in May’s fine against Meta for €1.2 billion. Given that fines, no matter how large, seem insignificant compared to the big technology companies’ revenues, the more important trend is issuing constraints on how they do business. That May fine came with an order to stop sending EU citizens’ data to the US. Meta responded in October by announcing a subscription tier for European Facebook users: €160 a year will buy freedom from ads. Freedom from Facebook remains free.

But Facebook is almost 20 years old; it had years in which to grow without facing serious regulation. By contrast, ChatGPT, which OpenAI launched just over a year ago, has already faced investigation by the US Federal Trade Commission and been banned temporarily by the Italian data protection authority (it was reinstated a month later with conditions). It’s also facing more than a dozen lawsuits claiming copyright infringement; the most recent of these was filed just this week by the New York Times. It has settled one of these suits by forming a partnership with Axel Springer.

It all suggests a lessening tolerance for “ask forgiveness, not permission”. As another example, Clearview AI has spent most of the four years since Kashmir Hill alerted the world to its existence facing regulatory bans and fines, and public disquiet over the rampant spread of live facial recognition continues to grow. Add in the continuing degradation of exTwitter, the increasing number of friends who say they’re dropping out of social media generally, and the revival of US antitrust actions with the FTC’s suit against Amazon, and it feels like change is gathering.

It would be a logical time, for an odd reason: each of the last few decades as seen through published books has had a distinctive focus with respect to information technology. I discovered this recently when, for various reasons, I reorganized my hundreds of books on net.wars-type subjects dating back to the 1980s. How they’re ordered matters: I need to be able to find things quickly when I want them. In 1990, a friend’s suggestion of categorizing by topic seemed logical: copyright, privacy, security, online community, robots, digital rights, policy… The categories quickly broke down and cross-pollinated. In rebuilding the library, what to replace it with?

The exercise, which led to alphabetizing by author’s name within decade of publication, revealed that each of the last few decades has been distinctive enough that it’s remarkably easy to correctly identify a book’s decade without turning to the copyright page to check. The 1980s and 1990s were about exploration and explanation. Hype led us into the 2000s, which were quieter in publishing terms, though marked by bursts of business books that spanned the dot-com boom, bust, and renewal. The 2010s brought social media, content moderation, and big data, and a new set of technologies to hype, such as 3D printing and nanotechnology (about which we hear nothing now). The 2020s, it’s too soon to tell…but safe to say disinformation, AI, and robots are dominating these early years.

The 2020s books to date are trying to understand how to rein in the worst effects of Big Tech: online abuse, cryptocurrency fraud, disinformation, the loss of control as even physical devices turn into manufacturer-controlled subscription services, and, as predicted in 2018 by Christian Wolmar, the ongoing failure of autonomous vehicles to take over the world as projected just ten years ago.

While Teslas are not autonomous, the company’s Silicon Valley ethos has always made them seem more like information technology than cars. Bad idea, as Reuters reports; its investigation found a persistent pattern of mishaps such as part failures and wheels falling off – and an equally persistent pattern of the company blaming the customer, even when the car was brand new. If we don’t want shoddy goods and data invasion with everything to be our future, fighting back is essential. In 2032, I hope looking back shows that story.

The good news going into 2024 is, as the Center for the Public Domain at Duke University, Public Domain Review, and Cory Doctorow write, the bumper crop of works entering the public domain: sound recordings (for the first time in 40 years), DH Lawrence’s Lady Chatterley’s Lover, Agatha Christie’s The Mystery of the Blue Train, Ben Hecht and Charles MacArthur’s play The Front Page, and the first Mickey Mouse cartoons. Happy new year.

Illustrations: Promotional still from the 1928 production of The Front Page, which enters the public domain on January 1, 2024 (via Wikimedia).


Surveillance machines on wheels

After much wrangling, and with just a few days of legislative time between the summer holidays and the party conference season, on Tuesday night the British Parliament passed the Online Safety bill, which will become law as soon as it gets royally signed (assuming they can find a pen that doesn’t leak). The government announcement brims with propagandist ecstasy, while the Open Rights Group’s statement offers the reality: Britons’ online lives will be less secure as a result. Which means everyone’s will.

Parliament – and the net.wars archive – dates the current version of this bill to 2022, and the online harms white paper on which it’s based to 2020. But it *feels* like it’s been a much longer slog; I want to say six years.

This is largely because the fight over two key elements – access to encrypted messaging and age verification – *is* that old. Age verification was enshrined in the Digital Economy Act (2017), and we reviewed the contenders to implement it in 2016. If it’s ever really implemented, age verification will make Britain the most frustrating place in the world to be online.

Fights over strong encryption have been going on for 30 years. In that time, no new mathematics has appeared to change the fact that it’s not possible to create a cryptographic hole that only “good guys” can use. Nothing will change about that; technical experts will continue to try to explain to politicians that you can have secure communications or you can have access on demand, but you can’t have both.

***

At the New York Times, Farhad Manjoo writes that while almost every other industry understands that the huge generation of aging Boomers is a business opportunity, outside of health care Silicon Valley is still resolutely focused on under-30s. This, even though the titans themselves age; boy-king Mark Zuckerberg is almost 40. Hey, it’s California; they want to turn back aging, not accept it.

Manjoo struggles to imagine the specific directions products might take, but I like his main point: where’s the fun? What is this idea that after 65 you’re just something to send a robot to check up on? Yes, age often brings impairments, but why not build for them? You would think that given the right affordances, virtual worlds and online games would have a lot to offer people whose lives are becoming more constrained.

It’s true that by the time you realize that ageism pervades our society you’re old enough that no one’s listening to you any more. But even younger people must struggle with many modern IT practices: the pale, grey type that pervades the web, the picklists, the hidden passwords you have to type twice… And captchas, which often display on my desktop too small to see clearly and are resistant to resizing upwards. Bots are better at captchas than humans anyway, so what *is* the point?

We’re basically back where we were 30 years ago, when the new discipline of human-computer interaction fought to convince developers that if the people who struggle to operate their products look stupid the problem is bad design. And all this is coming much more dangerously to cars; touch screens that can’t be operated by feel are Exhibit A.

***

But there is much that’s worse about modern cars. A few weeks ago, the Mozilla Foundation published a report reviewing the privacy of modern cars. Tl;dr: “Cars are the worst product category we have ever reviewed for privacy.”

The problems are universal across the 25 brands Mozilla researchers Jen Caltrider, Misha Rykov, and Zoë MacDonald reviewed: “Modern cars are surveillance-machines on wheels souped-up with sensors, radars, cameras, telematics, and apps that can detect everything we do inside.” Cars can collect all the data that phones and smart home devices can. But unlike phones, space is a non-issue, and unlike smart speakers, video cameras, and thermostats, cars move with you and watch where you go. Drivers, passengers, passing pedestrians…all are fodder for data collection in the new automotive industry, where heated seats and unlocking extra battery range are subscription add-ons, and the car you buy isn’t any more yours than the £6-per-hour Zipcar in the designated space around the corner.

Then there are just some really weird clauses in the companies’ privacy policies. Some collect “genetic data” (here the question that arises is not only “why?” but “how?”). Nissan says it can collect information about owners’ “sexual activity” for use in “direct marketing” or to share with marketing partners. The researchers ask, “What on earth kind of campaign are you planning, Nissan?”

Still unknown: whether the data is encrypted while held on the car; how securely it’s held; and whether the companies will resist law enforcement requests at all. We do know that car companies share and sell the masses of intimate information they collect – especially the cars’ telematics – with insurance companies.

The researchers also note that new features allow unprecedented levels of control. VW’s Car-Net, for example, allows parents – or abusers – to receive a phone alert if the car is driven outside of set hours or in or near certain locations. Ford has filed a patent on a system for punishing drivers who miss car payments.

“I got old at the right time,” a friend said in 2019. You can see his point.

Illustrations: Artist Dominic Wilcox‘s imagined driverless sleeper car of the future, as seen at the Science Museum in 2019.


Five seconds

Careful observers posted to Hacker News this week – and the Washington Post reported – that the X formerly known as Twitter (XFKAT?) appeared to be deliberately introducing a delay in loading links to sites the owner is known to dislike or views as competitors. These would be things like the New York Times and selected other news organizations, and rival social media and publishing services like Facebook, Instagram, Bluesky, and Substack.

The 4.8 seconds users clocked doesn’t sound like much until you remember, as the Post does, that a 2016 Google study found that 53% of mobile users will abandon a website that takes longer than three seconds to load. Not sure whether desktop users are more or less patient, but it’s generally agreed that delay is the enemy.

The mechanism by which XFKAT was able to do this is its built-in link shortener, t.co, through which it routes all the links users post. You can see this for yourself if you right-click on a posted link and copy the results. You can only find the original link by letting the t.co links resolve and copying the real link out of the browser address bar after the page has loaded.

Whether or not the company was deliberately delaying these connections, the fact is that it *can* – as can Meta’s platforms and many others. This in itself is a problem; essentially it’s a failure of network neutrality. This is the principle that a telecoms company should treat all traffic equally, and it is the basis of the egalitarian nature of the Internet. Regulatory insistence on network neutrality is why you can run a voice over Internet Protocol connection over broadband supplied by a telco or telco-owned ISP even though the services are competitors. Social media platforms are not subject to these rules, but the delaying links story suggests maybe they should be once they reach a certain size.

Link shorteners have faded into the landscape these days, but they were controversial for years after the first such service – TinyURL – was launched in 2002 (per Wikipedia). Critics cited several main issues: privacy, persistence, and obscurity. The latter refers to users’ inability to know where their clicks are taking them; I feel strongly about this myself. The privacy issue is that the link shorteners-in-the-middle are in a position to collect traffic data and exploit it (bad actors could also divert links from their intended destination). The ability to collect that data and chart “impact” is, of course, one reason shorteners were widely adopted by media sites of all types. The persistence issue is that intermediating links in this way creates one or more central points of failure. When the link shortener’s server goes down for any reason – failed Internet connection, technical fault, bankrupt owner company – the URL the shortener encodes becomes unreachable, even if the page itself is available as normal. You can’t go directly to the page, or even locate a cached copy at the Internet Archive, without the original URL.
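The persistence problem is easy to sketch in code. The toy resolver below is a hypothetical model, not any real service’s API (the `sho.rt` domain and all names are invented); it shows that once the shortener’s lookup service is unreachable, the short link is dead even though the target page itself may be fine:

```python
# Toy model of a link shortener: a short code resolves to the real
# URL only while the shortener's own lookup service is up.
# All names and URLs here are illustrative, not a real service.

class ShortenerDown(Exception):
    """Raised when the shortener's lookup service is unreachable."""

class Shortener:
    def __init__(self):
        self._table = {}   # short code -> original URL
        self._up = True    # is the lookup service reachable?

    def shorten(self, url: str) -> str:
        code = f"x{len(self._table):04d}"
        self._table[code] = url
        return f"https://sho.rt/{code}"

    def resolve(self, short_url: str) -> str:
        if not self._up:
            raise ShortenerDown("lookup service unreachable")
        code = short_url.rsplit("/", 1)[-1]
        return self._table[code]

    def go_down(self) -> None:
        self._up = False

svc = Shortener()
short = svc.shorten("https://example.com/deep/page")
print(svc.resolve(short))   # works while the service is up

svc.go_down()               # shortener fails; the target page is untouched
try:
    svc.resolve(short)
except ShortenerDown as e:
    print("dead link:", e)  # the only route to the original URL is gone
```

The target page never moved; only the intermediary failed. That is the single point of failure the critics meant.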

Nonetheless, shortened links are still widely used, for the same reasons why they were invented. Many URLs are very long and complicated. In print publications, they are visually overwhelming, and unwieldy to copy into a web address bar; they are near-impossible to proofread in footnotes and citations. They’re even worse to read out on broadcast media. Shortened links solve all that. No longer germane is the 140-character limit Twitter had in its early years; because the URL counted toward that maximum, short was crucial. Since then, the character count has gotten bigger, and URLs aren’t included in the count any more.

If you do online research of any kind you have probably long since internalized the routine of loading the linked content and saving the actual URL rather than the shortened version. This turns out to be one of the benefits of moving to Mastodon: the link you get is the link you see.

So to network neutrality. Logically, its equivalent for social media services ought to include the principle that users can post whatever content or links they choose (law and regulation permitting), whether that’s reposted TikTok videos, a list of my IDs on other systems, or a link to a blog advocating that all social media companies be forced to become public utilities. Most have in fact operated that way until now, infected just enough with the early Internet ethos of openness. Changing that unwritten social contract is very bad news even though no one believed XFKAT’s CEO when he insisted he was a champion of free speech and called the now-his site the “town square”.

If that’s what we want social media platforms to be, someone’s going to have to force them, especially if they begin shrinking and their owners start to feel the chill wind of an existential threat. You could even – though no one is, to the best of my knowledge – make the argument that swapping in a site-created shortened URL is a violation of the spirit of data protection legislation. After all, no one posts links on a social media site with the view that their tastes in content should be collected, analyzed, and used to target ads. Librarians have long been stalwarts in resisting pressure to disclose what their patrons read and access. In the move online in general, and to corporate social media in particular, we have utterly lost sight of the principle of the right to our own thoughts.

Illustrations: The New York City public library in 2006.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The safe place

For a long time, there were fears that technical decisions – new domain names, cooption of open standards or software, laws mandating data localization – would splinter the Internet. “Balkanize” was heard a lot.

A panel at the UK Internet Governance Forum a couple of weeks ago focused on this exact topic, and was mostly self-congratulatory. Which is when it occurred to me that the Internet may not *be* fragmented, but it *feels* fragmented. Almost every day I encounter some site I can’t reach: email goes into someone’s spam folder, the site or its content is off-limits because it’s been geofenced to conform with copyright or data protection laws, or the site mysteriously doesn’t load, with no explanation. The most likely cause in the last case is censorship built into the Internet feed by the ISP or the establishment whose connection I’m using, but they don’t actually *say* that.

The ongoing attrition at Twitter is exacerbating this feeling, as the users I’ve followed for years continue to migrate elsewhere. At the moment, it takes accounts on several other services to keep track of everyone: definite fragmentation.

Here in the UK, this sense of fragmentation may be about to get a lot worse, as the long-heralded Online Safety bill – written and expanded until it’s become a “Frankenstein bill”, as Mark Scott and Annabelle Dickson report at Politico – hurtles toward passage. This week saw fruitless debates on amendments in the House of Lords, and it will presumably be back in the Commons shortly, where it could be passed into law by this fall.

A number of companies have warned that the bill, particularly if it passes with its provisions undermining end-to-end encryption intact, will drive them out of the country. I’m not sure British politicians are taking them seriously; so often such threats are idle. But in this case, I think they’re real, not least because post-Brexit Britain carries so much less global and commercial weight, a reality some politicians are in denial about. WhatsApp, Signal, and Apple have all said openly that they will not compromise the privacy of their masses of users elsewhere to suit the UK. Wikipedia has warned that including it in the requirement to age-verify its users will force it to withdraw rather than violate its principles about collecting as little information about users as possible. The irony is that the UK government itself runs on WhatsApp.

As Ian McRae, the director of market intelligence for prospective online safety regulator Ofcom, showed in a presentation at UKIGF, Wikipedia would be just one of the estimated 150,000 sites within the scope of the bill. Ofcom is ramping up to deal with the workload, an effort the agency expects to cost £169 million between now and 2025.

In a legal opinion commissioned by the Open Rights Group, barristers at Matrix Chambers find that clause 9(2) of the bill is unlawful. This, as Thomas Macaulay explains at The Next Web, is the clause that requires platforms to proactively remove illegal or “harmful” user-generated content. In fact: prior restraint. As ORG goes on to say, there is no requirement to tell users why their content has been blocked.

Until now, the impact of most badly-formulated British legislative proposals has been sort of abstract. Data retention, for example: you know that pervasive mass surveillance is a bad thing, but most of us don’t really expect to feel the impact personally. This is different. Some of my non-UK friends will only use Signal to communicate, and I doubt a day goes by that I don’t look something up on Wikipedia. I could use a VPN for that, but if the only way to use Signal is to have a non-UK phone? I can feel those losses already.

And if people think they dislike those ubiquitous cookie banners and consent clickthroughs, wait until they have to age-verify all over the place. Worst case: this bill will be an act of self-harm that one day will be as inexplicable to future generations as Brexit.

The UK is not the only one pursuing this path. Age verification in particular is catching on. The US states of Virginia, Mississippi, Louisiana, Arkansas, Texas, Montana, and Utah have all passed legislation requiring it; Pornhub now blocks users in Mississippi and Virginia. The likelihood is that many more countries will try to copy some or all of the UK bill’s provisions, just as Australia’s law requiring the big social media platforms to negotiate with news publishers is spawning copies in Canada and California.

This is where the real threat of the “splinternet” lies. Think of requiring 150,000 websites to implement age verification and proactively police content. Many of those sites, as the law firm Mishcon de Reya writes, may not even be based in the UK.

This means that any site located outside the UK – and perhaps even some that are based here – will be asking, “Is it worth it?” For a lot of them, it won’t be. Which means that however much the Internet retains its integrity, the British user experience will be the Internet as a sea of holes.

Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon.