The safe place

For a long time, there have been fears that technical decisions – new domain names, cooption of open standards or software, laws mandating data localization – would splinter the Internet. “Balkanize” was heard a lot.

A panel at the UK Internet Governance Forum a couple of weeks ago focused on this exact topic, and was mostly self-congratulatory. Which is when it occurred to me that the Internet may not *be* fragmented, but it *feels* fragmented. Almost every day I encounter some site I can’t reach: email goes into someone’s spam folder, the site or its content is off-limits because it’s been geofenced to conform with copyright or data protection laws, or the site mysteriously doesn’t load, with no explanation. The most likely explanation for the last is censorship built into the Internet feed by the ISP or the establishment whose connection I’m using, but they don’t actually *say* that.

The ongoing attrition at Twitter is exacerbating this feeling, as the users I’ve followed for years continue to migrate elsewhere. At the moment, it takes accounts on several other services to keep track of everyone: definite fragmentation.

Here in the UK, this sense of fragmentation may be about to get a lot worse, as the long-heralded Online Safety bill – written and expanded until it’s become a “Frankenstein bill”, as Mark Scott and Annabelle Dickson report at Politico – hurtles toward passage. This week saw fruitless debates on amendments in the House of Lords, and the bill will presumably be back in the Commons shortly, where it could be passed into law by this fall.

A number of companies have warned that the bill, particularly if it passes with its provisions undermining end-to-end encryption intact, will drive them out of the country. I’m not sure British politicians are taking them seriously; so often such threats are idle. But in this case, I think they’re real, not least because post-Brexit Britain carries so much less global and commercial weight, a reality some politicians are in denial about. WhatsApp, Signal, and Apple have all said openly that they will not compromise the privacy of their masses of users elsewhere to suit the UK. Wikipedia has warned that including it in the requirement to age-verify its users will force it to withdraw rather than violate its principles about collecting as little information about users as possible. The irony is that the UK government itself runs on WhatsApp.

Wikipedia, as Ian McRae, the director of market intelligence for the prospective online safety regulator Ofcom, showed in a presentation at UKIGF, would be just one of the estimated 150,000 sites within the scope of the bill. Ofcom is ramping up to deal with the workload, an effort the agency expects to cost £169 million between now and 2025.

In a legal opinion commissioned by the Open Rights Group, barristers at Matrix Chambers find that clause 9(2) of the bill is unlawful. This, as Thomas Macaulay explains at The Next Web, is the clause that requires platforms to proactively remove illegal or “harmful” user-generated content. In fact: prior restraint. As ORG goes on to say, there is no requirement to tell users why their content has been blocked.

Until now, the impact of most badly-formulated British legislative proposals has been sort of abstract. Data retention, for example: you know that pervasive mass surveillance is a bad thing, but most of us don’t really expect to feel the impact personally. This is different. Some of my non-UK friends will only use Signal to communicate, and I doubt a day goes by that I don’t look something up on Wikipedia. I could use a VPN for that, but if the only way to use Signal is to have a non-UK phone? I can feel those losses already.

And if people think they dislike those ubiquitous cookie banners and consent clickthroughs, wait until they have to age-verify all over the place. Worst case: this bill will be an act of self-harm that one day will be as inexplicable to future generations as Brexit.

The UK is not the only one pursuing this path. Age verification in particular is catching on. The US states of Virginia, Mississippi, Louisiana, Arkansas, Texas, Montana, and Utah have all passed legislation requiring it; Pornhub now blocks users in Mississippi and Virginia. The likelihood is that many more countries will try to copy some or all of its provisions, just as Australia’s law requiring the big social media platforms to negotiate with news publishers is spawning copies in Canada and California.

This is where the real threat of the “splinternet” lies. Think of requiring 150,000 websites to implement age verification and proactively police content. Many of those sites, as the law firm Mishcon de Reya writes, may not even be based in the UK.

This means that any site located outside the UK – and perhaps even some that are based here – will be asking, “Is it worth it?” For a lot of them, it won’t be. Which means that however much the Internet retains its integrity, the British user experience will be the Internet as a sea of holes.

Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon.

Solidarity

Whatever you’re starting to binge-watch, slow down. It’s going to be a long wait for fresh content out of Hollywood.

Yesterday, the actors union, SAG-AFTRA, went out on strike alongside the members of the Writers Guild of America, who have been walking picket lines since May 2. Like the writers, actors have seen their livelihoods shrink as US TV shows’ seasons shorten, “reruns” that pay residuals fade into the past, and DVD royalties dry up, while royalties from streaming remain tiny by comparison. At the Hollywood and Levine podcast, the veteran screenwriter Ken Levine gives the background to the WGA’s action. But think of it this way: the writers and cast of The Big Bang Theory may be the last to share fairly in the enormous profits their work continues to generate.

The even bigger threat? AI that makes it possible to capture the actor’s likeness and then reuse it ad infinitum in new work. This, as Malia Mendez writes at the LA Times, is the big fear. In a world where Harrison Ford at 80 is making movies in which he’s aged down to look 40 and James Earl Jones has agreed to clone his voice for reuse after his death, it’s arguably a rational big fear.

We’ve had this date for a long time. In the late 1990s I saw a demonstration of “vactors” – virtual actors that were created by scanning a human actor moving in various ways and building a library of movements that thereafter could be rendered at will. At the time, the state of the art was not much advanced from the liquid metal man in Terminator 2. Rendering film-quality characters was very slow, but that was then and this is now, and how long before rendering moving humans can be done in high-def in real-time at action speed?

The studios are already pushing actors into allowing synthesized reuse. California law grants public figures, including actors, publicity rights that prevent the commercial use of their name and likeness without consent. However, Mendez reports that current contracts already require actors to waive those rights to grant the studios digital simulation or digital creation rights. The effects are worst in reality television, where the line is blurred between the individual as a character on a TV show and the individual in their off-screen life. She quotes lawyer Ryan Schmidt: “We’re at this Napster 2001 moment…”

That moment is even closer for voice actors. Last year, Actors Equity announced a campaign to protect voice actors from their synthesized counterparts. This week, one of those synthesizers is providing commentary – more like captions, really – for video clips like this one at Wimbledon. As I said last year, while synthesized voices will be good enough for many applications such as railway announcements, there are lots of situations that will continue to require real humans. Sports commentary is one; commentators aren’t just there to provide information, they’re *also* there to sell the game. Their human excitement at the proceedings is an important part of that.

So SAG-AFTRA, like the Writers Guild of America, is seeking limitations on how studios may use AI, payment for such uses, and rules on protecting against misuse. In another LA Times story, Anoushka Sakoui reports that the studios’ offer included requiring “a performer’s consent for the creation and use of digital replicas or for digital alterations of a performance”. Like publishers “offering” all-rights-in perpetuity contracts to journalists and authors since the 1990s, the studios are trying to ensure they have all the rights they could possibly want.

“You cannot change the business model as much as it has changed and not expect the contract to change, too,” SAG-AFTRA president Fran Drescher said yesterday in a speech that has been widely circulated.

It was already clear this is going to be a long strike that will damage tens of thousands of industry workers and the economy of California. Earlier this week, Dominic Patten reported at Deadline that the Association of Movie and Television Producers plans to delay resuming talks with the WGA until October. By then, Patten reports producers saying, writers will be losing their homes and be more amenable to accepting the AMPTP’s terms. The AMPTP officially denies this, saying it’s committed to reaching a deal. Nonetheless, there are no ongoing talks. As Ken Levine pointed out in a pair of blogposts written during the 2007 writers strike, management is always in control of timing.

But as Levine also says, in the “old days” a top studio mogul could simply say, “Let’s get this done” and everyone would get around the table and make a deal. The new presence of tech giants Netflix, Amazon, and Apple in the AMPTP membership makes this time different. At some point, the strike will be too expensive for legacy Hollywood studios. But for Apple, TV production is a way to sell services and hardware. For Amazon, it’s a perk that comes with subscribing to its Prime delivery service. Only Netflix needs a constant stream of new work – and it can commission it from creators across the globe. All three of them can wait. And the longer they drag this out, the more the traditional studios will lose money and weaken as competitors.

Legacy Hollywood doesn’t seem to realize it yet, but this strike is existential for them, too.

Illustrations: SAG-AFTRA president Fran Drescher, announcing the strike on Thursday.


Watson goes to Wimbledon

The launch of the Fediverse-compatible Meta app Threads seems to have slightly overshadowed the European Court of Justice’s ruling, earlier in the week. This ruling deserves more attention: it undermines the basis of Meta’s targeted advertising. In noyb’s initial reaction, data protection legal bulldog Max Schrems suggests the judgment will make life difficult for not just Meta but other advertising companies.

As Alex Scroxton explains at Computer Weekly, the ruling rejects several different claims by Meta that all attempt to bypass the requirement enshrined in the General Data Protection Regulation that where there is no other legal basis for data processing, users must actively consent. Meta can’t get by with claiming that targeted advertising is a part of its service users expect, or that it’s technically necessary to provide its service.

More interesting is the fact that the original complaint was not filed by a data protection authority but by Germany’s antitrust body, which sees Meta’s our-way-or-get-lost approach to data gathering as abuse of its dominant position – and the CJEU has upheld this idea.

All this is presumably part of why Meta decided to roll out Threads in many countries but *not* the EU. In February, as a consequence of Brexit, Meta moved UK users to its US agreements. The UK’s data protection law is a clone of GDPR and will remain so until and unless the British Parliament changes it via the pending Data Protection and Digital Information bill. Still, it seems the move makes Meta ready to exploit such changes if they do occur.

Warning to people with longstanding Instagram accounts who want to try Threads: if your plan is to try it and (maybe) delete it, set up a new Instagram account for the purpose. Otherwise, you’ll be sad to discover that deleting your new Threads account means deleting your old Instagram account along with it. It’s the Hotel California method of Getting Big Fast.

***

Last week the Irish Council for Civil Liberties warned that a last-minute amendment to the Courts and Civil Law (Miscellaneous) bill will allow Ireland’s Data Protection Commissioner to mark any of its proceedings “confidential” and thereby bar third parties from publishing information about them. Effectively, it blocks criticism. This is a muzzle not only for the ICCL and other activists and journalists but for aforesaid bulldog Schrems, who has made a career of pushing the DPC to enforce the law it was created to enforce. He keeps winning in court, too, which I’m sure must be terribly annoying.

The Irish DPC is an essential resource for everyone in Europe because Ireland is the European home of so many of American Big Tech’s subsidiaries. So this amendment – which reportedly passed the Oireachtas (Ireland’s parliament) – is an alarming development.

***

Over the last few years Canadian law professor Michael Geist has had plenty of complaints about Canada’s Online News Act, aka C-18. Like the Australian legislation it emulates, C-18 requires intermediaries like Facebook and Google to negotiate and pay for licenses to link to Canadian news content. The bill became law on June 22.

Naturally, Meta and Google have warned that they will block links to Canadian news media from their services when the bill comes into force six months hence. They also intend to withdraw their ongoing programs to support the Canadian press. In response, the Canadian government has pulled its own advertising from Meta platforms Facebook and Instagram. Much hyperbolic silliness is taking place.

Pretty much everyone who is not the Canadian government thinks the bill is misconceived. Canadian publishers will lose traffic, not gain revenues, and no one will be happy. In Australia, the main beneficiary appears to be Rupert Murdoch, with whom Google signed a three-year agreement in 2021 and who is hardly the sort of independent local media some hoped would benefit. Unhappily, the state of California wants in on this game; its in-progress Journalism Preservation Act also seeks to require Big Tech to pay a “journalism usage fee”.

The result is to continue to undermine the open Internet, in which the link is fundamental to sharing information. If things aren’t being (pay)walled off, blocked for copyright/geography, or removed for corporate reasons – the latest announced casualty is the GIF hosting site Gfycat – they’re being withheld to avoid compliance requirements or withdrawn for tax reasons. None of us are better off for any of this.

***

Those with long memories will recall that in 2011 IBM’s giant computer, Watson, beat the top champions at the TV game show Jeopardy. IBM predicted a great future for Watson as a medical diagnostician.

By 2019, that projected future was failing. “Overpromised and underdelivered,” ran an IEEE Spectrum headline. IBM is still trying, and is hoping for success with cancer diagnosis.

Meanwhile, Watson has a new (marketing) role: analyzing the draw and providing audio and text commentary for back-court tennis matches at Wimbledon and for highlights clips. For each match, Watson also calculates the competitors’ chances of winning and the favorability of their draw. For a veteran tennis watcher, it’s unsatisfying, though: IBM offers only a black box score, and nothing to show how that number was reached. At least human commentators tell you – albeit at great, repetitive length – the basis of their reasoning.

Illustrations: IBM’s Watson, which beat two of Jeopardy‘s greatest champions in 2011.


The horns of a dilemma

It has always been possible to conceive a future for Mastodon and the Fediverse that goes like this: incomers join the biggest servers (“instances”). The growth of those instances, if they can afford it, accelerates. When the sysadmins of smaller instances burn out and withdraw, their users also move to the largest instances. Eventually, the Fediverse landscape is dominated by a handful of very large instances (who enshittify in the traditional way) with a long tail of small and smaller ones. The very large ones begin setting rules – mostly for good reasons like combating abuse, improving security, and offering new features – that the very small ones struggle to keep up with. Eventually, it becomes too hard for most small instances to function.

This is the history of email. In 2003, when I set up my own email server at home, almost every techie had one. By this year, when I decommissioned it in favor of hosted email, almost everyone had long since moved to Gmail or Hotmail. It’s still possible to run an independent server, but the world is increasingly hostile to them.

Another possible Fediverse future: the cultural norms that Mastodon and other users have painstakingly developed over time become swamped by a sudden influx of huge numbers of newcomers when a very large instance joins the federation. The newcomers, who know nothing of the communities they’re joining, overwhelm their history and culture. The newcomers are despised and mocked – but meanwhile, much of the previous organically grown culture is lost, and people wanting intelligent conversation leave to find it elsewhere.

This is the history of Usenet, which in 1994 struggled to absorb 1 million AOLers arriving via a new gateway and software whose design reflected AOL’s internal design rather than Usenet’s history and culture. The result was to greatly exacerbate Usenet’s existing problems of abuse.

A third possible Fediverse future: someone figures out how to make money out of it. Large and small instances continue to exist, but many become commercial enterprises, and small instances increasingly rely on large instances to provide services the small instances need to stay functional. While both profit from that division of labor, the difficulty of discovery means small servers stay small, and the large servers become increasingly monopolistic, exploitative, and unpleasant to use. This is the history of the web, with a few notable exceptions such as Wikipedia and the Internet Archive.

A fourth possible future: the Fediverse remains outside the mainstream, and admins continue to depend on donations to maintain their servers. Over time, the landscape of servers will shift as some burn out or run out of money and are replaced. This is roughly the history of IRC, which continues to serve its niche. Many current Mastodonians would be happy with this; as long as there’s no corporate owner no one can force anyone out of business for being insufficiently profitable.

These forking futures are suddenly topical as Mastodon administrators consider how to respond to this: Facebook will launch a new app that will interoperate with Mastodon and any other network that uses the ActivityPub protocol. Early screenshots suggest a clone of Twitter, Meta’s stated target, and reports say that Facebook is talking to celebrities like Oprah Winfrey and the Dalai Lama as potential users. The plan is reportedly that users will access the new service via their Instagram IDs and passwords. Top-down and celebrity-driven is the opposite of the Fediverse.

It should not be much comfort to anyone that the competitor the company wants to kill with this initiative is Twitter, not Mastodon, because either way Meta doesn’t care about Mastodon and its culture. Mastodon is a rounding error even for just Instagram. Twitter is also comparatively small (and, like Reddit, too text-based to grow much further) but Meta sees in it the opportunity to capture its influencers and build profits around them.

The Fediverse is a democracy in the sense that email and Usenet were; admins get to decide their server’s policy, and users can only accept or reject by moving their account (which generally loses their history). For admins, how to handle Meta is not an easy choice. Meta has approached the admins of some of the larger Mastodon instances for discussions; they must sign an NDA or give up the chance to influence developments. That decision is for the largest few; but potentially every Mastodon instance operator will have to decide the bigger question: do they federate with Meta or not? Refusal means their users can’t access Meta’s wider world, which will inevitably include many of their friends; acceptance means change and loss of control. As I’ve said here before, something that is “open” only to your concept of “good people” isn’t open at all; it’s closed.

At Chronicles of the Instantly Curious, Carey Lening deplores calls to shun Meta as elitist; the AOL comparison draws itself. Even so, the more imminent bad future for Mastodon is the possibility that this is the fork that could split the Fediverse into two factions. Of course the point of being decentralized is to allow more choice over who you socially network with. But until now, none of those choices took on the religious overtones associated with the most heated cyberworld disputes. Fasten your seatbelts…

Illustrations: A mastodon by Heinrich Harder (public domain, via Wikimedia).


Own goals

There’s no point in saying I told you so when the people you’re saying it to got the result they intended.

At the Guardian, Peter Walker reports the Electoral Commission’s finding that at least 14,000 people were turned away from polling stations in May’s local elections because they didn’t have the right ID as required under the new voter ID law. The Commission thinks that’s a huge underestimate; 4% of people who didn’t vote said it was because of voter ID – which Walker suggests could mean 400,000 were deterred. Three-quarters of those lacked the right documents; the rest opposed the policy. The demographics of this will be studied more closely in a report due in September, but early indications are that the policy disproportionately deterred people with disabilities, people from certain ethnic groups, and people who are unemployed.

The fact that the Conservatives, who brought in this policy, lost big time in those elections doesn’t change its wrongness. But it did lead the MP Jacob Rees-Mogg (Con-North East Somerset) to admit that this was an attempt to gerrymander the vote that backfired because older voters, who are more likely to vote Conservative, also disproportionately don’t have the necessary ID.

***

One of the more obscure sub-industries is the business of supplying ad services to websites. One such little-known company is Criteo, which provides interactive banner ads that are generated based on the user’s browsing history and behavior using a technique known as “behavioral retargeting”. In 2018, Criteo was one of seven companies listed in a complaint Privacy International and noyb filed with three data protection authorities – the UK, Ireland, and France. In 2020, the French data protection authority, CNIL, launched an investigation.

This week, CNIL issued Criteo with a €40 million fine over failings in how it gathers user consent, a ruling noyb calls a major blow to Criteo’s business model.

It’s good to see the legal actions and fines beginning to reach down into adtech’s underbelly. It’s also worth noting that the CNIL was willing to fine a *French* company to this extent. It makes it harder for the US tech giants to claim that the fines they’re attracting are just anti-US protectionism.

***

Also this week, the US Federal Trade Commission announced it’s suing Amazon, claiming the company enrolled millions of US consumers into its Prime subscription service through deceptive design and sabotaged their efforts to cancel.

“Amazon used manipulative, coercive, or deceptive user-interface designs known as ‘dark patterns’ to trick consumers into enrolling in automatically-renewing Prime subscriptions,” the FTC writes.

I’m guessing this is one area where data protection laws have worked. In my UK-based ultra-brief Prime outings to watch the US Open tennis, canceling has taken at most two clicks. I don’t recognize the tortuous process Business Insider documented in 2022.

***

It has long been no secret that the secret behind AI is human labor. In 2019, Mary L. Gray and Siddharth Suri documented this in their book Ghost Work. Platform workers label images and other content, annotate text, and solve CAPTCHAs to help train AI models.

At MIT Technology Review, Rhiannon Williams reports that platform workers are using ChatGPT to speed up their work and earn more. A team of researchers from the Swiss Federal Institute of Technology found in a study (PDF) that between 33% and 46% of the 44 workers they tested with a request to summarize 16 extracts from medical research papers used AI models to complete the task.

It’s hard not to feel a little gleeful that today’s “AI” is already eating itself via a closed feedback loop. It’s not good news for platform workers, though, because the most likely consequence will be increased monitoring to force them to show their work.

But this is yet another case in which computer people could have learned from their own history. In 2008, researchers at Google published a paper suggesting that Google search data could be used to spot flu outbreaks. Sick people searching for information about their symptoms could provide real-time warnings ten days earlier than the Centers for Disease Control could.

This actually worked, some of the time. However, as Kaiser Fung reported at Harvard Business Review in 2014, as early as 2009 Google Flu Trends missed the swine flu pandemic; in 2012, researchers found that it had overestimated the prevalence of flu for 100 out of the previous 108 weeks. More data is not necessarily better, Fung concluded.

In 2013, as David Lazer and Ryan Kennedy reported for Wired in 2015 in discussing their investigation into the failure of this idea, GFT missed by 140% (without explaining what that means). Lazer and Kennedy find that Google’s algorithm was vulnerable to poisoning by unrelated seasonal search terms and search terms that were correlated purely by chance, and that it failed to take into account changing user behavior, as when Google introduced autosuggest and added health-related search terms. The “availability” cognitive bias also played a role: when flu is in the news, searches go up whether or not people are sick.

While the parallels aren’t exact, large language modelers could have drawn the lesson that users can poison their models. ChatGPT’s arrival for widespread use will inevitably thin out the proportion of text that is human-written – and taint the well from which LLMs drink. Everyone imagines the next generation’s increased power. But it’s equally possible that the next generation will degrade as the percentage of AI-generated data rises.

Illustrations: Drunk parrot seen in a Putney garden (by Simon Bisson).


All change

One of the reasons Silicon Valley technology company leaders sometimes display such indifference to the desires of their users is that they keep getting away with it. At Facebook, now Meta, after each new privacy invasion, the user base just kept getting bigger. At Twitter, despite much outrage at its new owner’s policies, although it feels definitely emptier, the exodus toward other sites appears to have dropped off. At Reddit, where CEO Steve Huffman has used the term “landed gentry” to denigrate moderators leading protests against a new company policy…well, we’ll see.

In April, Reddit announced it would begin charging third parties for access to its API, the interface that gives computers outside its system access to the site’s data. Charges will apply to everyone except developers building apps and bots that help people use Reddit and academic/non-commercial researchers studying Reddit.

In May, the company announced pricing: $12,000 per 50 million requests. This compares to Twitter’s recently announced $42,000 per 50 million tweets and photo site Imgur’s $166 per 50 million API calls. Apollo, maker of the popular iOS Reddit app, estimates that it would now cost $20 million a year to keep its app running.

The reasoning behind this could be summed up as, “They cost us real money; why should we help them?” Apollo’s app is popular, it appears, because it offers a cleaner interface. But it also eliminates Reddit’s ads, depriving the site of revenue. Reddit is preparing for an IPO later this year against stiff headwinds.

A key factor in this timing is the new gold rush around large language models, which are being built by scraping huge amounts of text anywhere they can find it. Taking “our content”, Huffman calls it, suggesting Reddit deserves to share in the profits while eliding the fact that said content is all user-created.

This week, thousands of moderators shuttered their forums (subreddits) in protest. At The Verge, Jay Peters reports that more than 8,000 (out of 138,000) subreddits went dark for 48 hours from Monday to Wednesday. Given Huffman’s the-blackout-will-pass refusal to budge, some popular forums have vowed to continue the protest indefinitely.

Some redditors have popped up on other social media to ask about viable alternatives (they’re also discussing this question on Reddit itself). But moving communities is hard, which is why these companies understand their users’ anger is rarely an existential threat.

The most likely outcome is that redditors are about to confront the fate that eventually befalls almost every online community: the people they *thought* cared about them are going to sell them to people who *don’t* care about them. Reddit as they knew it is entering a phase of precarity that history says will likely end with the system’s shutdown or abandonment. Shareholders’ and owners’ desire to cash out and indifference to Twitter’s actual users is how Elon Musk ended up in charge. It’s how NBC Universal shut down Television without Pity, how Yahoo killed GeoCities, and how AOL spitefully dismantled CompuServe.

The lesson from all of these is: shareholders and corporate owners don’t have to care about users.

The bigger issue, however, is that Reddit, like Twitter, is not currently a sustainable business. Founded in 2005, it was a year old when Conde Nast bought it, only to spin it out again into an independent subsidiary in 2011. Since then it has held repeated funding rounds, most recently in 2021, when it raised $700 million. Since its IPO filing in December 2021, its value has dropped by a third. It will not survive in any form without new sources of revenue; it’s also cutting costs with layoffs.

Every Internet service or site, from Flickr to bitcoin, begins with founders and users sharing the same goal: for the service to grow and prosper. Once the service has grown past a certain point, however, their interests diverge. Users generally seek community, entertainment, and information; investors only seek profits. The need to produce revenues led Google’s chiefs, who had previously held that ads would inevitably corrupt search results, to hire Sheryl Sandberg to build the company’s ad business. Seven years later, facing the same problem, Facebook did the same thing – and hired the same person to do it. Reddit has taken much longer than most Internet companies to reach this inevitable fork.

Yet the volunteer human moderators Huffman derided are the key to Reddit’s success; they set the tone in each subreddit community. Reddit’s topic-centered design means much more interaction with strangers than the person-centered design of blogs and 2010-era social media, but it also allows people with niche interests to find both experts and each other. That fact plus human curation means that lately many add “reddit” to search terms in order to get better results. Reddit users’ loss is therefore also our loss as we try to cope with the enshittification of the most monopolistic Internet services.

Its board still doesn’t have to care.

None of this is hopeful. Even if redditors win this round and find some compromise to save their favorite apps, once the IPO is past, any power they have will be gone.

“On the Internet your home will always leave you,” someone observed on Twitter a couple of years ago. I fear that moment is now coming for Reddit. Next time, build your community in a home you can own.

Illustration: Reddit CEO and co-founder Steve Huffman speaking at the Oxford Union in 2019.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

Snowden at ten

As almost every media outlet has headlined this week, it is now ten years since Edward Snowden alerted the world to the real capabilities of the spy agencies, chiefly but not solely the US National Security Agency. What is the state of surveillance now? most of the stories ask.

Some samples: at the Open Rights Group, executive director Jim Killock summarizes what Snowden revealed; Snowden is interviewed; the Guardian’s editor at the time, Alan Rusbridger, recounts events at the Guardian, which co-published Snowden’s discoveries with the Washington Post; journalist Heather Brooke warns of the increasing sneakiness of government surveillance; and Jessica Lyons Hardcastle outlines the impact. Finally, at The Atlantic, Ewen MacAskill, one of the Guardian journalists who worked on the Snowden stories, says only about 1% of Snowden’s documents were ever published.

As has been noted here recently, it seems as though everywhere you look surveillance is on the rise: at work, on privately controlled public streets, and everywhere online by both government and commercial actors. As Brooke writes and the Open Rights Group has frequently warned, surveillance that undermines the technical protections we rely on puts us all in danger.

The UK went on to pass the Investigatory Powers Act, which basically legalized what the security services were doing, but at least did add some oversight. US courts found that the NSA had acted illegally and in 2015 Congress made bulk collection of Americans’ phone records illegal. But, as Bruce Schneier has noted, Snowden’s cache of documents was aging even in 2013; now they’re just old. We have no idea what the secret services are doing now.

The impact in Europe was significant: in 2016 the EU adopted the General Data Protection Regulation. Until Snowden, data protection reform looked like it might wind up watering down data protection law in response to an unprecedented amount of lobbying by the technology companies. Snowden’s revelations raised the level of distrust and also gave Max Schrems some additional fuel in bringing his legal actions against EU-US data deals and US corporate practices that leave EU citizens open to NSA snooping.

The really interesting question is this: what have we done *technically* in the last decade to limit government’s ability to spy on us at will?

Work on this started almost immediately. In early 2014, the World Wide Web Consortium and the Internet Engineering Task Force teamed up on a workshop called Strengthening the Internet Against Pervasive Monitoring (STRINT). Observing the proceedings led me to compare the size of the task ahead to boiling the ocean. The mood of the workshop was united: the NSA’s actions as outlined by Snowden constituted an attack on the Internet and everyone’s privacy, a view codified in RFC 7258, which outlined the plan to mitigate pervasive monitoring. The workshop also published an official report.

Digression for non-techies: “RFC” stands for “Request for Comments”. The thousands of RFCs since 1969 include technical specifications for Internet protocols, applications, services, and policies. The title conveys the process: they are published first as drafts and incorporate comments before being finalized.

The crucial point is that the discussion was about *passive* monitoring, the automatic, ubiquitous, and suspicionless collection of Internet data “just in case”. As has been said so many times about backdoors in encryption, the consequence of poking holes in security is to make everyone much more vulnerable to attacks by criminals and other bad actors.

So a lot of that workshop was about finding ways to make passive monitoring harder. Obviously, one method is to eliminate vulnerabilities, especially those the NSA planted. But it’s equally effective to make monitoring more expensive. Given the law of truly large numbers, even a tiny extra cost per user creates unaffordable friction. They called it a ten-year project, which takes us to…almost now.

Some things have definitely improved, largely through the expanded use of encryption to protect data in transit. On the web, Let’s Encrypt, now ten years old, makes it easy and cheap to obtain a certificate for any website. Search engines contribute by favoring encrypted (that is, HTTPS) web links over unencrypted ones (HTTP). Traffic between email servers has gone from being transmitted in cleartext to being almost all encrypted. Mainstream services like WhatsApp have added end-to-end encryption to the messaging used by billions. Other efforts have sought to reduce the use of fixed long-term identifiers such as MAC addresses that can make tracking individuals easier.

At the same time, even where there are data protection laws, corporate surveillance has expanded dramatically. And, as has long been obvious, governments, especially democratic governments, have little motivation to stop it. Data collection by corporate third parties does not appear in the public budget, does not expose the government to public outrage, and is available via subpoena any time government officials want. If you are a law enforcement or security service person, this is all win-win; the only data you can’t get is the data that isn’t collected.

In an essay, currently circulating in draft, that assesses the ten years of work STRINT began, convenor Stephen Farrell writes, “So while we got a lot right in our reaction to Snowden’s revelations, currently, we have a ‘worse’ Internet.”

Illustrations: Edward Snowden, speaking to Glenn Greenwald in a screenshot from Laura Poitras’ film Prism from Praxis Films (via Wikimedia).

Review: A Hacker’s Mind

A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend them Back
by Bruce Schneier
Norton
ISBN: 978-0-393-86666-7

One of the lessons of the Trump presidency has been how much of the US government runs on norms that have developed organically over the republic’s 247-year history. Trump felt no compunction about breaking those norms. In computer security parlance, he hacked the system by breaking those norms in ways few foresaw or thought possible.

This is the kind of global systemic hacking Bruce Schneier explores in his latest book, A Hacker’s Mind. Where most books on this topic limit their focus to hacking computers, Schneier opts to start with computer hacking, use it to illustrate the hacker’s habit of mind, and then find that mindset in much larger and more consequential systemic abuses. In his array of hacks by the rich and powerful, Trump is a distinctly minor player.

First, however, Schneier introduces computer hacking from the 1980s onward. In this case, “hacking” is defined in the old way: active subversion of a system to make it do things its designers never intended. In the 1980s, “hacker” was a term of respect applied to you by others admiring your cleverness. It was only in the 1990s that common usage equated hacking with committing crimes with a computer. In his 1984 book Hackers, Steven Levy showed this culture in action at MIT. It’s safe to say that without hacks we wouldn’t have the Internet.

The hacker’s habit of mind can be applied to far more than just technology. It can – and is today being used to – subvert laws, social norms, financial systems, politics, and democracy itself. This is Schneier’s main point. You can draw a straight line from technological cleverness to Silicon Valley’s “disrupt” to the aphorism coined by Georgetown law professor Julie Cohen, whom Schneier quotes: “Power interprets regulation as damage, and routes around it”.

In the first parts of the book he discusses the impact of system vulnerabilities and the basic types of response one can make. In a compact amount of space, he covers patching, hardening, and simplifying systems, evaluating threat models as they change, and limiting the damage the hack can cause. Or, the hack may be normalized, becoming part of our everyday landscape.

Then he gets serious. In the bulk of the book, he explores applications: hacking financial, legal, political, cognitive, and AI systems. Specialized AI – Schneier wisely avoids the entirely speculative hype and fear around artificial general intelligence – is both exceptionally vulnerable to hacks and an exceptional vector for them. Anthropomorphic robots especially can be designed to hack our emotional responses.

“The rich are better at hacking,” he observes. They have greater resources, more powerful allies, and better access. If the good side of hacking is innovation, the bad side is societal damage, increasing unfairness and inequality, and the subversion of the systems we used to trust. Schneier believes all of this will get worse because today’s winners have so much ability to hack what’s left. Hacking, he says, is an existential threat. Nonetheless, he has hope: we *can* build resilient governance structures. We must hack hacking.

Microsurveillance

“I have to take a photo,” the courier said, raising his mobile phone to snap a shot of the package on the stoop in front of my open doorway.

This has been the new thing. I guess the spoken reason is to ensure that the package recipient can’t claim that it was never delivered, protecting all three of the courier, the courier company, and the shipper from fraud. But it feels like the unspoken reason is to check that the delivery guy has faithfully completed his task and continued on his appointed round without wasting time. It feels, in other words, like the delivery guy is helping the company monitor him.

I say this, and he agrees. I had, in accordance with the demands of a different courier, pinned a note to my door authorizing the deliverer to leave the package on the doorstep in my absence. “I’d have to photograph the note,” he said.

I mentioned American truck drivers, who are pushing back against in-cab cameras and electronic monitors. “They want to do that here, too,” he said. “They want to put in dashboard cameras.” Since then, in at least some cases – for example, Amazon – they have.

Workplace monitoring was growing in any case, but, as noted in 2021, the explosion in remote working brought by the pandemic normalized a level of employer intrusion that might have been more thoroughly debated in less fraught times. The Trades Union Congress reported in 2022 that 60% of employees had experienced being tracked in the previous year. And once in place, the habit of surveillance is very hard to undo.

When I was first thinking about this piece in 2021, many of these technologies were just being installed. Two years later, there’s been time for a fight back. One such story comes from the France-based company Teleperformance, one of those obscure, behind-the-scenes suppliers to the companies we’ve all heard of. In this case, the company in the shadows supplies remote customer service workers; its UK clients alone include the government’s health and education departments, NHS Digital, the RAF and Royal Navy, and the Student Loans Company, as well as Vodafone, eBay, Aviva, Volkswagen, and the Guardian itself. Some of Teleperformance’s Albanian workers provide service to Apple UK.

In 2021, Teleperformance demanded that remote workers in Colombia install in-home monitoring and included a contract clause requiring them to accept AI-powered cameras with voice analytics in their homes and allowing the company to store data on all members of the worker’s family. An earlier attempt at the same thing in Albania failed when the Information and Data Protection Commissioner stepped in.

Teleperformance tried this in the UK, where the unions warned about the normalization of surveillance. The company responded that the cameras would only be used for meetings, training, and scheduled video calls so that supervisors could check that workers’ desks were free of devices deemed to pose a risk to data security. Even so, in August 2021 Teleperformance told Test and Trace staff to limit breaks to ten minutes in a six-hour shift and to select “comfort break” on their computers (so they wouldn’t be paid for that time).

Other stories from the pandemic’s early days show office workers being forced to log in with cameras on for a daily morning meeting or stay active on Slack. Amazon has plans to use collected mouse movements and keystrokes to create worker profiles to prevent impersonation. In India, the government itself demanded that its accredited social health activists install an app that tracks their movements via GPS and monitors their uses of other apps.

More recently, Politico reports that Uber drivers must sign in with a selfie; they will be banned if the facial recognition verification software fails to find a match.

This week, at the Guardian, Clea Skopeliti reported on the current state of workplace monitoring. In one of her examples, monitoring software calculates “activity scores” based on typing and mouse movements – so participating in Zoom meetings, watching work-related video clips, and thinking don’t count. Young people, women, and minority workers are more likely to be surveilled.

One employee Skopeliti interviews takes unpaid breaks to carve out breathing space in which to work; another reports having to explain the length of his toilet breaks. A third, an English worker in social housing, reports that his vehicle is tracked so closely that a manager phones if he seems to be in the wrong place or taking too long.

This is a surveillance-breeds-distrust-breeds-more-surveillance cycle. As Ellen Ullman long ago observed, systems infect their owners with the desire to do more and more with them. It will take time for employers to understand the costs in worker burnout, staff turnover, and absenteeism.

One way out is through enforcing the law: in 2020, the ICO investigated Barclays Bank, which was accused of spying on staff via software that tracked how they spent their time; the bank dropped it. In many of these stories, however, the surveillance suppliers say they operate within the law.

The more important way out is worker empowerment. In Colombia, Teleperformance has just guaranteed its 40,000 workers the right to form a union.

Above all, crucially, we need to remember that surveillance is not normal.

Illustrations: The boss tells Charlie Chaplin to get back to work in Modern Times (1936).

The arc of surveillance

“What is the point of introducing contestability if the system is illegal?” a questioner asked, more or less, at this year’s Computers, Privacy, and Data Protection conference.

This question could have been asked in any number of sessions where tweaks to surface problems leave the underlying industry undisturbed. In fact, the questioner raised it during the panel on enforcement, GDPR, and the newly-in-force Digital Markets Act. Maria Luisa Stasi explained the DMA this way: it’s about business models. It’s a step into a deeper layer.

The key question: will these new laws – the DMA, the recent Digital Services Act, which came into force in November, the in-progress AI Act – be enforced better than GDPR has been?

The frustration has been building all five years of GDPR’s existence. Even though this week, Meta was fined €1.2 billion for transferring European citizens’ data to the US, Noyb reports that 85% of its 800-plus cases remain undecided, 58% of them for more than 18 months. Even that €1.2 billion decision took ten years, €10 million, and three cases against the Irish Data Protection Commissioner to push through – and will now be appealed. Noyb has an annotated map of the various ways EU countries make litigation hard. The post-Snowden political will that fueled GDPR’s passage has had ten years to fade.

It’s possible to find the state of privacy circa 2023 depressing. In the 30ish years I’ve been writing about privacy, numerous laws have been passed, privacy has become a widespread professional practice and area of study in numerous fields, and the number of activists has grown from a literal handful to tens of thousands around the world. But overall the big picture is one of escalating surveillance of all types and by all sorts of players. At the 2000 Computers, Freedom, and Privacy conference, Neal Stephenson warned not to focus on governments. Watch the “Little Brothers”, he said. Google was then a tiny self-funded startup, and Mark Zuckerberg was 16. Stephenson was prescient.

And yet, that surveillance can be weirdly patchy. In a panel on children online, Leanda Barrington-Leach noted platforms’ selective knowledge: “How do they know I like red Nike trainers but don’t know I’m 12?” A partial answer came later: France’s CNIL has looked at age verification technologies and concluded that none are “mature enough” to both do the job and protect privacy.

In a discussion of deceptive practices, paraphrasing his recent paper, Mark Leiser pinpointed a problem: “We’re stuck with a body of law that looks at online interface as a thing where you look for dark patterns, but there’s increasing evidence that they’re being embedded in the systems architecture underneath and I’d argue we’re not sufficiently prepared to regulate that.”

As a response, Woody Hartzog and Neil Richards have proposed the concept of “data loyalty”. Similar to a duty of care, the “loyalty” in this case is owed by the platform to its users. “Loyalty is the requirement to make the interests of the trusted party [the platform] subservient to those of the trustee or vulnerable one [the user],” Hartzog explained. And the more vulnerable you are the greater the obligation on the powerful party.

The tone was set early with a keynote from Julie Cohen that highlighted structural surveillance and warned against accepting the Big Tech mantra that more technology naturally brings improved human social welfare.

“What happens to surveillance power as it moves into the information infrastructure?” she asked. Among other things, she concluded, it disperses accountability, making it harder to challenge but easier to embed. And once embedded, well…look how much trouble people are having just digging Huawei equipment out of mobile networks.

Cohen’s comments resonate. A couple of years ago, when smart cities were the hot emerging technology, it became clear that many of the hyped ideas were only really relevant to large, dense urban areas. In smaller cities, there’s no scope for plotting more efficient delivery routes, for example, because there aren’t enough options. As a result, congestion is worse in a small suburban city than in Manhattan, where parallel routes draw off traffic. But even a small town has scope for surveillance, and so some of us concluded that this was the technology that would trickle down. This is exactly what’s happening now: the Fusus technology platform even boasts openly of bringing the surveillance city to the suburbs.

Laws will not be enough to counter structural surveillance. In a recent paper, Cohen wrote, “Strategies for bending the arc of surveillance toward the safe and just space for human wellbeing must include both legal and technical components.”

New approaches are also emerging, as an unusual panel on sustainability showed, prompted by the computational and environmental costs of today’s AI. This discussion suggested a new convergence: the intersection, as Katrin Fritsch put it, of digital rights, climate justice, infrastructure, and sustainability.

In the deception panel, Rosamunde van Brakel similarly said we need to adopt a broader conception of surveillance harm, one that includes social harms, risks to society and democracy, and the climate impact of using all these technologies. Surveillance, in other words, has environmental costs that everyone has ignored.

I find this convergence hopeful. The arc of surveillance won’t bend without the strength of allies.

Illustrations: CCTV camera at 22 Portobello Road, London, where George Orwell lived.
