Inappt

Recently, it took a flatwoven wool rug more than two weeks to travel from Luton, Bedfordshire to southwest London. The rug’s source – an Etsy seller – and I exchanged dozens of messages. It would be there tomorrow. Oh, no, the courier now says Wednesday. Um, Friday. Er, next week. I can send you a different rug, if you want to choose one. No.

In the end, the rug arrived into my life. I don’t dare decide it’s the wrong color.

I would dismiss this as a one-off aberration, except that a few weeks ago the intended recipient of a parcel sent at the beginning of November casually mentioned they had never received it. Upon chasing, the courier company replied: “Despite an extensive investigation, we have not been able to locate your parcel.”

I would dismiss those as a two-off aberration except that late last year the post office tracking on yet another item went on showing it stuck in some unidentifiable depot somewhere for two weeks. Eventually, I applied brain and logic and went down to the nearest delivery office and there it was, waiting for me to pay the customs fee specified on the card I never received. It was only a few days away from being sent back.

And I would dismiss those as a three-off aberration except that two weeks ago I was notified to expect a package between 7pm and 9pm from a company whose name I didn’t recognize. I therefore felt perfectly safe to go into the room furthest from the front door, the kitchen, and wash some dishes at 5:30. Nope. They delivered at 5:48, I didn’t hear them, and I had a hard time figuring out whom to contact to persuade them to redeliver.

The point about all this is not to yell at random couriers to get off my lawn but to note that at least this part of the app-based economy has stopped delivering the results it promised. Less than ten years since these companies set out to disrupt delivery services by providing lower prices, accurate information, on-time deliveries, and constant tracking, we’re back to waiting at home for unspecified numbers of hours wondering if they’re going to show and struggling to trace lost packages. Only this time, there’s no customer service, working conditions and pay are much worse for drivers and delivery folk, and the closure of many local outlets has left us all far more dependent on them.

***

Also falling over this week, as widely reported (because: journalists), was Twitter, which for a time on Wednesday barred new tweets unless they were posted via the kind of scheduling software that the site is limiting. Many of us have been expecting outages ever since November, when Charlie Warzel at The Atlantic and Chris Stokel-Walker at MIT Technology Review interviewed Twitter engineers past and present. All of them warned that the many staff cuts and shrinking budgets have left the service undersupplied with people who can keep the site running and that outages of increasing impact should be expected.

Nonetheless, the “Apocalypse, Now!” reporting that ensued was about as sensible as the reporting earlier in the week that the Fediverse was failing to keep the Tweeters who flooded there beginning in November. In response, Mike Masnick noted at TechDirt how silly this was (https://www.techdirt.com/2023/02/08/lazy-reporters-claiming-fediverse-is-slumping-despite-massive-increase-in-usage/). Because: 1) there’s a lot more to the Fediverse than just Mastodon, which is all these reporters looked at; 2) even then, Mastodon had lost a little from its peak but was still vastly more active than before November; 3) it’s hard for people to change their habits, and they will revert to what’s familiar if they don’t see a reason why they can’t; and 4) it’s still early days. So, meh.

However, Zeynep Tufekci reminds us that Twitter’s outage is entertainment only for the privileged; for those trying to coordinate rescue and aid efforts for Turkey, Twitter is an essential tool.

***

While we’re sniping at the failings of current journalism, it appears that yet another technology has been overhyped: DoNotPay, “the world’s first robot lawyer”, the bot written by a British university student that has supposedly been helping folks successfully contest traffic tickets. Masnick (again) and Kathryn Tewson have been covering the story for TechDirt. Tewson, a paralegal, has taken advantage of the fact that cities publish their parking ticket data in order to study DoNotPay’s claims in detail.

TechDirt almost ran a skeptical article about the service in 2017. Suffice to say that now Masnick concludes, “I wish that DoNotPay actually could do much of what it claims to do. It sounds like it could be a really useful service…”

***

The pile-up of this sort of thing – apps that disrupt and then degrade service, technology that’s overhyped (see also self-driving cars), flat-out fraud (see cryptocurrencies), breathless media reporting of nothing much – is probably why I have been unable to raise any excitement over the wow-du-jour, ChatGPT. It seems obvious that of course it can’t read, and can’t understand anything it’s typing, and that sober assessment of what it might be good for is some way off. In the New Yorker, Ted Chiang puts it in its place: think of it as a blurred JPEG. Sounds about right.

Illustrations: Drunk parrot (taken by Simon Bisson).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard – or follow on Mastodon or Twitter.

Disequilibrium

“Things like [the Net] tend to be self-balancing,” (then) IBM security engineer David Chess tells Andrew Leonard at the end of his 1997 book Bots: The Origin of New Species. “If some behavior gets so out of control that it really impacts the community, the community responds with whatever it takes to get back to an acceptable equilibrium. Organic systems are like that.”

When Leonard was writing, Usenet was the largest social medium. Quake was the latest hot video game, and text-only multi-player games were still mainstream. CompuServe and AOL were competing to be the biggest commercial information service. In pocket computers, the Palm Pilot was a year old and selling by the million. And: everyone still used modems; broadband trials were two years away.

It was also a period of what Leonard calls “decentralized anarchy”: that is, the web was new and open (and IRC and Usenet were old and open), and it was reasonable to predict that bots would be the newest wave of personal empowerment.

Here in 2023, we’ve spent the last ten years complaining about the increasing centralization of the web, and although bots are in fact all around us, no service provides the kind of tools that would allow the technologically limited to write them and dispatch them to do our bidding. In fact, on the corporately-owned web, bots only exist if some large company agrees they may. Yesterday, Twitter decided it doesn’t agree to their existence any more, at least not for free; as of February 9 developers must pay for access to Twitter’s application programming interface, which was free until now. Pricing is yet to be announced.

APIs are gateways through which computer programs can interoperate. Twitter’s APIs allow developers to build apps that let users analyze their social graph, block abuse, manage ad campaigns, log in to other sites across the web, and, lately, find and connect with the people in their Twitter lists who are also on Mastodon; they also enable researchers to study online behavior and make possible apps that roll threads into a correctly ordered single page, and many more, as Jess Weatherbed explains at The Verge. Many of these uses are not revenue-generating and are not intended to be; most will likely shut down. It will be a fascinating chance to discover what bots have actually been doing for us on the service. Their absence will expose Twitter’s bare bones.
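To make the idea concrete: the kind of access Twitter is now putting behind a paywall is, at bottom, just an authenticated web request. Below is a minimal sketch, using only Python’s standard library, of how a third-party tool would assemble such a request. The endpoint path and bearer-token header follow Twitter’s published v2 conventions, but the account name and token are placeholders, and the request is deliberately built without being sent.

```python
import urllib.parse
import urllib.request


def build_recent_search_request(query: str, bearer_token: str) -> urllib.request.Request:
    """Construct (but do not send) a v2 recent-tweet search request.

    The query string is URL-encoded; authentication is a simple
    "Authorization: Bearer <token>" header, which is what Twitter will
    now charge developers for.
    """
    url = ("https://api.twitter.com/2/tweets/search/recent?query="
           + urllib.parse.quote(query))
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {bearer_token}"},
    )


# "SomeAccount" and the token are hypothetical placeholders.
req = build_recent_search_request("from:SomeAccount", "PLACEHOLDER_TOKEN")
print(req.full_url)
```

Everything from thread-unrollers to research scrapers reduces to variations on this one pattern, which is why a price on the API is a price on all of them at once.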

However, as Charles Arthur writes at his Social Warming Substack, the move won’t deter the *other* kind of bots – that is, the ones people complain about: paid influencers, scammers, automated accounts, and so on, which can’t be killed at scale and aren’t using the API.

This all follows Twitter’s move two weeks ago to block third-party clients without notice. Granted, Twitter needs money: its change of ownership loaded its balance sheet with debt, its ad revenues have reportedly plummeted, and its efforts to find new revenue streams are not going well. If fears of Twitter’s demise and the return of users previously banned for bad behavior weren’t enough to send users scrambling to other services, this new move, as Mike Masnick quips at TechDirt, seems perfectly designed to send even more users and developers to Mastodon, where openness is a founding principle and therefore where years of effort can’t be undone in a second by owner decree.

The really interesting question is not so much whether Twitter can survive as a closed-garden paywalled channel, which seems to be its direction of travel, but whether its enclosure represents the kind of disruption that Chess was talking about: one that becomes an inflection point. Earlier attempts to swim against the tide of centralization represented by Facebook and the rest of Web 2.0, such as the 2010-founded Diaspora, have never really caught fire.

It’s tempting to make tennis analogies: often, when a new champion becomes dominant a contributing factor is nerves or self-destruction on the part of the top players they have to beat. And right now there’s Twitter destroying its assets to suit the whims of a despotic owner, Facebook panic-spending to try to secure itself a future with technology it hopes will restore the company’s youthful glow, the ad market that supports all these companies shrinking, and governments setting privacy and antitrust laws to stun.

It’s also true that users are different now. The teens who lied about their ages to get onto Facebook in 2010 are in their mid-20s. An increasing number of the 40-something parents of today’s teens have had broadband Internet access their entire adult lives. The users exploding into the combination of smart phones and social media in 2010 needed much more help than they do today, help that slick user design provided. But part of that promise was also keeping users safe – and there the social media companies have failed in all directions and at all scales.

If this really is the moment where the Internet reverts to decentralized anarchy and rediscovers the joys of connecting without the data collection, intrusive advertising, and manipulation, governments will seek to reimpose control. And the laws to help them – for example, Britain’s Online Safety bill – are close to passage. This will be a rough ride.

Illustrations: The Twitter bird flying upside down.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard – or follow on Mastodon or Twitter.

Re-centralizing

But first, a housekeeping update. Net.wars has moved – to a new address and new blogging software. For details, see here. If you read net.wars via RSS, adjust your feed to https://netwars.pelicancrossing.net. Past posts’ old URLs will continue to work, as will the archive index page, which lists every net.wars column back to November 2001. And because of the move: comments are now open for the first time in probably about ten years. I will also shortly set up a mailing list for those who would rather get net.wars by email.

***

This week the Ada Lovelace Institute held a panel discussion of ethics for researchers in AI. Arguably, not a moment too soon.

At Noema magazine, Timnit Gebru writes, as Mary L. Gray and Siddharth Suri have previously, that what today passes for “AI” and “machine learning” is, underneath, the work of millions of poorly-paid marginalized workers who add labels, evaluate content, and provide verification. At Wired, Gebru adds that their efforts are ultimately directed by a handful of Silicon Valley billionaires whose interests are far from what’s good for the rest of us. That would be the “rest of us” who are being used, willingly or not, knowingly or not, as experimental research subjects.

Two weeks ago, for example, a company called Koko ran an experiment offering chatbot-written/human-overseen mental health counseling without informing the 4,000 people who sought help via the “Koko Cares” Discord server. In a Twitter thread, company co-founder Rob Morris said those users rated the bot’s responses highly until they found out a bot had written them.

People can build relationships with anything, including chatbots, as was proved in the 1960s with Joseph Weizenbaum’s experimental chatbot therapist Eliza. People found Eliza’s responses comforting even though they knew it was a bot. Here, however, informed consent processes seem to have been ignored. Morris’s response, when widely criticized for the unethical nature of this little experiment, was to say it was exempt from informed consent requirements because helpers could opt whether to use the chatbot’s responses and Koko had no plan to publish the results.

One would like it to be obvious that *publication* is not the biggest threat to vulnerable people in search of help. One would also like modern technology CEOs to have learned the right lesson from prior incidents such as Facebook’s 2012 experiment to study users’ moods when it manipulated their newsfeeds. Facebook COO Sheryl Sandberg apologized for *how the experiment was communicated*, but not for doing it. At the time, we thought that logic suggested that such companies would continue to do the research but without publishing the results. Though isn’t tweeting publication?

It seems clear that scale is part of the problem here, like the old saying, one death is a tragedy; a million deaths are a statistic. Even the most sociopathic chatbot owner is unlikely to enlist an experimental chatbot to respond to a friend or family member in distress. But once a screen intervenes, the thousands of humans on the other side are just a pile of user IDs; that’s part of how we get so much online abuse. For those with unlimited control over the system we must all look like ants. And who wouldn’t experiment on ants?

In that sense, the efforts of the Ada Lovelace panel to sketch out the diligence researchers should follow are welcome. But the reality of human nature is that it will always be possible to find someone unscrupulous to do unethical research – and the reality of business nature is not to care much about research ethics if the resulting technology will generate profits. Listening to all those earnest, worried researchers left me writing this comment: MBAs need ethics. MBAs, government officials, and anyone else who is in charge of how new technologies are used and whose decisions affect the lives of the people those technologies are imposed upon.

This seemed even more true a day later, at the annual activists’ gathering Privacy Camp. In a panel on the proliferation of surveillance technology at the borders, speakers noted that every new technology that could be turned to helping migrants is instead being weaponized against them. The Border Violence Monitoring Network has collected thousands of such testimonies.

The especially relevant bit came when Hope Barker, a senior policy analyst with BVMN, noted this problem with the forthcoming AI Act: accountability is aimed at developers and researchers, not users.

Granted, technology that’s aborted in the lab isn’t available for abuse. But no technology stays the same after leaving the lab; it gets adapted, altered, updated, merged with other technologies, and turned to uses the researchers never imagined – as Wendy Hall noted in moderating the Ada Lovelace panel. And if we have learned anything from the last 20 years it is that over time technology services enshittify, to borrow Cory Doctorow’s term in a rant which covers the degradation of the services offered by Amazon, Facebook, and soon, he predicts, TikTok.

The systems we call “AI” today have this in common with those services: they are centralized. They are technologies that re-advantage large organizations and governments because they require amounts of data and computing power that are beyond the capabilities of small organizations and individuals to acquire. We can only rent them or be forced to use them. The ur-evil AI, HAL in Stanley Kubrick’s 2001: A Space Odyssey, taught us to fear an autonomous rogue. But the biggest danger with “AIs” of the type we are seeing today, which are being put into decision making and law enforcement, is not the technology, nor the people who invented it, but the expanding desires of its controller.

Illustrations: HAL, in 2001.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns back to November 2001. Comment here, or follow on Mastodon or Twitter.

net.wars – 2023 and beyond

After 16 years, net.wars has a new site URL and new software. All the entries going back to 2006 will stay at the old site URL, but going forward you will find net.wars every Friday here, where you are now.

One consequence is that comments are now open (again, for those with very long memories). The old blog got so much spam that even putting up a post became a slow and difficult process, and the host recommended shutting them down. Another is that posts through last Friday, January 20, 2023, are static pages. To find old posts, either consult the archive page, which goes all the way back to November 2001, when net.wars began as a column for The Inquirer, or use DuckDuckGo (or some other search engine of your choice) to search.

Thanks for reading.