Faking it

I have finally figured out what benefit exTwitter gets from its new owner’s decision to strip out the headlines from linked third-party news articles: you cannot easily tell the difference between legitimate links and ads. Both have big unidentified pictures, and if you forget to look for the little “Ad” label at the top right or check the poster’s identity to make sure it’s someone you actually follow, it’s easy to inadvertently lessen the financial losses accruing to said owner by – oh, the shame and horror – clicking on that ad. This is especially true because the site has taken to injecting these ads with increasing frequency into the carefully curated feed that until recently didn’t have this confusion. Reader, beware.

***

In all the discussion of deepfakes and AI-generated bullshit texts, did anyone bring up the possibility of datafakes? Nature highlights a study in which researchers created a fake database to provide evidence for concluding that one of two surgical procedures is better than the other. This is nasty stuff. The rising number of retracted papers has already shown serious problems with peer review (problems that are not new, but are getting worse). To name just a couple: reviewers are unpaid and often overworked, and what they look for are scientific advances, not fraud.

In the UK, Ben Goldacre has spearheaded initiatives to improve the quality of published research. A crucial part of this is ensuring that researchers state in advance the hypothesis they’re testing, and publish the results of all trials, not just the ones that produce the researcher’s (or funder’s) preferred result.

Science is the best process we have for establishing an edifice of reliable knowledge. We desperately need it to work. As the dust settles on the week of madness at OpenAI, whose board was supposed to care more about safety than about its own existence, we need to get over being distracted by the dramas and the fears of far-off fantasy technology, and focus on the fact that the people running the biggest computing projects are, by and large, not paying attention to the real and imminent problems their technology is bringing.

***

Callum Cant reports at the Guardian that Deliveroo has won a UK Supreme Court ruling that its drivers are self-employed and accordingly do not have the right to bargain collectively for higher pay or better working conditions. Deliveroo apparently won this ruling because of a technicality – its insertion of a clause that allows drivers to send a substitute in their place, an option that is rarely used.

Cant notes the health and safety risks to the drivers themselves, but what about the rest of us? A driver in their tenth hour of a seven-day-a-week grind doesn’t just put themselves at risk; they’re a risk to everyone they encounter on the roads. The way these things are going, if safety becomes a problem, instead of raising wages to allow drivers a more reasonable schedule and some rest, the likelihood is that these companies will turn to surveillance technology, as Amazon has.

In the US, this is what’s happened to truck drivers, and, as Karen Levy documents in her book, Data Driven, it’s counterproductive. Installing electronic logging devices in truckers’ cabs has led older, more experienced, and, above all, *safer* drivers to leave the profession, to be replaced with younger, less experienced, and cheaper drivers with a higher appetite for risk. As Levy writes, improved safety won’t come from surveilling exhausted drivers; what’s needed is structural change to create better working conditions.

***

The UK’s Covid Inquiry has been livestreaming its hearings on government decision making for the last few weeks, and pretty horrifying they are, too. That’s true even if you don’t include former deputy chief medical officer Jonathan Van-Tam’s account of the threats of violence aimed at him and his family. They needed police protection for nine months and were advised to move out of their house – but didn’t want to leave their cat. Will anyone take the job of protecting public health if this is the price?

Chris Whitty, the UK’s Chief Medical Officer, said the UK was “woefully underprepared”, locked down too late, and made decisions too slowly. He was one of the polite ones.

Former special adviser Dominic Cummings (from whom no one expected politeness) said everyone called Boris Johnson a trolley, because, like a shopping trolley with the inevitable wheel pointing in the wrong direction, he was so inconsistent.

The government’s chief scientific adviser, Patrick Vallance, had kept a contemporaneous diary, which provided his unvarnished thoughts at the time, some of which were read out. Among them: Boris Johnson was obsessed with older people accepting their fate, unable to grasp the concept of doubling times or comprehend the graphs on the dashboard, and intermittently uncertain whether “the whole thing” was a mirage.

Our leader envy back in April 2020 seems to have been correctly placed. To be fair, though: Whitty and Vallance, citing their interactions with their counterparts in other countries, both said that most countries had similar problems, and for the same reason: the leaders of democratic countries are generally not well versed in science. As the Economist’s health policy editor, Natasha Loder, warned in early 2022: elect better leaders. Ask, she said, before you vote, “Are these serious people?” Words to keep in mind as we head toward the elections of 2024.

Illustrations: The medium Mina Crandon and the “materialized spirit hand” she produced during seances.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

The one hundred

Among the highlights of this week’s hearings of the Covid Inquiry were comments made by Helen MacNamara, who was the deputy cabinet secretary during the relevant time, about the effect of the lack of diversity. The absence of women in the room, she said, led to a “lack of thought” about a range of issues, including dealing with childcare during lockdowns, the difficulties encountered by female medical staff in trying to find personal protective equipment that fit, and the danger lockdowns would inevitably pose when victims of domestic abuse were confined with their abusers. Also missing was anyone who could have identified issues for ethnic minorities, disabled people, and other communities. Even the necessity of continuing free school lunches was lost on the wealthy white men in charge, none of whom were ever poor enough to need them. Instead, MacNamara said, they spent “a disproportionate amount” of their time fretting about football, hunting, fishing, and shooting.

MacNamara’s revelations explain a lot. Of course a group with so little imagination about or insight into other people’s lives would leave huge, gaping holes. Arrogance would ensure they never saw those as failures.

I was listening to this while reading posts on Mastodon complaining that this week’s much-vaunted AI Safety Summit was filled with government representatives and techbros, but weak on human rights and civil society. I don’t see any privacy organizations on the guest list, for example, and only the largest technology platforms need apply. Granted, the limit of 100 meant there wasn’t room for everyone. But these are all choices seemingly designed to make the summit look as important as possible.

From this distance, it’s hard to get excited about a bunch of bigwigs getting together to alarm us about a technology that, as even the UK government itself admits, may never happen – and most likely won’t. In the event, they focused on a glut of disinformation and disruption to democratic polls. Lots of people are thinking about the first of these, and the second needs local solutions. Many technology and policy experts are advocating openness and transparency in AI regulation.

Me, I’d rather they’d given some thought to how to make “AI” (any definition) sustainable, given the massive resources today’s math-and-statistics systems demand. And I would strongly favor a joint resolution to stop using these systems for surveillance, and to eliminate predictive systems that pretend to be able to spot potential criminals in advance or decide who deserves benefits, admission to retail stores, or parole. But this summit wasn’t about *us*.

***

A Mastodon post reminded me that November 2 – yesterday – was the 35th anniversary of the Morris Worm, and therefore the 35th anniversary of the day I first heard of the Internet. Anniversaries don’t matter much, but any history of the Internet would include this now largely forgotten (or never-known) event.

Morris’s goals were pretty anodyne by today’s standards. He wanted, per Wikipedia, to highlight flaws in some computer systems. Instead, the worm replicated out of control and paralyzed parts of this obscure network linking university and corporate research institutions, whose researchers suddenly couldn’t work. It put the Internet on the front pages for the first time.

Morris became the first person to be convicted of a felony under the brand-new Computer Fraud and Abuse Act (1986); that didn’t stop him from becoming a tenured professor at MIT in 2006. The heroes of the day were the unsung people who worked hard to disable the worm and restore full functionality. But it’s the worm we remember.

It was another three years before I got online myself, in 1991, and two or three more years after that before I got direct Internet access via the now-defunct Demon Internet. Everyone has a different idea of when the Internet began, usually based on when they got online. For many of us, it was November 2, 1988, the day when the world learned how important this technology they had never heard of had already become.

***

This week also saw the first anniversary of Twitter’s takeover. Despite a variety of technical glitches and numerous user-hostile decisions, the site has not collapsed. Many people I used to follow are either gone or posting very little. Even though I’m not experiencing the increased abuse and disinformation I see widely reported, there’s diminishing reward for checking in.

There’s still little consensus on a replacement. About half of my Twitter list have settled in on Mastodon. Another third or so are populating Bluesky. I hear some are finding Threads useful, but until it has a desktop client I’m out (and maybe even then, given its ownership). A key issue, however, is that uncertainty about which site will survive (or “win”) leads many people to post the same thing on multiple services – yet you don’t dare skip any of them, just in case.

For both philosophical and practical reasons, I’m hoping more people will get comfortable on Mastodon. Any corporate-owned system will merely replicate the situation in which we become hostages to business interests that care as little about our welfare as Boris Johnson did, according to MacNamara and other witnesses. Mastodon is not a safe harbor from horrible human behavior, but with no ads and no algorithm determining what you see, at least the system isn’t designed to profit from it.

Illustrations: Former deputy cabinet secretary Helen MacNamara testifying at the Covid Inquiry.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Five seconds

Careful observers posted to Hacker News this week – and the Washington Post reported – that the X formerly known as Twitter (XFKAT?) appeared to be deliberately introducing a delay in loading links to sites the owner is known to dislike or views as competitors. These would be things like the New York Times and selected other news organizations, and rival social media and publishing services like Facebook, Instagram, Bluesky, and Substack.

The 4.8 seconds users clocked doesn’t sound like much until you remember, as the Post does, that a 2016 Google study found that 53% of mobile users will abandon a website that takes longer than three seconds to load. Not sure whether desktop users are more or less patient, but it’s generally agreed that delay is the enemy.

The mechanism by which XFKAT was able to do this is its built-in link shortener, t.co, through which it routes all the links users post. You can see this for yourself if you right-click on a posted link and copy the results. You can only find the original link by letting the t.co links resolve and copying the real link out of the browser address bar after the page has loaded.
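If you’d rather automate that step, here is a minimal sketch, in Python using only the standard library, of what “letting the link resolve” looks like in code: it follows the shortener’s HTTP redirects and reports where you end up. The t.co address in the example is made up, and the sketch assumes the shortener answers non-browser clients with an ordinary redirect rather than a JavaScript page.

```python
# Minimal sketch: resolve a shortened link (e.g. a t.co URL) to its final
# destination by following HTTP redirects. The example URL below is made up.
import urllib.request

def resolve(short_url: str) -> str:
    # A HEAD request is enough - no page content needs to be downloaded.
    # urllib follows redirects automatically; geturl() reports the final URL.
    request = urllib.request.Request(short_url, method="HEAD")
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.geturl()

if __name__ == "__main__":
    print(resolve("https://t.co/example"))  # hypothetical shortened link
```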

Whether or not the company was deliberately delaying these connections, the fact is that it *can* – as can Meta’s platforms and many others. This in itself is a problem; essentially it’s a failure of network neutrality. This is the principle that a telecoms company should treat all traffic equally, and it is the basis of the egalitarian nature of the Internet. Regulatory insistence on network neutrality is why you can run a voice over Internet Protocol connection over broadband supplied by a telco or telco-owned ISP even though the services are competitors. Social media platforms are not subject to these rules, but the delaying links story suggests maybe they should be once they reach a certain size.

Link shorteners have faded into the landscape these days, but they were controversial for years after the first such service – TinyURL – was launched in 2002 (per Wikipedia). Critics cited three main issues: privacy, persistence, and obscurity. The last of these refers to users’ inability to know where their clicks are taking them; I feel strongly about this myself. The privacy issue is that the link shorteners-in-the-middle are in a position to collect traffic data and exploit it (bad actors could also divert links from their intended destination). The ability to collect that data and chart “impact” is, of course, one reason shorteners were widely adopted by media sites of all types. The persistence issue is that intermediating links in this way creates one or more central points of failure. When the link shortener’s server goes down for any reason – failed Internet connection, technical fault, bankrupt owner company – the URL the shortener encodes becomes unreachable, even if the page itself is available as normal. You can’t go directly to the page, or even locate a cached copy at the Internet Archive, without the original URL.

Nonetheless, shortened links are still widely used, for the same reasons why they were invented. Many URLs are very long and complicated. In print publications, they are visually overwhelming, and unwieldy to copy into a web address bar; they are near-impossible to proofread in footnotes and citations. They’re even worse to read out on broadcast media. Shortened links solve all that. No longer germane is the 140-character limit Twitter had in its early years; because the URL counted toward that maximum, short was crucial. Since then, the character count has gotten bigger, and URLs aren’t included in the count any more.

If you do online research of any kind you have probably long since internalized the routine of loading the linked content and saving the actual URL rather than the shortened version. This turns out to be one of the benefits of moving to Mastodon: the link you get is the link you see.

So to network neutrality. Logically, its equivalent for social media services ought to include the principle that users can post whatever content or links they choose (law and regulation permitting), whether that’s reposted TikTok videos, a list of my IDs on other systems, or a link to a blog advocating that all social media companies be forced to become public utilities. Most have in fact operated that way until now, infected just enough with the early Internet ethos of openness. Changing that unwritten social contract is very bad news, even though no one believed XFKAT’s owner when he insisted he was a champion of free speech and called his newly acquired site the “town square”.

If that’s what we want social media platforms to be, someone’s going to have to force them, especially if they begin shrinking and their owners start to feel the chill wind of an existential threat. You could even – though no one is, to the best of my knowledge – make the argument that swapping in a site-created shortened URL is a violation of the spirit of data protection legislation. After all, no one posts links on a social media site with the view that their tastes in content should be collected, analyzed, and used to target ads. Librarians have long been stalwarts in resisting pressure to disclose what their patrons read and access. In the move online in general, and to corporate social media in particular, we have utterly lost sight of the principle of the right to our own thoughts.

Illustrations: The New York Public Library in 2006.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon