Planned incompatibility

My first portable music player was a monaural Sony cassette player a little bigger than a deck of cards. I think it was intended for office use as a dictation machine, but I hauled it to folk clubs and recorded the songs I liked, and used it to listen to music while in transit. Circa 1977, I was the only one doing so on most planes.

At the time, each portable device had its own charger with its own electrical specification and plug type. Some manufacturers saw this as an opportunity, and released so-called “universal” chargers that came with an array of the most common plugs and user-adjustable settings so you could match the original amps and volts. Sony reacted by ensuring that each new generation had a new plug that wasn’t included on the universal chargers…which would then copy it…which would push Sony to come up with yet another new plug. And so on. All in the name of consumer safety, of course.

Sony’s modern equivalents (which of course include Sony itself) don’t need to invent new plugs because more sophisticated methods are available. They can instead insert a computer chip that the main device checks to ensure the part is “genuine”. If the check fails, as it might if you’ve bought your replacement part from a Chinese seller on eBay, the device refuses to let the new part function. This is how Hewlett-Packard has ensured that its inkjet printers won’t work with third-party cartridges; it’s one way that Apple has hobbled third-party repair services; and it’s how, as this week’s news tells us, the PS5 will check its optional disc drives.

Except the PS5 has a twist: in order to authenticate the drive, the PS5 has to use an Internet connection to contact Sony’s server. I suppose it’s better than John Deere farm equipment, which, Cory Doctorow writes in his new book, The Internet Con: How to Seize the Means of Computation, requires a technician to drive out to a remote farm and type in a code before the new part will work, while the farmer waits impatiently. But not by much, if you’re stuck somewhere offline.

“It’s likely that this is a security measure in order to ensure that the disc drive is a legitimate one and not a third party,” Video Gamer speculates. Checking the “legitimacy” of an optional add-on is not what I’d call “security”; in general it’s purely for the purpose of making it hard for customers to buy third-party add-ons (a goal the article does nod at later). Like other forms of digital rights management, the nuisance all accrues to the customer and the benefits, such as they are, accrue only to the manufacturer.

As Doctorow writes, part-pairing, as this practice is known, originated with cars (for this reason, it’s also often known as “VIN” locking, from vehicle identification number), brought in to reduce the motivation to steal cars in order to strip them and sell their parts (which *is* security). The technology sector has embraced and extended this to bolster the Gillette business model: sell inkjet printers cheap and charge higher-than-champagne prices for ink. Apple, Doctorow writes, has used this approach to block repairs in order to sustain new phone sales – good for Apple, but wasteful for the environment and expensive for us. The most appalling of his examples, though, are wheelchairs, which are “VIN-locked and can’t be serviced by a local repair shop”, and medical devices. Making on-location repairs impossible in these cases is evil.

The PS5, though, compounds part-pairing by requiring an Internet connection, a trend that really needs not to catch on. As hundreds of Tesla drivers discovered the hard way during an app server outage, it’s risky to presume those connections will always be there when you need them. Over the last couple of decades, we’ve come to accept that software is not a purchase but a subscription service subject to license. Now, hardware is going the same way, as seemed logical from the late-1990s moment when MIT’s Neil Gershenfeld proposed Things That Think. Back then, I imagined the idea applying to everyday household items, not devices that keep our bodies functioning. This oncoming future is truly dangerous, as Andrea Matwyshyn has been pointing out.

For Doctorow, the solution is to mandate and enforce interoperability, along with other regulations such as antitrust law and the right to repair laws that are appearing in many jurisdictions (and which companies like Apple and John Deere have historically opposed). Requiring interoperability would force companies to enable – or at least not to hinder – third-party repairs.

But more than that is going to be needed if we are to avoid a future in which every piece of our personal infrastructures is turned into a subscription service. At The Register, Richard Speed reminds us that Microsoft will end support for Windows 10 in 2025, potentially leaving 400 million PCs stranded. We have seen this before.

I’m not sure anyone in government circles is really thinking about the implications for an aging population. My generation still owns things; you can’t delete my library of paper books or charge me for each reread. But today’s younger generation, for whom everything is a rental…what will they do at retirement age, when income drops but nothing gets cheaper in a world where everything stops working the minute you stop paying? If we don’t force change now, this will be their future.

Illustrations: A John Deere tractor.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Review: The Other Pandemic

The Other Pandemic: How QAnon Contaminated the World
By James Ball
Bloomsbury Press
ISBN: 978-1-526-64255-4

One of the weirdest aspects of the January 6 insurrection at the US Capitol building was the mismatched variety of flags and causes represented: USA, Confederacy, Third Reich, Thin Blue Line, American Revolution, pirate, Trump. And in the midst: QAnon.

As journalist James Ball tells it in his new book, The Other Pandemic, QAnon is the perfect example of a modern, decentralized movement: it has no leader and no fixed ideology. Instead, it morphs to embrace the memes of the moment, drawing its force by renewing age-old conspiracy theories that never die. QAnon’s presence among all those flags – and popping up in demonstrations in many other countries – is a perfect example.

Charles Arthur’s 2021 book Social Warming used global warming as a metaphor for social media’s spread of anger and division. Ball prefers the metaphor of public health. The difference is subtle, but important: Arthur argued that social media became destabilizing because no one chose to stop it, where Ball’s characterization implies less agency. People have less choice about being infected with pathogens, no matter how careful they are.

Ball divides the book into four main sections reflecting the stages of a pandemic: emergence, infection, transmission, convalescence. He covers some of the same ground as Naomi Klein in her recent book Doppelganger. But Ball spent his adolescence goofing around on 4chan, where QAnon was later hatched, while Klein lets her personal story lead her into Internet fora. In other words, Klein writes about Internet culture from the outside in, while Ball writes from the inside out. Talia Lavin’s Culture Warlords, on the other hand, focused exclusively on investigating online hate.

“Goofing around” and “4chan” may sound incompatible, but as Ball tells it, in the early days after its founding in 2003, 4chan was anarchic and fun, with roots in gaming culture. Every online service I’ve known back to 1990 has had a corner like this, where ordinary rules of polite society were suspended and transgression was largely ironic, even if also obnoxious. The difference: 4chan’s culture spread well beyond its borders, and its dark side fuelled a global threat. The original QAnon posting arrived on 4chan in 2017, followed quickly by others. Detailed, seemingly knowledgeable, and full of questions for readers to “research”, they quickly attracted backers who propagated them onto much bigger sites like YouTube, which turned a niche audience of thousands into a mass audience of millions.

A key element of Ball’s metaphor is Richard Dawkins’ 1976 concept of memes: scraps of ideas that use us to replicate themselves, as biological viruses do. To extend the analogy, Ball argues that we shouldn’t blame – or dismiss as stupid – the people who get “infected” by QAnon.

This book represents an evolution for Ball. In 2017’s Post-Truth, he advocated fact-checking and teaching media literacy as key elements of the solution to the spread of misinformation. Here, he acknowledges that this approach is only a small part of containing a social movement that feeds on emotional engagement and doesn’t care about facts. In his conclusion, where he advocates prevention rather than cure and the adoption of multi-pronged strategies analogous to those we use to fight diseases like malaria, however, there are echoes of that trust in authority. I continue to believe the essential approach will be nearer to that of modern cybersecurity, similarly decentralized and mixing economics, the social sciences, psychology, and technology, among others. But this challenge is so big that no one metaphor is enough to contain it.

The documented life

For various reasons, this week I asked my GP for printed verification of my latest covid booster. They handed me what appears to be a printout of the entire history of my interactions with the practice back to 1997.

I have to say, reading it was a shock. I expected them to have kept records of tests ordered and the results. I didn’t think about them keeping everything I said on the website’s triage form, which they ask you to use when requesting an appointment, treatment, or whatever. Nor did I expect notes beginning “Pt dropped in to ask…”

The record doesn’t, however, show all details of all conversations I’ve had with everyone in the practice. It notes medical interactions, like noting a conversation in which I was advised about various vaccinations. It doesn’t mention that on first acquaintance with the GP to whom I’m assigned I asked her about her attitudes toward medical privacy and alternative treatments such as acupuncture. “Are you interviewing me?” she asked. A little bit, yes.

There are also bits that are wrong or outdated.

I think if you wanted a way to make the privacy case, showing people what’s in modern medical records would go a long way. That said, one of the key problems in current approaches to the issues surrounding mass data collection is that everything is siloed in people’s minds. It’s rare for individuals to look at a medical record and connect it to the habit of mind that continues to produce Google, Meta, Amazon, and an ecosystem of data brokers that keeps getting bigger no matter how many data protection laws we pass. Medical records hit a nerve in an intimate way that purchase histories mostly don’t. Getting the broad mainstream to see the overall picture, where everything connects into giant, highly detailed dossiers on all of us, is hard.

And it shouldn’t be. Because it should be obvious by now that what used to be considered a paranoid view has a lot of reality. Governments aren’t highly motivated to curb commercial companies’ data collection because it all represents data that can be subpoenaed without the risk of exciting a public debate or having to justify a budget. In the abstract, I don’t care that much who knows what about me. Seeing the data on a printout, though, invites imagining a hostile stranger reading it. Today, that potentially hostile stranger is just some other branch of the NHS, probably someone looking for clues in providing me with medical care. Five or twenty years from now…who knows?

More to the point, who knows what people will think is normal? Thirty years ago, “normal” meant being horrified at the idea of cameras watching everywhere. It meant fingerprints were only taken from criminal suspects. And, to be fair, it meant that governments could intercept people’s phone calls by making a deal with just one legacy giant telephone company (but a lot of people didn’t fully realize that). Today’s kids are growing up thinking of constantly being tracked as normal. I’d like to think that we’re reaching a turning point where what Big Tech and other monopolists have tried to convince us is normal is thoroughly rejected. It’s been a long wait.

I think the real shock in looking at records like this is seeing yourself through someone else’s notes. This is very like the moment in the documentary Erasing David, when the David of the title gets his phone book-sized records from a variety of companies. “What was I angry about in November 2006?” he muses, staring at the note of a moment he had long forgotten but the company hadn’t. I was relieved to see there were no such comments. On the other hand, also missing were a couple of things I distinctly remember asking them to write down.

But don’t get me wrong: I am grateful that someone is keeping these notes besides me. I have medical records! For the first 40 years of my life, doctors routinely refused to show patients any of their medical records. Even when I was leaving the US to move overseas in 1981, my then-doctor refused to give me copies, saying, “There’s nothing there that would be any use to you.” I took that to mean there were things he didn’t want me to see. Or he didn’t want to take the trouble to read through and see that there weren’t. So I have no record of early vaccinations or anything else from those years. At some point I made another attempt and was told the records had been destroyed after seven years. Given that background, the insouciance with which the receptionist printed off a dozen pages of my history and handed it over was a stunning advance in patient rights.

For the last 30-plus years, therefore, I’ve kept my own notes. There isn’t, after checking, anything in the official record that I don’t have. There may, of course, be other notes they don’t share with patients.

Whether for purposes malign (surveillance, control) or benign (service), undocumented lives are increasingly rare. In an ideal world, there’d be a way for me and the medical practice to collaborate to reconcile discrepancies and rectify omissions. The notion of patients controlling their own data is still far from acceptance. That requires a whole new level of trust.

Illustrations: Asclepius, god of medicine, exhibited in the Museum of Epidaurus Theatre (Michael F. Mehnert via Wikimedia).


The grown-ups

In an article this week in the Guardian, Adrian Chiles asks what decisions today’s parents are making that their kids will someday look back on in horror the way we look back on things from our childhood. Probably his best example is riding in cars without seatbelts (which I’m glad to say I survived). In contrast to his suggestion, I don’t actually think tomorrow’s parents will look back and think they shouldn’t have had smartphones, though it’s certainly true that last year a current parent MP (whose name I’ve lost) gave an impassioned speech opposing the UK’s just-passed Online Safety Act in which she said she had taken risks on the Internet as a teenager that she wouldn’t want her kids to take now.

Some of that, though, is that times change consequences. I knew plenty of teens who smoked marijuana in the 1970s. I knew no one whose parents found them severely ill from overdoing it. Last week, the parent of a current 15-year-old told me he’d found exactly that. His kid had made the classic error (see several 2010s sitcoms) of not understanding how slowly gummies act. Fortunately, marijuana won’t kill you, as the parent found to his great relief after some frenzied online searching. Even in 1972, it was known that consuming marijuana by ingestion (for example, in brownies) made it take effect more slowly. But the marijuana itself, by all accounts, was less potent. It was, in that sense, safer (although: illegal, with all the risks that involves).

The usual excuse for disturbing levels of past parental risk-taking is “We didn’t know any better”. A lot of times that’s true. When today’s parents of teenagers were 12 no one had smartphones; when today’s parents were teens their parents had grown up without Internet access at home; when my parents were teens they didn’t have TV. New risks arrive with every generation, and each new risk requires time to understand the consequences of getting it wrong.

That is, however, no excuse for some of the decisions adults are making about systems that affect all of us. Also this week, and also at the Guardian, Akiko Hart, interim director of Liberty, writes scathingly about government plans to expand the use of live facial recognition to track shoplifters. Under Project Pegasus, shops will use technology provided by Facewatch.

I first encountered Facewatch ten years ago at a conference on biometrics. Even then the company was already talking about “cloud-based crime reporting” in order to deter low-level crime. And even then there were questions about fairness. For how long would shoplifters remain on a list of people to watch closely? What redress was there going to be if the system got it wrong? Facewatch’s attitude seemed to be simply that what the company was doing wasn’t illegal because its customer companies were sharing information across their own branches. What Hart is describing, however, is much worse: a state-backed automated system that will see ten major retailers upload their CCTV images for matching against police databases. Policing minister Chris Philp hopes to expand this into a national shoplifting database including the UK’s 45 million passport photos. Hart suggests instead tackling poverty.

Quite apart from how awful all that is, what I’m interested in here is the increased embedding in public life of technology we already know is flawed and discriminatory. Since 2013, myriad investigations have found the algorithms that power facial recognition to have been trained on unrepresentative databases that make them increasingly inaccurate as the subjects diverge from “white male”.

There are endless examples of misidentification leading to false arrests. Last month, a man who was pulled over on the road in Georgia filed a lawsuit after being arrested and held for several days for a crime he didn’t commit in Louisiana, where he had never been.

In 2021, in a story I’d missed, the University of Illinois at Urbana-Champaign announced it would discontinue using Proctorio, remote proctoring software that monitors students for cheating. The issue: the software frequently fails to recognize non-white faces. In a retail store, this might mean being followed until you leave. In an exam situation, this may mean being accused of cheating and having your results thrown out. A few months later, at Vice, Todd Feathers reported that a student researcher had studied the algorithm Proctorio was using and found its facial detection model failed to recognize black faces more than half the time. Late last year, the Dutch Institute of Human Rights found that using Proctorio could be discriminatory.

The point really isn’t this specific software or these specific cases. The point is more that we have a technology that we know is discriminatory and that we know would still violate human rights if it were accurate…and yet it keeps getting more and more deeply embedded in public systems. None of these systems are transparent enough to tell us what facial identification model they use, or publish benchmarks and test results.

So much of what net.wars is about is avoiding bad technology law that sticks. In this case, it’s bad technology that is becoming embedded in systems that one day will have to be ripped out, and we are entirely ignoring the risks. On that day, our children and their children will look at us, and say, “What were you thinking? You did know better.”

Illustrations: The CCTV camera on George Orwell’s house at 22 Portobello Road, London.


The end of cool

For a good bit of this year’s We Robot, it felt like abstract “AI” – that is, algorithms running on computers with no mobility – had swallowed the robots whose future this conference was invented to think about. This despite a pre-conference visit to Boston Dynamics, which showed off its Atlas robot’s ability to do gymnastics. It’s cute, but is it useful? Your washing machine is smarter, and its intelligence solves real problems like how to use less water.

There’s always some uncertainty about boundaries at this event: is a machine-learning decision-making system a robot? At the inaugural We Robot in 2012, the engineer Bill Smart summed up the difference: “My iPhone can’t stab me in my bed.” Of course, neither could an early Roomba, which most would agree was the first domestic robot. However, it was also dumb as a floor tile, achieving cleanliness through random repetition rather than intelligent mapping. In the Roomba 1.0 sense, a “robot” is “a device that does boring things so I don’t have to”. Not cool, but useful, and it solves a real problem.

During a session in which participants played a game designed to highlight the conflicts inherent in designing an urban drone delivery system, Lael Odhner offered yet another definition: “A robot is a literary device we use to voice our discomfort with technology.” In the context of an event where participants think through the challenges robots bring to law and policy, this may be the closest approximation.

In the design exercise, our table’s three choices were: fund the FAA (so they can devise and enforce rules and policies), build it as a municipally-owned public service both companies and individuals can use as customers, and ban advertising on the drones for reasons of both safety and offensiveness. A similar exercise last year produced more specific rules, but also led us to realize that a drone delivery service had no benefits over current delivery services.

Much depends on scale. One reason we chose a municipal public service was the scale of noise and environmental impact inevitably generated by multiple competing commercial services. In a paper, Woody Hartzog examined the meaning of “scale”: is scale *more*, or is scale *different*? You can argue, as net.wars often has, that scale *creates* difference, but it’s rarely clear where to place the threshold, or how reaching it changes a technology’s harms or who it makes vulnerable. Ryan Calo and Daniella DiPaola suggested that rather than associate vulnerability with particular classes of people we should see it as variable with circumstances: “Everyone is vulnerable sometimes, and vulnerability is a state that can be created and manipulated toward particular ends.” This seems a more logical and fairer approach.

An aspect of this is that there are two types of rules: harm rules, which empower institutions to limit harm, and power rules, which empower individuals to protect themselves. A possible worked example soon presented itself in Kegan J. Strawn’s and Daniel Sokol’s paper on safety techniques in mobile robots, which suggested copying medical ethics’ consent approach. Then someone described the street scene in which every pedestrian had to give consent to every passing experimental Tesla, possibly an even worse scenario than ad-bearing delivery drones. Pedestrians get nothing out of the situation, and Teslas don’t become safer. What you really want is for car companies not to test the safety of autonomous vehicles on public roads with pedestrians as unwitting crash test dummies.

I try to think every year about how our ideas about integrating robots into society are changing over time. An unusual paper from Maria P. Angel considered this question with respect to privacy scholarship by surveying 1990s writing and 20 years of papers presented at Privacy Law Scholars, the conference whose design We Robot co-founders Calo, Michael Froomkin, and Ian Kerr partly copied. Angel’s conclusion is roughly that the 1990s saw calls for an end to self-regulation, while the 2000s moved from privacy as necessary for individual autonomy and self-determination to collective benefits, and most recently to its importance for human flourishing.

As Hartzog commented, he came to the first We Robot with the belief that “Robots are magic”, only to encounter Smart’s “really fancy hammers.” And, Smart and Cindy Grimm added in 2018, controlled by sensors that are “late, noisy, and wrong”. Hartzog’s early excitement was shared by many of us; the future looked so *interesting* when it was almost entirely imaginary.

Over time, the robotic future has become more nowish, and has shifted in response to technological development; the discussion has become more about real systems (2022) than imagined future ones. The arrival of real robots on our streets – for example, San Francisco’s 2017 use of security robots to deter homeless camps – changed parts of the discussion from theoretical to practical.

In the mid-2010s, much discussion focused on problems of fairness, especially to humans in the loop, who, Madeleine Claire Elish correctly predicted in 2016, would be blamed for failures. More recently, the proliferation of data-gathering devices (sensors, cameras) into everything from truckers’ cabs to agriculture and the arrival of new algorithmic systems dubbed AI has raised awareness of the companies behind these technologies. And, latterly, awareness that the technology often diverts attention from the better possibilities of structural change.

But that’s not as cool.

Illustrations: Boston Dynamics’ Atlas robots doing synchronized backflips (via YouTube).


Review: Data Driven

Data Driven: Truckers, Technology, and the New Workplace Surveillance
By Karen Levy
Princeton University Press
ISBN: 978-0-6911-7530-0

The strikes in Hollywood show actors and writers in an existential crisis: a highly lucrative industry used to pay them a good middle class living but now has the majority struggling just to survive. In her recent book, Data Driven, Cornell assistant professor Karen Levy finds America’s truckers in a similar plight.

Both groups have had their industries change around them because of new technology. In Hollywood, streaming came along to break the feedback loop that powered a highly successful business model for generations. In trucking, the culprit is electronic logging devices (ELDs), which are changing the profession entirely.

Levy has been studying truckers since 2011. At that point, ELDs were beginning to appear in truckers’ cabs but were purely voluntary. That changed in 2017, when the Federal Motor Carrier Safety Administration’s rule mandating their use came into force. The intention, as always, is reasonably benign: to improve safety by ensuring that truckers on the road remain alert and comply with the regulations governing the hours they’re allowed to work.

As part of this work, Levy has interviewed truckers, family members, and managers, and studied trucker-oriented media such as online forums, radio programs, and magazines. She was also able to examine auditing practices in both analog and digital formats.

Some of her conclusions are worrying. For example, she finds that taking truckers’ paper logs into an office away from the cab allowed auditors more time to study them and greater ability to ask questions about them. ELDs, by contrast, are often wired into the cab, and the auditor must inspect them in situ. Where the paper logs were simple to understand, many inspectors struggle with the ELDs’ inconsistent interfaces, and being required to enter what is, after all, the trucker’s personal living space tends to limit the time they spend.

Truckers by and large experience the ELDs as intrusive. Those who have been at the wheel the longest most resent the devaluation of their experience the devices bring. Unlike the paper logs, which remained under the truckers’ control, ELDs often send the data they collect direct to management, who may respond by issuing instructions that override the trucker’s own decisions and on-site information.

Levy’s main point would resonate with those Hollywood strikers. ELDs are being used to correct the genuine problem of tired, and therefore unsafe, truckers. Yet the reason truckers are so tired and take the risk of overworking is the way the industry is structured. Changing how drivers are paid, so that they are compensated not only by the mile but also for the hours they spend moving their trucks around the yards, waiting to unload, and enduring other unavoidable delays, would be far more effective. Worse, it’s the most experienced truckers who are most alienated by the ELDs’ surveillance. Replacing them with younger, less experienced drivers will not improve road safety for any of us.