Review: Money in the Metaverse

Money in the Metaverse: Digital Assets, Online Identities, Spatial Computing, and Why Virtual Worlds Mean Real Business
by David Birch and Victoria Richardson
London Publishing Partnership
ISBN: 978-1-916749-05-4

In my area of London there are two buildings whose architecture unmistakably identifies them as former banks. Time has moved on, and one houses a Pizza Express, the other a Tesco Direct. The obviously-built-to-be-a-Post-Office building, too, is now a restaurant, and the post office itself now occupies a corner of a newsagent’s. They illustrate a point David Birch has frequently made: there is nothing permanent about our financial arrangements. Banking itself is only a few hundred years old.

Writing with Victoria Richardson in their new book, Money in the Metaverse, Birch argues this point anew. At one time paper notes seemed as shocking and absurd as cryptocurrencies and non-fungible tokens (NFTs) do today. The skeptic reads that and wonders if the early days of paper notes were as rife with fraud and hot air as NFTs have been. Is the metaverse even still a thing? It’s all AI hype round here now.

Birch and Richardson, however, believe that increasingly our lives will be lived online – a flight to the “cyburbs”, they call it. In one of their early examples of our future, they suggest it will be good value to pay for a virtual ticket (NFT) to sit next to a friend to listen to a concert in a virtual auditorium. It may be relevant that they were likely writing this during the acute phase of the covid pandemic. By now, most of the people I zoomed with then are back doing things in the real world and are highly resistant to returning to virtual, or even hybrid, meetups.

But exactly how financial services might operate isn’t really their point, and it would be hard to get right even if it were. Instead, their goal is to explain various novel financial technologies and tools – NFTs, wallets, smart contracts, digital identities – and suggest possible strategies businesses could use to build services on them. Some of the underlying ideas have been around for at least a couple of decades: software agents that negotiate on an individual’s behalf, for example, or support for the multiple disconnected identities that go with the different roles we all have in life. Others are services that seem to have little to do with the metaverse: paperless air travel is already being implemented, and virtual tours of travel destinations have been with us in some form since video arrived on the web.

The key question – whether the metaverse will see mass adoption – is not one Birch and Richardson can answer. Certainly, I’m dubious about some of the use cases they propose – such as the idea of gamifying life insurance by offering reduced premiums to those who reach various thresholds of physical activity or healthy living. Insurance is supposed to manage risk by pooling it; their proposal would penalize disability and illness.

A second question occurs: what new kinds of crime will these technologies enable? Just this week, Fortune reported that cashlessness has brought a new level of crime to Sweden. Why should the metaverse be different? This, too, is beyond the scope of Birch and Richardson’s work, which aims to explain rather than to hype or critique. The overall impression the book leaves, however, is of a too-clean computer-generated landscape or smart-city mockup, where the messiness of real life is missing.

Soap dispensers and Skynet

In the TV series Breaking Bad, the weary ex-cop Mike Ehrmantraut tells meth chemist Walter White: “No more half measures.” The last time he took half measures, the woman he was trying to protect was brutally murdered.

Apparently people like to say there are no dead bodies in privacy (although this is easily countered with ex-CIA director General Michael Hayden’s comment, “We kill people based on metadata”). But, as Woody Hartzog told a Senate committee hearing in September 2023, summarizing work he did with Neil Richards and Ryan Durrie, half measures in AI/privacy legislation are still a bad thing.

A discussion at the Privacy Law Scholars Conference (PLSC) last week laid out the problems. Half measures don’t work. They don’t prevent societal harms. They don’t prevent AI from being deployed where it shouldn’t be. And they sap the political will to follow up with anything stronger.

In an article for The Brink, Hartzog said, “To bring AI within the rule of law, lawmakers must go beyond half measures to ensure that AI systems and the actors that deploy them are worthy of our trust.”

He goes on to list examples of half measures: transparency, committing to ethical principles, and mitigating bias. Transparency is good, but doesn’t automatically bring accountability. Ethical principles don’t change business models. And bias mitigation to make a technology nominally fairer may simultaneously make it more dangerous. Think facial recognition: debias the system and improve its accuracy for matching the faces of non-male, non-white people, and then it’s used to target those same people with surveillance.

Or, bias mitigation may have nothing to do with the actual problem, an underlying business model, as Arvind Narayanan, author of the forthcoming book AI Snake Oil, pointed out a few days later at an event convened by the Future of Privacy Forum. In his example, the Washington Post reported in 2019 on the case of an algorithm intended to help hospitals predict which patients will benefit from additional medical care. It turned out to favor white patients. But, Narayanan said, the system’s provider responded to the story by saying that the algorithm’s cost model accurately predicted the costs of additional health care – in other words, the algorithm did exactly what the hospital wanted it to do.

“I think hospitals should be forced to use a different model – but that’s not a technical question, it’s politics.”

Narayanan also called out auditing (another Hartzog half measure). You can, he said, audit a human resources system to expose patterns in which resumes it flags for interviews and which it drops. But no one ever commissions research modeled on the expensive randomized controlled trials common in medicine, following up for five years to see if the system actually picks good employees.

Adding confusion is the fact that “AI” isn’t a single thing. Instead, it’s what someone called a “suitcase term” – that is, a container for many different systems built for many different purposes by many different organizations with many different motives. It is absurd to conflate AGI – the artificial general intelligence of science fiction stories and scientists’ dreams that can surpass and kill us all – with pattern-recognizing software that depends on plundering human-created content and the labeling work of millions of low-paid workers.

To digress briefly, some of the AI in that suitcase is getting truly goofy. Yum Brands has announced that its restaurants, which include Taco Bell, Pizza Hut, and KFC, will be “AI-first”. Among Yum’s envisioned uses, the company tells Benj Edwards at Ars Technica, is being able to ask an app what temperature to set the oven. I can’t help suspecting that the real eventual use will be data collection and discriminatory pricing. Stuff like this is why Ed Zitron writes postings like The Rot-Com Bubble, which hypothesizes that the reason Internet services are deteriorating is that technology companies have run out of genuinely innovative things to sell us.

That you cannot solve social problems with technology is a long-held truism, but it seems to be especially true of the messy middle of the AI spectrum, the use cases active now that rarely get the same attention as the far ends of that spectrum.

As Neil Richards put it at PLSC, “The way it’s presented now, it’s either existential risk or a soap dispenser that doesn’t work on brown hands when the real problem is the intermediate level of societal change via AI.”

The PLSC discussion included a list of the ways that regulations fail. Underfunded enforcement. Regulations that are pure theater. The wrong measures. The right goal, but weakly drafted legislation. Ambiguous regulation, or regulation based on principles that are too broad. Conflicting half measures – for example, requiring transparency while also adopting the principle that people should own their own data.

Like Cristina Caffarra a week earlier at CPDP, Hartzog, Richards, and Durrie favor remedies that focus on limiting abuses of power. Full measures include outright bans, the right to bring a private cause of action, imposing duties of “loyalty, care, and confidentiality”, and limiting exploitative data practices within these systems. Curbing abuses of power, as Hartzog says, is nothing new. The shiny new technology is a distraction.

Or, as Narayanan put it, “Broken AI is appealing to broken institutions.”

Illustrations: Mike (Jonathan Banks) telling Walt (Bryan Cranston) in Breaking Bad (S03e12) “no more half measures”.


Review: More Than a Glitch

More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech
By Meredith Broussard
MIT Press
ISBN: 978-0-262-04765-4

At the beginning of the 1985 movie Brazil, a family’s life is ruined when a fly gets stuck in a typewriter key so that the wrong man is carted away to prison. It’s a visual play on “computer bug”, so named after a moth got trapped in a computer at Harvard.

Based on her recent book More Than a Glitch, NYU associate professor Meredith Broussard would call both the fly and the moth a “glitch”. In the movie, the error is catastrophic for Buttle-not-Tuttle and his family, but it’s a single, ephemeral mistake that can be prevented with insecticide and cross-checking. A “bug” is more complex and more significant: it’s “substantial”, “a more serious matter that makes software fail”. It “deserves attention”. It’s the difference between the lone rotten apple in a bushel of good ones and a barrel that causes all the apples put into it to rot.

This distinction is Broussard’s prelude to her fundamental argument that the lack of fairness in computer systems is persistent, endemic, and structural. In the book, she examines numerous computer systems that are already out in the world causing trouble. After explaining the fundamentals of machine bias, she goes through a variety of sectors and applications to examine failures of fairness in each one. In education, proctoring software penalizes darker-skinned students by failing to identify them accurately, and algorithms used to estimate scores on tests canceled during the pandemic penalized exceptional students from unexpected backgrounds. In health, the long-practiced “race correction” that derives from slavery favors white patients for everything from painkillers to kidney transplants – and gets embedded into new computer systems built to replicate existing practice. If computer developers don’t understand the ways in which the world is prejudiced – and they don’t – how can the systems they create be more neutral than the precursors they replace? Broussard delves inside each system to show why, not just how, it doesn’t work as intended.

In other cases Broussard highlights, part of the problem is rigid inflexibility in back-end systems that need to exchange data. There’s little benefit in having 58 gender options if the underlying database only supports two choices. At a doctor’s office, Broussard is told she can check only one box for race; she prefers to check both “black” and “white” because in medical settings it may affect her treatment. The digital world remains only partially accessible. And, as Broussard discovered when she was diagnosed with breast cancer, even supposed AI successes like reading radiology films are overhyped. This section calls back to her 2018 book, Artificial Unintelligence, which did a good job of explaining both how machine learning and “AI” computer systems work and why a lot of the things the industry says work…really don’t (see also self-driving cars).

Broussard concludes by advocating for public interest technology and a rethink. New technology imitates the world it comes from; computers “predict the status quo”. Making change requires engineering technology so that it performs differently. It’s a tall order, and Broussard knows that. But wasn’t that the whole promise technology’s founders made? That they could change the world to empower the rest of us?

Review: A History of Fake Things on the Internet

A History of Fake Things on the Internet
By Walter J. Scheirer
Stanford University Press
LCCN 2023017876

One of Agatha Christie’s richest sources of plots was the uncertainty of identity in England’s post-war social disruption. Before then, she tells us, anyone arriving to take up residence in a village brought a letter of introduction; afterwards, old-time residents had to take newcomers at their own valuation. Had she lived into the 21st century, the arriving Internet would have given her whole new levels of uncertainty to play with.

In his recent book A History of Fake Things on the Internet, University of Notre Dame professor Walter J. Scheirer describes creating and detecting online fakes as an ongoing arms race. Where many people project doomishly that we will soon lose the ability to distinguish fakery from reality, Scheirer is more optimistic. “We’ve had functional policies in the past; there is no good reason we can’t have them again,” he concludes, adding that to make this happen we need a better understanding of the media that support the fakes.

I have a lot of sympathy with this view; as I wrote recently, things that fool people when a medium is new are instantly recognizable as fake once they become experienced. We adapt. No one now would be fooled by the images that looked real in the early days of photography. Our perceptions become more sophisticated, and we learn to examine context. Early fakes often work simply because we don’t know yet that such fakes are possible. Once we do know, we exercise much greater caution before believing. Teens who’ve grown up applying filters to the photos and videos they upload to Instagram and TikTok see images very differently than those of us who grew up with TV and film.

Scheirer begins his story with the hacker counterculture that saw computers as a source of subversive opportunities. His own research into media forensics began with Photoshop. At the time, many, especially in the military, worried that nation-states would fake content in order to deceive and manipulate. What they found, in much greater volume, were memes and what Scheirer calls “participatory fakery” – that is, the cultural outpouring of fakes for entertainment and self-expression, most of it harmless. Further chapters consider cheat codes in games, the slow conversion of hackers into security practitioners, adversarial algorithms and media forensics, shock-content sites, and generative AI.

Through it all, Scheirer remains optimistic that the world we’re moving into “looks pretty good”. Yes, we are discovering hundreds of scientific papers with faked data, faked results, or faked images, but we also have new analysis tools to detect them and Retraction Watch to catalogue them. The same new tools that empower malicious people enable many more positive uses for storytelling, collaboration, and communication. Perhaps forgetting that the computer industry relentlessly ignores its own history, he writes that we should learn from the past and react to the present.

The mention of scientific papers raises an issue Scheirer seems not to worry about: waste. Every retracted paper represents lost resources – public funding, scientists’ time and effort, and the same multiplied into the future for anyone who attempts to build on that paper. Figuring out how to automate reliable detection of chatbot-generated text does nothing to lessen the vast energy, water, and human resources that go into building and maintaining all those data centers and training models (see also filtering spam). Like Scheirer, I’m largely optimistic about our ability to adapt to a more slippery virtual reality. But the amount of wasted resources is depressing and, given climate change, dangerous.

Alabama never got the bomb

There is this to be said for nuclear weapons: they haven’t scaled. Since 1965, when Tom Lehrer warned about proliferation (“We’ll try to stay serene and calm | When Alabama gets the bomb”), a world of treaties, regulation, and deterrents has helped, but even if it hadn’t, building and updating nuclear weapons remains stubbornly expensive. (That said, the current situation is scary enough.)

The same will not be true of drones, James Patton Rogers explained in a recent talk at King’s College London about his new book, Precision: A History of American Warfare. Already, he says, drones are within reach for non-governmental actors such as Mexican drug cartels. At the BBC, Jonathan Marcus estimated in February 2022 that more than 100 nations and non-state actors already have combat drones and that these systems are proliferating rapidly. The brief moment in which the US and Israel had an exclusive edge is already gone; Rogers says Iran and Turkey are “drone powers”. Back to the BBC in 2022: Marcus writes that some terrorist groups had already been able to build attack drone systems using commercial components for a few hundred dollars. Rogers put the number of countries with drone capability in 2023 at 113, plus 65 armed groups. He also called them one of the “greatest threats to state security”, noting the speed and abruptness with which they’ve flipped from protective to threatening, and their potential for “assassinations, strikes, saturation attacks”.

Rogers, who calls his book an “intellectual history”, traces the beginnings of precision to the end of the long, muddy, casualty-filled conflict of World War I. Never again: instead, remote attacks on military-industrial targets that limit troops on the ground and loss of life. The arrival of the atomic bomb and Russia’s development of same changed focus to the Dr Strangelove-style desire for the technology to mount massive retaliation. John F. Kennedy successfully campaigned on the missile gap. (In this part of Rogers’ presentation, it was impossible not to imagine how effective this amount of energy could have been if directed toward climate change…)

The 1990s and the Gulf War brought a revival of precision in the form of the first cruise missiles and the first drones. But as long ago as 1988 there were warnings that the US could not monopolize drones and they would become a threat. “We need an international accord to control drone proliferation,” Rogers said.

But the threat to state security was not Rogers’ answer when an audience member asked him, “What keeps you awake at night?”

“Drone mass killings targeting ethnic diasporas in cities.”

Authoritarian governments have long reached out to control opposition outside their borders. In 1974, I rented an apartment from the Greek owner of a highly regarded local restaurant. A day later, a friend reacted in horror: didn’t I know that restaurateur was persona-non-patronize because he had reported Greek student protesters in Ithaca, New York to the military junta then in power, and there had been consequences for their families back home? No, I did not.

As an informant, however, my landlord’s powers were limited. He could go to protests and photograph them; if he couldn’t identify the students, he could still send their pictures. But he couldn’t amass comprehensive location data tracking their daily lives, operate a facial recognition system, or monitor them on social media and infer their social graphs. A modern authoritarian government equipped with Internet connections can do all of that and more, and the data it can’t gather itself it can obtain by purchase, contract, theft, hacking, or compulsion.

In Canada, opponents of Chinese Communist Party policies report harassment and intimidation. Freedom House reports that China’s transnational repression also includes spyware, digital threats, physical assault, and cooption of other countries, all escalating since 2014. There’s no reason for this sort of thing to be limited to the Chinese (and Russians); Citizen Lab has myriad examples of governments’ use of spyware to target journalists, political opponents, and activists, inside or outside the countries where they’re active.

Today, even in democratic countries there is an ongoing trend toward increased and more militaristic surveillance of migrants and borders. In 2021, Statewatch reported on the militarization of the EU’s borders along the Mediterranean, including a collaboration between Airbus and two Israeli companies to use drones to intercept migrant vessels. Another workshop that same year made plain the way migrants are being dataveilled by both governments and the aid agencies they rely on for help. In 2022, the courts ordered the UK government to stop seizing the smartphones belonging to migrants arriving in small boats.

Most people remain unaware of this unless some politician boasts about it as part of a tough-on-immigration platform. In general, rights for any kind of foreigners – immigrants, ethnic minorities – are a hard sell, if only because non-citizens have no vote, and an even harder one against the headwind of “they are not us” rhetoric. Threats of the kind Rogers imagined are not the sort nations are in the habit of protecting against.

It isn’t much of a stretch to imagine all those invasive technologies being harnessed to build a detailed map of particular communities. From there, given affordable drones, you just need to develop enough malevolence to want to kill them off, and be the sort of country that doesn’t care if the rest of the world despises you for it.

Illustrations: British migrants to Australia in 1949 (via Wikimedia).


Review: The Bill Gates Problem

The Bill Gates Problem: Reckoning with the Myth of the Good Billionaire
By Tim Schwab
Metropolitan Books
ISBN: 978-1-250-85009-6

Thirty years ago, the Federal Trade Commission began investigating one of the world’s largest technology companies on antitrust grounds. Was it leveraging its monopoly in one area to build dominance in others? Did it bully smaller competitors into disclosing their secrets, which it then copied? And so on. That company was Microsoft; Windows was giving it leverage over office productivity software, web browsers, and media players; and its leader was Bill Gates. In 1999, the courts ruled Microsoft a monopoly.

At the time, it was relatively commonplace for people to complain that Gates was insufficiently charitable. Why wasn’t he more philanthropic, given his vast and increasing wealth? (Our standards for billionaire wealth were lower back then.) Be careful what you wish for…

The transition from monopolist mogul to beneficent social entrepreneur is where Tim Schwab starts in The Bill Gates Problem: Reckoning with the Myth of the Good Billionaire. In Schwab’s view, the transformation owes much to well-executed PR, in which category he includes the many donations the foundation makes to journalism organizations.

I have heard complaints for years that the Bill and Melinda Gates Foundation (BMGF) takes an approach to philanthropy that favors expensive technological interventions over cheaper, well-established ones. In education that might mean laptops and edtech software rather than training teachers; in medicine, vaccine research rather than clean water. Schwab’s investigative work turns up dozens of such stories in the areas BMGF works in: family planning, education, health. Yet, Schwab writes, citing numerous sources for his figures, for all the billions BMGF has poured into these areas, it has failed to meet its stated objectives.

You can argue that case, but Schwab moves on from there to examine the damaging effects of depending on a billionaire, no matter how smart and well-intentioned, to finance services that might more properly be the business of the state. No one elected Gates, and no one has voted on the priorities he has chosen to set. The covid pandemic provides a particularly good example. One of the biggest concerns as efforts to produce vaccines got underway was ensuring that access would not be limited to rich countries. Many believed that the most efficient way of doing this was to refrain from patenting the vaccines, and help poorer countries build their own production facilities. Gates was one of those who opposed this approach, arguing that patents were necessary to reward pharmaceutical companies for the investment they poured into research, and also that few countries had the expertise to make the vaccines. Gates gave in to pressure and reversed his position in May 2021 to support a “narrow waiver”. Reading that BMGF is the biggest funder of the WHO and remembering his preference for technological interventions made me wonder: how much do we have Gates to thank for the emphasis on vaccines and the reluctance to push cheaper non-pharmaceutical interventions like masks, HEPA filters, and ventilation in countries like the UK?

Schwab goes into plenty of detail about all this. But his wider point is to lay out the power Gates’s massive wealth – both the foundation’s and his own – gives him over the charitable sector and, through public-private partnerships, many of the nations in which he operates. Schwab also calls Gates’s approach “philanthropic colonialism” because the bulk of his donations go to organizations based in the West, rather than directly to their counterparts elsewhere.

Pointing out the amount of taxpayer subsidy the foundation receives through the tax exemptions granted to charities, Schwab asks if we’re really getting value for our money. Wouldn’t we be better off collecting the taxes and setting our own agendas? Is there really any such thing as a “good” billionaire?

Review: The Oracle

The Oracle
by Ari Juels
Talos Press
ISBN: 978-1-945863-85-1
Ebook ISBN: 978-1-945863-86-8

In 1994, a physicist named Timothy C. May posited the idea of an anonymous information market he called blacknet. With anonymity secured by cryptography, participants could trade government secrets. And, he had written in 1988’s Crypto Anarchist Manifesto, “An anonymous computerized market will even make possible abhorrent markets for assassinations and extortion.” In May’s time, the big thing missing to enable such a market was a payment system. Then, in 2008, came bitcoin and the blockchain.

In 2015, Ari Juels, now the Weill Family Foundation and Joan and Sanford I. Weill Professor at Cornell Tech but previously chief scientist at the cryptography company RSA, saw blacknet potential in Ethereum’s adoption of “smart contracts”, an idea that had been floating around since the 1990s. Smart contracts are computer programs that automatically execute transactions when specified conditions are met, without the need for a trusted intermediary to provide guarantees. Among other possibilities, they can run on blockchains – the public, tamperproof, shared ledgers that record cryptocurrency transactions.
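To make the mechanism concrete, here is a minimal sketch of the smart-contract idea in Python. Real contracts are written in a contract language such as Solidity and executed by every node on a blockchain; the escrow scenario, class names, and condition below are my own illustration, not anything from the book:

```python
# Toy model of a smart contract: funds locked and released by code,
# with no trusted intermediary. Hypothetical illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Party:
    name: str
    balance: int = 0

class EscrowContract:
    def __init__(self, buyer: Party, seller: Party, amount: int,
                 condition: Callable[[], bool]):
        buyer.balance -= amount        # funds are locked in the contract
        self.buyer, self.seller = buyer, seller
        self.amount = amount
        self.condition = condition     # returns True once the terms are met
        self.settled = False

    def settle(self) -> None:
        """Anyone may trigger settlement; the code alone decides the payout."""
        if self.settled:
            raise RuntimeError("contract already settled")
        recipient = self.seller if self.condition() else self.buyer
        recipient.balance += self.amount
        self.settled = True

# Example: pay the seller once delivery is confirmed, else refund the buyer.
delivered = True
contract = EscrowContract(Party("buyer", 100), Party("seller"), 50,
                          lambda: delivered)
contract.settle()
```

The point of the design is that once the code is deployed, neither party can renege: the condition, not a bank or escrow agent, determines who gets paid.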

In the resulting research paper on criminal smart contracts (PDF), Juels and co-authors Ahmed Kosba and Elaine Shi wrote: “We show how what we call criminal smart contracts (CSCs) can facilitate leakage of confidential information, theft of cryptographic keys, and various real-world crimes (murder, arson, terrorism).”

It’s not often a research paper becomes the basis for a techno-thriller novel, but Juels has prior form. His 2009 novel Tetraktys imagined that members of an ancient Pythagorean cult had figured out how to factor the products of large primes, thereby busting the widely used public-key cryptography on which security on the Internet depends. Juels’ hero in that book was uniquely suited to help the NSA track down the miscreants because he was both a cryptographer and the well-schooled son of an expert on the classical world. Juels could almost be describing himself: before turning to cryptography he studied classical literature at Amherst and Oxford.

Juels’ new book, The Oracle, has much in common with his earlier work. His alter-ego here is a cryptographer working on blockchains and smart contracts. Links to the classical world – in this case, a cult derived from the oracle at Delphi – are provided by an FBI agent and art crime investigator who enlists his help when a rogue smart contract is discovered that offers $10,000 to kill an archeology professor, soon followed by a second contract offering $700,000 for a list of seven targets. Soon afterwards, our protagonist discovers he’s first on that list, and he has only a few days to figure out who wrote the code and save his own life. That quest also includes helping the FBI agent track down some Delphian artifacts that we learn from flashbacks to classical times were removed from the oracle’s temple and hidden.

The Delphi oracle, Juels writes, “revealed divine truth in response to human questions”. The oracles his cryptographer is working on are “a source of truth for questions asked by smart contracts about the real world”. In Juels’ imagining, the rogue assassination contract is issued with trigger words that could be expected to appear in a death announcement. When someone tries to claim the bounty, the smart contract checks news sources for those words, only paying out if it finds them. Juels has worked hard to make the details of both classical and cryptographic worlds comprehensible. They remain stubbornly complex, but you can follow the story easily enough even if you panic at the thought of math.
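The oracle pattern itself is simple enough to sketch. Here is a hedged illustration in Python of the generic keyword-trigger mechanism described above, stripped of any payload; fetch_headlines() is a hypothetical stand-in for a real news API, not anything from the novel or the paper:

```python
# Toy oracle: a contract on a blockchain cannot read the web, so it
# delegates yes/no questions about the real world to an oracle service.

def fetch_headlines() -> list[str]:
    """Placeholder for querying one or more news sources."""
    return ["markets steady as central banks hold rates"]

def oracle_confirms(trigger_words: set[str]) -> bool:
    """Answer the contract's question: have all trigger words appeared?"""
    text = " ".join(fetch_headlines()).lower()
    return all(word.lower() in text for word in trigger_words)

class ConditionalBounty:
    def __init__(self, amount: int, trigger_words: set[str]):
        self.amount = amount
        self.trigger_words = trigger_words
        self.open = True

    def claim(self) -> int:
        """Pays out only if the oracle confirms the real-world condition."""
        if self.open and oracle_confirms(self.trigger_words):
            self.open = False
            return self.amount
        return 0
```

A single news source would be trivially easy to game with a planted story, which is presumably why practical oracle designs aggregate and authenticate multiple sources rather than trusting one feed.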

The tension is real, both within and without the novel. Juels’ idea is credible enough that it’s a relief when he says the contracts as described are not feasible with today’s technology, and may never become so (perhaps especially because the fictional criminal smart contract is written in flawless computer code). The related paper also notes that some details of their scheme have been left out so as not to enable others to create these rogue contracts for real. Whew. For now.

Review: Virtual You

Virtual You: How Building Your Digital Twin Will Revolutionize Medicine and Change Your Life
By Peter Coveney and Roger Highfield
Princeton University Press
ISBN: 978-0-691-22327-8

Probably the quickest way to appreciate how much medicine has changed in a lifetime is to pull out a few episodes of TV medical series over the years: the bloodless 1960s Dr Kildare; the 1980s St Elsewhere, which featured a high-risk early experiment in now-routine cardiac surgery; the growing panoply of machines and equipment of E.R. (1994-2009). But there are always more improvements to be made, and around 2000, when the human genome was being sequenced, we heard a lot about the promise of personalized medicine it was supposed to bring. Then we learned over time that, as so often with scientific advances, knowing more merely served to show us how much more we *didn’t* know – in the genome’s case, about epigenetics, proteomics, and the microbiome. With some exceptions, such as cancers that can be tested for vulnerability to particular drugs, the dream of personalized medicine so far mostly remains just that.

Growing alongside all that have been computer models, most famously used for meteorology and climate-change predictions. As Peter Coveney and Roger Highfield explain in Virtual You, models are expected to play a huge role in medicine, too. The best-known use is in drug development, where modeling can help suggest new candidates. But the use that interests Coveney and Highfield is on the personal level: a digital twin for each of us that can be used to determine the right course of treatment by spotting failures in advance, or to help us make better lifestyle choices tailored to our particular genetic makeup.

This is not your typical book of technology hype. Instead, it’s a careful, methodical explanation of the mathematical and scientific basis for how this technology will work and its state of development from math and physics to biology. As they make clear, developing the technology to create these digital twins is a huge undertaking. Each of us is a massively complex ecosystem generating masses of data and governed by masses of variables. Modeling our analog selves requires greater complexity than may even be possible with classical digital computers. Coveney and Highfield explain all this meticulously.

It’s not as clear to me as it is to them that virtual twins are the future of mainstream “retail” medicine, especially if, as they suggest, they will be continually updated as our bodies produce new data. Some aspects will be too cost-effective to ignore; ensuring that the most expensive treatments are directed only to those who can benefit will be a money saver for any health service. But the vast amount of computational power and resources likely required to build and maintain a virtual twin for each individual seems prohibitive for all but billionaires. As in engineering, where digital twins are used for prototyping, or meteorology, where simulations have led to better and more detailed forecasts, the primary uses seem likely to be at the “wholesale” level. That still leaves room for plenty of revolution.

New phone, who dis?

So I got a new phone. What makes the experience remarkable is that the old phone was a Samsung Galaxy Note 4, which, if Wikipedia is correct, was released in 2014. So the phone was at least eight, probably nine, years old. When you update incrementally, like a man who gets his hair cut once a week, it’s hard to see any difference. When you leapfrog numerous generations of updates, it’s like seeing the man who’s had his first haircut in a year: it’s a shock.

The tl;dr: most of what I don’t like about the switch is because of Google.

There were several reasons why I waited so long. It was a good enough phone and it had a very good camera for its time; I finessed the lack of security updates by not using the phone for functions where it mattered. Also, I didn’t want to give up the disappearing headphone jack, home button, or, especially, user-replaceable battery. The last of those is why I could keep the phone for so long, and it was the biggest deal-breaker.

For that reason, I’ve known for years that the Note’s eventual replacement would likely be a Fairphone, from a Dutch outfit that is doing its best to produce sustainable phones. It’s repairable and user-upgradable (it takes one screwdriver to replace a cracked screen or the camera), and changing the battery takes a second. I had to compromise on the headphone jack, which requires a USB-C dongle. Not having the home button is hard to get used to; I used it constantly. It turns out, though, that it’s even harder to get used to not having the soft button on the bottom left that used to show me recently used apps so I could quickly switch back to the thing I was using a few minutes ago. But that…is software.

The biggest and most noticeable change between Android 6 (the Note 4 got its last software update in 2017) and Android 13 (last week) is the assumptions that both Google, as Android’s owner, and the providers of other apps make about what users want. On the Note 4, I had a quick-access button to turn the wifi on and off. Except for the occasional call over Signal, I saw no reason to keep it on, draining the battery unnecessarily. Today, that same switch is buried several layers deep in settings, with apparently no way to move it into the list of quick-access functions. That’s just one example. But no accommodation for my personal quirks can change the sense of being bullied into giving away more data and control than I’d like.

Giving in to Google does, however, mean an easy transfer of your old phone’s contents to your new phone (if transferring the external SD card isn’t enough).

Too late, I remembered the name Murena – a company that equips Fairphones with de-Googlified Android. As David Pierce writes at The Verge, that requires a huge effort: Murena has built replacements for the standard Google apps and a cloud system for email, calendars, and productivity software. Even so, Pierce writes, apps hit the limit: despite Murena’s effort to preserve user anonymity, it’s just not possible to download them without interacting with Google, especially when payment is required. And who wants to run their phone without third-party apps? Not even me (although I note that many of those I use can still be sideloaded).

The reality is I would have preferred to wait even longer to make the change. I was pushed by the fact that several times recently the Note complained that it couldn’t download email because it was running out of storage space (which is why I would prefer to store everything on an external SD card, but: not an option for email and apps). And on a recent trip to the US, there were numerous occasions where the phone simply didn’t work, even though there shouldn’t be any black spots in places like Boston and San Francisco. A friend suggested that in all likelihood the frequency bands the Note could use were being turned off, while the newer ones replacing them were ones it couldn’t use. I had forgotten that 5G, which I last thought about in 2018, had been arriving. So: new phone. Resentfully.

This kind of forced wastefulness is one of the things Donald Norman talks about in his new book, Design for a Better World. To some extent, the book is a mea culpa: after decades of writing about how to design things better to benefit us as individuals, Norman has recognized the necessity to rethink and replace human-centered design with humanity-centered design. Sustainability is part of that.

Everything around us is driven by design choices. Building unrepairable phones is a choice, and a destructive one, given the amount of rare materials used inside that wind up in landfills instead of in new phones or some other application. The Guardian’s review of the latest Fairphone asks, “Could this be the first phone to last ten years?” I certainly hope so, but if something takes it down before then, it will be an externality like switched-off bands, the end of software updates, or a bank’s decision to require customers to use an app for two-factor authentication and then update it so older phones can’t run it. These are, as Norman writes, complex systems in which the incentives are all misplaced. And so: new phone. Largely unnecessarily.

Illustrations: Personally owned 1970s AT&T phone.


Planned incompatibility

My first portable music player was a monaural Sony cassette player a little bigger than a deck of cards. I think it was intended for office use as a dictation machine, but I hauled it to folk clubs and recorded the songs I liked, and used it to listen to music while in transit. Circa 1977, I was the only one on most planes.

At the time, each portable device had its own charger with its own electrical specification and plug type. Some manufacturers saw this as an opportunity and released so-called “universal” chargers that came with an array of the most common plugs and user-adjustable settings so you could match the original amps and volts. Sony reacted by ensuring that each new generation had a new plug that wasn’t included on the universal chargers…which would then copy it…which would push Sony to come up with yet another new plug. And so on. All in the name of consumer safety, of course.

Sony’s modern equivalents (which of course include Sony itself) don’t need to invent new plugs because more sophisticated methods are available. They can instead insert a computer chip that the main device checks to ensure the part is “genuine”. If the check fails, as it might if you’ve bought your replacement part from a Chinese seller on eBay, the device refuses to let the new part function. This is how Hewlett-Packard has ensured that its inkjet printers won’t work with third-party cartridges, it’s one way that Apple has hobbled third-party repair services, and it’s how, as this week’s news tells us, the PS5 will check its optional disc drives.
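For the curious, here is a hedged sketch of how such a “genuine part” check can work – a generic challenge-response scheme in Python, not any vendor’s actual protocol (real implementations use dedicated authentication chips, often with asymmetric signatures, but the control flow is similar; the key names are hypothetical):

```python
# Illustrative challenge-response "part pairing" check. The device sends
# a random challenge; the part's chip must answer with a MAC computed
# from a secret only the manufacturer provisions. A clone without the
# key cannot produce a valid answer.
import hashlib
import hmac
import os

FACTORY_SECRET = b"provisioned-at-the-factory"  # hypothetical shared key

class ReplacementPart:
    def __init__(self, secret: bytes):
        self._secret = secret

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

def device_accepts(part: ReplacementPart) -> bool:
    """The host device authenticates the part before enabling it."""
    challenge = os.urandom(16)                  # fresh nonce defeats replay
    expected = hmac.new(FACTORY_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(part.respond(challenge), expected)

genuine = ReplacementPart(FACTORY_SECRET)
clone = ReplacementPart(b"third-party-guess")
assert device_accepts(genuine) and not device_accepts(clone)
```

Note that nothing in this flow requires a network; the PS5’s phone-home variant adds the server check on top of, not instead of, this kind of local authentication.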

Except the PS5 has a twist: in order to authenticate the drive the PS5 has to use an Internet connection to contact Sony’s server. I suppose it’s better than John Deere farm equipment, which, Cory Doctorow writes in his new book, The Internet Con: How to Seize the Means of Computation, requires a technician to drive out to a remote farm and type in a code before the new part will work while the farmer waits impatiently. But not by much, if you’re stuck somewhere offline.

“It’s likely that this is a security measure in order to ensure that the disc drive is a legitimate one and not a third party,” Video Gamer speculates. Checking the “legitimacy” of an optional add-on is not what I’d call “security”; in general it’s purely for the purpose of making it hard for customers to buy third-party add-ons (a goal the article does nod at later). Like other forms of digital rights management, the nuisance all accrues to the customer and the benefits, such as they are, accrue only to the manufacturer.

As Doctorow writes, part-pairing, as this practice is known, originated with cars (for this reason, it’s also often known as “VIN locking”, from vehicle identification number), brought in to reduce the motivation to steal cars in order to strip them and sell their parts (which *is* security). The technology sector has embraced and extended this to bolster the Gillette business model: sell inkjet printers cheap and charge higher-than-champagne prices for ink. Apple, Doctorow writes, has used this approach to block repairs in order to sustain new phone sales – good for Apple, but wasteful for the environment and expensive for us. The most appalling of his examples, though, are wheelchairs, which are “VIN-locked and can’t be serviced by a local repair shop”, and medical devices. Making on-location repairs impossible in these cases is evil.

The PS5, though, compounds part-pairing by requiring an Internet connection, a trend that really needs not to catch on. As hundreds of Tesla drivers discovered the hard way during an app server outage, it’s risky to presume those connections will always be there when you need them. Over the last couple of decades, we’ve come to accept that software is not a purchase but a subscription service subject to license. Now, hardware is going the same way, as seemed logical from the late-1990s moment when MIT’s Neil Gershenfeld proposed Things That Think. Back then, I imagined the idea applying to everyday household items, not devices that keep our bodies functioning. This oncoming future is truly dangerous, as Andrea Matwyshyn has been pointing out.

For Doctorow, the solution is to mandate and enforce interoperability, alongside other measures such as antitrust law and the right-to-repair laws that are appearing in many jurisdictions (and which companies like Apple and John Deere have historically opposed). Requiring interoperability would force companies to enable – or at least not to hinder – third-party repairs.

But more than that is going to be needed if we are to avoid a future in which every piece of our personal infrastructure is turned into a subscription service. At The Register, Richard Speed reminds us that Microsoft will end support for Windows 10 in 2025, potentially leaving 400 million PCs stranded. We have seen this before.

I’m not sure anyone in government circles is really thinking about the implications for an aging population. My generation still owns things; you can’t delete my library of paper books or charge me for each reread. But today’s younger generation, for whom everything is a rental…what will they do at retirement age, when income drops but nothing gets cheaper in a world where everything stops working the minute you stop paying? If we don’t force change now, this will be their future.

Illustrations: A John Deere tractor.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.