Revival

There appears to be media consensus: “Bluesky is dead.”

At Commentary, James Meigs calls Bluesky “an expression of the left’s growing hypersensitivity to ideas leftists find offensive”, and says he accepts exTwitter’s “somewhat uglier vibe” in return for “knowing that right-wing views aren’t being deliberately buried”. Then he calls Bluesky “toxic” and a “hermetically sealed social-media bubble”.

At New Media and Marketing, Rich Meyer says Bluesky is in decline and engagement is dropping, and exTwitter is making a comeback.

At Slate, Alex Kirshner and Nitish Pahwa complain that Bluesky feels “empty”, say that its too-serious users are abandoning it because it isn’t fun, and compare it to a “small liberal arts college” and exTwitter to a “large state university”.

At The Spectator, Sean Thomas regrets that “Bluesky is dying” – and claims to have known it would fail from his first visit to the site, “a bad vegan cafe, full of humorless puritans”.

Many of these pieces – Mark Cuban at Fortune, for example, and Megan McArdle at the Washington Post – blame a “lack of diversity of thought”.

As Mike Masnick writes on Techdirt in its defense (Masnick is a Bluesky board member), “It seems a bit odd: when something is supposedly dying or irrelevant, journalists can’t stop writing about it.”

Have they so soon forgotten 2014, when everyone was writing that Twitter was dead?

Commentators may be missing that success for Bluesky looks different: it’s trying to build a protocol-driven ecosystem, not a site. Twitter had one, but destroyed it as its ad-based business model took over. Both Bluesky and Mastodon, which the media largely ignore, aim to let users create their own experience and are building tools that give users as much control as possible. It seems to offend some commentators that one of those tools lets you block people you don’t want to deal with, which is odd, since blocking is the one feature every social site has.

All social media have ups and downs, especially when they’re new (I really wonder how many of these commentators experienced exTwitter in its early days or have looked at Truth Social’s user numbers). Settling into a new environment and rebuilding take time – it may look like the old place, but its affordances are different, and old friends are missing. Meanwhile, anecdotally, some seem to be leaving social media entirely, driven away by privacy issues, toxic behavior, distaste for platform power and its owners, or simply distracted by life. Few of us *have* to use social media.

***

In 2002, the UK’s Financial Services Authority was the first to implement an EU directive allowing private organizations to issue their own electronic money without a banking license if they could meet the capital requirements. At the time, the idea seemed kind of cute, especially since there was a plan to waive some of the requirements for smaller businesses. Everyone wanted micropayments; here was a framework of possibility.

And then nothing much happened. The Register’s report (the first link above) said that organizations such as the Post Office, credit card companies, and mobile operators were considering launching emoney offerings. If they did, the results sank without trace. Instead, we’re all using credit/debit cards to pay for stuff online, just as we were 23 years ago. People are reluctant to trust weird, new-fangled forms of money.

Then, in 2008, came cryptocurrencies – money as lottery ticket.

Last week, the Wall Street Journal reported that Amazon, Wal-Mart, and other multinationals are exploring stablecoins as a customer payment option – in other words, issuing their own cryptocurrencies, pegged to the US dollar. As Andrew Kassel explains at Investopedia, the result could be to bypass credit cards and banks, saving billions in fees.

It’s not clear how this would work, but I’m suspicious of the benefits to consumers. Would I have to buy a company’s stablecoin before doing business with it? And maintain a floating balance? At Axios, Brady Dale explores other possibilities. Ultimately, it sounds like a return to the 1970s, before multipurpose credit cards, when people had store cards from the retailers they used frequently, and paid a load of bills every month. Dale seems optimistic that this could be a win for consumers as well as retailers, but I can’t really see it.

In other words, the idea seems less cute now, less fun technological experiment, more rapacious. There’s another, more disturbing, possibility: the return of the old company town. Say you work for Amazon or Wal-Mart, and they offer you a 10% bonus for taking your pay in their stablecoin. You can’t spend it anywhere but their store, but that’s OK, right, because they stock everything you could possibly want? A modern company town doesn’t necessarily have to be geographical.

I’ve long thought that company towns, which allowed companies to effectively own employees, are the desired endgame for the titans. Elon Musk is heading that way with Starbase, Texas, now inhabited primarily by SpaceX employees, as Elizabeth Crisp reports at The Hill.

I don’t know if the employees who last month voted enthusiastically for the final incorporation of Starbase realize how abusive those old company towns were.

Illustrations: The Starbase sign adjoining Texas Highway 4, in 2023 (via Jenny Hautmann at Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Unscreened

My Canadian friend was tolerant enough to restart his Nissan car so I could snare a pic of the startup screen (above) and read it in full. That’s when he followed the instructions to set the car to send only “limited data” to the company. What is that data? The car didn’t say. (Its manual might, but I didn’t test his patience that far.)

In 2023, a Mozilla Foundation study of US cars’ privacy called them the worst category it had ever tested and named Nissan as the worst offender: “The Japanese car manufacturer admits in their privacy policy to collecting a wide range of information, including sexual activity, health diagnosis data, and genetic data – but doesn’t specify how. They say they can share and sell consumers’ ‘preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes’ to data brokers, law enforcement, and other third parties.”

Fortunately, my friend lives in Canada.

Granting that no one wants to read privacy policies, at least apps and websites display them in their excruciating, fine-print detail. They can afford to because usually you encounter them when you’re not in a hurry. That can’t be said of the startup screen in a car, when you just want to get moving. Its interference has to be short.

The new startup screen confirmed that “limited data” was now being sent. I wasn’t quick enough to read the rest, but it probably warned that some features might not now work – because that’s what they *always* say, and it’s what the settings warned (without specifics) when he changed them.

I assume this setup complies with Canada’s privacy laws. But the friction of consulting a manual or website to find out what data is being sent deters customers from exercising their rights. Like website dark patterns, it’s gamed to serve the company’s interests. It feels like grudging compliance with the law, especially because customers are automatically opted in.

How companies obtain consent is a developing problem. At Privacy International, Gus Hosein considers the future of AI assistants. Naturally, he focuses on the privacy implications: the access they’ll need, the data they’ll be able to extract, the new kinds of datasets they’ll create. And he predicts that automation will tempt developers to bypass the friction of consent and permission. In 2013, we considered this with respect to emotionally manipulative pet robots.

I had to pause when Hosein got to “screenless devices” to follow some links. At Digital Trends, Luke Edwards summarizes a report at The Information that OpenAI may acquire io Products. This start-up, led by renowned former Apple hardware designer Jony Ive and OpenAI CEO Sam Altman, intends to create AI voice-enabled assistants that may (or may not) take the form of a screenless “phone” or household device.

Meanwhile, The Vergecast (MP3) reports that Samsung is releasing Ballie, a domestic robot infused with Google Cloud’s generative AI that can “engage in natural, conversational interactions to help users manage home environments”. Samsung suggests you can have it greet people at the door. So much nicer than being greeted by your host.

These remind me of the Humane AI pin, whose ten-month product life ended in February with HP’s purchase of the company’s assets for $116 million. Or the similar Bee, whose “summaries” of meetings and conversations Victoria Song at The Verge called “fan fiction”. Yes: as factual as any other generative AI chatbot. More notably, the Bee couldn’t record silent but meaningful events, leading Song to wonder, “In a hypothetical future where everyone has a Bee, do unspoken memories simply not exist?”

In November 2018, a Reuters Institute report on the future of news found that on a desktop computer the web can offer 20 news links at a time; a smartphone has room for seven, and smart speakers just one. Four years later, smart speakers were struggling as a category because manufacturers couldn’t make money out of them. But apparently Silicon Valley still thinks the shrunken communications channel of voice beats typing and reading, and is plowing on. It gives them greater control.

The thin, linear stream of information is why Hosein foresees the temptation to avoid the friction of consent. But the fine-grained user control he wants will, I think, mandate offloading the work of reviewing collected data and granting or revoking permissions onto a device with a screen. Like smart watches, these screenless devices will have to be part of a system. What Hosein wants, and what Cory Doctorow has advocated for web browsers, is that these technological objects should be loyal agents. That is, they must favor *our* interests over those of their manufacturer, or we won’t trust them.

More complicated is the situation with respect to incops – incidentally co-present [people] – whose data is also captured without their consent: me in my friend’s car, everyone Song encountered. Mozilla reported that Subaru claims that by getting in the car passengers become “users” who consent to data collection (as if); several other manufacturers say that the driver is responsible for notifying passengers. Song found it easier to mute the Bee in her office and while commuting than to ask colleagues and passersby for permission to record. Then she found it didn’t always disconnect when she told it to…

So now imagine that car saturated with an agentic AI assistant that decides where you want to go and drives you there.

“You don’t want to do that, Dave.”

Illustrations: The start-up screen in my Canadian friend’s car.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Predatory inclusion

The recent past is a foreign country; they view the world differently there.

At last week’s We Robot conference on technology, policy, and law, the indefatigably detail-oriented Sue Glueck was the first to call out a reference to the propagation of transparency and accountability by the “US and its allies” as newly out of date. From where we were sitting in Windsor, Ontario, its conjoined fraternal twin, Detroit, Michigan, was clearly visible just across the river. But: recent events.

As Ottawa law professor Teresa Scassa put it, “Before our very ugly breakup with the United States…” Citing Anu Bradford, she went on, “Canada was trying to straddle these two [US and EU] digital empires.” Canada’s human rights and privacy traditions seem closer to those of the EU, even though shared geography means the US and Canada are superficially more similar.

We’ve all long accepted that the “technology is neutral” claim of the 1990s is nonsense – see, just this week, Luke O’Brien’s study at Mother Jones of the far-right origins of the web-scraping facial recognition company Clearview AI. The paper Glueck called out, co-authored in 2024 by Woody Hartzog, wants US lawmakers to take a tougher approach to regulating AI and ban entirely some systems that are fundamentally unfair. Facial recognition, for example, is known to be inaccurate and biased, but improving its accuracy raises new dangers of targeting and weaponization, a reality Cynthia Khoo called “predatory inclusion”. If he were writing this paper now, Hartzog said, he would acknowledge that it’s become clear that some governments, not just Silicon Valley, see AI as a tool to destroy institutions. I don’t *think* he was looking at the American flags across the water.

Later, Khoo pointed out her paper on current negotiations between the US and Canada to develop a bilateral law enforcement data-sharing agreement under the US CLOUD Act. The result could allow US police to surveil Canadians at home, undermining the country’s constitutional human rights and privacy laws.

In her paper, Clare Huntington proposed deriving approaches to human relationships with robots from family law. It can, she argued, provide analogies to harms such as emotional abuse, isolation, addiction, invasion of privacy, and algorithmic discrimination. In response, Kate Darling, who has long studied human responses to robots, raised an additional factor exacerbating the power imbalance in such cases: companies, “because people think they’re talking to a chatbot when they’re really talking to a company.” That extreme power imbalance is what matters when trying to mitigate risk (see also Sarah Wynn-Williams’ recent book and Congressional testimony on Facebook’s use of data to target vulnerable teens).

In many cases, however, we are not agents deciding to have relationships with robots but what AJung Moon called “incops”, or “incidentally co-present”. In the case of the Estonian Starship delivery robots you can find in cities from San Francisco to Milton Keynes, that broad category includes human drivers, pedestrians, and cyclists who share their spaces. In a study, Adeline Schneider found that white men tended to be more concerned about damage to the robot, while others worried more about communication, the data they captured, safety, and security. Delivery robots are, however, typically designed with only direct users in mind, not others who may have to interact with them.

These are all social problems, not technological ones, as conference chair Kristen Thomasen observed. Carys Craig later modified it: technology “has compounded the problems”.

This is the perennial We Robot question: what makes robots special? What qualities require new laws? Just as we asked about the Internet in 1995, when are robots just new tools for old rope, and when do they bring entirely new problems? In addition, who is responsible in such cases? This was asked in a discussion of Beatrice Panattoni’s paper on Italian proposals to impose harsher penalties for crime committed with AI or facilitated by robots. The pre-conference workshop raised the same question. We already know the answer: everyone will try to blame someone or everyone else. But in formulating a legal response, will we tinker around the edges or fundamentally question the criminal justice system? Andrew Selbst helpfully summed up: “A law focusing on specific harms impedes a structural view.”

At We Robot 2012, it was novel to push lawyers and engineers to think jointly about policy and robots. Now, as more disciplines join the conversation, familiar Internet problems surface in new forms. Human-robot interaction is a four-dimensional version of human-computer interaction; I got flashbacks to old hacking debates when Elizabeth Joh wondered in response to Panattoni’s paper if transforming a robot into a criminal should be punished; and a discussion of the use of images of medicalized children for decades in fundraising invoked publicity rights and tricky issues of consent.

Also consent-related, lawyers are starting to use generative AI to draft contracts, a step that Katie Szilagyi and Marina Pavlović suggested further diminishes the bargaining power already lost to “clickwrap”. Automation may strip away our remaining ability to object, extending that loss beyond the boilerplate terms and conditions imposed on us by sites and services into more specialized circumstances. Consent traditionally depends on a now-absent “meeting of minds”.

The arc of We Robot began with enthusiasm for robots, which waned as big data and generative AI became players. Now, robots/AI are appearing as something being done to us.

Illustrations: Detroit, seen across the river from Windsor, Ontario with a Canadian Coast Guard boat in the foreground.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Cognitive dissonance

The annual State of the Net, in Washington, DC, always attracts politically diverse viewpoints. This year was especially divided.

Three elements stood out: the divergence between the only remaining member of the Privacy and Civil Liberties Oversight Board (PCLOB) and a recently-fired colleague; a contentious panel on content moderation; and the “yay, American innovation!” approach to regulation.

As noted previously, on January 29 the days-old Trump administration fired PCLOB members Travis LeBlanc, Ed Felten, and chair Sharon Bradford Franklin; the board’s fifth seat was already empty.

Not to worry, remaining member Beth Williams said. “We are open for business. Our work conducting important independent oversight of the intelligence community has not ended just because we’re currently sub-quorum.” Flying solo, she can greenlight publication, direct work, and review new procedures and policies; she can’t start new projects. A review of the EU-US Privacy Framework under Executive Order 14086 (2022) is ongoing. Williams seemed more interested in restricting government censorship and abuse of financial data in the name of combating domestic terrorism.

Soon afterwards, LeBlanc, whose firing has him considering “legal options”, told Brian Fung that the outcome of next year’s reauthorization of Section 702, which covers foreign surveillance programs, keeps him awake at night. Earlier, Williams noted that she and Richard E. DeZinno, who left in 2023, wrote a “minority report” recommending “major” structural change within the FBI to prevent weaponization of S702.

LeBlanc is also concerned that agencies at the border are coordinating with the FBI to surveil US persons as well as migrants. More broadly, he said, gutting the PCLOB costs it independence, expertise, trustworthiness, and credibility and limits public options for redress. He thinks the EU-US data privacy framework could indeed be at risk.

A friend called the panel on content moderation “surreal” in its divisions. Yael Eisenstat and Joel Thayer tried valiantly to disentangle questions of accountability and transparency from free speech. To little avail: Jacob Mchangama and Ari Cohn kept tangling them back up again.

This largely reflects Congressional debates. As in the UK, there is bipartisan concern about child safety – see also the proposed Kids Online Safety Act – but Republicans also separately push hard on “free speech”, claiming that conservative voices are being disproportionately silenced. Meanwhile, organizations that study online speech patterns and could perhaps establish whether that’s true are being attacked and silenced.

Eisenstat tried to draw boundaries between speech and companies’ actions. She can still find on Facebook the same Telegram ads containing illegal child sexual abuse material that she found when Telegram CEO Pavel Durov was arrested. Despite violating Meta’s terms and conditions, the ads bring it profits. “How is that a free speech debate as opposed to a company responsibility debate?”

Thayer seconded her: “What speech interests do these companies have other than to collect data and keep you on their platforms?”

By contrast, Mchangama complained that overblocking – that is, restricting legal speech – is seen across EU countries. “The better solution is to empower users.” Cohn also disliked the UK and European push to hold platforms responsible for fulfilling their own terms and conditions. “When you get to whether platforms are living up to their content moderation standards, that puts the government and courts in the position of having to second-guess platforms’ editorial decisions.”

But Cohn was talking about legal content; Eisenstat was talking about illegal activity: “We’re talking about distribution mechanisms.” In the end, she said, “We are a democracy, and part of that is having the right to understand how companies affect our health and lives.” Instead, these debates persist because we lack factual knowledge of what goes on inside the platforms. If we can’t figure out accountability for these platforms, “This will be the only industry above the law while becoming the richest companies in the world.”

Twenty-five years after data protection became a fundamental right in Europe, the DC crowd still seem to see it as a regulation in search of a deal. Representative Kat Cammack (R-FL), who described herself as the “designated IT person” on the energy and commerce committee, was particularly excited that policy surrounding emerging technologies could be industry-driven, because “Congress is *old*!” and DC is designed to move slowly. “There will always be concerns about data and privacy, but we can navigate that. We can’t deter innovation and expect to flourish.”

Others also expressed enthusiasm for “the great opportunities in front of our country” and compared the EU’s Digital Markets Act to a toll plaza congesting I-95. Samir Jain, on the AI governance panel, suggested the EU may be “reconsidering its approach”. US senator Marsha Blackburn (R-TN) highlighted China’s threat to US cybersecurity without noting the US’s own goal, CALEA.

On that same AI panel, Olivia Zhu, the Assistant Director for AI Policy for the White House Office of Science and Technology Policy, seemed more realistic: “Companies operate globally, and have to do so under the EU AI Act. The reality is they are racing to comply with [it]. Disengaging from that risks a cacophony of regulations worldwide.”

Shortly before, Johnny Ryan, a Senior Fellow at the Irish Council for Civil Liberties, posted: “EU Commission has dumped the AI Liability Directive. Presumably for ‘innovation’. But China, which has the toughest AI law in the world, is out innovating everyone.”

Illustrations: Kat Cammack (R-FL) at State of the Net 2025.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Blown

“This is a public place. Everyone has the right to be left in peace,” Jane (Vanessa Redgrave) tells Thomas (David Hemmings), whom she’s just spotted photographing her with her lover in the 1966 film Blow-Up, by Michelangelo Antonioni. The movie, set in London, proceeds as a mystery in which Thomas’s only tangible evidence is a grainy, blown-up shot of a blob that may be a murdered body.

Today, Thomas would probably be wielding a latest-model smartphone instead of a single-lens reflex film camera. He would not bother to hide behind a tree. And Jane would probably never notice, much less challenge Thomas to explain his clearly-not-illegal, though creepy, behavior. Phones and cameras are everywhere. If you want to meet a lover and be sure no one’s photographing you, you don’t go to a public park, even one as empty as the film finds Maryon Park. Today’s 20-somethings grew up with that reality, and learned early to agree that some gatherings are no-photography zones.

Even in the 1960s individuals had cameras, but taking high-quality images at a distance was the province of a small minority of experts; Antonioni’s photographer was a professional with his own darkroom and enlarging equipment. The first CCTV cameras went up in the 1960s; their proliferation became a public policy issue in the 1980s, and was propagandized as “for your safety” without much thought in the post-9/11 2000s. In the late 2010s, CCTV surveillance became democratized: my neighbor’s Ring camera means no one can leave an anonymous gift on their doorstep – or (without my consent) mine.

I suspect one reason we became largely complacent about ubiquitous cameras is that the images mostly remained unidentifiable, or at least unidentified. Facial recognition – especially the live variant police seem to feel they have the right to set up at will – is changing all that. Which all leads to this week, when Joseph Cox at 404 Media reports ($) (and Ars Technica summarizes) that two Harvard students have mashed up a pair of unremarkable $300 Meta Ray-Bans with the reverse image search service Pimeyes and a large language model to produce I-XRAY, an app that identifies in near-real time most of the people they pass on the street, including their name, home address, and phone number.

The students – AnhPhu Nguyen and Caine Ardayfio – are smart enough to realize the implications, imagining for Cox the scenario of a random male spotting a young woman and following her home. This news is breaking the same week that the San Francisco Standard and others are reporting that two men in San Francisco stood in front of a driverless Waymo taxi to block it from proceeding while demanding that the female passenger inside give them her phone number (we used to give such males the local phone number for time and temperature).

Nguyen and Ardayfio aren’t releasing the code they’ve written, but what two people can do, others with fewer ethics can recreate independently, as 30 years of Black Hat and Def Con have proved. This is a new level of democratized surveillance. Today, giant databases like Clearview AI are largely only accessible to governments and law enforcement. But the data in them has been scraped from the web, like LLMs’ training data, and merged with commercial sources.

This latest prospective threat to privacy has been created by the marriage of three technologies that were developed separately by different actors without regard to one another and, more important, without imagining how one might magnify the privacy risks of the others. A connected car with cameras could also run I-XRAY.

The San Francisco story is a good argument against allowing cars on the roads without steering wheels, pedals, and other controls or *something* to allow a passenger to take charge to protect their own safety. In Manhattan cars waiting at certain traffic lights often used to be approached by people who would wash the windshield and demand payment. Experienced drivers knew to hang back at red lights so they could roll forward past the oncoming would-be washer. How would you do this in a driverless car with no controls?

We’ve long known that people will prank autonomous cars. Coverage focused on the safety of the *cars* and the people and vehicles surrounding them, not the passengers. Calling a remote technical support line for help is never going to get a good enough response.

What ties these two cases together – besides (potentially) providing new ways to harass women – is the collision between new technologies and human nature. Plus, the merger of three decades’ worth of piled-up data and software that can make things happen in the physical world.

Arguably, we should have seen this coming, but the manufacturers of new technology have never been good at predicting what weird things their users will find to do with it. This mattered less when the worst outcome was using spreadsheet software to write letters. Today, that sort of imaginative failure is happening at scale in software that controls physical objects and penetrates the physical world. The risks are vastly greater and far more unsettling. It’s not that we can’t see the forest for the trees; it’s that we can’t see the potential for trees to aggregate into a forest.

Illustrations: Jane (Vanessa Redgrave) and her lover, being photographed by Thomas (David Hemmings) in Michelangelo Antonioni’s 1966 film, Blow-Up.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Review: Money in the Metaverse

Money in the Metaverse: Digital Assets, Online Identities, Spatial Computing, and Why Virtual Worlds Mean Real Business
by David Birch and Victoria Richardson
London Publishing Partnership
ISBN: 978-1-916749-05-4

In my area of London there are two buildings whose architecture unmistakably identifies them as former banks. Time has moved on, and one houses a Pizza Express, the other a Tesco Direct. The obviously-built-to-be-a-Post-Office building, too, is now a restaurant, and the post office itself now occupies a corner of a newsagent’s. They illustrate a point David Birch has frequently made: there is nothing permanent about our financial arrangements. Banking itself is only a few hundred years old.

Writing with Victoria Richardson in their new book Money in the Metaverse, Birch argues this point anew. At one time paper notes seemed as shocking and absurd as cryptocurrencies and non-fungible tokens do today. The skeptic reads that and wonders if the early days of paper notes were as rife with fraud and hot air as NFTs have been. Is the metaverse even still a thing? It’s all AI hype round here now.

Birch and Richardson, however, believe that increasingly our lives will be lived online – a flight to the “cyburbs”, they call it. In one of their early examples of our future, they suggest it will be good value to pay for a virtual ticket (NFT) to sit next to a friend to listen to a concert in a virtual auditorium. It may be relevant that they were likely writing this during the acute phase of the covid pandemic. By now, most of the people I zoomed with then are back doing things in the real world and are highly resistant to returning to virtual, or even hybrid, meetups.

But exactly how financial services might operate isn’t really their point and would be hard to get right even if it were. Instead, their goal is to explain various novel financial technologies and tools such as NFTs, wallets, smart contracts, and digital identities, and to suggest possible strategies for businesses to use them to build services. Some of the underlying ideas have been around for at least a couple of decades: software agents that negotiate on an individual’s behalf, and support for multiple disconnected identities to be used in the different roles in life we all have, for example. Others are services that seem to have little to do with the metaverse, such as paperless air travel, already being implemented, and virtual tours of travel destinations, which have been with us in some form since video arrived on the web.

The key question – whether the metaverse will see mass adoption – is not one Birch and Richardson can answer. Certainly, I’m dubious about some of the use cases they propose – such as the idea of gamifying life insurance by offering reduced premiums to those who reach various thresholds of physical activity or healthy living. Insurance is supposed to manage risk by pooling it; their proposal would penalize disability and illness.

A second question occurs: what new kinds of crime will these technologies enable? Just this week, Fortune reported that cashlessness has brought a new level of crime to Sweden. Why should the metaverse be different? This, too, is beyond the scope of Birch’s and Richardson’s work, which is to explain but not to either hype or critique. The overall impression the book leaves, however, is of a too-clean computer-generated landscape or smart city mockup, where the messiness of real life is missing.

Admiring the problem

In one sense, the EU’s barely dry AI Act, like the rest of its complex legislation – the Digital Markets Act, the Digital Services Act, GDPR, and so on – is a triumph. Flawed it may be, but it’s a genuine attempt to protect citizens’ human rights against a technology that is being birthed with numerous trigger warnings. The AI-with-everything program at this year’s Computers, Privacy, and Data Protection conference reflected that sense of accomplishment – but also the frustration that comes with knowing that all legislation is flawed, all technology companies try to game the system, and gaps will widen.

CPDP has had these moments before: new legislation always comes with a large dollop of frustration over the opportunities that were missed and the knowledge that newer technologies are already rushing forwards. AI, and the AI Act, more or less swallowed this year’s conference as people considered what it says, how it will play internationally, and the necessary details of implementation and enforcement. Two years ago at this event, inadequate enforcement of GDPR was a big topic.

The most interesting future gaps that emerged this year: monopoly power, quantum sensing, and spatial computing.

For at least 20 years we’ve been hearing about quantum computing’s potential threat to public key encryption – that day of doom has been ten years away as long as I can remember, just as the Singularity is always 30 years away. In the panel on quantum sensing, Chris Hoofnagle argued that, as he and Simson Garfinkel recently wrote at Lawfare and in their new book, quantum cryptanalysis is overhyped as a threat (although there are many opportunities for quantum computing in chemistry and materials science). However, quantum sensing is here now, works (because qubits are fragile), and is cheap. There is plenty of privacy threat here to go around: quantum sensing will benefit entirely different classes of intelligence, particularly remote, undetectable surveillance.

Hoofnagle and Garfinkel are calling this MASINT, for measurement and signature intelligence, and believe that it will become very difficult to hide things, even at a national level. In Hoofnagle’s example, a quantum sensor-equipped drone could fly over the homes of parolees to scan for guns.

Quantum sensing and spatial computing have this in common: they both enable unprecedented passive data collection. VR headsets, for example, collect all sorts of biomechanical data that can be mined more easily for personal information than people expect.

Barring change, all that data will be collected by today’s already-powerful entities.

The deeper level on which all this legislation fails particularly exercised Cristina Caffarra, the co-founder of the Centre for Economic Policy Research, in the panel on AI and monopoly. All of it, she said, is basically nibbling around the edges, because none of it touches the real, fundamental problem: the power being amassed by the handful of companies that own the infrastructure.

“It’s economics 101. You can have as much downstream competition as you like but you will never disperse the power upstream.” The reports and other material generated by government agencies like the UK’s Competition and Markets Authority are, she says, just “admiring the problem”.

A day earlier, the Novi Sad professor Vladan Joler had already pointed out the fundamental problem: at the dawn of the Internet anyone could start with nothing and build something; what we’re calling “AI” requires billions in investment, so it comes pre-monopolized. Many people dismiss Europe for not having its own homegrown Big Tech, but that overlooks open technologies: the Raspberry Pi, Linux, and the web itself, which all have European origins.

In 2010, the now-departing MP Robert Halfon (Con-Harlow) said at an event on reining in technology companies that only a company the size of Google – not even a government – could create Street View. Legend has it that open source geeks heard that as a challenge, and so we have OpenStreetMap. Caffarra’s fiery anger raises the question: at what point do the infrastructure providers become so entrenched that they could choke off an open source competitor at birth? Caffarra wants to build a digital public interest infrastructure using the gaps where Big Tech doesn’t yet have that control.

The Dutch Groenlinks MEP Kim van Sparrentak offered an explanation for why the AI Act doesn’t address market concentration: “They still dream of a European champion who will rule the world.” An analogy springs to mind: people who vote for tax cuts for billionaires because one day that might be *them*. Meanwhile, the UK’s Competition and Markets Authority finds nothing to investigate in Microsoft’s partnership with the French AI startup Mistral.

Van Sparrentak thinks one way out is through public procurement: adopt goals of privacy and sustainability, and support European companies. It makes sense; as the AI Now Institute’s Amba Kak noted, at the moment almost everything anyone does digitally has to go through the systems of at least one Big Tech company.

As Sebastiano Toffaletti, head of the secretariat of the European SME Alliance, put it, “Even if you had all the money in the world, these guys still have more data than you. If you don’t and can’t solve it, you won’t have anyone to challenge these companies.”

Illustrations: Vladan Joler shows Anatomy of an AI System, a map he devised with Kate Crawford of the human labor, data, and planetary resources that are extracted to make “AI”.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Microsoft can remember it for you wholesale

A new theory: somewhere in the Silicon Valley universe there’s a cadre of techies who have eidetic memories and they’re feeling them start to slip. Panic time.

That’s my best explanation for Microsoft’s latest wheeze, a new feature for its Copilot assistant that will take what’s variously called a “snapshot” or a “screenshot” of your computer (all three monitors?) every five seconds and store it for future reference. Microsoft hasn’t explained much about Recall’s inner technical workings, but according to the announcement, the data will be stored locally and will be searchable via semantic associations and some sort of “AI”. Microsoft also says the data will not be used to train AI models.
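
To make concrete what that description implies, here is a minimal sketch of the announced pattern – grab the screen periodically, extract the text, keep it in a searchable local store. This is emphatically *not* Microsoft’s implementation, whose inner workings it hasn’t explained; the sketch assumes the third-party Python libraries mss and pytesseract (plus a local Tesseract install), chosen purely for illustration.

```python
# A toy sketch of the Recall-style pattern, NOT Microsoft's code:
# capture the whole screen every five seconds, OCR it, and store the
# extracted text in a local, searchable SQLite database.
# Assumes the third-party mss and pytesseract libraries and a local
# Tesseract install, used here only for illustration.
import sqlite3
import time

import mss
import pytesseract
from PIL import Image

db = sqlite3.connect("recall_sketch.db")
db.execute("CREATE TABLE IF NOT EXISTS snaps (ts REAL, text TEXT)")

with mss.mss() as sct:
    while True:
        shot = sct.grab(sct.monitors[0])          # monitor 0 = all monitors combined
        img = Image.frombytes("RGB", shot.size, shot.rgb)
        text = pytesseract.image_to_string(img)   # OCR the screenshot
        db.execute("INSERT INTO snaps VALUES (?, ?)", (time.time(), text))
        db.commit()
        time.sleep(5)                             # the announced five-second interval
```

Even this toy version makes the worry obvious: everything that appears on screen ends up, as text, in a single local database file that anyone (or anything) with access to the machine can copy and query.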

The general anger and dismay at this plan brings back, almost nostalgically, memories of the 1990s, when Microsoft was near-universally hated as the evil monopolist dominating computing. In 2008, when Google was ten years old, a BBC presenter asked me if I thought Google would ever be hated as much as Microsoft was (not then, no). In 2012, veteran journalist Charles Arthur published the book Digital Wars about how Microsoft had stagnated and lost its lead. And then suddenly, in the last few years, it’s back on top.

Possibilities occur that Microsoft doesn’t mention. For example: could software be embedded into Windows to draw inferences from the data Recall saves? And could those inferences be forwarded to the company or used to target you with ads? That seems like a far more efficient way to invade users’ privacy than copying the data itself, if that’s what the company ultimately wants to do.

Lots of things on our computers already retain a “memory” of what we’ve been doing. Operating systems generate logs to help debug problems. Word processors retain a changelog, which powers the ability to undo mistakes. Web browsers have user-configurable histories; email software has archives; media players retain playlists. All of those are useful – but part of that usefulness is that they are contextual, limited, and either easily terminated by closing the relevant application or relatively easily edited to remove items that shouldn’t be kept.

It’s hard for almost everyone who isn’t Microsoft to understand the point of keeping everything by default. It seems like a feature only developers could love. I certainly would like Windows to be better at searching for stored files or my (Firefox) browser to be better at reloading that article I was reading yesterday. I have even longed for a personal version of Vannevar Bush’s Memex. As part of that, I might welcome a feature that let me hit a button to record the last five useful minutes of a meeting, or save a social media post to a local archive. But the key to that sort of memory expansion is curation, not remembering everything promiscuously. For most people, selective forgetting is how we survive the torrents of irrelevance hurled at us every day.

What Recall sounds most like is the lifelog science fiction writer Charlie Stross imagined in 2007 might be our future. Plummeting storage costs and expanding capacity, he reasoned, would make it possible to store *everything* in your pocket. Even then, there were (a very few) people doing that sort of thing, most notably Steve Mann, a University of Toronto professor who started wearing devices to comprehensively capture his life as a 1990s graduate student. Over the years, Mann has shrunk his personal gadget array from a laptop and peripherals to glasses and pocket devices. Many more people capture their surroundings now – but they do it on their phones. If Apple or Google were proposing a Recall feature for iOS or Android, the idea would seem a lot less weird.

The real issue is that there are many people who would like to be able to know what someone *else* has been doing on their computer at all times. Helicopter parents. Schools and teachers under government compulsion (see for example Prevent (PDF)). Employers. Border guards. Corporate spies. The Department for Work and Pensions. Authoritarian governments. Law enforcement and security agencies. Criminals. Domestic abusers… So developing any feature like this must include considering how to protect it against these threats. This does not appear to have happened.

Many others have written about the privacy issues in all this – the UK’s Information Commissioner’s Office is already investigating. At The Register, Richard Speed does a particularly good job of looking at some of the fine details. On Mastodon, Kevin Beaumont says inspection of the Copilot+ software suggests that Recall stores the text it extracts from all those snapshots in an easily copiable SQLite database.

But there’s still more. The kind of archive Recall appears to construct can teach an attacker how the target thinks: not just what passwords they choose but how they devise them. Those patterns can be highly valuable. Granted, few targets are worth that level of attention, but it happens, as Peter Davies, a technical director at Thales, has often warned.

Recall is not the only move – see also flawed-AI-with-everything – that suggests that the computer industry, like some politicians and governments, is badly losing touch with the public. Increasingly, what they want to do seems unrelated to what the rest of us want. If they think things like Recall are a good idea they need to read more Philip K. Dick. And then don’t invent the Torment Nexus.

Illustrations: Arnold Schwarzenegger seeking better memories in the 1990 film Total Recall.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Alabama never got the bomb

There is this to be said for nuclear weapons: they haven’t scaled. Since 1969, when Tom Lehrer warned about proliferation (“We’ll try to stay serene and calm | When Alabama gets the bomb”), a world of treaties, regulation, and deterrents has helped, but even if it hadn’t, building and updating nuclear weapons remains stubbornly expensive. (That said, the current situation is scary enough.)

The same will not be true of drones, James Patton Rogers explained in a recent talk at King’s College London about his new book, Precision: A History of American Warfare. Already, he says, drones are within reach for non-governmental actors such as Mexican drug cartels. At the BBC, Jonathan Marcus estimated in February 2022 that more than 100 nations and non-state actors already have combat drones and that these systems are proliferating rapidly. The brief moment in which the US and Israel had an exclusive edge is already gone; Rogers says Iran and Turkey are “drone powers”. Back to the BBC in 2022: Marcus writes that some terrorist groups had already been able to build attack drone systems using commercial components for a few hundred dollars. Rogers put the number of countries with drone capability in 2023 at 113, plus 65 armed groups. He also called them one of the “greatest threats to state security”, noting the speed and abruptness with which they’ve flipped from protective to threatening, and their potential for “assassinations, strikes, saturation attacks”.

Rogers, who calls his book an “intellectual history”, traces the beginnings of precision to the end of the long, muddy, casualty-filled conflict of World War I. Never again: instead, remote attacks on military-industrial targets that limit troops on the ground and loss of life. The arrival of the atomic bomb and Russia’s development of same changed focus to the Dr Strangelove-style desire for the technology to mount massive retaliation. John F. Kennedy successfully campaigned on the missile gap. (In this part of Rogers’ presentation, it was impossible not to imagine how effective this amount of energy could have been if directed toward climate change…)

The 1990s and the Gulf War brought a revival of precision in the form of the first cruise missiles and the first drones. But as long ago as 1988 there were warnings that the US could not monopolize drones and they would become a threat. “We need an international accord to control drone proliferation,” Rogers said.

But the threat to state security was not Rogers’ answer when an audience member asked him, “What keeps you awake at night?”

“Drone mass killings targeting ethnic diasporas in cities.”

Authoritarian governments have long reached out to control opposition outside their borders. In 1974, I rented an apartment from the Greek owner of a highly regarded local restaurant. A day later, a friend reacted in horror: didn’t I know that restaurateur was persona-non-patronize because he had reported Greek student protesters in Ithaca, New York, to the military junta then in power, and there had been consequences for their families back home? No, I did not.

As an informant, however, my landlord’s powers were limited. He could go to and photograph protests; if he couldn’t identify the students, he could still send their pictures. But he couldn’t amass comprehensive location data tracking their daily lives, operate a facial recognition system, or monitor them on social media and infer their social graphs. A modern authoritarian government equipped with Internet connections can do all of that and more, and the data it can’t gather itself it can obtain by purchase, contract, theft, hacking, or compulsion.

In Canada, opponents of Chinese Communist Party policies report harassment and intimidation. Freedom House reports that China’s transnational repression also includes spyware, digital threats, physical assault, and cooption of other countries, all escalating since 2014. There’s no reason for this sort of thing to be limited to the Chinese (and Russians); Citizen Lab has myriad examples of governments’ use of spyware to target journalists, political opponents, and activists, inside or outside the countries where they’re active.

Today, even in democratic countries there is an ongoing trend toward increased and more militaristic surveillance of migrants and borders. In 2021, Statewatch reported on the militarization of the EU’s borders along the Mediterranean, including a collaboration between Airbus and two Israeli companies to use drones to intercept migrant vessels. Another workshop that same year made plain the way migrants are being dataveilled by both governments and the aid agencies they rely on for help. In 2022, the courts ordered the UK government to stop seizing the smartphones belonging to migrants arriving in small boats.

Most people remain unaware of this unless some politician boasts about it as part of a tough-on-immigration platform. In general, rights for any kind of foreigners – immigrants, ethnic minorities – are a hard sell, if only because non-citizens have no vote, and an even harder one against the headwind of “they are not us” rhetoric. Threats of the kind Rogers imagined are not the sort nations are in the habit of protecting against.

It isn’t much of a stretch to imagine all those invasive technologies being harnessed to build a detailed map of particular communities. From there, given affordable drones, you just need to develop enough malevolence to want to kill them off, and be the sort of country that doesn’t care if the rest of the world despises you for it.

Illustrations: British migrants to Australia in 1949 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Borderlines

Think back to the year 2000. New York’s World Trade Center still stood. Personal digital assistants were a niche market. There were no smartphones (the iPhone arrived in 2007) or tablets (the iPad took until 2010). Social media was nascent; Facebook first opened in 2004. The Good Friday Agreement was just two years old, and for many in Britain “terrorists” were still “Irish”. *That* was when the UK passed the Terrorism Act (2000).

Usually when someone says the law can’t keep up with technological change they mean that technology can preempt regulation at speed. What the documentary Phantom Parrot shows, however, is that technological change can profoundly alter the consequences of laws already on the books. The film’s worked example is Schedule 7 of the 2000 Terrorism Act, which empowers police to stop, question, search, and detain people passing through the UK’s borders. They do not need prior authority or suspicion, but may only stop and question people for the purpose of determining whether the individual may be or have been concerned in the commission, preparation, or instigation of acts of terrorism.

Today this law means that anyone arriving at the UK border may be compelled to unlock access to data charting their entire lives. The Hansard record of the debate on the bill shows clearly that lawmakers foresaw problems: the classification of protesters as terrorists, the uselessness of fighting terrorism by imprisoning the innocent (Jeremy Corbyn), the reversal of the presumption of innocence. But they could not foresee how far-reaching the powers the bill granted would become.

The film’s framing story begins in November 2016, when Muhammed Rabbani arrived at London’s Heathrow Airport from Doha and was stopped and questioned by police under Schedule 7. They took his phone and laptop and asked for his passwords. He refused to supply them. On previous occasions, when he had similarly refused, they’d let him go. This time, he was arrested. Under Schedule 7, the penalty for such a refusal can be up to three months in jail.

Rabbani is managing director of CAGE International, a human rights organization that began by focusing on prisoners seized under the war on terror and expanded its mission to cover “confronting other rule of law abuses taking place under UK counter-terrorism strategy”. Rabbani’s refusal to disclose his passwords was, he said later, because he was carrying 30,000 confidential documents relating to a client’s case. A lawyer can claim client confidentiality; an NGO cannot. In 2018, the appeals court ruled the password demands were lawful.

In September 2017, Rabbani was convicted. He was given a 12-month conditional discharge and ordered to pay £620 in costs. As Rabbani says in the film, “The law made me a terrorist.” No one suspected him of being a terrorist or placing anyone in danger, but the judge made clear she had no choice under the law, and so he was nonetheless convicted of a terrorism offense. On appeal in 2018, his conviction was upheld. We see him collect his returned devices – five years on from his original detention.

Britain is not the only country that regards him with suspicion. Citing his conviction, in 2023 France banned him, and, he claims, Poland deported him.

Unsurprisingly, CAGE is on the first list of groups that may be dubbed “extremist” under the new definition of extremism released last week by communities secretary Michael Gove. The direct consequence of this designation is a ban on participation in public life – chiefly, meetings with central and local government. The expansion of the meaning of “extremist”, however, is alarming activists on all sides.

Director Kate Stonehill tells the story of Rabbani’s detention partly through interviews and partly through a reenactment using wireframe-style graphics and a synthesized voice that reads out questions and answers from the interview transcripts. A cello of doom provides background ominance. Laced through this narrative are others. A retired law enforcement officer teaches a class to use extraction and analysis tools, in which we see how extensive the information available to them really is. Ali Al-Marri and his lawyer review his six years of solitary detention as an enemy combatant in Charleston, South Carolina. Lastly, Stonehill calls on Ryan Gallagher’s reporting, which exposed the titular Phantom Parrot, the program to exploit the data retained under Schedule 7. There are no records of how many downloads have been taken.

The retired law enforcement officer’s class is practically satire. While saying that he himself doesn’t want to be tracked for safety reasons, he tells students to grab all the data they can when they have the opportunity. They are in Texas: “Consent’s not even a problem.” Start thinking outside of the box, he tells them.

What the film does not stress is this: rights are largely suspended at all borders. In 2022, the UK extended Schedule 7 powers to include migrants and refugees arriving in boats.

The movie’s future is bleak. At the Chaos Computer Congress, a speaker warns that gait recognition, eye movement detection, speech analysis (accents, emotion), and other types of analysis will be much harder to escape and will enable watchers to do far more with the ever-vaster stores of data collected from and about each of us.

“These powers are capable of being misused,” said Douglas Hogg in the 1999 Commons debate. “Most powers that are capable of being misused will be misused.” The bill passed 210-1.

Illustrations: Still shot from the wireframe reenactment of Rabbani’s questioning in Phantom Parrot.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.