The Skype of it all

This week, Microsoft shuttered Skype. For a lot of people, it’s a sad but nostalgic moment. Nostalgic, because for older Internet users it brings back memories of the connections it facilitated; sad, because hardly anyone seemed to be using it any more. As Chris Stokel-Walker wrote at Wired in 2021, somehow when covid arrived and poured accelerant on remote communications, everyone turned to Zoom instead. Stokel-Walker blamed the Microsoft team for lacking focus on the bit that mattered most: keeping the video link up and stable. Zoom had better video, true, but it was also far easier to get people into calls.

Skype’s service – technically, VoIP, for Voice over Internet Protocol – was pioneering in its time, which arguably peaked around 2010. Like CompuServe before it and Twitter since, there was a period when everyone had their Skype ID on their business cards. In 2005, when eBay bought it for $2.6 billion, it was being widely copied. In 2009, when eBay sold it to an investor group, it was valued at $2.75 billion.

In 2011, Microsoft bought it for $8.5 billion in cash, to general puzzlement as to *why* and why for *so much*. I had thought eBay would somehow embed it into its transaction infrastructure as it had PayPal, which it bought in 2002 for $1.5 billion (and spun off as a public company in 2015). Similarly, Wired talked of Microsoft embedding it into its Xbox Live network. Instead, the company fiddled with the app through the general shift from desktop to mobile. Ironic, given that Skype was a *phone* app. That it struggled, as Facebook did, to make the change is kind of embarrassing.

Forgotten in all this is the fact that although Skype was the first VoIP application to gain mainstream acceptance, it was not the first to connect phone calls over the Internet. That was the long-forgotten Free World Dialup project, pioneered by Jeff Pulver. On the ground, I imagined Free World Dialup as looking something like the switchboard and radio phone Radar O’Reilly (Gary Burghoff) used in the TV series M*A*S*H (1972-1983) to patch phone calls through via radio. As Pulver described it, calls were sent across the Internet between servers, each connected to a box that patched the calls into the local phone system.

Rereading my notes from my 1995 interview with Pulver, when he was just getting his service up and running, it’s astonishing to remember how many hurdles there were for his prototype VoIP project to overcome – and this was all being done by volunteers. In many countries outside North America, charges for local phone calls made it financially risky to run a server. Some countries had prohibitive licensing regulations that made it illegal to offer such a service if you weren’t a telephone company. The hardware and software were readily available but had to be bought and required tinkering to set up. Plus, few outside the business world had continuous high-speed connections; most of us were using modems to dial up a service provider.

Small wonder that those early calls were not great. A Chicago recipient of a test call said she’d had better connections over the traditional phone network to Harare. Network lag made it more like a store-and-forward audio-clip service than a live phone call. This didn’t matter as much to people with a history in ham radio, like Pulver himself; they were used to the cognitive effort of understanding speech through static and dropouts.

On the other hand, international calling was so wildly expensive at the time that FWD nonetheless opened up calling for half a million people.

FWD was the experiment that proved the demand and the potential. Soon, numerous companies were setting up to offer VoIP services via desktop applications of varying quality and usability. It was into this hodge-podge that Skype was launched in 2003 from Estonia. For a time, it kept getting better: it began with free calling between Skype users and paid calls to phone lines, and moved on to offering local phone numbers around the world, as Google Voice does now.

Around the early 2000s it was popular to predict that VoIP services would kill off telephone companies. This was a moment when network neutrality, now under threat, was crucial; had telcos been allowed to discriminate against VoIP traffic, we’d all still be paying through the nose for international calling and probably wouldn’t have had video calling during the covid lockdowns.

Instead, the telcos themselves have become VoIP companies. In 2007, BT was the first to announce it was converting its entire network to IP. That process is supposed to complete this year. My landline is already a VoIP line. (Downside: no electricity, no telecommunications.)

Pulver, I find, is still pushing away at the boundaries of telecommunications. His website these days is full of virtualized conversations (vCons) and Supply Chain Integrity, Transparency, and Trust (SCITT), which he explains here (PDF). The first is an IETF proposed standard for AI-enhanced digital records. The second is an IETF proposed framework that intends to define “a set of interoperable building blocks that will allow implementers to build integrity and accountability into software supply chain systems to help assure trustworthy operation”. This is the sort of thing that may make a big difference to companies while being invisible and/or frustrating to most of us.

As for Skype, it will fade from human memory. If it ever comes up, we’ll struggle to explain what it was to a generation who have no idea that calling across the world was ever difficult and expensive.

Illustrations: Radar O’Reilly (Gary Burghoff) in the TV series M*A*S*H with his radio telephone setup.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Lawfaring

Fining companies that have spare billions down the backs of their couches is pointless, but what about threatening their executives with prosecution? In a scathing ruling (PDF), US District Judge Yvonne Gonzales Rogers finds that Apple’s vice-president of finance, Alex Roman, “lied outright under oath” and that CEO Tim Cook “chose poorly” in failing to follow her injunction in Epic Games v. Apple. She asks the US Attorney for the Northern District of California to investigate whether criminal contempt proceedings are appropriate. “This is an injunction, not a negotiation.”

As noted here last week, Google lost the similar Epic Games v. Google in 2023. In both cases, Epic Games complained that the punishing commissions both companies require of the makers of apps downloaded from their app stores were anti-competitive. This is the same issue that last week led the European Commission to announce fines and restrictions against Apple under the Digital Markets Act. These rulings could, as Matt Stoller suggests, change the entire app economy.

Apple has said it strongly disagrees with the decision and will appeal – but it is complying.

At TechRadar, Lance Ulanoff sounds concerned about the impact on privacy and security as Apple is forced to open up its app store. This argument reminds me of a Bell Telephone engineer who confiscated a 30-foot cord from Woolworth’s that I’d plugged in, saying it endangered the telephone network. Apple certainly has the right to market its app store with promises of better service. But it doesn’t have the right to defy the court to extend its monopoly, as Mike Masnick spells out at Techdirt.

Masnick notes the absurdity of the whole thing. Apple had mostly won the case, and could have made the few small changes the ruling ordered and gone about its business. Instead, its executives lied and obfuscated for a few years of profits, and here we are. Although: Apple would still have lost in Europe.

A Perplexity search for the last CEO to be jailed for criminal contempt finds Kevin Trudeau. Trudeau used late-night infomercials and books to sell what Wikipedia calls “unsubstantiated health, diet, and financial advice”. He was convicted of criminal contempt in 2013, sentenced to ten years in prison, and served eight. Trudeau and the Federal Trade Commission formally settled the fines and remaining restrictions in 2024.

The last time the CEO of a major US company was sent to prison for criminal contempt? It appears, never. The rare CEOs who have gone to prison have typically been convicted of financial fraud or insider trading. Think WorldCom’s Bernie Ebbers. Not sure this is the kind of innovation Apple wants to be known for.

***

Reuters reports that 23andMe has, after pressure from many US states, agreed to allow a court-appointed consumer protection ombudsman to ensure that customers’ genetic data is protected. In March, it filed for bankruptcy protection, fulfilling last September’s predictions that it would soon run out of money.

The issue is that the DNA 23andMe has collected from its 15 million customers is its only real asset. Also relevant: the October 2023 cyberattack, which, Cambridge Analytica-like, leveraged hacking into 14,000 accounts to access ancestry data relating to approximately 6.9 million customers. The breach sparked a class action suit accusing the company of inadequate security under the Health Insurance Portability and Accountability Act (1996). It was settled last year for $30 million – a settlement whose value is now uncertain.

Case after case has shown us that no matter what promises buyers and sellers make at the time of a sale, they generally don’t stick afterwards. In this case, every user’s account of necessity exposes information about all their relatives. And who knows where it will end up and for how long the new owner can be blocked from exploiting it?

***

There’s no particular relationship between the 23andMe bankruptcy and the US government. But they make each other scarier: at 404 Media, Joseph Cox reported two weeks ago that Palantir is merging data from a wide variety of US departments and agencies to create a “master database” to help US Immigration and Customs Enforcement target and locate prospective deportees. The sources include the Internal Revenue Service, Health and Human Services, the Department of Labor, and Housing and Urban Development; the “ATrac” tool being built already has data from the Social Security Administration and US Citizenship and Immigration Services, as well as law enforcement agencies such as the FBI, the Bureau of Alcohol, Tobacco, Firearms and Explosives, and the US Marshals Service.

As the software engineer and essayist Ellen Ullman wrote in her 1997 book Close to the Machine, databases “infect” their owners with the desire to link them together and find out things they never previously felt they needed to know. The information in these government databases was largely given out of necessity to obtain services we all pay for. In countries with data protection laws, the change of use Cox outlines would require new consent. The US has no such privacy laws, and even if it did it’s not clear this government would care.

“Never volunteer information” used to be a commonly heard mantra, typically in relation to law enforcement and immigration authorities. No one lives that way now.

Illustrations: DNA strands (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: Vassal State

Vassal State: How America Runs Britain
by Angus Hanton
Swift Press
ISBN: 978-1-80075390-7

Tax organizations estimate that a bit under 200,000 expatriate Americans live in the UK. It’s only a tiny percentage of the overall population of 70 million, but of course we’re not evenly distributed. In my bit of southwest London, the butcher (recently and abruptly shuttered due to rising costs) advertised “Thanksgiving turkeys” for more than 30 years.

In Vassal State, however, Angus Hanton shows that US interests permeate and control the UK in ways far more significant than a handful of expatriates. This is not, he stresses, an equal partnership, despite the perennial photos of the British prime minister being welcomed to the White House by the sitting president, as shown satirically in 1986’s Yes, Prime Minister. Hanton cites the 2020 decision to follow the US and ban Huawei as an example, writing that the US pressure at the time “demonstrated the language of partnership coupled with the actions of control”. Obama staffers, he is told, used to joke about the “special relationship”.

Why invade when you can buy and control? Hanton lists a variety of vectors for US influence. Many of Britain’s best technology startups wind up sold to US companies, permanently alienating their profits – see, for example, DeepMind, sold to Google in 2014, and Worldpay, sold in 2018 to Vantiv, which then took its name. US buyers also target long-established companies, such as 176-year-old Boots, which since 2014 has been part of Walgreens and is now being bought by the Sycamore Partners private equity fund. To Americans, this may not seem like much, but Boots is a national icon and an important part of delivering NHS services such as vaccinations. No one here voted for Sycamore Partners to benefit from that, nor did they vote for Kraft to buy Cadbury – founded in 1824 – in 2010 and abandon its Bournville headquarters.

In addition, US companies are burrowed into British infrastructure. Government ministers communicate with each other over WhatsApp. Government infrastructure is supplied by companies like Oracle and IBM, and, lately, Palantir, which are hard to dig out once embedded. A seventh of the workforce are precariously paid by the US-dominated gig economy. The vast majority of cashless transactions pay a slice to Visa or Mastercard. And American companies use the roads, local services, and other infrastructure while paying less in tax than their UK competition. More controversially for digital rights activists, Hanton complains about the burden that US-based streamers like Netflix, Apple, and Amazon place on the telecommunications networks. Among the things he leaves out: the technology platforms in education.

Hanton’s book comes at a critical moment. Previous administrations have perhaps been more polite about demanding US-friendly policies, but now Britain, on its own outside the EU, is facing Donald Trump’s more blatant demands. Among them: that suppliers to the US government comply with its anti-DEI policies. In countries where diversity, equity, and inclusion are fundamental rights, the US is therefore demanding that its law should take precedence.

In a timeline fork in which Britain remained in the EU, it would be in a much better position to push back. In *this* timeline, Hanton’s proposed remedies – reform the tax structure, change policies, build technological independence – are much harder to implement.

Three times a monopolist

It’s multiply official: Google is a monopoly.

The latest such ruling is a decision handed down on April 17 by Judge Leonie Brinkema in United States of America v. Google LLC, a 2023 case that focuses on Google’s control over both the software publishers use to manage online ads and the exchanges where those same ads are bought and sold. In August 2024, Judge Amit P. Mehta also ruled Google was a monopoly; that United States of America v. Google LLC, filed in 2020, focused on Google’s payments to mobile phone companies, wireless carriers, and browser companies to promote its search engine. Before *that*, in 2023 a jury found in Epic Games v. Google that Google violated antitrust laws with respect to the Play Store and Judge James Donato ordered it to allow alternative app stores on Android devices by November 1, 2024. Appeals are proceeding.

Google has more trouble to look forward to. At The Overspill, veteran journalist Charles Arthur is among the class representatives bringing a UK case against Google. The AdTechClaim case seeks £13.6 billion in damages, claiming that Google’s adtech system has diverted revenues that otherwise would have accrued to UK-based website and app publishers. Reuters reported last week on the filing of a second UK challenge, a £5 billion suit representing thousands of businesses that claim Google manipulated the search ecosystem to block out rivals and force advertisers to rely on its platform. Finally, the Competition and Markets Authority is conducting its own investigation into the company’s search and advertising practices.

It is hard to believe that all of this will go away leaving Google intact, despite the company’s resistance to each case. We know from past experience that fines change nothing; only structural remedies will.

The US findings against Google seem to have taken some commentators by surprise, perhaps because they assumed the Trump administration would have a dampening effect. Trump, however, seems more exercised about the EU’s and UK’s mounting regulatory actions. Just this week the European Commission fined Apple €500 million and Meta €200 million – the first fines issued under the Digital Markets Act – and ordered them to open up user choice within 60 days. The White House has called some of these recent fines a new form of economic blackmail.

I’ve observed before that antitrust cases often lag well behind the times, partly because they take so long to litigate. It wasn’t until 2024 that Google finally lost its appeal to the European Court of Justice in the Foundem search case and had to pay the €2.4 billion fine imposed in 2017. That case was first brought in 2009.

In 2014, I imagined that Google’s recently-concluded purchase of Nest smart thermostats might form the basis of an antitrust suit in 2024. Obviously, that didn’t happen; I wish instead the UK government had blocked Google’s acquisition of DeepMind. Partly, because perhaps the pre-monopolization of AI could have been avoided. And partly because I’ve been reading Angus Hanton’s recent book, Vassal State, and keeping DeepMind would have hugely benefited Britain.

Unfortunately, forcing Google to divest DeepMind is not on anyone’s post-trial list of possible remedies. In October, the Department of Justice filed papers listing a series of possibilities for the search engine case. The most-discussed of these was ordering Google to divest Chrome. In a sensible world, however, one must hope that remedies will be found to fit the differing problems these cases were brought to address.

At Big, Matt Stoller suggests that the latest judgment increases the likelihood that Google will be broken up, the first such order since AT&T in 1984. The DoJ, now under Trump’s control, could withdraw, but, Stoller points out, the list of plaintiffs includes several state attorneys general, and the DoJ can’t dictate what they do.

Trying to figure out what remedies would make real change is a difficult game, as the folks on the April 20 This Week In Tech podcast say. This is unlike the issue around Google’s and Apple’s app stores that the European Commission fines cover, where it’s comparatively straightforward to connect opening up their systems to alternatives, and changing their revenue structures, to ensuring that app makers and publishers get a fairer percentage.

Breaking up the company to separate Chrome, search, adtech, and Android would disable the company’s ability to use those segments as levers. In such a situation Google and/or its parent, Alphabet, could not, as now, use them in combination to maintain its ongoing data collection and build a durable advantage in training sophisticated models to underpin automated services. But would forcing the company to divest those segments create competition in any of them? Each would likely remain dominant in its field.

Yet something must be done. Even though Microsoft was not in the end broken up in 2001 when the incoming Bush administration settled the case, the experience of being investigated and found guilty of monopolistic behavior changed the company. None of today’s technology companies are likely to follow suit unless they’re forced; these companies are too big, too powerful, too rich, and too arrogant. If Google is not forced to change its structure or its business model, all of them will be emboldened to behave in even worse ways. As unimaginable as that seems.

Illustrations: “The kind of anti-trust legislation we need”, by J. S. Pughe (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Unscreened

My Canadian friend was tolerant enough to restart his Nissan car so I could snare a pic of the startup screen (above) and read it in full. That’s when he followed the instructions to set the car to send only “limited data” to the company. What is that data? The car didn’t say. (Its manual might, but I didn’t test his patience that far.)

In 2023, a Mozilla Foundation study of US cars’ privacy called them the worst category it had ever tested and named Nissan as the worst offender: “The Japanese car manufacturer admits in their privacy policy to collecting a wide range of information, including sexual activity, health diagnosis data, and genetic data – but doesn’t specify how. They say they can share and sell consumers’ ‘preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes’ to data brokers, law enforcement, and other third parties.”

Fortunately, my friend lives in Canada.

Granting that no one wants to read privacy policies, at least apps and websites display them in their excruciating, fine-print detail. They can afford to because usually you encounter them when you’re not in a hurry. That can’t be said of the start-up screen in a car, when you just want to get moving. Its interference has to be short.

The new startup screen confirmed that “limited data” was now being sent. I wasn’t quick enough to read the rest, but it probably warned that some features might not now work – because that’s what they *always* say, and is what the settings warned (without specifics) when he changed them.

I assume this setup complies with Canada’s privacy laws. But the friction of consulting a manual or website to find out what data is being sent deters customers from exercising their rights. Like website dark patterns, it’s gamed to serve the company’s interests. It feels like grudging compliance with the law, especially because customers are automatically opted in.

How companies obtain consent is a developing problem. At Privacy International, Gus Hosein considers the future of AI assistants. Naturally, he focuses on the privacy implications: the access they’ll need, the data they’ll be able to extract, the new kinds of datasets they’ll create. And he predicts that automation will tempt developers to bypass the friction of consent and permission. In 2013, we considered this with respect to emotionally manipulative pet robots.

I had to pause when Hosein got to “screenless devices” to follow some links. At Digital Trends, Luke Edwards summarizes a report at The Information that OpenAI may acquire io Products. This start-up, led by renowned Apple hardware designer Jony Ive and OpenAI CEO Sam Altman, intends to create AI voice-enabled assistants that may (or may not) take the form of a screenless “phone” or household device.

Meanwhile, The Vergecast (MP3) reports that Samsung is releasing Ballie, a domestic robot infused with Google Cloud’s generative AI that can “engage in natural, conversational interactions to help users manage home environments”. Samsung suggests you can have it greet people at the door. So much nicer than being greeted by your host.

These remind me of the Humane AI pin, whose ten-month product life ended in February with HP’s purchase of the company’s assets for $116 million. Or the similar Bee, whose “summaries” of meetings and conversations Victoria Song at The Verge called “fan fiction”. Yes: as factual as any other generative AI chatbot. More notably, the Bee couldn’t record silent but meaningful events, leading Song to wonder, “In a hypothetical future where everyone has a Bee, do unspoken memories simply not exist?”

In November 2018, a Reuters Institute report on the future of news found that on a desktop computer the web can offer 20 news links at a time; a smartphone has room for seven, and smart speakers just one. Four years later, smart speakers were struggling as a category because manufacturers couldn’t make money out of them. But apparently Silicon Valley still thinks the shrunken communications channel of voice beats typing and reading, and is plowing on. It gives these companies greater control.

The thin, linear stream of information is why Hosein foresees the temptation to avoid the friction of consent. But the fine-grained user control he wants will, I think, mandate offloading the review of collected data and the granting or revoking of permissions onto a device with a screen. Like smart watches, these screenless devices will have to be part of a system. What Hosein wants, and Cory Doctorow has advocated for web browsers, is that these technological objects should be loyal agents. That is, they must favor *our* interests over those of their manufacturer, or we won’t trust them.

More complicated is the situation with respect to incops – incidentally co-present [people] – whose data is also captured without their consent: me in my friend’s car, everyone Song encountered. Mozilla reported that Subaru claims that by getting in the car passengers become “users” who consent to data collection (as if); several other manufacturers say that the driver is responsible for notifying passengers. Song found it easier to mute the Bee in her office and while commuting than to ask colleagues and passersby for permission to record. Then she found it didn’t always disconnect when she told it to…

So now imagine that car saturated with an agentic AI assistant that decides where you want to go and drives you there.

“You don’t want to do that, Dave.”

Illustrations: The start-up screen in my Canadian friend’s car.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Predatory inclusion

The recent past is a foreign country; they view the world differently there.

At last week’s We Robot conference on technology, policy, and law, the indefatigably detail-oriented Sue Glueck was the first to call out a reference to the propagation of transparency and accountability by the “US and its allies” as newly out of date. From where we were sitting in Windsor, Ontario, its conjoined fraternal twin, Detroit, Michigan, was clearly visible just across the river. But: recent events.

As Ottawa law professor Teresa Scassa put it, “Before our very ugly breakup with the United States…” Citing Anu Bradford, she went on, “Canada was trying to straddle these two [US and EU] digital empires.” Canada’s human rights and privacy traditions seem closer to those of the EU, even though shared geography means the US and Canada are superficially more similar.

We’ve all long accepted that the “technology is neutral” claim of the 1990s is nonsense – see, just this week, Luke O’Brien’s study at Mother Jones of the far-right origins of the web-scraping facial recognition company Clearview AI. The paper Glueck called out, co-authored in 2024 by Woody Hartzog, wants US lawmakers to take a tougher approach to regulating AI and ban entirely some systems that are fundamentally unfair. Facial recognition, for example, is known to be inaccurate and biased, but improving its accuracy raises new dangers of targeting and weaponization, a reality Cynthia Khoo called “predatory inclusion”. If he were writing this paper now, Hartzog said, he would acknowledge that it’s become clear that some governments, not just Silicon Valley, see AI as a tool to destroy institutions. I don’t *think* he was looking at the American flags across the water.

Later, Khoo pointed to her paper on current negotiations between the US and Canada to develop a bilateral law enforcement data-sharing agreement under the US CLOUD Act. The result could allow US police to surveil Canadians at home, undermining the country’s constitutional human rights and privacy laws.

In her paper, Clare Huntington proposed deriving approaches to human relationships with robots from family law. It can, she argued, provide analogies to harms such as emotional abuse, isolation, addiction, invasion of privacy, and algorithmic discrimination. In response, Kate Darling, who has long studied human responses to robots, raised an additional factor exacerbating the power imbalance in such cases: companies, “because people think they’re talking to a chatbot when they’re really talking to a company.” That extreme power imbalance is what matters when trying to mitigate risk (see also Sarah Wynn-Williams’ recent book and Congressional testimony on Facebook’s use of data to target vulnerable teens).

In many cases, however, we are not agents deciding to have relationships with robots but what AJung Moon called “incops”, or “incidentally co-present”. In the case of the Estonian Starship delivery robots you can find in cities from San Francisco to Milton Keynes, that broad category includes the human drivers, pedestrians, and cyclists who share their spaces. In a study, Adeline Schneider found that white men tended to be more concerned about damage to the robot, whereas others worried more about communication, the data they captured, safety, and security. Delivery robots are, however, typically designed with only direct users in mind, not others who may have to interact with them.

These are all social problems, not technological ones, as conference chair Kristen Thomasen observed. Carys Craig later modified it: technology “has compounded the problems”.

This is the perennial We Robot question: what makes robots special? What qualities require new laws? Just as we asked about the Internet in 1995, when are robots just new tools for old jobs, and when do they bring entirely new problems? In addition, who is responsible in such cases? This was asked in a discussion of Beatrice Panattoni’s paper on Italian proposals to impose harsher penalties for crimes committed with AI or facilitated by robots. The pre-conference workshop raised the same question. We already know the answer: everyone will try to blame someone or everyone else. But in formulating a legal response, will we tinker around the edges or fundamentally question the criminal justice system? Andrew Selbst helpfully summed up: “A law focusing on specific harms impedes a structural view.”

At We Robot 2012, it was novel to push lawyers and engineers to think jointly about policy and robots. Now, as more disciplines join the conversation, familiar Internet problems surface in new forms. Human-robot interaction is a four-dimensional version of human-computer interaction; I got flashbacks to old hacking debates when Elizabeth Joh wondered, in response to Panattoni’s paper, whether transforming a robot into a criminal should be punished; and a discussion of the decades-long use of images of medicalized children in fundraising invoked publicity rights and tricky issues of consent.

Also consent-related: lawyers are starting to use generative AI to draft contracts, a step that Katie Szilagyi and Marina Pavlović suggested further diminishes the bargaining power already lost to “clickwrap”. Automation may remove our remaining ability to object in circumstances far more specialized than the terms and conditions imposed on us by sites and services. Consent traditionally depends on a now-absent “meeting of minds”.

The arc of We Robot began with enthusiasm for robots, which waned as big data and generative AI became players. Now, robots/AI are appearing as something being done to us.

Illustrations: Detroit, seen across the river from Windsor, Ontario with a Canadian Coast Guard boat in the foreground.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Catoptromancy

It’s a commonly held belief that technology moves fast, and law slowly. This week’s We Robot workshop day gave the opposite impression: these lawyers are moving ahead, while the technology underlying robots is moving slower than we think.

A mainstay of this conference over the years has been Bill Smart’s and Cindy Grimm’s demonstrations of the limitations of the technologies that make up robots. This year, that gambit was taken up by Jason Millar and AJung Moon. Their demonstration “robot” comprised six people – one brain, four sensors, and one color sensor. Ordering it to find the purple shirt quickly showed that robot programming isn’t getting any easier. The human “sensors” can receive useful information only as far as their outstretched fingertips, and even then the signal they receive is minimal.

“Many of my students program their robots into a ditch and can’t understand why,” Moon said. It’s the required specificity. For one thing, a color sensor doesn’t see color; it sends a stream of numeric values. It’s all 1s and 0s and tiny engineering decisions that never register at the policy level but make all the difference. One of her students, for example, struggled with a robot that kept missing the croissant it was supposed to pick up by 30 centimeters. The explanation turned out to be that the sensor was so slow that the robot was moving a half-second too early, acting on outdated information. They had to insert a pause before the robot could get it right.

So much of the way we talk about robots and AI misrepresents those inner workings. A robot can’t “smell honey”; it merely has a sensor that’s sensitive to some chemicals and not others. It can’t “see purple” if its sensors are the usual red, green, blue. Even green may not be identifiable to an RGB sensor if the lighting is such that reflections make a shiny green surface look white. Faster and more diverse sensors won’t change the underlying physics. How many lawmakers understand this?

Related: what does it mean to be a robot? Most people attach greater intelligence to things that can move autonomously. But a modern washing machine is smarter than a Roomba, while an iPhone is smarter than either but can’t affect the physical world, as Smart observed at the very first We Robot, in 2012.

This year we are in Canada – to be precise, in Windsor, Ontario, looking across the river to Detroit, Michigan. Canadian law, like the country itself, is a mosaic: common law (inherited from Britain), civil law (inherited from France), and myriad systems of indigenous peoples’ law. Much of the time, said Suzie Dunn, new technology doesn’t require new law so much as reinterpretation and, especially, enforcement of existing law.

“Often you can find criminal law that already applies to digital spaces, but you need to educate the legal system how to apply it,” she said. Analogous: in the late 1990s, editors of the technology section at the Daily Telegraph had a deal-breaking question: “Would this still be a story if it were about the telephone instead of the Internet?”

We can ask that same question about proposed new law. Dunn and Katie Szilagyi asked what robots and AI change that requires a change of approach. They set us to consider scenarios to study this question: an autonomous vehicle kills a cyclist; an autonomous visa system denies entry to a refugee because facial recognition software identified her in images of an illegal LGBTQ protest in her home country, where that counts as a crime. In the first case, it’s obvious that all parties will try to blame someone – or everyone – else; probably, as Madeleine Clare Elish suggested in 2016, the human driver, who becomes the “moral crumple zone”. The second is the kind of case the EU’s AI Act sought to handle by giving individuals the right to meaningful information about the automated decision made about them.

Nadja Pelkey, a curator at Art Windsor-Essex, discussed AI in a seemingly incompatible context. Citing Georges Bataille, who in 1929 saw museums as mirrors, she invoked the word “catoptromancy”, the use of mirrors in mystical divination. Social and political structures are among the forces that can distort the reflection. So are the many proliferating AI tools such as “personalized experiences” and other types of automation, which she called “adolescent technologies without legal or ethical frameworks in place”.

Where she sees opportunities for AI is in what she called the “invisible archives”. These include much administrative information, material that isn’t digitized, ephemera such as exhibition posters, and publications. She favors small tools and small private models used ethically so they preserve the rights of artists and cultural contexts, and especially consent. In a schematic she outlined a system that can’t be scraped, that allows data to be withdrawn as well as added, and that enables curiosity and exploration. It’s hard to imagine anything less like the “AI” being promulgated by giant companies. *That* type of AI was excoriated in a final panel on technofascism and extractive capitalism.

It’s only later I remember that Pelkey also said that catoptromancy mirrors were first made of polished obsidian.

In other words, black mirrors.

Illustrations: Divination mirror made of polished obsidian by artisans of the Aztec Empire of Mesoamerica between the 15th and 16th centuries (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

A short history of We Robot 2012-

On the eve of We Robot 2025, here are links to my summaries of previous years. 2014 is missing; I didn’t make it that year for family reasons. There was no conference in 2024 in order to move the event back to its original April schedule (covid caused its move to September in 2020). These are my personal impressions; nothing I say here should be taken as representing the conference, its founders, its speakers, or their institutions.

We Robot was co-founded by Michael Froomkin, Ryan Calo, and Ian Kerr to bring together lawyers and engineers to think early about the coming conflicts in robots, law, and policy.

2024 No conference.

2023 The end of cool. After struggling to design a drone delivery service that had any benefits over today’s cycling couriers, we find ourselves less impressed by a robot that can do somersaults but can’t do anything useful.

2022 Insert a human. “Robots” are now “sociotechnical systems”.

Workshop day Coding ethics. The conference struggles to design an ethical robot.

2021 Plausible diversions. How will robots reshape human space?

Workshop day Is the juice worth the squeeze? We think about how to regulate delivery robots, which will likely have no user-serviceable parts. Title from Woody Hartzog.

2020 (virtual) The zero on the phone. AI exploitation becomes much more visible.

2019 Math, monsters, and metaphors. The trolley problem is dissected; the true danger is less robots than the “pile of math that does some stuff”.

Workshop day The Algernon problem. New participants remind us that robots/AI are carrying out the commands of distant owners.

2018 Deception. The conference tries to tease out what makes robots different and revisits Madeleine Clare Elish’s moral crumple zones after the first pedestrian death by self-driving car.

Workshop day Late, noisy, and wrong. Engineers Bill Smart and Cindy Grimm explain why sensors never capture what you think they’re capturing and how AI systems use their data.

2017 Have robot, will legislate. Discussion of risks this year focused on the intermediate situation, when automation and human norms clash.

2016 Humans all the way down. Madeleine Clare Elish introduces “moral crumple zones”.

Workshop day The lab and the world. Bill Smart uses conference attendees in formation to show why building a robot is difficult.

2015 Multiplicity. A robot pet dog begs its owner for an upgraded service subscription.

2014 Missed conference

2013 Cautiously apocalyptic. Diversity of approaches to regulation will be needed to handle the diversity of robots.

2012 A really fancy hammer with a gun. Unsentimental engineer Bill Smart provided the title.

wg

Review: Careless People

Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism
By Sarah Wynn-Williams
Macmillan
ISBN: 978-1035065929

In his 2021 book Social Warming, Charles Arthur concludes his study of social media with the observation that the many harms he documented happened because no one cared to stop them. “Nobody meant for this to happen,” he writes to open his final chapter.

In her new book, Careless People, about her time at Facebook, former New Zealand diplomat Sarah Wynn-Williams shows the truth of Arthur’s take. A sad tale of girl-meets-company, girl-loses-company, girl-tells-her-story, it starts with Wynn-Williams stalking Facebook to identify the right person to pitch hiring her to build its international diplomatic relationships. I kept hoping increasing dissent and disillusion would lead her to quit. Instead, she stays until she’s fired after HR dismisses her complaint of sexual harassment.

In 2011, when Wynn-Williams landed her dream job, Facebook’s wild expansion was at an early stage. CEO Mark Zuckerberg is awkward, sweaty, and uncomfortable around world leaders, who are dismissive. By her departure in 2017, presidents of major countries want selfies with him and he’s much more comfortable – but no longer cares. Meanwhile, then-Chief Operating Officer Sheryl Sandberg, wealthy from her time at Google, becomes a celebrity via her book, Lean In, written with the former TV comedy writer Nell Scovell. Sandberg’s public feminism clashes with her employee’s experience. When Wynn-Williams’s first child is a year old, a fellow female employee congratulates her on keeping the child so well-hidden she didn’t know it existed.

The book provides hysterically surreal examples of American corporatism. She is in the delivery room, feet in stirrups, ordered to push, when a text arrives: can she draft talking points for Davos? (She tries!) For an Asian trip, Zuckerberg wants her to arrange a riot or peace rally so he can appear to be “gently mobbed”. When the company fears “Mark” or “Sheryl” might be arrested if they travel to Korea, managers try to identify a “body” who can be sent in as a canary. Wynn-Williams’s husband has to stop her from going. Elsewhere, she uses her diplomatic training to land Zuckerberg a “longer-than-normal handshake” with Xi Jinping.

So when you get to her failure to get her bosses to beef up the two-person content moderation team for Myanmar’s 60 million people, fix the software so Burmese characters render correctly, and post country-specific policies, it’s obvious what her bosses will decide. The same is true of internal meetings discussing the tools later revealed to let advertisers target depressed teens. Wynn-Williams hopes for a safe way forward, but warns that company executives’ “lethal carelessness” hasn’t changed.

Cultural clash permeates this book. As a New Zealander, she’s acutely conscious of the attitudes she encounters, and especially of the wealth and class disparity that divide the early employees from later hires. As pregnancies bring serious medical problems and a second child, the very American problem of affording health insurance makes offending her bosses ever riskier.

The most important chapters, whose in-the-room tales fill in gaps in books by Frances Haugen, Sheera Frankel and Cecilia Kang, and Steven Levy, are those in which Wynn-Williams recounts the company’s decision to embrace politics and build its business in China. If, her bosses reason, politicians become dependent on Facebook for electoral success, they will balk at regulating it. Donald Trump’s 2016 election, which Zuckerberg initially denied had been significantly aided by Facebook, awakened these political aspirations. Meanwhile, Zuckerberg leads the company to build a censorship machine to please China. Wynn-Williams abhors all this – and refuses to work on China. Nonetheless, she holds onto the hope that she can change the company from inside.

Apparently having learned little from Internet history, Meta has turned this book into a bestseller by trying to suppress it. Wynn-Williams managed one interview, with Business Insider, before an arbitrator’s injunction stopped her from promoting the book or making any “disparaging, critical or otherwise detrimental comments” related to Meta. This fits the man Wynn-Williams depicts: one who hates losing so much that his employees let him win at board games.

The risks of recklessness

In 1997, when the Internet was young and many fields were still an unbroken green, the United States Institute of Peace convened a conference on virtual diplomacy. In my writeup for the Telegraph, I saw that organizer Bob Schmitt had convened two communities – computing and diplomacy – each wondering how to get the other to collaborate, but lacking common ground.

On balance, the computer folks, who saw a potential market as well as a chance to do some good, were probably more eager than the diplomats, who favored caution and understood that in their discipline speed was often a bad idea. They were also less attracted than one might think to the notion of virtual meetings, despite the travel they would save. Sometimes, one told me, it’s the random conversations around the water cooler that make plain what’s really happening. Why is Brazil mad? In a virtual meeting, it may be harder to find out that it’s not the negotiations but the fact that their soccer team lost last night.

I thought at the time that the conference would be the first of many to tackle these issues. But as it’s turned out, I’ve never been at an event anything like it…until now, nearly 30 years later. This week, a group of diplomats and human rights advocates met, similarly, to consider how the cyber world is changing diplomacy and international relations.

The timing is unexpectedly fortuitous. This week’s revelation that someone added Atlantic editor-in-chief Jeffrey Goldberg to a Signal chat in which US cabinet officials discussed plans for an imminent military operation in Yemen shows the kinds of problems you get when you rely too much on computer mediation. In the usual setting, a Sensitive Compartmented Information Facility (SCIF), you can see exactly who’s there, and communications to anyone outside that room are entirely blocked. As a security clearance-carrying friend of mine said, if he’d made such a blunder he’d be in prison.

The Signal blunder was raised by almost every speaker. It highlights something diplomats think about a lot: who is or is not in the room. Today, as in 1997, behavioral cues are important; one diplomat estimated that meeting virtually costs you 50% to 60% of the communication you have when meeting face-to-face. There are benefits, too, of course, such as opening side channels to remote others who can advise on specific questions, or the ability to assemble a virtual team a small country could never afford to send in person.

These concerns have not changed since 1997. But it’s clear that today’s diplomats feel they have less choice about what new technology gets deployed, and how, than they did then, when the most significant new technology preceding the Internet was the global omnipresence of the news network CNN, founded in 1980. Now, much of what control they had then is disappearing, both because human behavior overrides their careful, rulebound, friction-filled diplomatic channels and processes via shadow IT, and because the biggest technology companies own so much of what we call “public” infrastructure.

Another key difference: many people no longer see the need to learn facts; that’s a particular problem for diplomats, who rely on historical knowledge to show the world they aspire to build. And another: today a vastly wider array of actors, from private companies to individuals and groups of individuals, can create world events. And finally: in 1997 multinational companies were already challenging the hegemony of governments, but they were not yet richer and more powerful than countries.

Cue a horror thought: what if Big Tech, which is increasingly interested in military markets, and whose products are increasingly embedded at the heart of governments, decides that peace is bad for business? Already these companies are allying with politicians to resist human rights principles, most notably privacy.

Which cues another 1997 memory: Nicholas Negroponte absurdly saying that the Internet would bring world peace by breaking down national borders. In 20 years, he said (that would be eight years ago), children would not know what nationalism is. Instead, on top of all today’s wars and internal conflicts, we’re getting virtual infrastructure attacks more powerful than bullets, and proactive agents powered by large language models. And all of it fueled by the performative-outrage style of social media, which is becoming just how people speak, publicly and privately.

All this is more salient when you listen to diplomats and human rights activists; they are the ones who see up close the human lives lost. Meta’s name comes up most often, as in Myanmar and Ethiopia.

The mood was especially touchy because a couple of weeks ago a New Zealand diplomat was recalled after questioning US president Donald Trump’s understanding of history during a public panel in London – ironically, at Chatham House under the Chatham House rule.

“You say the wrong thing on the wrong platform at the wrong time, and your career is gone,” one observed. Their privacy perimeter is gone, as it has been for so many of us for a decade or more. But more than most people, diplomats who don’t have trust have nothing. And so: “We’re in a time when a single message can up-end relationships.”

No surprise, then, that the last words reflected 1997’s conclusion: “Diplomacy is still a contact sport.”

Illustrations: Internet meme rewriting Wikipedia’s Alice and Bob page explaining man-in-the-middle attacks with the names Hegseth, Waltz, and Goldberg, referencing the Signal snafu.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.