Unscreened

My Canadian friend was tolerant enough to restart his Nissan car so I could snare a pic of the startup screen (above) and read it in full. That’s when he followed the instructions to set the car to send only “limited data” to the company. What is that data? The car didn’t say. (Its manual might, but I didn’t test his patience that far.)

In 2023, a Mozilla Foundation study of US cars’ privacy called them the worst category it had ever tested and named Nissan as the worst offender: “The Japanese car manufacturer admits in their privacy policy to collecting a wide range of information, including sexual activity, health diagnosis data, and genetic data – but doesn’t specify how. They say they can share and sell consumers’ ‘preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes’ to data brokers, law enforcement, and other third parties.”

Fortunately, my friend lives in Canada.

Granting that no one wants to read privacy policies, at least apps and websites display them in their excruciating, fine-print detail. They can afford to because usually you encounter them when you're not in a hurry. That can't be said of the startup screen in a car, when you just want to get moving. Its interference has to be short.

The new startup screen confirmed that “limited data” was now being sent. I wasn’t quick enough to read the rest, but it probably warned that some features might not now work – because that’s what they *always* say, and is what the settings warned (without specifics) when he changed them.

I assume this setup complies with Canada’s privacy laws. But the friction of consulting a manual or website to find out what data is being sent deters customers from exercising their rights. Like website dark patterns, it’s gamed to serve the company’s interests. It feels like grudging compliance with the law, especially because customers are automatically opted in.

How companies obtain consent is a developing problem. At Privacy International, Gus Hosein considers the future of AI assistants. Naturally, he focuses on the privacy implications: the access they’ll need, the data they’ll be able to extract, the new kinds of datasets they’ll create. And he predicts that automation will tempt developers to bypass the friction of consent and permission. In 2013, we considered this with respect to emotionally manipulative pet robots.

I had to pause when Hosein got to “screenless devices” to follow some links. At Digital Trends, Luke Edwards summarizes a report at The Information that OpenAI may acquire io Products. This start-up, led by renowned Apple hardware designer Jony Ive and OpenAI CEO Sam Altman, intends to create AI voice-enabled assistants that may (or may not) take the form of a screenless “phone” or household device.

Meanwhile, The Vergecast (MP3) reports that Samsung is releasing Ballie, a domestic robot infused with Google Cloud’s generative AI that can “engage in natural, conversational interactions to help users manage home environments”. Samsung suggests you can have it greet people at the door. So much nicer than being greeted by your host.

These remind me of the Humane AI Pin, whose ten-month product life ended in February with HP's purchase of the company's assets for $116 million. Or the similar Bee, whose "summaries" of meetings and conversations Victoria Song at The Verge called "fan fiction". Yes: as factual as any other generative AI chatbot. More notably, the Bee couldn't record silent but meaningful events, leading Song to wonder, "In a hypothetical future where everyone has a Bee, do unspoken memories simply not exist?"

In November 2018, a Reuters Institute report on the future of news found that on a desktop computer the web can offer 20 news links at a time; a smartphone has room for seven, and smart speakers just one. Four years later, smart speakers were struggling as a category because manufacturers can't make money out of them. But apparently Silicon Valley still thinks the shrunken communications channel of voice beats typing and reading, and is plowing on. It gives them greater control.

The thin, linear stream of information is why Hosein foresees the temptation to avoid the friction of consent. But the fine-grained user control he wants will, I think, mean offloading the work of reviewing collected data and granting or revoking permissions onto a device with a screen. Like smart watches, these screenless devices will have to be part of a system. What Hosein wants, and Cory Doctorow has advocated for web browsers, is that these technological objects should be loyal agents. That is, they must favor *our* interests over those of their manufacturer, or we won't trust them.

More complicated is the situation with respect to incops – incidentally co-present [people] – whose data is also captured without their consent: me in my friend’s car, everyone Song encountered. Mozilla reported that Subaru claims that by getting in the car passengers become “users” who consent to data collection (as if); several other manufacturers say that the driver is responsible for notifying passengers. Song found it easier to mute the Bee in her office and while commuting than to ask colleagues and passersby for permission to record. Then she found it didn’t always disconnect when she told it to…

So now imagine that car saturated with an agentic AI assistant that decides where you want to go and drives you there.

“You don’t want to do that, Dave.”

Illustrations: The start-up screen in my Canadian friend’s car.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Predatory inclusion

The recent past is a foreign country; they view the world differently there.

At last week’s We Robot conference on technology, policy, and law, the indefatigably detail-oriented Sue Glueck was the first to call out a reference to the propagation of transparency and accountability by the “US and its allies” as newly out of date. From where we were sitting in Windsor, Ontario, its conjoined fraternal twin, Detroit, Michigan, was clearly visible just across the river. But: recent events.

As Ottawa law professor Teresa Scassa put it, "Before our very ugly breakup with the United States…" Citing Anu Bradford, she went on: "Canada was trying to straddle these two [US and EU] digital empires." Canada's human rights and privacy traditions seem closer to those of the EU, even though shared geography means the US and Canada are superficially more similar.

We’ve all long accepted that the “technology is neutral” claim of the 1990s is nonsense – see, just this week, Luke O’Brien’s study at Mother Jones of the far-right origins of the web-scraping facial recognition company Clearview AI. The paper Glueck called out, co-authored in 2024 by Woody Hartzog, wants US lawmakers to take a tougher approach to regulating AI and ban entirely some systems that are fundamentally unfair. Facial recognition, for example, is known to be inaccurate and biased, but improving its accuracy raises new dangers of targeting and weaponization, a reality Cynthia Khoo called “predatory inclusion”. If he were writing this paper now, Hartzog said, he would acknowledge that it’s become clear that some governments, not just Silicon Valley, see AI as a tool to destroy institutions. I don’t *think* he was looking at the American flags across the water.

Later, Khoo pointed to her paper on current negotiations between the US and Canada to develop a bilateral law enforcement data-sharing agreement under the US CLOUD Act. The result could allow US police to surveil Canadians at home, undermining the country's constitutional human rights and privacy laws.

In her paper, Clare Huntington proposed deriving approaches to human relationships with robots from family law. It can, she argued, provide analogies to harms such as emotional abuse, isolation, addiction, invasion of privacy, and algorithmic discrimination. In response, Kate Darling, who has long studied human responses to robots, raised an additional factor exacerbating the power imbalance in such cases: companies, “because people think they’re talking to a chatbot when they’re really talking to a company.” That extreme power imbalance is what matters when trying to mitigate risk (see also Sarah Wynn-Williams’ recent book and Congressional testimony on Facebook’s use of data to target vulnerable teens).

In many cases, however, we are not agents deciding to have relationships with robots but what AJung Moon called "incops", or "incidentally co-present" people. In the case of the Estonian Starship delivery robots you can find in cities from San Francisco to Milton Keynes, that broad category includes the human drivers, pedestrians, and cyclists who share their spaces. In a study, Adeline Schneider found that white men tended to be more concerned about damage to the robot, where others worried more about communication, the data the robots captured, safety, and security. Delivery robots are, however, typically designed with only direct users in mind, not others who may have to interact with them.

These are all social problems, not technological ones, as conference chair Kristen Thomasen observed. Carys Craig later modified it: technology “has compounded the problems”.

This is the perennial We Robot question: what makes robots special? What qualities require new laws? Just as we asked about the Internet in 1995, when are robots just new tools for old tasks, and when do they bring entirely new problems? In addition, who is responsible in such cases? This was asked in a discussion of Beatrice Panattoni's paper on Italian proposals to impose harsher penalties for crime committed with AI or facilitated by robots. The pre-conference workshop raised the same question. We already know the answer: everyone will try to blame someone or everyone else. But in formulating a legal response, will we tinker around the edges or fundamentally question the criminal justice system? Andrew Selbst helpfully summed up: "A law focusing on specific harms impedes a structural view."

At We Robot 2012, it was novel to push lawyers and engineers to think jointly about policy and robots. Now, as more disciplines join the conversation, familiar Internet problems surface in new forms. Human-robot interaction is a four-dimensional version of human-computer interaction; I got flashbacks to old hacking debates when Elizabeth Joh wondered in response to Panattoni’s paper if transforming a robot into a criminal should be punished; and a discussion of the use of images of medicalized children for decades in fundraising invoked publicity rights and tricky issues of consent.

Also consent-related: lawyers are starting to use generative AI to draft contracts, a step that Katie Szilagyi and Marina Pavlović suggested further diminishes the bargaining power already lost to "clickwrap". Automation may remove our remaining ability to object in circumstances far more specialized than the terms and conditions imposed on us by sites and services. Consent traditionally depends on a now-absent "meeting of minds".

The arc of We Robot began with enthusiasm for robots, which waned as big data and generative AI became players. Now, robots/AI are appearing as something being done to us.

Illustrations: Detroit, seen across the river from Windsor, Ontario with a Canadian Coast Guard boat in the foreground.


Competitive instincts

This week – Wednesday, March 6 – saw the EU’s Digital Markets Act come into force. As The Verge reminds us, the law is intended to give users more choice and control by forcing technology’s six biggest “gatekeepers” to embrace interoperability and avoid preferencing their own offerings across 22 specified services. The six: Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft. Alphabet’s covered list is the longest: advertising, app store, search engine, maps, and shopping, plus Android, Chrome, and YouTube. For Apple, it’s the app store, operating system, and web browser. Meta’s list includes Facebook, WhatsApp, and Instagram, plus Messenger, Ads, and Facebook Marketplace. Amazon: third-party marketplace and advertising business. Microsoft: Windows and internal features. ByteDance just has TikTok.

The point is to enable greater competition by making it easier for us to pick a different web browser, uninstall unwanted features (like Cortana), or refuse the collection and use of data to target us with personalized ads. Some companies are haggling. Meta, for example, is trying to get Messenger and Marketplace off the list, while Apple has managed to get iMessage removed. More notably, though, the changes Apple is making to support third-party app stores have been widely criticized as undermining any hope of success for independents.

Americans visiting Europe are routinely astonished at the number of cookie consent banners that pop up as they browse the web. Comments on Mastodon this week have reminded us that these are the result of the companies' churlish choice to implement the 2009 Cookie Directive and 2018 General Data Protection Regulation in user-hostile ways. It remains to be seen how grown-up the technology companies will be in this new round of legal constraints. Punishing users won't get the EU law changed.

***

The last couple of weeks have seen a few significant outages among Internet services. Two weeks ago, AT&T's wireless service went down for many hours across the US after a failed software update. On Tuesday, while millions of Americans were voting in the presidential primaries, it was Meta's turn, when a "technical issue" took out both Facebook and Instagram (and with the latter, Threads) for a couple of hours. Concurrently but separately, users of Ad Manager had trouble logging in at Google, and users of Microsoft Teams and exTwitter also reported some problems. Ironically, Meta's outage could have been resolved faster if the engineers working on it hadn't had trouble gaining remote access to the servers they needed (and couldn't get into the physical building because their passes didn't work either).

Outages like these should serve as reminders not to put all your login eggs in one virtual container. If you use Facebook to log into other sites, besides the visibility you're giving Meta into your activities elsewhere, those sites will be inaccessible any time Facebook goes down. In the case of AT&T, one reason this outage was so disturbing – the FCC is formally investigating it – is that the company has applied to get rid of its landlines in California. While lots of people no longer have landlines, they're important in rural areas where cell service can be spotty, some services such as home alarm systems and other equipment depend on them, and they function in emergencies when electric power fails.

But they should also remind us that the infrastructure we're deprecating in favor of "modern" Internet stuff was more robust than the new systems we're increasingly relying on. A home with smart devices that cannot function without an uninterrupted Internet connection is far more fragile and has more points of failure than one without them, just as you can still read a paper map when your phone is dead. At The Verge, Jennifer Pattison Tuohy tests a bunch of smart kitchen appliances, including a faucet you can operate via Alexa or Google voice assistants. As in digital microwave ovens, telling the faucet the exact temperature and flow rate you want…seems unnecessarily detailed. "Connect with your water like never before," the faucet manufacturer's website says. Given the direction of travel of many companies today, I don't want new points of failure between me and water.

***

It has – already! – been three years since Australia’s News Media Bargaining Code led to Facebook and Google signing three-year deals that have primarily benefited Rupert Murdoch’s News Corporation, owner of most of Australia’s press. A week ago, Meta announced it will not renew the agreement. At The Conversation, Rod Sims, who chaired the commission that formulated the law, argues it’s time to force Meta into the arbitration the code created. At ABC Science, however, James Purtill counters that the often “toxic” relationship between Facebook and news publishers means that forcing the issue won’t solve the core problem of how to pay for news, since advertising has found elsewheres it would rather be. (Separately, in Europe, 32 media organizations covering 17 countries have filed a €2.1 billion lawsuit against Google, matching a similar one filed last year in the UK, alleging that the company abused its dominant position to deprive them of advertising revenue.)

Purtill predicts, I think correctly, that attempting to force Meta to pay up will instead lead it to ban news on Facebook, as it did in Canada following the passage of a similar law. Facebook needed news once; it doesn't now. But societies do. Suddenly, I'm glad to pay the BBC's license fee.

Illustrations: Red deer (via Wikimedia).


Power cuts

In the latest example of corporate destruction, the Guardian reports on the disturbing trend in which streaming services like Disney and Warner Bros Discovery are deleting finished, even popular, shows for financial reasons. It’s like Douglas Adams’ rock star Hotblack Desiato spending a year dead for tax reasons.

Given that consumers' budgets are stretched so thin that many are reevaluating the streaming services they're paying for, you would think this would be the worst possible time to delete popular entertainments. Instead, the industry seems to be possessed by a death wish in which it's making its offerings *less* attractive. Even worse, the promise these services appeared to offer showrunners was creative freedom and broad, permanent access to their work. The news that Disney+ is even canceling finished shows (Nautilus) shortly before their scheduled release in order to pay less *tax* should send a chill down every creator's spine. No one wants to spend years of their life – for almost *any* amount of money – making things that wind up in the corporate equivalent of the warehouse at the end of Raiders of the Lost Ark.

It’s time, as the Masked Scheduler suggested recently on Mastodon, for the emergence of modern equivalents of creator-founded studios United Artists and Desilu.

***

Many of us were skeptical about Meta’s Oversight Board; it was easy to predict that Facebook would use it to avoid dealing with the PR fallout from controversial cases, but never relinquish control. And so it is proving.

This week, Meta overruled the Board's recommendation of a six-month suspension of the Facebook account belonging to former Cambodian prime minister Hun Sen. At issue was a video of one of Hun Sen's speeches, which everyone agreed incited violence against his opposition. Meta has kept the video up on the grounds of "newsworthiness"; Meta also declined to follow the Board's recommendation to clarify its rules for public figures in "contexts in which citizens are under continuing threat of retaliatory violence from their governments".

In the Platformer newsletter Casey Newton argues that the Board’s deliberative process is too slow to matter – it took eight months to decide this case, too late to save the election at stake or deter the political violence that has followed. Newton also concludes from the list of decisions that the Board is only “nibbling round the edges” of Meta’s policies.

A company with shareholders, a business model, and a king is never going to let an independent group make decisions that will profoundly shape its future. From Kate Klonick’s examination, we know the Board members are serious people prepared to think deeply about content moderation and its discontents. But they were always in a losing position. Now, even they must know that.

***

It should go without saying that anything that requires an Internet connection should be designed for connection failures, especially when the connected devices operate things in the physical world. The downside was made clear by a 2017 incident, when loss of signal meant a Tesla-owning venture capitalist couldn't restart his car. Or the one in 2021, when a bunch of Tesla owners found their phone app couldn't unlock their car doors. Tesla's solution both times was to tell car owners to make sure they always had their physical car keys. Which, fine, but then why have an app at all?

Last week, Bambu 3D printers began printing unexpectedly when they got disconnected from the cloud. The software managing the queue of print jobs lost the ability to monitor them, causing some to be restarted multiple times. Given the heat and extruded material 3D printers generate, this is dangerous both to the printers themselves and to their surroundings.

At TechRadar, Bambu’s PR acknowledges this: “It is difficult to have a cloud service 100% reliable all the time, but we should at least have designed the system more carefully to avoid such embarrassing consequences.” As TechRadar notes, if only embarrassment were the worst risk.

So, new rule: before installation, test every new "smart" device by blocking its Internet connection to see how it behaves. Of course, companies should do this themselves, but as we've seen, you can't rely on that either.

***

Finally, in "be careful what you legislate for", Canada is discovering the downside of C-18, which became law in June and requires the biggest platforms to pay for the Canadian news content they host. Google and Meta warned all along that they would stop hosting Canadian news rather than pay for it. Experts like law professor Michael Geist predicted that the bill would merely serve to dramatically cut traffic to news sites.

On August 1, Meta began blocking news links on Facebook and Instagram. A coalition of Canadian news outlets quickly asked the Competition Bureau to mount an inquiry into Meta's actions. At TechDirt, Mike Masnick notes the irony: first legacy media said Meta's linking to news was anticompetitive; now they say not linking is anticompetitive.

However, there are worse consequences. Prime minister Justin Trudeau complains that Meta’s news block is endangering Canadians, who can’t access or share local up-to-date information about the ongoing wildfires.

In a sensible world, people wouldn't rely on Facebook for their news, politicians would write legislation with greater understanding, and companies like Meta would wield their power responsibly. In *this* world, we have a perfect storm.

Illustrations: XKCD's Dependency.
