My Canadian friend was tolerant enough to restart his Nissan car so I could snap a pic of the startup screen (above) and read it in full. That’s when he followed the instructions to set the car to send only “limited data” to the company. What is that data? The car didn’t say. (Its manual might, but I didn’t test his patience that far.)
In 2023, a Mozilla Foundation study of US cars’ privacy called them the worst category it had ever tested and named Nissan as the worst offender: “The Japanese car manufacturer admits in their privacy policy to collecting a wide range of information, including sexual activity, health diagnosis data, and genetic data – but doesn’t specify how. They say they can share and sell consumers’ ‘preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes’ to data brokers, law enforcement, and other third parties.”
Fortunately, my friend lives in Canada.
Granting that no one wants to read privacy policies, at least apps and websites display them in their excruciating, fine-print detail. They can afford to because you usually encounter them when you’re not in a hurry. That can’t be said of the startup screen in a car, when you just want to get moving. Its interference has to be short.
The new startup screen confirmed that “limited data” was now being sent. I wasn’t quick enough to read the rest, but it probably warned that some features might not now work – because that’s what they *always* say, and it’s what the settings warned (without specifics) when he changed them.
I assume this setup complies with Canada’s privacy laws. But the friction of consulting a manual or website to find out what data is being sent deters customers from exercising their rights. Like website dark patterns, it’s gamed to serve the company’s interests. It feels like grudging compliance with the law, especially because customers are automatically opted in.
How companies obtain consent is a developing problem. At Privacy International, Gus Hosein considers the future of AI assistants. Naturally, he focuses on the privacy implications: the access they’ll need, the data they’ll be able to extract, the new kinds of datasets they’ll create. And he predicts that automation will tempt developers to bypass the friction of consent and permission. In 2013, we considered this with respect to emotionally manipulative pet robots.
I had to pause when Hosein got to “screenless devices” to follow some links. At Digital Trends, Luke Edwards summarizes a report at The Information that OpenAI may acquire io Products. This start-up, led by renowned former Apple design chief Jony Ive and OpenAI CEO Sam Altman, intends to create AI voice-enabled assistants that may (or may not) take the form of a screenless “phone” or household device.
Meanwhile, The Vergecast (MP3) reports that Samsung is releasing Ballie, a domestic robot infused with Google Cloud’s generative AI that can “engage in natural, conversational interactions to help users manage home environments”. Samsung suggests you can have it greet people at the door. So much nicer than being greeted by your host.
These remind me of the Humane AI Pin, whose ten-month product life ended in February with HP’s purchase of the company’s assets for $116 million. Or the similar Bee, whose “summaries” of meetings and conversations Victoria Song at The Verge called “fan fiction”. Yes: as factual as any other generative AI chatbot. More notably, the Bee couldn’t record silent but meaningful events, leading Song to wonder, “In a hypothetical future where everyone has a Bee, do unspoken memories simply not exist?”
In November 2018, a Reuters Institute report on the future of news found that on a desktop computer the web can offer 20 news links at a time; a smartphone has room for seven, and smart speakers just one. Four years later, smart speakers were struggling as a category because manufacturers couldn’t make money out of them. But apparently Silicon Valley still thinks the shrunken communications channel of voice beats typing and reading, and it is plowing on. It gives the companies greater control.
The thin, linear stream of information is why Hosein foresees the temptation to avoid the friction of consent. But the fine-grained user control he wants will, I think, mandate offloading the review of collected data and the granting or revoking of permissions onto a device with a screen. Like smart watches, these screenless devices will have to be part of a system. What Hosein wants, and what Cory Doctorow has advocated for web browsers, is that these technological objects should be loyal agents. That is, they must favor *our* interests over those of their manufacturer, or we won’t trust them.
More complicated is the situation with respect to incops – incidentally co-present [people] – whose data is also captured without their consent: me in my friend’s car, everyone Song encountered. Mozilla reported that Subaru claims that by getting in the car passengers become “users” who consent to data collection (as if); several other manufacturers say that the driver is responsible for notifying passengers. Song found it easier to mute the Bee in her office and while commuting than to ask colleagues and passersby for permission to record. Then she found it didn’t always disconnect when she told it to…
So now imagine that car saturated with an agentic AI assistant that decides where you want to go and drives you there.
“You don’t want to do that, Dave.”
Illustrations: The startup screen in my Canadian friend’s car.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.