Boxed up

If the actions of the owners of streaming services are creating the perfect conditions for the return of piracy, it’s equally true that the adtech industry’s decisions continue to encourage installing ad blockers as a matter of self-defense. Overall, this is a bad thing: ads pay for much of what we read online, and most of us can’t afford to pay directly for everything we want to read.

This week, Google abruptly abandoned a change it had been working on for four years: its plan to replace third-party cookies in Chrome with a new technology it calls the Privacy Sandbox. From the sounds of it, Google will keep developing the Sandbox, but will retain third-party cookies alongside it. The privacy consequences of this are…muddy.

To recap: cookies are small files websites place on your computer, and they come in two kinds, distinguished by their source and use. Sites use first-party cookies to give their pages the equivalent of memory. They’re how the site remembers which items you’ve put in your cart, or that you’ve logged in to your account. These are the “essential cookies” that some consent banners mention; without them you couldn’t use the web interactively.
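At the protocol level, that memory is just a header exchange. A minimal annotated sketch (the header names are real HTTP; the domain and values are invented):

```
# First visit: the shop's own server issues a session cookie.
GET /cart HTTP/1.1
Host: shop.example.com

HTTP/1.1 200 OK
Set-Cookie: session=abc123; Path=/; HttpOnly; SameSite=Lax

# Every later request to the same site carries the cookie back,
# which is how "your" cart stays yours between pages.
GET /checkout HTTP/1.1
Host: shop.example.com
Cookie: session=abc123
```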

Third-party cookies are trackers. Once a company deposits one of these things on your computer, it can use it to follow along as you browse the web, collecting data about you and your habits the whole time. To capture the ickiness of this, Demos researcher Carl Miller has suggested renaming them slime trails. Third-party cookies are why the same ads seem to follow you around the web. They are also why people in the UK and Europe see so many cookie consent banners: the EU’s General Data Protection Regulation requires all websites to obtain informed consent before dropping them on our machines. Ad blockers help here. They won’t stop you from seeing the banners, but they can save you the time you’d have to spend adjusting settings on the many sites that make it hard to say no.
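Mechanically, the slime trail works because a page on one site can embed a resource – often an invisible tracking pixel – served from another domain, and it’s that other domain that sets and reads the cookie. A sketch, again with invented domains:

```
# news.example.org embeds: <img src="https://tracker.example.net/pixel.gif">
# The browser then makes this request to the tracker's domain:
GET /pixel.gif HTTP/1.1
Host: tracker.example.net
Referer: https://news.example.org/article
Cookie: uid=xyz789        # the same ID the tracker set on an earlier site

HTTP/1.1 200 OK
Set-Cookie: uid=xyz789; Domain=tracker.example.net; SameSite=None; Secure
```

The Referer header tells the tracker which page you were reading; repeat that across every site embedding the same pixel, and you have a browsing history tied to a single ID.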

The big technology companies are well aware that people hate both ads and being tracked in order to serve ads. In 2020, Apple announced that its Safari web browser would block third-party cookies by default, continuing work it started in 2017. This was one of several privacy-protecting moves the company made; in 2021, it began requiring iPhone apps to ask users’ permission before tracking them for advertising purposes. In 2022, Meta estimated Apple’s move would cost it $10 billion that year.

If the cookie seemed doomed at that point, it seemed even more so when Google announced it was working on new technology that would do away with third-party cookies in its dominant Chrome browser. Like Apple, however, Google proposed to give users greater control only over the privacy invasions of third parties without in any way disturbing Google’s own ability to track users. Privacy advocates quickly recognized this.

At Ars Technica, Ron Amadeo describes the Sandbox’s inner workings. Briefly, it derives a list of advertising topics from the websites users visit, and shares those with web pages when they ask. This is what you turn on when you say yes to Chrome’s “ad privacy feature”. Back when it was announced, EFF’s Bennett Cyphers was deeply unimpressed: instead of new tracking versus old tracking, he asked, why can’t we have *no* tracking? Just a few days ago, EFF followed up with the news that its Privacy Badger browser add-on now opts users out of the Privacy Sandbox (EFF has also published manual instructions).
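For a sense of what “shares those with web pages when they ask” means in practice, here’s a minimal TypeScript sketch of how an embedded ad script might query Chrome’s Topics API. The shape of the returned objects is simplified, and browsingTopics() isn’t in the standard DOM typings, hence the cast:

```ts
// Simplified shape; Chrome's real objects carry extra version fields.
type BrowsingTopic = { topic: number; version: string };

async function fetchAdTopics(): Promise<number[]> {
  const doc = document as Document & {
    browsingTopics?: () => Promise<BrowsingTopic[]>;
  };
  if (!doc.browsingTopics) return []; // Sandbox absent, or the user opted out
  // Chrome returns a few coarse interest categories (taxonomy IDs),
  // derived locally from the user's recent browsing history.
  const topics = await doc.browsingTopics();
  return topics.map((t) => t.topic);
}
```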

Google intended to make this shift in stages, beginning the process of turning off third-party cookies in January 2024 and finishing the job in the second half of 2024. Now, with the completion date rapidly approaching, the company has said it’s over – that is, it no longer plans to turn off third-party cookies. As Thomas Claburn writes at The Register, implementing the new technology still requires a lot of work from a lot of companies besides Google. The technology will remain in the browser – and users will “get” to choose which kind of tracking they prefer; Kevin Purdy reports at Ars Technica that the company is calling this a “new experience”.

At The Drum, Kendra Barnett reports that the UK’s Information Commissioner’s Office is unhappy about Google’s decision. Even though it had also identified possible vulnerabilities in the Sandbox’s design, the ICO had welcomed the plan to block third-party cookies.

I’d love to believe that Google’s announcement might have been helped along by the fact that the Sandbox is already the subject of legal action. Last month the privacy-protecting NGO noyb complained to the Austrian data protection authority, arguing that Sandbox tracking still requires user consent. Real consent, not obfuscated “ad privacy feature” stuff, as Richard Speed explains at The Register. But far more likely it’s money. At the Press Gazette, Jim Edwards reports that the Sandbox could cost publishers 60% of their revenue “from programmatically sold ads”. Note, however, that the figure is courtesy of adtech company Criteo, likely a loser under the Sandbox.

The question is what comes next. As Cyphers said, we deserve real choices: *whether* we are tracked, not just who gets to do it. Our lives should not be the leverage big technology companies use to enhance their already dominant position.

Illustrations: A sandbox (via Wikimedia)


Game of carrots

The big news of the week has been the result of the Epic Games v. Google antitrust trial. A California jury took four hours to agree with Epic that Google had illegally tied together its Play Store and billing service, so that app makers could only use the Play Store to distribute their apps if they also used Google’s service for billing, giving Google a 30% commission. Sort of like, I own half the roads in this town, and if you want to sell anything to my road users you have to have a store in my mall and pay me a third of your sales revenue, and if you don’t like it, tough, because you can’t reach my road users any other way. Meanwhile, the owner of the other half of the town’s roads is doing exactly the same thing, so you can’t win.

At his BIG Substack, antitrust specialist Matt Stoller, who has been following the trial closely, gloats, “the breakup of Big Tech begins”. Maybe not so fast: Epic lost its similar case against Apple, and both cases are subject to appeal. Stoller suggests, however, that the latest judgment will carry more weight because it came from a jury of ordinary citizens rather than, as in the Apple case, a single judge – a precedent he believes will be harder for future courts to ignore.

At The Verge, Sean Hollister, who has been covering the trial in detail, offers a summary of 20 key points he felt the trial established. Written before the verdict, Hollister’s assessment of Epic’s chances proved correct.

Even if the judgment is upheld in the higher courts, it will be a while before users see any effects. But: even if the judgment is overturned in the higher courts, my guess is that the technology companies will begin to change their behavior at least a bit, in self-defense. The real question is, what changes will benefit us, the people whose lives are increasingly dominated by these phones?

I personally would like it to be much easier to use an Android phone without ever creating a Google account, and to be confident that the phone isn’t sending masses of tracking data to either Google or the phone’s manufacturer.

But…I would still like to be able to download the apps I want from a source I can trust. I care less about who provides the source than I do about what data they collect about me and the cost.

I want that source to be easy to access, easy to use, and well-stocked, defining “well-stocked” as “has the apps I want” (which, granted, is a short list). The nearest analogy that springs to mind is TV channels. You don’t really care what channel the show you want to watch is on; you just want to be able to watch the show without too much hassle. If there weren’t so many rights holders running their own streaming services, the most sensible business logic would be for every show to be on every service. Then instead of competing on their catalogues, the services would be competing on privacy, or interface design, or price. Why shouldn’t we have independent app stores like that?

Mobile phones have always been more tightly controlled than the world of desktop computing, largely because they grew out of the tightly controlled telecommunications world. Desktop computing, like the Internet, served first the needs of the military and academic research, and they remain largely open even when they’re made by the same companies who make mobile phone operating systems. Desktop systems also developed at a time when American antitrust law still sought to increase competition.

It did not stay that way. As current FTC chair Lina Khan made her name pointing out in 2017, antitrust thinking for the last several decades has been limited to measuring consumer prices. The last big US antitrust case to focus on market effects was Microsoft, back in 1995. In the years since, it’s been left to the EU to act as the world’s antitrust enforcer. Against Google, the EU has filed three cases since 2010: over Shopping (Google was found guilty in 2017 and fined €2.4 billion, upheld on appeal in 2021); Android, over Google apps and the Play Store (Google was found guilty in 2018 and fined €4.3 billion and required to change some of its practices); and AdSense (fined €1.49 billion in 2019). But fines – even if the billions eventually add up to real money – don’t matter enough to companies with revenues the size of Google’s. Being ordered to restructure its app store might.

At the New York Times, Steve Lohr compares the Microsoft and Epic v Google cases. Microsoft used its contracts with PC makers to prevent them from preinstalling its main web browser rival, Netscape, in order to own users’ path into the accelerating digital economy. Google’s contracts instead paid Apple, Samsung, Mozilla, and others to favor it on their systems – “carrots instead of sticks,” NYU law professor Harry First told Lohr.

The best thing about all this is that the Epic jury was not dazzled by the incomprehensibility effect of new technology. Principles are coming back into focus. Tying – leveraging your control over one market in order to dominate another – is no different in app stores than it is in gas stations or movie theaters.

Illustrations: “The kind of anti-trust legislation that is needed”, by J.S. Pughe (via Library of Congress).


New phone, who dis?

So I got a new phone. What makes the experience remarkable is that the old phone was a Samsung Galaxy Note 4, which, if Wikipedia is correct, was released in 2014. So the phone was at least eight, probably nine, years old. When you update incrementally, like a man who gets his hair cut once a week, it’s hard to see any difference. When you leapfrog numerous generations of updates, it’s like seeing a man who’s had his first haircut in a year: it’s a shock.

The tl;dr: most of what I don’t like about the switch is because of Google.

There were several reasons why I waited so long. It was a good enough phone and it had a very good camera for its time; I finessed the lack of security updates by not using the phone for functions where it mattered. Also, I didn’t want to give up the disappearing headphone jack, home button, or, especially, user-replaceable battery. The last of those is why I could keep the phone for so long, and it was the biggest deal-breaker.

For that reason, I’ve known for years that the Note’s eventual replacement would likely be a Fairphone, made by a Dutch outfit that is doing its best to produce sustainable phones. It’s repairable and user-upgradable (it takes one screwdriver to replace a cracked screen or the camera), and changing the battery takes a second. I had to compromise on the headphone jack, which requires a USB-C dongle. Not having the home button is hard to get used to; I used it constantly. It turns out, though, that it’s even harder to get used to not having the soft button on the bottom left that used to show me recently used apps so I could quickly switch back to the thing I was using a few minutes ago. But that…is software.

The biggest and most noticeable change between Android 6 (the Note 4 got its last software update in 2017) and Android 13 (last week) is in the assumptions both Google, Android’s owner, and the providers of other apps make about what users want. On the Note 4, I had a quick-access button to turn the wifi on and off; except for the occasional call over Signal, I saw no reason to keep it on, draining the battery unnecessarily. Today, that same switch is buried several layers deep in settings, with apparently no way to move it into the list of quick-access functions. That’s just one example. But no accommodation for my personal quirks can change the sense of being bullied into giving away more data and control than I’d like.

Giving in to Google does, however, mean an easy transfer of your old phone’s contents to your new phone (if transferring the external SD card isn’t enough).

Too late, I remembered the name Murena – a company that equips Fairphones with de-Googlified Android. As David Pierce writes at The Verge, that requires a huge effort. Murena has built replacements for the standard Google apps and a cloud system for email, calendars, and productivity software. Even so, Pierce writes, apps are where the effort hits its limit: despite Murena’s attempts to preserve user anonymity, it’s just not possible to download them without interacting with Google, especially when payment is required. And who wants to run their phone without third-party apps? Not even me (although I note that many of those I use can still be sideloaded).

The reality is I would have preferred to wait even longer to make the change. I was pushed by the fact that several times recently the Note complained it couldn’t download email because it was running out of storage space (which is why I would prefer to store everything on an external SD card, but: not an option for email and apps). And on a recent trip to the US, there were numerous occasions when the phone simply didn’t work, even though there shouldn’t be any black spots in places like Boston and San Francisco. A friend suggested that older frequency bands were likely being turned off, while the newer ones replacing them were probably bands the Note couldn’t use. I had forgotten that 5G, which I last thought about in 2018, had been arriving. So: new phone. Resentfully.

This kind of forced wastefulness is one of the things Donald Norman talks about in his new book, Design for a Better World. To some extent, the book is a mea culpa: after decades of writing about how to design things better to benefit us as individuals, Norman has recognized the necessity to rethink and replace human-centered design with humanity-centered design. Sustainability is part of that.

Everything around us is driven by design choices. Building unrepairable phones is a choice, and a destructive one, given the amount of rare materials inside them that wind up in landfills instead of in new phones or some other application. The Guardian’s review of the latest Fairphone asks, “Could this be the first phone to last ten years?” I certainly hope so, but if something takes it down before then it will be an externality like switched-off bands, the end of software updates, or a bank’s decision to require customers to use an app for two-factor authentication and then update it so older phones can’t run it. These are, as Norman writes, complex systems in which the incentives are all misplaced. And so: new phone. Largely unnecessarily.

Illustrations: Personally owned 1970s AT&T phone.


The data grab

It’s been a good week for those who like mocking flawed technology.

Numerous outlets have reported, for example, that “AI is getting dumber at math”. The source is a study conducted by researchers at Stanford and the University of California, Berkeley, comparing GPT-3.5’s and GPT-4’s output in March and June 2023. The researchers found that, among other things, GPT-4’s success rate at identifying prime numbers dropped from 84% to 51%. In other words, by June 2023 GPT-4 was doing little better than a coin toss at identifying prime numbers. That’s psychic level.
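Unlike the models’ answers, checking primality is deterministic; all a study like this needs for ground truth is trial division. A minimal TypeScript sketch – illustrative, not the researchers’ actual harness:

```ts
// Ground truth: trial division, fine for numbers of the size tested.
function isPrime(n: number): boolean {
  if (n < 2) return false;
  for (let d = 2; d * d <= n; d++) {
    if (n % d === 0) return false;
  }
  return true;
}

// Score a batch of yes/no answers. On a balanced test set, answering at
// random scores ~50% – which is why 51% is barely better than guessing.
function accuracy(answers: Array<{ n: number; saidPrime: boolean }>): number {
  const correct = answers.filter((a) => a.saidPrime === isPrime(a.n));
  return correct.length / answers.length;
}
```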

The researchers blame “drift”, the problem that improving one part of a model may have unhelpful knock-on effects in other parts of the model. At Ars Technica, Benj Edwards is less sure, citing qualified critics who question the study’s methodology. It’s equally possible, he suggests, that as the novelty fades, people’s attempts to do real work surface problems that were there all along. With no access to the algorithm itself and limited knowledge of the training data, we can only conduct such studies by controlling inputs and observing the outputs, much like diagnosing allergies by giving a child a series of foods in turn and waiting to see which ones make them sick. Edwards advocates greater openness on the part of the companies, especially as software developers begin building products on top of their generative engines.

Unrelated, the New Zealand discount supermarket chain Pak’nSave offered an “AI” meal planner that, set loose, promptly began turning out recipes for “poison bread sandwiches”, “Oreo vegetable stir-fry”, and “aromatic water mix” – which turned out to be a recipe for highly dangerous chlorine gas.

The reason is human-computer interaction: humans, told to provide a list of available ingredients, predictably became creative. As for the computer…anyone who’s read Janelle Shane’s 2019 book, You Look Like a Thing and I Love You, or her Twitter reports on AI-generated recipes could have predicted this outcome. Computers have no real-world experience against which to judge their output!

Meanwhile, the San Francisco Chronicle reports, Waymo and Cruise driverless taxis are making trouble at an accelerating rate. The cars have gotten stuck in low-hanging wires after thunderstorms, driven through caution tape, blocked emergency vehicles and emergency responders, and behaved erratically enough to endanger cyclists, pedestrians, and other vehicles. If they were driven by humans they’d have lost their licenses by now.

In an interesting side note that reminds of the cars’ potential as a surveillance network, Axios reports that in a ten-day study in May Waymo’s driverless cars found that human drivers in San Francisco speed 33% of the time. A similar exercise in Phoenix, Arizona observed human drivers speeding 47% of the time on roads with a 35mph speed limit. These statistics of course bolster the company’s main argument for adoption: improving road safety.

The study should – but probably won’t – be taken as a warning of the potential for the cars’ data collection to become embedded in both law enforcement and their owners’ business models. The frenzy surrounding ChatGPT-* is fueling an industry-wide data grab as everyone tries to beef up their products with “AI” (see also previous such exercises with “meta”, “nano”, and “e”), consequences to be determined.

Among the newly-discovered data grabbers is Intel, whose graphics processing unit (GPU) drivers are collecting telemetry data, including how you use your computer, the kinds of websites you visit, and other data points. You can opt out, assuming you a) realize what’s happening and b) are paying attention at the right moment during installation.

Google announced recently that it would scrape everything people post online to use as training data. Again, an opt-out can be had if you have the knowledge and access to follow the 30-year-old robots.txt protocol. In practical terms, I can configure my own site, pelicancrossing.net, to block Google’s data grabber, but I can’t stop it from scraping comments I leave on other people’s blogs or anything I post on social media sites or that’s professionally published (though those sites may block Google themselves). This data repurposing feels like it ought to be illegal under data protection and copyright law.
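For anyone who does have that access, the opt-out is a couple of lines in a text file at the site’s root. A sketch, assuming the Google-Extended crawler token the company documents for AI training – and note that compliance with robots.txt is voluntary on the crawler’s part:

```
# https://pelicancrossing.net/robots.txt (token assumed from Google's docs)
User-agent: Google-Extended
Disallow: /
```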

In Australia, Gizmodo reports that Google has asked the Australian government to relax copyright laws to facilitate AI training.

Soon after Google’s announcement, the law firm Clarkson filed a class action lawsuit against Google to join its existing action against OpenAI. The suit accuses Google of “stealing” copyrighted works and personal data.

“Google does not own the Internet,” Clarkson wrote in its press release. Will you tell it, or shall I?

Whatever has been going on until now with data slurping in the interests of bombarding us with microtargeted ads is small stuff compared to the accelerating acquisition for the purpose of feeding AI models. Arguably, AI could be a public good in the long term as it improves, and therefore allowing these companies to access all available data for training is in the public interest. But if that’s true, then the *public* should own the models, not the companies. Why should we consent to the use of our data so they can sell it back to us and keep the proceeds for their shareholders?

It’s all yet another example of why we should pay attention to the harms that are clear and present, not the theoretical harm that someday AI will be general enough to pose an existential threat.

Illustrations: IBM Watson, Jeopardy champion.


Own goals

There’s no point in saying I told you so when the people you’re saying it to got the result they intended.

At the Guardian, Peter Walker reports the Electoral Commission’s finding that at least 14,000 people were turned away from polling stations in May’s local elections because they didn’t have the right ID as required under the new voter ID law. The Commission thinks that’s a huge underestimate; 4% of people who didn’t vote said it was because of voter ID – which Walker suggests could mean 400,000 were deterred. Three-quarters of those lacked the right documents; the rest opposed the policy. The demographics of this will be studied more closely in a report due in September, but early indications are that the policy disproportionately deterred people with disabilities, people from certain ethnic groups, and people who are unemployed.

The fact that the Conservatives, who brought in this policy, lost big time in those elections doesn’t change its wrongness. But it did lead the MP Jacob Rees-Mogg (Con-North East Somerset) to admit that this was an attempt to gerrymander the vote that backfired because older voters, who are more likely to vote Conservative, also disproportionately don’t have the necessary ID.

***

One of the more obscure sub-industries is the business of supplying ad services to websites. One such little-known company is Criteo, which provides interactive banner ads that are generated based on the user’s browsing history and behavior using a technique known as “behavioral retargeting”. In 2018, Criteo was one of seven companies listed in a complaint Privacy International and noyb filed with three data protection authorities – the UK, Ireland, and France. In 2020, the French data protection authority, CNIL, launched an investigation.

This week, CNIL issued Criteo with a €40 million fine over failings in how it gathers user consent, a ruling noyb calls a major blow to Criteo’s business model.

It’s good to see the legal actions and fines beginning to reach down into adtech’s underbelly. It’s also worth noting that the CNIL was willing to fine a *French* company to this extent. It makes it harder for the US tech giants to claim that the fines they’re attracting are just anti-US protectionism.

***

Also this week, the US Federal Trade Commission announced it’s suing Amazon, claiming the company enrolled millions of US consumers into its Prime subscription service through deceptive design and sabotaged their efforts to cancel.

“Amazon used manipulative, coercive, or deceptive user-interface designs known as ‘dark patterns’ to trick consumers into enrolling in automatically-renewing Prime subscriptions,” the FTC writes.

I’m guessing this is one area where data protection laws have worked. In my UK-based ultra-brief Prime outings to watch the US Open tennis, canceling has taken at most two clicks. I don’t recognize the tortuous process Business Insider documented in 2022.

***

It has long been no secret that the secret behind AI is human labor. In 2019, Mary L. Gray and Siddharth Suri documented this in their book Ghost Work. Platform workers label images and other content, annotate text, and solve CAPTCHAs to help train AI models.

At MIT Technology Review, Rhiannon Williams reports that platform workers are using ChatGPT to speed up their work and earn more. A study (PDF) by a team of researchers from the Swiss Federal Institute of Technology found that between 33% and 46% of the 44 workers they asked to summarize 16 extracts from medical research papers used AI models to complete the task.

It’s hard not to feel a little gleeful that today’s “AI” is already eating itself via a closed feedback loop. It’s not good news for platform workers, though, because the most likely consequence will be increased monitoring to force them to show their work.

But this is yet another case in which computer people could have learned from their own history. In 2008, researchers at Google published a paper suggesting that Google search data could be used to spot flu outbreaks. Sick people searching for information about their symptoms could provide real-time warnings ten days earlier than the Centers for Disease Control could.
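The underlying idea was simple correlation. A hedged sketch of its flavor – one search term and ordinary least squares, where the real model used many terms:

```ts
// Fit past flu incidence to weekly search volume, then use current
// search volume as an early-warning estimate of incidence.
function fitLine(x: number[], y: number[]): { slope: number; intercept: number } {
  const n = x.length;
  const mx = x.reduce((a, b) => a + b, 0) / n;
  const my = y.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (x[i] - mx) * (y[i] - my);
    den += (x[i] - mx) ** 2;
  }
  const slope = num / den;
  return { slope, intercept: my - slope * mx };
}

// x: weekly volume for a term like "flu symptoms"; y: lagged official flu
// rates. The catch: any term that merely correlates with winter fits just
// as well – the poisoning problem discussed below.
function predictFlu(searchesNow: number, model: { slope: number; intercept: number }): number {
  return model.slope * searchesNow + model.intercept;
}
```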

This actually worked, some of the time. However, as Kaiser Fung reported at Harvard Business Review in 2014, as early as 2009 Google Flu Trends missed the swine flu pandemic; in 2012, researchers found that it had overestimated the prevalence of flu for 100 out of the previous 108 weeks. More data is not necessarily better, Fung concluded.

In 2013, GFT missed by 140%, as David Lazer and Ryan Kennedy reported for Wired in 2015 in discussing their investigation into the failure of this idea (they don’t explain what that figure means). Lazer and Kennedy found that Google’s algorithm was vulnerable to poisoning by unrelated seasonal search terms and by terms that correlated purely by chance, and that it failed to take into account changing user behavior, as when Google introduced autosuggest and added health-related suggested searches. The “availability” cognitive bias also played a role: when flu is in the news, searches go up whether or not people are sick.

While the parallels aren’t exact, large language modelers could have drawn the lesson that users can poison their models. ChatGPT’s arrival for widespread use will inevitably thin out the proportion of text that is human-written – and taint the well from which LLMs drink. Everyone imagines the next generation’s increased power. But it’s equally possible that the next generation will degrade as the percentage of AI-generated data rises.

Illustrations: Drunk parrot seen in a Putney garden (by Simon Bisson).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.