The end of cool

For a good bit of this year’s We Robot, it felt like abstract “AI” – that is, algorithms running on computers with no mobility – had swallowed the robots whose future this conference was invented to think about. This despite a pre-conference visit to Boston Dynamics, which showed off its Atlas robot’s ability to do gymnastics. It’s cute, but is it useful? Your washing machine is smarter, and its intelligence solves real problems like how to use less water.

There’s always some uncertainty about boundaries at this event: is a machine learning decision-making system a robot? At the inaugural We Robot in 2012, the engineer Bill Smart summed up the difference: “My iPhone can’t stab me in my bed.” Of course, neither could an early Roomba, which most would agree was the first domestic robot. However, it was also dumb as a floor tile, achieving cleanliness through random repetition rather than intelligent mapping. In the Roomba 1.0 sense, a “robot” is “a device that does boring things so I don’t have to”. Not cool, but useful, and it solves a real problem.

During a session in which participants played a game designed to highlight the conflicts inherent in designing an urban drone delivery system, Lael Odhner offered yet another definition: “A robot is a literary device we use to voice our discomfort with technology.” In the context of an event where participants think through the challenges robots bring to law and policy, this may be the closest approximation.

In the design exercise, our table’s three choices were: fund the FAA (so they can devise and enforce rules and policies), build it as a municipally-owned public service both companies and individuals can use as customers, and ban advertising on the drones for reasons of both safety and offensiveness. A similar exercise last year produced more specific rules, but also led us to realize that a drone delivery service had no benefits over current delivery services.

Much depends on scale. One reason we chose a municipal public service was the scale of noise and environmental impact inevitably generated by multiple competing commercial services. In a paper, Woody Hartzog examined the meaning of “scale”: is scale *more*, or is scale *different*? You can argue, as net.wars often has, that scale *creates* difference, but it’s rarely clear where to place the threshold, or how reaching it changes a technology’s harms or who it makes vulnerable. Ryan Calo and Daniella DiPaola suggested that rather than associate vulnerability with particular classes of people we should see it as variable with circumstances: “Everyone is vulnerable sometimes, and vulnerability is a state that can be created and manipulated toward particular ends.” This seems a more logical and fairer approach.

An aspect of this is that there are two types of rules: harm rules, which empower institutions to limit harm, and power rules, which empower individuals to protect themselves. A possible worked example soon presented itself in Kegan J. Strawn’s and Daniel Sokol’s paper on safety techniques in mobile robots, which suggested copying medical ethics’ consent approach. Then someone described the street scene in which every pedestrian had to give consent to every passing experimental Tesla, possibly an even worse scenario than ad-bearing delivery drones. Pedestrians get nothing out of the situation, and Teslas don’t become safer. What you really want is for car companies not to test the safety of autonomous vehicles on public roads with pedestrians as unwitting crash test dummies.

I try to think every year about how our ideas about integrating robots into society are changing over time. An unusual paper from Maria P. Angel considered this question with respect to privacy scholarship by surveying 1990s writing and 20 years of papers presented at Privacy Law Scholars, the conference whose design We Robot co-founders Calo, Michael Froomkin, and Ian Kerr partly copied. Angel’s conclusion is roughly that the 1990s saw calls for an end to self-regulation, while the 2000s moved from treating privacy as necessary for individual autonomy and self-determination to stressing its collective benefits and, most recently, its importance for human flourishing.

As Hartzog commented, he came to the first We Robot with the belief that “Robots are magic”, only to encounter Smart’s “really fancy hammers.” And, as Smart and Cindy Grimm added in 2018, hammers controlled by sensors that are “late, noisy, and wrong”. Hartzog’s early excitement was shared by many of us; the future looked so *interesting* when it was almost entirely imaginary.

Over time, the robotic future has become more nowish, and has shifted in response to technological development; the discussion has become more about real systems (2022) than imagined future ones. The arrival of real robots on our streets – for example, San Francisco’s 2017 use of security robots to deter homeless camps – changed parts of the discussion from theoretical to practical.

In the mid-2010s, much discussion focused on problems of fairness, especially to humans in the loop, who, Madeleine Claire Elish correctly predicted in 2016, would be blamed for failures. More recently, the proliferation of data-gathering devices (sensors, cameras) into everything from truckers’ cabs to agriculture and the arrival of new algorithmic systems dubbed AI have raised awareness of the companies behind these technologies – and, latterly, of how often the technology diverts attention from the better possibilities of structural change.

But that’s not as cool.

Illustrations: Boston Dynamics’ Atlas robots doing synchronized backflips (via YouTube).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Review: Data Driven

Data Driven: Truckers, Technology, and the New Workplace Surveillance
By Karen Levy
Princeton University Press
ISBN: 978-0-6911-7530-0

The strikes in Hollywood show actors and writers in an existential crisis: a highly lucrative industry used to pay them a good middle-class living but now has the majority struggling just to survive. In her recent book, Data Driven, Cornell assistant professor Karen Levy finds America’s truckers in a similar plight.

Both groups have had their industries change around them because of new technology. In Hollywood, streaming came along to break the feedback loop that powered a highly successful business model for generations. In trucking, the culprit is electronic logging devices (ELDs), which are changing the profession entirely.

Levy has been studying truckers since 2011. At that point, ELDs were beginning to appear in truckers’ cabs but were purely voluntary. That changed in 2017, when the Federal Motor Carrier Safety Administration’s rule mandating their use came into force. The intention, as always, is reasonably benign: to improve safety by ensuring that truckers on the road remain alert and comply with the regulations governing the hours they’re allowed to work.

As part of this work, Levy has interviewed truckers, family members, and managers, and studied trucker-oriented media such as online forums, radio programs, and magazines. She was also able to examine auditing practices in both analog and digital formats.

Some of her conclusions are worrying. For example, she finds that taking truckers’ paper logs into an office away from the cab allowed auditors more time to study them and greater ability to ask questions about them. ELDs, by contrast, are often wired into the cab, and the auditor must inspect them in situ. Where the paper logs were simply understood, many inspectors struggle with the ELDs’ inconsistent interfaces, and being required to enter what is after all the trucker’s personal living space tends to limit the time they spend.

Truckers by and large experience the ELDs as intrusive. Those who have been at the wheel the longest most resent the devaluation of their experience the devices bring. Unlike the paper logs, which remained under the truckers’ control, ELDs often send the data they collect direct to management, who may respond by issuing instructions that override the trucker’s own decisions and on-site information.

Levy’s main point would resonate with those Hollywood strikers. ELDs are being used to correct the genuine problem of tired, and therefore unsafe, truckers. Yet the reason truckers are so tired and take the risk of overworking is the way the industry is structured. Changing how drivers are paid – from purely by the mile to including the hours they spend moving their trucks around the yards, waiting to unload, and other periods of unavoidable delay – would be far more effective. Worse, it’s the most experienced truckers who are most alienated by the ELDs’ surveillance. Replacing them with younger, less experienced drivers will not improve road safety for any of us.

Surveillance machines on wheels

After much wrangling and with just a few days of legislative time between the summer holidays and the party conference season, on Tuesday night the British Parliament passed the Online Safety bill, which will become law as soon as it gets royally signed (assuming they can find a pen that doesn’t leak). The government announcement brims with propagandist ecstasy, while the Open Rights Group’s statement offers the reality: Britons’ online lives will be less secure as a result. Which means everyone’s will.

Parliament – and the net.wars archive – dates the current version of this bill to 2022, and the online harms white paper on which it’s based to 2020. But it *feels* like it’s been a much longer slog; I want to say six years.

This is largely because the fight over two key elements – access to encrypted messaging and age verification – *is* that old. Age verification was enshrined in the Digital Economy Act (2017), and we reviewed the contenders to implement it in 2016. If it’s ever really implemented, age verification will make Britain the most frustrating place in the world to be online.

Fights over strong encryption have been going on for 30 years. In that time, no new mathematics has appeared to change the fact that it’s not possible to create a cryptographic hole that only “good guys” can use. Nothing will change about that; technical experts will continue to try to explain to politicians that you can have secure communications or you can have access on demand, but you can’t have both.

***

At the New York Times, Farhad Manjoo writes that while almost every other industry understands that the huge generation of aging Boomers is a business opportunity, outside of health care Silicon Valley is still resolutely focused on under-30s. This, even though the titans themselves age; boy-king Mark Zuckerberg is almost 40. Hey, it’s California; they want to turn back aging, not accept it.

Manjoo struggles to imagine the specific directions products might take, but I like his main point: where’s the fun? What is this idea that after 65 you’re just something to send a robot to check up on? Yes, age often brings impairments, but why not build for them? You would think that given the right affordances, virtual worlds and online games would have a lot to offer people whose lives are becoming more constrained.

It’s true that by the time you realize that ageism pervades our society you’re old enough that no one’s listening to you any more. But even younger people must struggle with many modern IT practices: the pale, grey type that pervades the web, the picklists, the hidden passwords you have to type twice… And captchas, which often display on my desktop too small to see clearly and are resistant to resizing upwards. Bots are better at captchas than humans anyway, so what *is* the point?

We’re basically back where we were 30 years ago, when the new discipline of human-computer interaction fought to convince developers that if the people who struggle to operate their products look stupid the problem is bad design. And all this is coming much more dangerously to cars; touch screens that can’t be operated by feel are Exhibit A.

***

But there is much that’s worse about modern cars. A few weeks ago, the Mozilla Foundation published a report reviewing the privacy of modern cars. Tl;dr: “Cars are the worst product category we have ever reviewed for privacy.”

The problems are universal across the 25 brands Mozilla researchers Jen Caltrider, Misha Rykov, and Zoë MacDonald reviewed: “Modern cars are surveillance-machines on wheels souped-up with sensors, radars, cameras, telematics, and apps that can detect everything we do inside.” Cars can collect all the data that phones and smart home devices can. But unlike phones, space is a non-issue, and unlike smart speakers, video cameras, and thermostats, cars move with you and watch where you go. Drivers, passengers, passing pedestrians…all are fodder for data collection in the new automotive industry, where heated seats and unlocking extra battery range are subscription add-ons, and the car you buy isn’t any more yours than the £6-per-hour Zipcar in the designated space around the corner.

Then there are just some really weird clauses in the companies’ privacy policies. Some collect “genetic data” (here the question that arises is not only “why?” but “how?”). Nissan says it can collect information about owners’ “sexual activity” for use in “direct marketing” or to share with marketing partners. The researchers ask, “What on earth kind of campaign are you planning, Nissan?”

Still unknown: whether the data is encrypted while held on the car; how securely it’s held; and whether the companies will resist law enforcement requests at all. We do know that car companies share and sell the masses of intimate information they collect, especially the cars’ telematics, with insurance companies.

The researchers also note that new features allow unprecedented levels of control. VW’s Car-Net, for example, allows parents – or abusers – to receive a phone alert if the car is driven outside of set hours or in or near certain locations. Ford has filed a patent on a system for punishing drivers who miss car payments.

“I got old at the right time,” a friend said in 2019. You can see his point.

Illustrations: Artist Dominic Wilcox’s imagined driverless sleeper car of the future, as seen at the Science Museum in 2019.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Guarding the peace

Police are increasingly attempting to prevent crime by using social media targeting tools to shape public behavior, says a new report from the Scottish Institute for Policing Research (PDF) written by a team of academic researchers led by Ben Collier at the University of Edinburgh. There is no formal regulation of these efforts, and the report found many examples of what it genteelly calls “unethical practice”.

On the one hand, “behavioral change marketing” seems an undeniably clever use of new technological tools. If bad actors can use targeted ads to scam, foment division, and incite violence, why shouldn’t police use them to encourage the opposite? The tools don’t care whether you’re a Russian hacker targeting 70-plus white pensioners with anti-immigrant rhetoric or a charity trying to reach vulnerable people to offer help. Using them is a logical extension of the drive toward preventing, rather than solving, crime. Governments have long used PR techniques to influence the public, from benign health PSAs on broadcast media to Theresa May’s notorious, widely criticized, and unsuccessful 2013 campaign of van ads telling illegal immigrants to go home.

On the other hand, it sounds creepy as hell. Combining police power with poorly-evidenced assumptions about crime and behavior and risk and the manipulation and data gathering of surveillance capitalism…yikes.

The idea of influence policing derives at least in part from Cass R. Sunstein’s and Richard H. Thaler’s 2008 book Nudge. The “nudge theory” it promoted argued that the use of careful design (“choice architecture”) could push people into making more desirable decisions.

The basic contention seems unarguable; using design to push people toward decisions they might not make by themselves is the basis of many large platforms’ design decisions. Dark patterns are all about that.

Sunstein and Thaler published their theory at the post-financial crisis moment when governments were looking to reduce costs. As early as 2010, the UK’s Cabinet Office set up the Behavioural Insights Team to improve public compliance with government policies. The “Nudge Unit” has been copied widely across the world.

By 2013, it was being criticized for forcing job seekers to fill out a scientifically invalid psychometric test. In 2021, Observer columnist Sonia Sodha called its record “mixed”, deploring the expansion of nudge theory into complex, intractable social problems. In 2022, new research cast doubt on the whole approach, finding that nudges have little effect on personal behavior.

The SIRP report cites the Government Communications Service, the outgrowth of decades of government work to gain public compliance with policy. The GCS itself notes its incorporation of marketing science and other approaches common in the commercial sector. Its 7,000 staff work in departments across government.

This has all grown up alongside the increasing adoption of digital marketing practices across the UK’s public sector, including the tax authorities (HMRC), the Department of Work and Pensions, and especially, the Home Office – and alongside the rise of sophisticated targeting tools for online advertising.

The report notes: “Police are able to develop ‘patchwork profiles’ built up of multiple categories provided by ad platforms and detailed location-based categories using the platform targeting categories to reach extremely specific groups.”

The report’s authors used the Meta Ad Library to study the ads, the audiences and profiles police targeted, and the cost. London’s Metropolitan Police, which a recent scathing report found endemically racist and misogynist, was an early adopter and is the heaviest user of digitally targeted ads on Meta among the forces studied.

Many of the sample campaigns these organizations run sound mostly harmless. Campaigns intended to curb domestic violence, for example, may aim at encouraging bystanders to come forward with concerns. Others focus on counter-radicalisation and security themes or, increasingly, preventing online harms and violence against women and girls.

As a particular example of the potential for abuse, the report calls out the Home Office Migrants on the Move campaign, a collaboration with a “migration behavior change” agency called Seefar. This targeted people in France seeking asylum in the UK and attempted to frighten them out of trying to cross the Channel in small boats. The targeting was highly specific, with many ads aimed at as few as 100 to 1,000 people, chosen for their language and recent travel in or through Brussels and Calais.

The report’s authors raise concerns: the harm implicit in frightening already-extremely vulnerable people, the potential for damaging their trust in authorities to help them, and the privacy implications of targeting such specific groups. In the report’s example, Arabic speakers in Brussels might see the Home Office ads but their French neighbors would not – and those Arabic speakers would be unlikely to be seeking asylum. The Home Office’s digital equivalent of May’s van ads, therefore, would be seen only by a selection of microtargeted individuals.

The report concludes: “We argue that this campaign is a central example of the potential for abuse of these methods, and the need for regulation.”

The report makes a number of recommendations including improved transparency, formalized regulation and oversight, better monitoring, and public engagement in designing campaigns. One key issue is coming up with better ways of evaluating the results. Surprise, surprise: counting clicks, which is what digital advertising largely sells as a metric, is not a useful way to measure social change.

All of these arguments make sense. Improving transparency in particular seems crucial, as does working with the relevant communities. Deterring crime doesn’t require tricks and secrecy; it requires collaboration and openness.

Illustrations: Theresa May’s notorious van ad telling illegal immigrants to go home.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The data grab

It’s been a good week for those who like mocking flawed technology.

Numerous outlets have reported, for example, that “AI is getting dumber at math”. The source is a study conducted by researchers at Stanford and the University of California Berkeley comparing GPT-3.5’s and GPT-4’s output in March and June 2023. The researchers found that, among other things, GPT-4’s success rate at identifying prime numbers dropped from 84% to 51%. In other words, in June 2023 ChatGPT-4 did little better than chance at identifying prime numbers. That’s psychic level.

The researchers blame “drift”, the problem that improving one part of a model may have unhelpful knock-on effects in other parts of the model. At Ars Technica, Benj Edwards is less sure, citing qualified critics who question the study’s methodology. It’s equally possible, he suggests, that as the novelty fades, people’s attempts to do real work surface problems that were there all along. With no access to the algorithm itself and limited knowledge of the training data, we can only conduct such studies by controlling inputs and observing the outputs, much like diagnosing allergies by giving a child a series of foods in turn and waiting to see which ones make them sick. Edwards advocates greater openness on the part of the companies, especially as software developers begin building products on top of their generative engines.
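For the technically curious, the black-box approach amounts to something like the sketch below: fix a set of inputs, put the same question to the model at two points in time, and score the answers against ground truth you compute yourself. This is only an illustrative sketch, not the Stanford/Berkeley team’s own code; the ask_model() function is a hypothetical stub standing in for whichever chat API is being tested, and the prompt wording and sample sizes are invented for the example.

import random
from sympy import isprime  # independent ground truth for primality

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to the chat model under test."""
    raise NotImplementedError("wire this up to the API you want to evaluate")

def primality_accuracy(numbers) -> float:
    """Share of test numbers the model classifies correctly as prime/not prime."""
    correct = 0
    for n in numbers:
        answer = ask_model(f"Is {n} a prime number? Answer yes or no.")
        says_prime = answer.strip().lower().startswith("yes")
        correct += (says_prime == isprime(n))
    return correct / len(numbers)

# The same fixed list would be re-run months apart and the two accuracy
# figures compared - which is all an outside researcher can do without
# access to the model's weights or training data.
random.seed(0)
test_numbers = [random.randrange(1_000, 100_000) for _ in range(500)]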

Unrelated, the New Zealand discount supermarket chain Pak’nSave offered an “AI” meal planner that, set loose, promptly began turning out recipes for “poison bread sandwiches”, “Oreo vegetable stir-fry”, and “aromatic water mix” – which turned out to be a recipe for highly dangerous chlorine gas.

The reason is human-computer interaction: humans, told to provide a list of available ingredients, predictably became creative. As for the computer…anyone who’s read Janelle Shane’s 2019 book, You Look Like a Thing and I Love You, or her Twitter reports on AI-generated recipes could predict this outcome. Computers have no real-world experience against which to judge their output!

Meanwhile, the San Francisco Chronicle reports, Waymo and Cruise driverless taxis are making trouble at an accelerating rate. The cars have gotten stuck in low-hanging wires after thunderstorms, driven through caution tape, blocked emergency vehicles and emergency responders, and behaved erratically enough to endanger cyclists, pedestrians, and other vehicles. If they were driven by humans they’d have lost their licenses by now.

In an interesting side note that reminds us of the cars’ potential as a surveillance network, Axios reports that in a ten-day study in May, Waymo’s driverless cars found that human drivers in San Francisco speed 33% of the time. A similar exercise in Phoenix, Arizona observed human drivers speeding 47% of the time on roads with a 35mph speed limit. These statistics of course bolster the company’s main argument for adoption: improving road safety.

The study should – but probably won’t – be taken as a warning of the potential for the cars’ data collection to become embedded in both law enforcement and their owners’ business models. The frenzy surrounding ChatGPT-* is fueling an industry-wide data grab as everyone tries to beef up their products with “AI” (see also previous such exercises with “meta”, “nano”, and “e”), consequences to be determined.

Among the newly-discovered data grabbers is Intel, whose graphics processing unit (GPU) drivers are collecting telemetry data, including how you use your computer, the kinds of websites you visit, and other data points. You can opt out, assuming you a) realize what’s happening and b) are paying attention at the right moment during installation.

Google announced recently that it would scrape everything people post online to use as training data. Again, an opt-out can be had if you have the knowledge and access to follow the 30-year-old robots.txt protocol. In practical terms, I can configure my own site, pelicancrossing.net, to block Google’s data grabber, but I can’t stop it from scraping comments I leave on other people’s blogs or anything I post on social media sites or that’s professionally published (though those sites may block Google themselves). This data repurposing feels like it ought to be illegal under data protection and copyright law.
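For anyone who has never looked at one, robots.txt is just a plain text file at a site’s root listing crawlers by user-agent name and the paths they’re asked not to fetch. A minimal sketch of the sort of block I mean (with the caveats that compliance is voluntary on the crawler’s side, and that blocking Google’s general-purpose crawler, Googlebot, also pulls a site out of search results):

User-agent: Googlebot
Disallow: /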

In Australia, Gizmodo reports that the company has asked the Australian government to relax copyright laws to facilitate AI training.

Soon after Google’s announcement, the law firm Clarkson filed a class action lawsuit against Google to join its action against OpenAI. The suit accuses Google of “stealing” copyrighted works and personal data.

“Google does not own the Internet,” Clarkson wrote in its press release. Will you tell it, or shall I?

Whatever has been going on until now with data slurping in the interests of bombarding us with microtargeted ads is small stuff compared to the accelerating acquisition for the purpose of feeding AI models. Arguably, AI could be a public good in the long term as it improves, and therefore allowing these companies to access all available data for training is in the public interest. But if that’s true, then the *public* should own the models, not the companies. Why should we consent to the use of our data so they can sell it back to us and keep the proceeds for their shareholders?

It’s all yet another example of why we should pay attention to the harms that are clear and present, not the theoretical harm that someday AI will be general enough to pose an existential threat.

Illustrations: IBM Watson, Jeopardy champion.

Wendy M. Grossman is the 2013 winner of the Enigma Award and contributing editor for the Plutopia News Network podcast. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon.

Watson goes to Wimbledon

The launch of the Fediverse-compatible Meta app Threads seems to have slightly overshadowed the European Court of Justice’s ruling earlier in the week. This ruling deserves more attention: it undermines the basis of Meta’s targeted advertising. In noyb’s initial reaction, data protection legal bulldog Max Schrems suggests the judgment will make life difficult for not just Meta but other advertising companies.

As Alex Scroxton explains at Computer Weekly, the ruling rejects several different claims by Meta that all attempt to bypass the requirement enshrined in the General Data Protection Regulation that where there is no legal basis for data processing users must actively consent. Meta can’t get by with claiming that targeted advertising is a part of its service users expect, or that it’s technically necessary to provide its service.

More interesting is the fact that the original complaint was not filed by a data protection authority but by Germany’s antitrust body, which sees Meta’s our-way-or-get-lost approach to data gathering as abuse of its dominant position – and the CJEU has upheld this idea.

All this is presumably part of why Meta decided to roll out Threads in many countries but *not* the EU. In February, as a consequence of Brexit, Meta moved UK users to its US agreements. The UK’s data protection law is a clone of GDPR and will remain so until and unless the British Parliament changes it via the pending Data Protection and Digital Information bill. Still, it seems the move makes Meta ready to exploit such changes if they do occur.

Warning to people with longstanding Instagram accounts who want to try Threads: if your plan is to try it and (maybe) delete it, set up a new Instagram account for the purpose. Otherwise, you’ll be sad to discover that deleting your new Threads account means vaping your old Instagram account along with it. It’s the Hotel California method of Getting Big Fast.

***

Last week the Irish Council for Civil Liberties warned that a last-minute amendment to the Courts and Civil Law (Miscellaneous) bill will allow Ireland’s Data Protection Commissioner to mark any of its proceedings “confidential” and thereby bar third parties from publishing information about them. Effectively, it blocks criticism. This is a muzzle not only for the ICCL and other activists and journalists but for aforesaid bulldog Schrems, who has made a career of pushing the DPC to enforce the law it was created to enforce. He keeps winning in court, too, which I’m sure must be terribly annoying.

The Irish DPC is an essential resource for everyone in Europe because Ireland is the European home of so many of American Big Tech’s subsidiaries. So this amendment – which reportedly passed the Oireachtas (Ireland’s parliament) – is an alarming development.

***

Over the last few years Canadian law professor Michael Geist has had plenty of complaints about Canada’s Online News Act, aka C-18. Like the Australian legislation it emulates, C-18 requires intermediaries like Facebook and Google to negotiate and pay for licenses to link to Canadian news content. The bill became law on June 22.

Naturally, Meta and Google have warned that they will block links to Canadian news media from their services when the bill comes into force six months hence. They also intend to withdraw their ongoing programs to support the Canadian press. In response, the Canadian government has pulled its own advertising from Meta platforms Facebook and Instagram. Much hyperbolic silliness is taking place.

Pretty much everyone who is not the Canadian government thinks the bill is misconceived. Canadian publishers will lose traffic, not gain revenues, and no one will be happy. In Australia, the main beneficiary appears to be Rupert Murdoch, with whom Google signed a three-year agreement in 2021 and who is hardly the sort of independent local media some hoped would benefit. Unhappily, the state of California wants in on this game; its in-progress Journalism Preservation Act also seeks to require Big Tech to pay a “journalism usage fee”.

The result is to continue to undermine the open Internet, in which the link is fundamental to sharing information. If things aren’t being (pay)walled off, blocked for copyright/geography, or removed for corporate reasons – the latest announced casualty is the GIF hosting site Gfycat – they’re being withheld to avoid compliance requirements or withdrawn for tax reasons. None of us are better off for any of this.

***

Those with long memories will recall that in 2011 IBM’s giant computer, Watson, beat the top champions at the TV game show Jeopardy. IBM predicted a great future for Watson as a medical diagnostician.

By 2019, that projected future was failing. “Overpromised and underdelivered,” ran an IEEE Spectrum headline. IBM is still trying, and is hoping for success with cancer diagnosis.

Meanwhile, Watson has a new (marketing) role: analyzing the draw and providing audio and text commentary for back-court tennis matches at Wimbledon and for highlights clips. For each match, Watson also calculates the competitors’ chances of winning and the favorability of their draw. For a veteran tennis watcher, it’s unsatisfying, though: IBM offers only a black box score, and nothing to show how that number was reached. At least human commentators tell you – albeit at great, repetitive length – the basis of their reasoning.

Illustrations: IBM’s Watson, which beat two of Jeopardy‘s greatest champions in 2011.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Twitter.

Own goals

There’s no point in saying I told you so when the people you’re saying it to got the result they intended.

At the Guardian, Peter Walker reports the Electoral Commission’s finding that at least 14,000 people were turned away from polling stations in May’s local elections because they didn’t have the right ID as required under the new voter ID law. The Commission thinks that’s a huge underestimate; 4% of people who didn’t vote said it was because of voter ID – which Walker suggests could mean 400,000 were deterred. Three-quarters of those lacked the right documents; the rest opposed the policy. The demographics of this will be studied more closely in a report due in September, but early indications are that the policy disproportionately deterred people with disabilities, people from certain ethnic groups, and people who are unemployed.

The fact that the Conservatives, who brought in this policy, lost big time in those elections doesn’t change its wrongness. But it did lead the MP Jacob Rees-Mogg (Con-North East Somerset) to admit that this was an attempt to gerrymander the vote that backfired because older voters, who are more likely to vote Conservative, also disproportionately don’t have the necessary ID.

***

One of the more obscure sub-industries is the business of supplying ad services to websites. One such little-known company is Criteo, which provides interactive banner ads that are generated based on the user’s browsing history and behavior using a technique known as “behavioral retargeting”. In 2018, Criteo was one of seven companies listed in a complaint Privacy International and noyb filed with three data protection authorities – the UK, Ireland, and France. In 2020, the French data protection authority, CNIL, launched an investigation.

This week, CNIL issued Criteo with a €40 million fine over failings in how it gathers user consent, a ruling noyb calls a major blow to Criteo’s business model.

It’s good to see the legal actions and fines beginning to reach down into adtech’s underbelly. It’s also worth noting that the CNIL was willing to fine a *French* company to this extent. It makes it harder for the US tech giants to claim that the fines they’re attracting are just anti-US protectionism.

***

Also this week, the US Federal Trade Commission announced it’s suing Amazon, claiming the company enrolled millions of US consumers into its Prime subscription service through deceptive design and sabotaged their efforts to cancel.

“Amazon used manipulative, coercive, or deceptive user-interface designs known as ‘dark patterns’ to trick consumers into enrolling in automatically-renewing Prime subscriptions,” the FTC writes.

I’m guessing this is one area where data protection laws have worked. In my UK-based ultra-brief Prime outings to watch the US Open tennis, canceling has taken at most two clicks. I don’t recognize the tortuous process Business Insider documented in 2022.

***

It has long been no secret that the secret behind AI is human labor. In 2019, Mary L. Gray and Siddharth Suri documented this in their book Ghost Work. Platform workers label images and other content, annotate text, and solve CAPTCHAs to help train AI models.

At MIT Technology Review, Rhiannon Williams reports that platform workers are using ChatGPT to speed up their work and earn more. A study (PDF) by a team of researchers from the Swiss Federal Institute of Technology found that between 33% and 46% of the 44 workers they tested with a request to summarize 16 extracts from medical research papers used AI models to complete the task.

It’s hard not to feel a little gleeful that today’s “AI” is already eating itself via a closed feedback loop. It’s not good news for platform workers, though, because the most likely consequence will be increased monitoring to force them to show their work.

But this is yet another case in which computer people could have learned from their own history. In 2008, researchers at Google published a paper suggesting that Google search data could be used to spot flu outbreaks. Sick people searching for information about their symptoms could provide real-time warnings ten days earlier than the Centers for Disease Control could.

This actually worked, some of the time. However, as Kaiser Fung reported at Harvard Business Review in 2014, as early as 2009 Google Flu Trends missed the swine flu pandemic; in 2012, researchers found that it had overestimated the prevalence of flu for 100 out of the previous 108 weeks. More data is not necessarily better, Fung concluded.

In 2013, as David Lazer and Ryan Kennedy reported for Wired in 2015, discussing their investigation into the failure of this idea, GFT missed by 140% (without explaining what that means). Lazer and Kennedy found that Google’s algorithm was vulnerable to poisoning by unrelated seasonal search terms and search terms that were correlated purely by chance, and that it failed to take into account changing user behavior, as when Google introduced autosuggest and added health-related search terms. The “availability” cognitive bias also played a role: when flu is in the news, searches go up whether or not people are sick.

While the parallels aren’t exact, large language modelers could have drawn the lesson that users can poison their models. ChatGPT’s arrival for widespread use will inevitably thin out the proportion of text that is human-written – and taint the well from which LLMs drink. Everyone imagines the next generation’s increased power. But it’s equally possible that the next generation will degrade as the percentage of AI-generated data rises.

Illustrations: Drunk parrot seen in a Putney garden (by Simon Bisson).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

Snowden at ten

As almost every media outlet has headlined this week, it is now ten years since Edward Snowden alerted the world to the real capabilities of the spy agencies, chiefly but not solely the US National Security Agency. What is the state of surveillance now? most of the stories ask.

Some samples: at the Open Rights Group, executive director Jim Killock summarizes what Snowden revealed; Snowden is interviewed; the Guardian’s editor at the time, Alan Rusbridger, recounts events at the Guardian, which co-published Snowden’s discoveries with the Washington Post; journalist Heather Brooke warns of the increasing sneakiness of government surveillance; and Jessica Lyons Hardcastle outlines the impact. Finally, at The Atlantic, Ewen MacAskill, one of the Guardian journalists who worked on the Snowden stories, says only about 1% of Snowden’s documents were ever published.

As has been noted here recently, it seems as though everywhere you look surveillance is on the rise: at work, on privately controlled public streets, and everywhere online by both government and commercial actors. As Brooke writes and the Open Rights Group has frequently warned, surveillance that undermines the technical protections we rely on puts us all in danger.

The UK went on to pass the Investigatory Powers Act, which basically legalized what the security services were doing, but at least did add some oversight. US courts found that the NSA had acted illegally and in 2015 Congress made bulk collection of Americans’ phone records illegal. But, as Bruce Schneier has noted, Snowden’s cache of documents was aging even in 2013; now they’re just old. We have no idea what the secret services are doing now.

The impact in Europe was significant: in 2016 the EU adopted the General Data Protection Regulation. Until Snowden, data protection reform looked like it might wind up watering down data protection law in response to an unprecedented amount of lobbying by the technology companies. Snowden’s revelations raised the level of distrust and also gave Max Schrems some additional fuel in bringing his legal actions against EU-US data deals and US corporate practices that leave EU citizens open to NSA snooping.

The really interesting question is this: what have we done *technically* in the last decade to limit government’s ability to spy on us at will?

Work on this started almost immediately. In early 2014, the World Wide Web Consortium and the Internet Engineering Task Force teamed up on a workshop called Strengthening the Internet Against Pervasive Monitoring (STRINT). Observing the proceedings led me to compare the size of the task ahead to boiling the ocean. The mood of the workshop was united: the NSA’s actions as outlined by Snowden constituted an attack on the Internet and everyone’s privacy, a view codified in RFC 7258, which outlined the plan to mitigate pervasive monitoring. The workshop also published an official report.

Digression for non-techies: “RFC” stands for “Request for Comments”. The thousands of RFCs since 1969 include technical specifications for Internet protocols, applications, services, and policies. The title conveys the process: they are published first as drafts and incorporate comments before being finalized.

The crucial point is that the discussion was about *passive* monitoring, the automatic, ubiquitous, and suspicionless collection of Internet data “just in case”. As has been said so many times about backdoors in encryption, the consequence of poking holes in security is to make everyone much more vulnerable to attacks by criminals and other bad actors.

So a lot of that workshop was about finding ways to make passive monitoring harder. Obviously, one method is to eliminate vulnerabilities, especially those the NSA planted. But it’s equally effective to make monitoring more expensive. Given the law of truly large numbers, even a tiny extra cost per user creates unaffordable friction. They called it a ten-year project, which takes us to…almost now.

Some things have definitely improved, largely through the expanded use of encryption to protect data in transit. On the web, Let’s Encrypt, now ten years old, makes it easy and cheap to obtain a certificate for any website. Search engines contribute by favoring encrypted (that is, HTTPS) web links over unencrypted ones (HTTP). Traffic between email servers has gone from being transmitted in cleartext to being almost all encrypted. Mainstream services like WhatsApp have added end-to-end encryption to the messaging used by billions. Other efforts have sought to reduce the use of fixed long-term identifiers such as MAC addresses that can make tracking individuals easier.

At the same time, even where there are data protection laws, corporate surveillance has expanded dramatically. And, as has long been obvious, governments, especially democratic governments, have little motivation to stop it. Data collection by corporate third parties does not appear in the public budget, does not expose the government to public outrage, and is available via subpoena any time government officials want. If you are a law enforcement or security service person, this is all win-win; the only data you can’t get is the data that isn’t collected.

In an essay reporting on the results of the work STRINT began, part of the ten-year assessment currently circulating in draft, STRINT convenor Stephen Farrell writes, “So while we got a lot right in our reaction to Snowden’s revelations, currently, we have a ‘worse’ Internet.”

Illustrations: Edward Snowden, speaking to Glenn Greenwald in a screenshot from Laura Poitras’ film Prism from Praxis Films (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

Microsurveillance

“I have to take a photo,” the courier said, raising his mobile phone to snap a shot of the package on the stoop in front of my open doorway.

This has been the new thing. I guess the spoken reason is to ensure that the package recipient can’t claim that it was never delivered, protecting all three of the courier, the courier company, and the shipper from fraud. But it feels like the unspoken reason is to check that the delivery guy has faithfully completed his task and continued on his appointed round without wasting time. It feels, in other words, like the delivery guy is helping the company monitor him.

I say this, and he agrees. I had, in accordance with the demands of a different courier, pinned a note to my door authorizing the deliverer to leave the package on the doorstep in my absence. “I’d have to photograph the note,” he said.

I mentioned American truck drivers, who are pushing back against in-cab cameras and electronic monitors. “They want to do that here, too,” he said. “They want to put in dashboard cameras.” Since then, in at least some cases – for example, Amazon – they have.

Workplace monitoring was growing in any case, but, as noted in 2021, the explosion in remote working brought by the pandemic normalized a level of employer intrusion that might have been more thoroughly debated in less fraught times. The Trades Union Congress reported in 2022 that 60% of employees had experienced being tracked in recent years. And once in place, the habit of surveillance is very hard to undo.

When I was first thinking about this piece in 2021, many of these technologies were just being installed. Two years later, there’s been time for a fight back. One such story comes from the France-based company Teleperformance, one of those obscure, behind-the-scenes suppliers to the companies we’ve all heard of. In this case, the company in the shadows supplies remote customer service workers to clients that include, just in the UK, the government’s health and education departments, NHS Digital, the RAF and Royal Navy, and the Student Loans Company, as well as Vodafone, eBay, Aviva, Volkswagen, and the Guardian itself; some of Teleperformance’s Albanian workers provide service to Apple UK.

In 2021, Teleperformance demanded that remote workers in Colombia install in-home monitoring and included a contract clause requiring them to accept AI-powered cameras with voice analytics in their homes and allowing the company to store data on all members of the worker’s family. An earlier attempt at the same thing in Albania failed when the Information and Data Protection Commissioner stepped in.

Teleperformance tried this in the UK, where the unions warned about the normalization of surveillance. The company responded that the cameras would only be used for meetings, training, and scheduled video calls so that supervisors could check that workers’ desks were free of devices deemed to pose a risk to data security. Even so, in August 2021 Teleperformance told Test and Trace staff to limit breaks to ten minutes in a six-hour shift and to select “comfort break” on their computers (so they wouldn’t be paid for that time).

Other stories from the pandemic’s early days show office workers being forced to log in with cameras on for a daily morning meeting or stay active on Slack. Amazon has plans to use collected mouse movements and keystrokes to create worker profiles to prevent impersonation. In India, the government itself demanded that its accredited social health activists install an app that tracks their movements via GPS and monitors their uses of other apps.

More recently, Politico reports that Uber drivers must sign in with a selfie; they will be banned if the facial recognition verification software fails to find a match.

This week at the Guardian, Clea Skopoleti updated the state of work. In one of her examples, monitoring software calculates “activity scores” based on typing and mouse movements – so participating in Zoom meetings, watching work-related video clips, and thinking don’t count. Young people, women, and minority workers are more likely to be surveilled.

One employee Skopoleti interviews takes unpaid breaks to carve out breathing space in which to work; another reports having to explain the length of his toilet breaks. Another, an English worker in social housing, reports his vehicle is tracked so closely that a manager phones if they think he’s not in the right place or taking too long.

This is a surveillance-breeds-distrust-breeds-more-surveillance cycle. As Ellen Ullman long ago observed, systems infect their owners with the desire to do more and more with them. It will take time for employers to understand the costs in worker burnout, staff turnover, and absenteeism.

One way out is through enforcing the law: in 2020, the ICO investigated Barclays Bank, which was accused of spying on staff via software that tracked how they spent their time; the bank dropped it. In many of these stories, however, the surveillance suppliers say they operate within the law.

The more important way out is worker empowerment. In Colombia, Teleperformance has just guaranteed its 40,000 workers the right to form a union.

First, crucially, we need to remember that surveillance is not normal.

Illustrations: The boss tells Charlie Chaplin to get back to work in Modern Times (1936).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

The arc of surveillance

“What is the point of introducing contestability if the system is illegal?” a questioner asked, more or less, at this year’s Computers, Privacy, and Data Protection.

This question could have been asked in any number of sessions where tweaks to surface problems leave the underlying industry undisturbed. In fact, the questioner raised it during the panel on enforcement, GDPR, and the newly-in-force Digital Markets Act. Maria Luisa Stasi explained the DMA this way: it’s about business models. It’s a step into a deeper layer.

The key question: will these new laws – the DMA, the recent Digital Services Act, which came into force in November, the in-progress AI Act – be enforced better than GDPR has been?

The frustration has been building all five years of GDPR’s existence. Even though this week Meta was fined €1.2 billion for transferring European citizens’ data to the US, noyb reports that 85% of its 800-plus cases remain undecided, 58% of them for more than 18 months. Even that €1.2 billion decision took ten years, €10 million, and three cases against the Irish Data Protection Commissioner to push through – and will now be appealed. Noyb has an annotated map of the various ways EU countries make litigation hard. The post-Snowden political will that fueled GDPR’s passage has had ten years to fade.

It’s possible to find the state of privacy circa 2023 depressing. In the 30ish years I’ve been writing about privacy, numerous laws have been passed, privacy has become a widespread professional practice and area of study in numerous fields, and the number of activists has grown from a literal handful to tens of thousands around the world. But overall the big picture is one of escalating surveillance of all types and by all sorts of players. At the 2000 Computers, Freedom, and Privacy conference, Neal Stephenson warned not to focus on governments. Watch the “Little Brothers”, he said. Google was then a tiny self-funded startup, and Mark Zuckerberg was 16. Stephenson was prescient.

And yet, that surveillance can be weirdly patchy. In a panel on children online, Leanda Barrington-Leach noted platforms’ selective knowledge: “How do they know I like red Nike trainers but don’t know I’m 12?” A partial answer came later: France’s CNIL has looked at age verification technologies and concluded that none are “mature enough” to both do the job and protect privacy.

In a discussion of deceptive practices, paraphrasing his recent paper, Mark Leiser pinpointed a problem: “We’re stuck with a body of law that looks at online interface as a thing where you look for dark patterns, but there’s increasing evidence that they’re being embedded in the systems architecture underneath and I’d argue we’re not sufficiently prepared to regulate that.”

As a response, Woody Hartzog and Neil Richards have proposed the concept of “data loyalty”. Similar to a duty of care, the “loyalty” in this case is owed by the platform to its users. “Loyalty is the requirement to make the interests of the trusted party [the platform] subservient to those of the trustee or vulnerable one [the user],” Hartzog explained. And the more vulnerable you are the greater the obligation on the powerful party.

The tone was set early with a keynote from Julie Cohen that highlighted structural surveillance and warned against accepting the Big Tech mantra that more technology naturally brings improved human social welfare.

“What happens to surveillance power as it moves into the information infrastructure?” she asked. Among other things, she concluded, it disperses accountability, making it harder to challenge but easier to embed. And once embedded, well…look how much trouble people are having just digging Huawei equipment out of mobile networks.

Cohen’s comments resonate. A couple of years ago, when smart cities were the hot emerging technology, it became clear that many of the hyped ideas were only really relevant to large, dense urban areas. In smaller cities, there’s no scope for plotting more efficient delivery routes, for example, because there aren’t enough options. As a result, congestion is worse in a small suburban city than in Manhattan, where parallel routes draw off traffic. But even a small town has scope for surveillance, and so some of us concluded that this was the technology that would trickle down. This is exactly what’s happening now: the Fusus technology platform even boasts openly of bringing the surveillance city to the suburbs.

Laws will not be enough to counter structural surveillance. In a recent paper, Cohen wrote, “Strategies for bending the arc of surveillance toward the safe and just space for human wellbeing must include both legal and technical components.”

And new approaches, too, as was shown by an unusual panel on sustainability prompted by the computational and environmental costs of today’s AI. This discussion suggested a new convergence: the intersection, as Katrin Fritsch put it, of digital rights, climate justice, infrastructure, and sustainability.

In the deception panel, Rosamunde van Brakel similarly said we need to adopt a broader conception of surveillance harm, one that includes social harm, risks for society and democracy, and the climate impact of using all these technologies. Surveillance, in other words, has environmental costs that everyone has ignored.

I find this convergence hopeful. The arc of surveillance won’t bend without the strength of allies.

Illustrations: CCTV camera at 22 Portobello Road, London, where George Orwell lived.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.