In a series of stories, Lisa O’Carroll at the Guardian finds that His Majesty’s Revenue and Customs has had its hand in the cookie jar of airline passenger records. In hot pursuit of its goal of finding £350 million in benefit fraud, it’s been scouring these records for people who have left the country for more than a month without returning, and so are no longer eligible.
In one case, a family was turned away at the gate when one of the children had an epileptic seizure; their child benefit was stopped because they had “emigrated” though they’d never left. A similar accusation was leveled at a woman who booked a flight to Oslo even though she never checked in or flew.
These families can provide documentation proving they remained in the UK, but as one points out, the onus is on them to clean up an error they didn’t make. There are many others; many simply traveled out and back by different routes. As of November 1, HMRC had reinstated benefits for 1,979 of the affected families, but it maintains that the rest have been correctly identified. HMRC also says it will now check its PAYE records first for evidence that someone is still here and working. That would help, but it’s not the only issue.
It’s unclear whether HMRC has the right to use this data in this way. The Guardian reports that the Information Commissioner’s Office, the data protection authority, has contacted HMRC to ask questions.
For privacy advocates, the case is disturbing. It is a clear example of the way data can mislead when it’s moved to a new context. For the people involved, it’s a hostage situation: they have no choice about providing either the data siphoned from airlines to the Home Office or the financial information held by HMRC, and no control over what happens next.
The essayist and former software engineer Ellen Ullman warned 20 years ago that she had never seen an owner of multiple databases who didn’t want to link them together. So this sort of “sharing” is happening all over the place.
In the US, ProPublica reported this week that individual states have begun using a Department of Homeland Security system that incorporates information from the Social Security Administration to check their voter rolls for non-citizens. Here again, data collected by one agency for one purpose is being shared with another for an entirely different one.
In both cases, data is being used for a purpose that wasn’t envisioned when it was collected. An airline collecting booking data isn’t checking it for errors or omissions that might cost a passenger their benefits. Similarly, the Social Security Administration isn’t normally concerned with whether you’re a citizen for voting purposes, just whether you qualify for one or another program, as it should be. Both changes of use fail to recognize that the impact of errors changes along with the purpose, especially at national scale.
I assume that in this age of AI-for-government-efficiency the goal for the future is to automate these systems even further while pulling in more sources of data.
Privacy advocates are used to encountering pushback that takes this form: “They know everything about me anyway.” I would dispute that. “They” certainly *can* assemble a lot of uncorrelated data points about you by aggregating the many available sources of data. But until recently, doing that was effortful enough that it didn’t happen unless you were suspected of something. Now, sharing and mining data at scale is becoming a matter of routine.
One of the most important lessons learned from 14 years of We, Robot conferences is that when someone shows a video clip of a robot doing something, one should always ask how much it has been speeded up.
This probably matters less in a home robot doing chores, as long as you don’t have to supervise. Leave a robot to fold laundry, and it can’t possibly matter if it takes all night.
From reports by Erik Kain at Forbes and Nilesh Christopher at the LA Times, it appears that 1X’s new Neo robot is indeed slow, even in its promotional video clips. The company says it has layers of security to prevent it from turning “murderous”, which seems an absurd bit of customer reassurance. However, 1X also calls it “lightweight”. The Neo is five foot six and weighs 66 pounds (30 kilos), which seems quite enough to hurt someone if it falls on them, even with padding. Even granting the contributory design issues, Lime bikes weigh 50 pounds and break people’s legs. 1X’s website shows the Neo hugged by a taller, avuncular man; imagine it instead with a five-foot, 90-year-old woman.
Can we ask about hacking risks? And what happens if, like so many others, 1X shuts it down?
More incredibly, in buying one you must agree to allow a remote human operator to drive the robot, peering into your home along the way. This is close to the original design of the panopticon, whose chill lay in the fact that those under surveillance never knew whether they were being watched.
And it can be yours for the low, low price of $20,000 or $500 a month.
Illustrations: Jeremy Bentham’s original drawing of his design for the panopticon (via Wikimedia).
Also this week:
The Plutopia podcast interviews Sophie Nightingale on her research into deepfakes and the future of disinformation.
TechGrumps 3.33 podcast, The Final Step is Removing the Consumer, discusses AI web browsers, the Amazon outage, the Python Software Foundation, and DEI.
Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.