Passing the Uncanny Valley

A couple of weeks ago, the Greenwich Skeptics in the Pub played host to Sophie Nightingale, who studies the psychology of AI deepfakes. The particular project she spoke about was an experiment in whether people can be trained to be better at distinguishing them from real images.

In Nightingale’s experiments, she carefully matched groups of real images to synthetic ones – first created by generative adversarial networks (GANs), later by diffusion models – and matched the sets of faces to the raters’ demographics. Participants were then asked to pick out the fakes.

Then the humans were given some training in what to look for to detect fakes and the experiment was rerun with new sets of faces. The bad news: the training made a little difference, but not much. She went on to do similar experiments with diffusion images.

Nightingale has gone on to do some cross-modal experiments, covering audio as well as images, following the 2024 election incident in which New Hampshire voters received robocalls from a faked Joe Biden intended to discourage voting in the January primary. In the audio experiment, she played the test subjects very short snippets. When she played them for us in the pub, it was very hard to tell real from fake, and her experimental subjects did no better. I would expect longer clips to be more identifiable as fake. The Biden call succeeded in part because that type of fake had never been tried before. Now, voters, at least in New Hampshire, will know it’s possible that the call they’re getting is part of a newer type of disinformation campaign aimed at suppressing their votes.

In another experiment, she asked participants to rate the trustworthiness of the facial images they were shown, and was dismayed when they rated the synthetic faces slightly (7.7%) higher than the real ones. In the resulting paper for Journal of Vision, she hypothesizes that this may be because synthetic faces tend to look more like “average” faces, which tend to be rated higher in trustworthiness, even if they’re not the most attractive.

Overall, she concludes that both still images and voice have “passed the Uncanny Valley”, and video will soon follow. In the past, I’ve chosen optimism about this sort of thing, on the basis that earlier generations were fooled by technological artifacts that couldn’t fool us now for a second. The Cottingley Fairies photographs look ridiculous after generations of accumulated knowledge of photography. On the other hand, Johannes Vermeer’s Girl with a Pearl Earring looks more real than modern deepfakes, even though the subject is generally described as imaginary. So it’s possible to think of it as a “deepfake”, painted in oils in the 17th century.

Fakes have always been with us. What generative AI has done to change this landscape is to democratize and scale their creation, just as it’s amping up the scale and speed of cyber attacks. It’s no longer necessary to be even barely competent; the tools keep getting easier.

Listening to Nightingale, it seems most likely that work like that in progress by an audience member – identifying the technological artifacts that betray fakes – will prove to be the right way forward. If those differences can be reliably identified, they could be built into tools that spot indicators we can’t perceive directly. If something like that can be embedded into devices – phones, eyeglasses, wristwatches, laptops – to spot and filter out fakes in real time, we should be able to regain some ability to trust what we see.
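As a flavor of what such artifact-hunting can look like – a toy sketch only, not Nightingale’s method or the audience member’s work – researchers have reported that GAN-generated images often carry unusual energy at the high-frequency end of their Fourier spectrum. The cutoff below is an arbitrary assumption, and newer diffusion images may not leave this particular trace.

```python
# Toy illustration: flag images whose spectral energy skews high-frequency.
# The 0.75 cutoff is an arbitrary assumption, not a validated threshold.
import numpy as np
from PIL import Image

def high_freq_ratio(path: str, cutoff: float = 0.75) -> float:
    """Fraction of an image's spectral energy beyond `cutoff` of the
    normalized frequency radius (1.0 = edge of the spectrum)."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return spectrum[r > cutoff].sum() / spectrum.sum()

# Usage: compare scores for known-real images against a suspect one;
# a markedly different ratio is a weak signal of synthesis, not proof.
# print(high_freq_ratio("suspect.jpg"))
```

Real detectors combine many such statistical cues with trained models; the point is only that machines can measure regularities our eyes cannot.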

There are some obvious problems with this hoped-for future. Some people will continue to seek to exploit fakes; some may prefer them. The most likely outcome will be an arms race like that surrounding email spam and other battles between malware producers and security people. Still, it’s the first approach that seems to offer a practical solution to coping with a vastly diminished ability to know what’s real and what isn’t.

***

On the Internet your home always leaves you, part 4,563. Twenty-two-year-old blogging site Typepad will disappear in a few weeks. To those of us who have read blogs ever since they began, this news is shocking, like someone’s decided to tear down an old community church. Yes, the congregation has shrunk and aged, and it’s drafty and built on creaking old technology (in Typepad’s case, Movable Type), but it’s part of shared local history. Except it isn’t, because, as Wikipedia documents, corporate musical chairs means it’s now owned by private equity. Apparently it’s been closed to new signups since 2020, and its bloggers are now being told to move their sites before everything is deleted in September. It feels like the stars of the open web are winking out, one by one.

On the Internet everything is forever, but everything is also ephemeral. Ironically, the site’s marketing slug still reads: “Typepad is the reliable, flexible blogging platform that puts the publisher in control.”

Illustrations: “Girl with a Pearl Earring”, painted by Johannes Vermeer circa 1665.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Email to Ofgem

So, the US has claimed victory against the UK.

Regular readers may recall that in February the UK’s Home Office secretly asked Apple to put a backdoor in the Advanced Data Protection encryption it offers as a feature for iCloud users. In March, Apple challenged the order. The US objected to the requirement that the backdoor should apply to all users worldwide. How dare the Home Office demand the ability to spy on Americans?

On Tuesday, US director of national intelligence Tulsi Gabbard announced the UK is dropping its demand for the backdoor in Apple’s encryption “that would have enabled access to the protected encrypted data of American citizens”. The key here is “American citizens”. The announcement – which the Home Office is refusing to comment on – ignores everyone else and also the requirement for secrecy. It’s safe to say that few other countries would succeed in pressuring the UK in this way.

As Bill Goodwin reports at Computer Weekly, the US deal does nothing to change the situation for people in Britain or elsewhere. The Investigatory Powers Act (2016) is unchanged. As Parmy Olson writes at Bloomberg, the Home Office can go on issuing Technical Capability Notices to Apple and other companies demanding information on their users, knowing that the criminalization of disclosure will keep the companies silent. The Home Office can still order technology companies operating in the UK to weaken their security. And we will not know they’ve done it. Surprisingly, support for this point of view comes from the Federal Trade Commission, which has posted a letter to companies deploring foreign anti-encryption policy (ignoring how often undermining encryption has been US policy, too) and foreign censorship of Americans’ speech. This is far from over, even in the US.

Within the UK, the situation remains as dangerously uncertain as ever. With all countries interconnected, the UK’s policy risks the security of everyone everywhere. And, although US media may have forgotten, the US has long spied on its citizens by getting another country to do it.

Apple has remained silent, but so far has not withdrawn its legal challenge. Also continuing is the case filed by Privacy International, Liberty, and two individuals. In a recent update, PI says both legal cases will be heard over seven days in 2026, with as much as possible heard in open court.

***

For non-UK folk: The Office of Gas and Electricity Markets (Ofgem) is the regulator for Britain’s energy market. Its job is to protect consumers.

To Ofgem:

Today’s Guardian (and many others) carries the news that Tesla EMEA has filed an application to supply British homes and businesses with energy.

Please do not approve this application.

I am a journalist who has covered the Internet and computer industries for 35 years. As we all know, Tesla is owned by Elon Musk. Quite apart from his controversial politics and actions within the US government, Elon Musk has shown himself to be an unstable personality who runs his companies recklessly. Many who have Tesla cars love them – but the cars have higher rates of quality control problems than those from other manufacturers, and Musk’s insistence on marketing the “Full Self-Driving” feature has cost lives, according to the US National Highway Traffic Safety Administration, which launched yet another investigation into the company just yesterday. In many cases, when individuals have sought data from Tesla to understand why their relatives died in car fires or crashes, the company has refused to help them. During the covid emergency, thousands of Tesla workers got covid because Musk insisted on reopening the Tesla factory. This is not a company people should trust with their homes.

With Starlink, Musk has exercised his considerable global power by turning off communications in Ukraine while it was fighting back Russian attacks. SpaceX launches continue to crash. According to the Children’s Commissioner’s latest report, far more children encounter pornography online on Musk’s X than on pornography sites, a problem that has gotten far worse since Musk took it over.

More generally, he is an enemy of workers’ rights. Misinformation on X helped fuel the Southport riots, and Musk himself has considered trying to oust Keir Starmer as prime minister.

Many are understandably awed by his technological ideas. But he uses these to garner government subsidies and undermine public infrastructure, which he then is able to wield as a weapon to suit his latest whims.

Musk is already far too powerful in the world. His actions in the White House have shown he is either unable to understand or entirely uninterested in the concerns and challenges that face people living on sums that to him seem negligible. He is even less interested in – and often actively opposes – social justice, fairness, and equity. No amount of separation between him and Tesla EMEA will be sufficient to counter his control of and influence over his company. Tesla’s board, just weeks ago, voted to award him $30 billion in shares to “energise and focus” him.

Please do not grant him a foothold in Britain’s public infrastructure. Whatever his company is planning, it does not have British interests at heart.

Ofgem is accepting public comments on Tesla’s application until close of business on Friday, August 22, 2025.

Illustration: Artist Dominic Wilcox’s Stained Glass Driverless Sleeper Car.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Drought conditions

At 404 Media, Matthew Gault was first to spot a press release from the UK’s National Drought Group offering a list of things we can do to save water. The group’s concern makes sense: people think of the UK as a rainy country, but growing parts of it are experiencing extraordinarily dry weather. This “green and pleasant England” is brown.

Last on the Group’s list of things we can do to save water at home: “Delete old emails and pictures as data centres require vast amounts of water to cool their systems.”

I had to look up the National Drought Group. Says Water Magazine: “The National Drought Group includes the Met[eorological] Office, government, regulators, water companies, farmers, the [Canal and River Trust], angling groups and conservation experts. With further warm, dry weather expected, the NDG will continue to meet regularly to coordinate the national response and safeguard water supplies for people, agriculture, and the environment.”

For those outside the UK: its ten water companies are particularly unpopular just now. Created by privatization during Margaret Thatcher’s decade as prime minister, six are being sued for £500 million for “underreporting sewage spills”. Others are being sued for overcharging 35 million household water customers. As just one example, Thames Water will raise prices by 35% over the next three years (on top of other recent rises), and expects customers to pay £7.5 billion for a new reservoir in Oxfordshire. It already has £17 billion in debt, and this week we learned environment secretary Steve Reed has made contingency plans in case the company goes bust. As George Monbiot writes at the Guardian, money that should have been invested in infrastructure went instead to shareholders. Climate change is a factor, sure, but so is poor water management.

All this being the case, the impact consumers can have by doing even the most effective things is dwarfed by the water companies’ failures. Deleting emails is not one of the most effective things.

At his Substack, The Weird Turn Pro, Andy Masley provides some useful comparisons. Basic conclusion: you’d have to delete billions of emails to equal the savings of fixing your leaking toilet (if you have one). The whole thing reminds me of a while back, when everyone was being told to save electricity by unplugging everything to extinguish all those standby lights. Last year, Which? pointed out that the savings are really, really small.
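To see the scale of the mismatch, here is a back-of-envelope version of that kind of comparison. Every number below is an illustrative assumption – not Masley’s figures – since email sizes, data-center water use, and toilet leak rates all vary widely.

```python
# Back-of-envelope comparison; all figures are illustrative assumptions.
EMAIL_SIZE_GB = 75 / 1e6      # assume an average email is ~75 KB
WATER_PER_GB_YEAR_L = 0.02    # assume 0.02 litres of cooling water per GB stored per year
TOILET_LEAK_L_PER_DAY = 200   # assume a leaking toilet wastes ~200 litres a day

water_per_email_per_year = EMAIL_SIZE_GB * WATER_PER_GB_YEAR_L
toilet_waste_per_year = TOILET_LEAK_L_PER_DAY * 365

emails_equal_to_toilet = toilet_waste_per_year / water_per_email_per_year
print(f"Deleting {emails_equal_to_toilet:,.0f} emails ≈ fixing one leaking toilet for a year")
# With these assumptions: roughly 49 billion emails.
```

The precise figures hardly matter; under any plausible assumptions the answer comes out in the billions.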

The bizarre idea of deleting emails is coming, at least in part, from a government that is proposing a raft of technology-related legislation and wants, in the next five to ten years, to mastermind all sorts of IT projects, from making AI pervasive throughout government to bringing in a digital ID card. Are they thinking about the data centers they’ll need and the impact they’ll have on water management? Maybe instead tell people not to use generative AI or mine cryptocurrencies?

This much is true: data centers are a problem across the world because they require extreme amounts of water for cooling. In recent examples: at the New York Times, Eli Tan visits the US state of Georgia. At Rest of World, last year Ushar Daniele and Khadija Alam predicted upcoming water shortages in Malaysia, and Claudia Urquieta and Daniela Dib found protests in Chile, where 28 new data centers are planned.

Telling people to delete emails and pictures is just embarrassing – and sad, if people actually do it and sacrifice personal history they care about. As Masley writes, “Major governments should really know better than this.”

***

Two weeks ago we noted the arrival of age verification in the UK. Related, on May 8 the Wikimedia Foundation announced it had filed a legal challenge to the categorization provisions of the Online Safety Act (not the Act itself). The basic problem: there is little in the Act to distinguish between Wikipedia, a crowd-edited provider of highly curated information, and Facebook…or X.

The Foundation says nearly 260,000 volunteers worldwide contribute to Wikipedia in 300 languages. I do myself, but verified or not, I’m in no danger. Many are contributing factual information in countries where the facts offend an authoritarian government intent on shutting them up. The Foundation argues that 1) Wikipedia is “one of the world’s most trusted and widely used digital public goods”; 2) it is at risk of being placed in the highest-risk category because of its size and interactive structure; 3) being so categorized would force it to verify the identity of contributors, placing many at risk; 4) categorization could endanger the existence of tools the site uses to combat harmful content; and 5) “criminal anonymous abuse”, which is what the Category 1 duty is supposed to help solve, isn’t a problem Wikipedia has. Instead, identifying volunteers is more likely to expose them to it.

So bad news: on August 11, the High Court of Justice dismissed the case.

The better news is that Justice Jeremy Johnson warned that if Ofcom does place Wikipedia in Category 1, it would have to be justifiable as proportionate. The judge also acknowledged the testimony of a user identified as “BLN”, who provided evidence of the extensive threats editors can face.

No one claims Wikipedia is perfect. But it remains an extraordinary collaborative achievement and a public good. It would be a horrifying consequence if legislation intended to protect children deprived them of it.

Illustrations: Kew Green, August 2025.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Machine learning

For decades, technologists imagined teaching machines. Instead, although edtech is indeed permeating classrooms, human teachers have remained in demand. And then came generative AI…

At Rest of World, Laura Rodríguez Salamanca explores AI’s impact in rural Colombian classrooms since Meta added AI bots to WhatsApp, Instagram, and Facebook and made copying and pasting answers frictionless. The result: first, a big leap in the quality of homework, then kids failing exams.

From a tiny set of conversations, it seems little different in the UK. Underlying it all is one of those existential questions: what is education for? For many of today’s kids, it’s just a series of hoops to jump through rather than something to love for itself. The result, says a teacher friend, is enormous amounts of pressure on kids from all sides.

“Kids are breaking under the pressure,” she says, adding that they are burdened with far more work than in previous generations. “There’s much less time for discussion or being a human. It’s all about learning to write an essay for maximum marks.” Small wonder if they are attracted to shortcuts.

A university lecturer tells me that at his institution there’s a general argument that AI is part of the world and students should know how to use it productively, but little guidance on acceptable use. Recently, he tried letting students use AI as a critical thinking exercise, focusing on a historical event whose cause is not definitively known. The results were disappointing, as he found it hard to get the students past what the AI said. One student did read a paper the chatbot recommended, but lacked the basic textbook knowledge to recognize that the paper was wrong.

“It’s an ongoing problem, and not that different from Google Scholar or PubMed,” he says.

Thirty years ago, there was a plagiarism panic, as students discovered all the material they could copy from the Internet at large. Kids I spoke to then sounded just like an annoyed university student friend now: people who use these shortcuts are cheating themselves out of their education.

There is some research to support this view. At the MIT Media Lab, Nataliya Kos’myna finds that using generative AI for essay-writing correlates with lower engagement, to the point that users “struggled to accurately quote their own work”.

Of course, even before that, student clubs kept copies of old exams, or cribbed from the translations readily found in library stacks. My teacher friend thinks the difference is significant: “They were still engaging with the material to a degree you don’t have to with ChatGPT”. I tell her the story that sparked my interest at the time: a US professor had received a paper about a student’s religious faith and their struggle when deciding to have an abortion – submitted by a male student.

As a counter, she points out that this led to services like Turnitin, long and widely used to check for copying. “The Internet has made plagiarism a lot easier to detect.” But, she says, chatbots’ output passes the plagiarism checkers, which are now in an arms race to detect generative AI while it keeps improving.

My university student friend nonetheless finds fellow students using chatbots to generate text, which is against her university’s rules (they do allow students to use chatbots to find citations). In her observation, students are more likely to get away with it in short answers, whereas longer ones are more likely to get flagged. Similarly, in small seminars it’s harder to use chatbot output without being caught; it’s easier to get away with it in larger classes. She also sees it more in subject areas like business, accounting, and economics, where the degree is meant to lead directly to a job.

She finds it surprising. “I don’t understand the point in an academic setting. Why waste the opportunity when you’re the one who will have to pay the student loans?” In her only attempt, she tried to get the chatbot to generate vocabulary flash cards: “There was missing information and some were wrong.” She found it quicker to make her own.

It’s harder for her to suggest what universities should do about it. “There’s a drought of [valuing learning for its own sake] in general. A lot go only because their parents expect them to.”

Like plagiarism detectors, teachers are trying to adapt. In the Rest of World article, Rodríguez Salamanca profiles a teacher who now builds classroom debates around hyperlocal topics unlikely to feature in large language models. In a UK university setting, however, assessing students based on oral debate poses problems: the potential for bias, the need to accommodate non-native speakers and those who have come out of different education systems, and differing cultural norms around classroom behavior. After covid began, many exams shifted to open book; the arrival of chatbots has led my university contact to try to set questions that force the use of multiple sources and that LLMs are thought to handle poorly.

“We will have to drive more person-to-person,” says the secondary school teacher, citing an example seen on social media of a teacher who gave students a practice exam and time to read and discuss it together before setting them to work on it. “There are implications for workload. But if a lot of routine homework can be automated and checked, then you can focus on the meat in the classroom. It makes it a more important place.”

Illustrations: “The Schoolroom”, by Henry Raleigh (from the Smithsonian American Art Museum).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Big bang

In 2008, when the recording industry was successfully lobbying for an extension of the term of copyright to 95 years, I wrote about a spectacular unfairness affecting numerous folk and other musicians. Because of my own history – and sometimes present – with folk music, I am most familiar with this area of music, which, aside from a few years in the 1960s, has generally operated outside the world of commercial music.

The unfairness was this: the remnants of a label that had recorded numerous long-serving and excellent musicians in the 1970s were squatting on those recordings and refusing either to rerelease them or to return the rights. The result was both artistic frustration and deprivation of a sorely needed source of revenue.

One of these musicians is the Scottish legend Dick Gaughan, who had a stroke in 2016 and was forced to give up performing. Gaughan, with help from friends, is taking action: a GoFundMe is raising the money to pay “serious lawyers” to get his rights back. Whether one loved his early music or not – and I regularly cite Gaughan as an important influence on what I play – barring him from benefiting from his own past work is just plain morally wrong. I hope he wins through; and I hope the case sets a precedent that frees other musicians’ trapped work. Copyright is supposed to help support creators, not imprison their work in a vault to no one’s benefit.

***

This has been the first week of requiring age verification for access to online content in the UK; the law came into effect on July 25. Reddit and Bluesky, as noted here two weeks ago, were first, but with Ofcom starting enforcement, many are following. Some examples: Spotify; X (exTwitter); Pornhub.

Two classes of problems are rapidly emerging: technical and political. On the technical side, so far it seems like every platform is choosing a different age verification provider. These AVPs are generally unfamiliar companies in a new market, and we are being asked to trust them with passports, driver’s licenses, credit cards, and selfies for age estimation. Anyone who uses multiple services will find themselves having to widely scatter this sensitive information. The security and privacy risks of this should be obvious. Still, Dan Milmo reports at the Guardian that AVPs are already processing five million age checks a day. It’s not clear yet whether that’s a temporary burst of one-time token creation or a permanently growing artefact of repetitious added friction, like cookie banners.

X says it will examine users’ email addresses and contact books to help estimate ages. Some systems reportedly send referring page links, opening the way for the receiving AVP to store these and build profiles. Choosing a trustworthy AVP can be tricky, and these intermediaries are in a position to log what you do and exploit the results.

The BBC’s fact-checking service finds that a wide range of public interest content, including news about Ukraine and Gaza and Parliamentary debates, is being blocked on Reddit and X. Sex workers see adults being locked out of legal content.

Meanwhile, many are signing up for VPNs at pace, as predicted. The spike has led to rumors that the government is considering banning them. This seems unrealistic: many businesses rely on VPNs to secure connections for remote workers. But the idea is alarming; its logical extension is the war on general-purpose computation Cory Doctorow foresaw as a consequence of digital rights management in 2011. A terrible and destructive policy can serve multiple masters’ interests and is more likely to happen if it does.

On the political side, there are three camps. One wants the legislation repealed. Another wants to retain the aspects many people agree on, such as criminalizing cyberflashing and some other types of online abuse, and fix its flaws. The third thinks the OSA doesn’t go far enough, and they’re already saying they want it expanded to include all services, generative AI, and private messaging.

More than 466,000 people have signed a petition calling on the government to repeal the OSA. The government responded: thanks, but no. It will “work with Ofcom” to ensure enforcement will be “robust but proportionate”.

Concrete proposals for fixing the OSA’s worst flaws are rare, but a report from the Open Rights Group offers some; it advises an interoperable system that gives users choice and control over methods and providers. Age verification proponents often compare age-gating websites to ID checks in bars and shops, but those don’t require you to visit a separate shop the proprietor has chosen and hand over personal information. At Ctrl-Shift, Kirra Pendergast explains some of the risks.
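For a sense of what “choice and control” might mean mechanically, here is a conceptual sketch – my illustration, not the Open Rights Group’s actual design – of the separation such systems aim for: a provider checks your ID once, then issues a signed token asserting only “over 18”, which a site can verify without ever learning who you are. The HMAC shared key is a simplification (a real interoperable scheme would use public-key signatures or zero-knowledge proofs), and all names and parameters below are hypothetical.

```python
# Conceptual sketch only: an age token that carries no identity.
import hmac, hashlib, json, time, secrets

AVP_KEY = secrets.token_bytes(32)  # simplification: real schemes use public-key signatures

def issue_token(over_18: bool, ttl_seconds: int = 3600) -> str:
    """The AVP checks ID out-of-band, then signs a claim with no identity fields."""
    claim = json.dumps({"over_18": over_18,
                        "exp": int(time.time()) + ttl_seconds,
                        "nonce": secrets.token_hex(8)})
    sig = hmac.new(AVP_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim + "." + sig

def verify_token(token: str) -> bool:
    """A website learns only 'over 18 and unexpired' - nothing about the person."""
    claim, sig = token.rsplit(".", 1)
    expected = hmac.new(AVP_KEY, claim.encode(), hashlib.sha256).hexdigest()
    data = json.loads(claim)
    return hmac.compare_digest(sig, expected) and data["exp"] > time.time() and bool(data["over_18"])

print(verify_token(issue_token(over_18=True)))  # True
```

The point of such designs is that the sensitive documents stay with one provider of the user’s choosing, while every site sees only the bare assertion.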

Surrounding all that is noise. A US lawyer wants to sue Ofcom in a US federal court (huh?). Reform leader Nigel Farage has called for the Act’s repeal, which led technology secretary Peter Kyle to accuse him – and, by extension, anyone else who criticizes the Act – of being on the side of sexual predators. Kyle told Mumsnet he apologizes to the generation of UK kids who were “let down” by being exposed to toxic online content because politicians failed to protect them all this time. “Never again…”

In other news, this government has lowered the voting age to 16.

Illustrations: The back cover of Dick Gaughan’s out-of-print 1972 first album, No More Forever.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.