The horns of a dilemma

It has always been possible to conceive of a future for Mastodon and the Fediverse that goes like this: incomers join the biggest servers (“instances”). The growth of those instances, if they can afford it, accelerates. When the sysadmins of smaller instances burn out and withdraw, their users also move to the largest instances. Eventually, the Fediverse landscape is dominated by a handful of very large instances (which enshittify in the traditional way) with a long tail of small and smaller ones. The very large ones begin setting rules – mostly for good reasons like combating abuse, improving security, and offering new features – that the very small ones struggle to keep up with. Eventually, it becomes too hard for most small instances to function.

This is the history of email. In 2003, when I set up my own email server at home, almost every techie had one. By this year, when I decommissioned it in favor of hosted email, almost everyone had long since moved to Gmail or Hotmail. It’s still possible to run an independent server, but the world is increasingly hostile to those who do.

Another possible Fediverse future: the cultural norms that Mastodon and other users have painstakingly developed over time become swamped by a sudden influx of huge numbers of newcomers when a very large instance joins the federation. The newcomers, who know nothing of the communities they’re joining, overwhelm their history and culture. The newcomers are despised and mocked – but meanwhile, much of the previous organically grown culture is lost, and people wanting intelligent conversation leave to find it elsewhere.

This is the history of Usenet, which in 1994 struggled to absorb 1 million AOLers arriving via a new gateway and software whose design reflected AOL’s internal design rather than Usenet’s history and culture. The result was to greatly exacerbate Usenet’s existing problems of abuse.

A third possible Fediverse future: someone figures out how to make money out of it. Large and small instances continue to exist, but many become commercial enterprises, and small instances increasingly rely on large instances to provide services the small instances need to stay functional. While both profit from that division of labor, the difficulty of discovery means small servers stay small, and the large servers become increasingly monopolistic, exploitative, and unpleasant to use. This is the history of the web, with a few notable exceptions such as Wikipedia and the Internet Archive.

A fourth possible future: the Fediverse remains outside the mainstream, and admins continue to depend on donations to maintain their servers. Over time, the landscape of servers will shift as some burn out or run out of money and are replaced. This is roughly the history of IRC, which continues to serve its niche. Many current Mastodonians would be happy with this; as long as there’s no corporate owner, no one can force anyone out of business for being insufficiently profitable.

These forking futures are suddenly topical as Mastodon administrators consider how to respond to this: Meta will launch a new app that will interoperate with Mastodon and any other network that uses the ActivityPub protocol. Early screenshots suggest a clone of Twitter, Meta’s stated target, and reports say the company is talking to celebrities like Oprah Winfrey and the Dalai Lama as potential users. The plan is reportedly that users will access the new service via their Instagram IDs and passwords. Top-down and celebrity-driven is the opposite of the Fediverse.

It should not be much comfort to anyone that the competitor the company wants to kill with this initiative is Twitter, not Mastodon, because either way Meta doesn’t care about Mastodon and its culture. Mastodon is a rounding error even for Instagram alone. Twitter is also comparatively small (and, like Reddit, too text-based to grow much further), but Meta sees in it the opportunity to capture its influencers and build profits around them.

The Fediverse is a democracy in the sense that email and Usenet were; admins get to decide their server’s policy, and users can only accept or reject by moving their account (which generally loses their history). For admins, how to handle Meta is not an easy choice. Meta has approached the admins of some of the larger Mastodon instances for discussions; they must either sign an NDA or give up the chance to influence developments. That decision is for the largest few; but potentially every Mastodon instance operator will have to decide the bigger question: do they federate with Meta or not? Refusal means their users can’t access Meta’s wider world, which will inevitably include many of their friends; acceptance means change and loss of control. As I’ve said here before, something that is “open” only to your concept of “good people” isn’t open at all; it’s closed.

At Chronicles of the Instantly Curious, Carey Lening deplores calls to shun Meta as elitist; the AOL comparison draws itself. Even so, the more imminent bad future for Mastodon is the fork this decision could create, splitting the Fediverse into two factions. Of course the point of being decentralized is to allow more choice over whom you socially network with. But until now, none of those choices took on the religious overtones associated with the most heated cyberworld disputes. Fasten your seatbelts…

Illustrations: A mastodon by Heinrich Harder (public domain, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon.

Own goals

There’s no point in saying I told you so when the people you’re saying it to got the result they intended.

At the Guardian, Peter Walker reports the Electoral Commission’s finding that at least 14,000 people were turned away from polling stations in May’s local elections because they didn’t have the right ID as required under the new voter ID law. The Commission thinks that’s a huge underestimate; 4% of people who didn’t vote said it was because of voter ID – which Walker suggests could mean 400,000 were deterred. Three-quarters of those lacked the right documents; the rest opposed the policy. The demographics of this will be studied more closely in a report due in September, but early indications are that the policy disproportionately deterred people with disabilities, people from certain ethnic groups, and people who are unemployed.

The fact that the Conservatives, who brought in this policy, lost big time in those elections doesn’t change its wrongness. But it did lead the MP Jacob Rees-Mogg (Con-North East Somerset) to admit that this was an attempt to gerrymander the vote that backfired because older voters, who are more likely to vote Conservative, also disproportionately don’t have the necessary ID.

***

One of the more obscure sub-industries is the business of supplying ad services to websites. One such little-known company is Criteo, which provides interactive banner ads that are generated based on the user’s browsing history and behavior using a technique known as “behavioral retargeting”. In 2018, Criteo was one of seven companies listed in a complaint Privacy International and noyb filed with three data protection authorities – the UK, Ireland, and France. In 2020, the French data protection authority, CNIL, launched an investigation.

This week, CNIL issued Criteo with a €40 million fine over failings in how it gathers user consent, a ruling noyb calls a major blow to Criteo’s business model.

It’s good to see the legal actions and fines beginning to reach down into adtech’s underbelly. It’s also worth noting that the CNIL was willing to fine a *French* company to this extent. It makes it harder for the US tech giants to claim that the fines they’re attracting are just anti-US protectionism.

***

Also this week, the US Federal Trade Commission announced it’s suing Amazon, claiming the company enrolled millions of US consumers into its Prime subscription service through deceptive design and sabotaged their efforts to cancel.

“Amazon used manipulative, coercive, or deceptive user-interface designs known as ‘dark patterns’ to trick consumers into enrolling in automatically-renewing Prime subscriptions,” the FTC writes.

I’m guessing this is one area where data protection laws have worked. In my UK-based ultra-brief Prime outings to watch the US Open tennis, canceling has taken at most two clicks. I don’t recognize the tortuous process Business Insider documented in 2022.

***

It has long been no secret that the secret behind AI is human labor. In 2019, Mary L. Gray and Siddharth Suri documented this in their book Ghost Work. Platform workers label images and other content, annotate text, and solve CAPTCHAs to help train AI models.

At MIT Technology Review, Rhiannon Williams reports that platform workers are using ChatGPT to speed up their work and earn more. A study (PDF) by researchers at the Swiss Federal Institute of Technology found that between 33% and 46% of the 44 workers they asked to summarize 16 extracts from medical research papers used AI models to complete the task.

It’s hard not to feel a little gleeful that today’s “AI” is already eating itself via a closed feedback loop. It’s not good news for platform workers, though, because the most likely consequence will be increased monitoring to force them to show their work.

But this is yet another case in which computer people could have learned from their own history. In 2008, researchers at Google published a paper suggesting that Google search data could be used to spot flu outbreaks. Sick people searching for information about their symptoms could provide real-time warnings ten days earlier than the Centers for Disease Control could.

This actually worked, some of the time. However, as Kaiser Fung reported at Harvard Business Review in 2014, as early as 2009 Google Flu Trends missed the swine flu pandemic; in 2012, researchers found that it had overestimated the prevalence of flu for 100 out of the previous 108 weeks. More data is not necessarily better, Fung concluded.

In 2013, as David Lazer and Ryan Kennedy reported for Wired in 2015 in discussing their investigation into the failure of this idea, GFT missed by 140% (without explaining what that means). Lazer and Kennedy found that Google’s algorithm was vulnerable to poisoning by unrelated seasonal search terms and by search terms that were correlated purely by chance, and that it failed to take into account changing user behavior, as when Google introduced autosuggest and added health-related search terms. The “availability” cognitive bias also played a role: when flu is in the news, searches go up whether or not people are sick.

While the parallels aren’t exact, large language modelers could have drawn the lesson that users can poison their models. ChatGPT’s arrival for widespread use will inevitably thin out the proportion of text that is human-written – and taint the well from which LLMs drink. Everyone imagines the next generation’s increased power. But it’s equally possible that the next generation will degrade as the percentage of AI-generated data rises.

Illustrations: Drunk parrot seen in a Putney garden (by Simon Bisson).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

All change

One of the reasons Silicon Valley technology company leaders sometimes display such indifference to the desires of their users is that they keep getting away with it. At Facebook, now Meta, after each new privacy invasion, the user base just kept getting bigger. At Twitter, despite much outrage at its new owner’s policies, although it definitely feels emptier, the exodus toward other sites appears to have dropped off. At Reddit, where CEO Steve Huffman has used the term “landed gentry” to denigrate moderators leading protests against a new company policy…well, we’ll see.

In April, Reddit announced it would begin charging third parties for access to its API, the interface that gives computers outside its system access to the site’s data. Charges will apply to everyone except developers building apps and bots that help people use Reddit and academic/non-commercial researchers studying Reddit.

In May, the company announced pricing: $12,000 per 50 million requests. This compares to Twitter’s recently announced $42,000 per 50 million tweets and photo site Imgur‘s $166 per 50 million API calls. Apollo, maker of the popular iOS Reddit app, estimates that it would now cost $20 million a year to keep its app running.
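Apollo’s figure is easy to sanity-check. A minimal back-of-the-envelope sketch in Python, assuming (my assumption, not a figure given in this piece) that Apollo handles roughly 7 billion Reddit API calls a month, as its developer has said publicly:

# Rough check of Apollo's estimate; the 7-billion-calls-per-month figure is an
# assumption drawn from the developer's public statements, not from this column.
calls_per_month = 7_000_000_000
price_per_call = 12_000 / 50_000_000           # Reddit's quoted $12,000 per 50 million requests
annual_cost = 12 * calls_per_month * price_per_call
print(f"${annual_cost:,.0f} per year")         # about $20,160,000 - in line with Apollo's ~$20 million

On those numbers, the estimate isn’t hyperbole; it falls straight out of the published price.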

The reasoning behind this could be summed up as, “They cost us real money; why should we help them?” Apollo’s app is popular, it appears, because it offers a cleaner interface. But it also eliminates Reddit’s ads, depriving the site of revenue. Reddit is preparing for an IPO later this year against stiff headwinds.

A key factor in this timing is the new gold rush around large language models, which are being built by scraping huge amounts of text anywhere they can find it. Taking “our content”, Huffman calls it, suggesting Reddit deserves to share in the profits while eliding the fact that said content is all user-created.

This week, thousands of moderators shuttered their forums (subreddits) in protest. At The Verge, Jay Peters reports that more than 8,000 (out of 138,000) subreddits went dark for 48 hours from Monday to Wednesday. Given Huffman’s the-blackout-will-pass refusal to budge, some popular forums have vowed to continue the protest indefinitely.

Some redditors have popped up on other social media to ask about viable alternatives (they’re also discussing this question on Reddit itself). But moving communities is hard, which is why these companies understand their users’ anger is rarely an existential threat.

The most likely outcome is that redditors are about to confront the fate that eventually befalls almost every online community: the people they *thought* cared about them are going to sell them to people who *don’t* care about them. Reddit as they knew it is entering a phase of precarity that history says will likely end with the system’s shutdown or abandonment. Shareholders’ and owners’ desire to cash out and indifference to Twitter’s actual users is how Elon Musk ended up in charge. It’s how NBC Universal shut down Television without Pity, how Yahoo killed GeoCities, and how AOL spitefully dismantled CompuServe.

The lesson from all of these is: shareholders and corporate owners don’t have to care about users.

The bigger issue, however, is that Reddit, like Twitter, is not currently a sustainable business. Founded in 2005, it was a year old when Conde Nast bought it, only to spin it out again into an independent subsidiary in 2011. Since then it has held repeated funding rounds, most recently in 2021, when it raised $700 million. Since its IPO filing in December 2021, its value has dropped by a third. It will not survive in any form without new sources of revenue; it’s also cutting costs with layoffs.

Every Internet service or site, from Flickr to bitcoin, begins with founders and users sharing the same goal: for the service to grow and prosper. Once the service has grown past a certain point, however, their interests diverge. Users generally seek community, entertainment, and information; investors only seek profits. The need to produce revenues led Google’s chiefs, who had previously held that ads would inevitably corrupt search results, to hire Sheryl Sandberg to build the company’s ad business. Seven years later, facing the same problem, Facebook did the same thing – and hired the same person to do it. Reddit has taken much longer than most Internet companies to reach this inevitable fork.

Yet the volunteer human moderators Huffman derided are the key to Reddit’s success; they set the tone in each subreddit community. Reddit’s topic-centered design means much more interaction with strangers than the person-centered design of blogs and 2010-era social media, but it also allows people with niche interests to find both experts and each other. That fact plus human curation means that lately many add “reddit” to search terms in order to get better results. Reddit users’ loss is therefore also our loss as we try to cope with the enshittification of the most monopolistic Internet services.

Its board still doesn’t have to care.

None of this is hopeful. Even if redditors win this round and find some compromise to save their favorite apps, once the IPO is past, any power they have will be gone.

“On the Internet your home will always leave you,” someone observed on Twitter a couple of years ago. I fear that moment is now coming for Reddit. Next time, build your community in a home you can own.

Illustration: Reddit CEO and co-founder Steve Huffman speaking at the Oxford Union in 2019.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

Snowden at ten

As almost every media outlet has headlined this week, it is now ten years since Edward Snowden alerted the world to the real capabilities of the spy agencies, chiefly but not solely the US National Security Agency. What is the state of surveillance now? most of the stories ask.

Some samples: at the Open Rights Group, executive director Jim Killock summarizes what Snowden revealed; Snowden himself is interviewed; the Guardian’s editor at the time, Alan Rusbridger, recounts events at the paper, which co-published Snowden’s discoveries with the Washington Post; journalist Heather Brooke warns of the increasing sneakiness of government surveillance; and Jessica Lyons Hardcastle outlines the impact. Finally, at The Atlantic, Ewen MacAskill, one of the Guardian journalists who worked on the Snowden stories, says only about 1% of Snowden’s documents were ever published.

As has been noted here recently, it seems as though everywhere you look surveillance is on the rise: at work, on privately controlled public streets, and everywhere online by both government and commercial actors. As Brooke writes and the Open Rights Group has frequently warned, surveillance that undermines the technical protections we rely on puts us all in danger.

The UK went on to pass the Investigatory Powers Act, which basically legalized what the security services were doing, but at least did add some oversight. US courts found that the NSA had acted illegally and in 2015 Congress made bulk collection of Americans’ phone records illegal. But, as Bruce Schneier has noted, Snowden’s cache of documents was aging even in 2013; now they’re just old. We have no idea what the secret services are doing now.

The impact in Europe was significant: in 2016 the EU adopted the General Data Protection Regulation. Until Snowden, data protection reform looked like it might wind up watering down data protection law in response to an unprecedented amount of lobbying by the technology companies. Snowden’s revelations raised the level of distrust and also gave Max Schrems some additional fuel in bringing his legal actions against EU-US data deals and US corporate practices that leave EU citizens open to NSA snooping.

The really interesting question is this: what have we done *technically* in the last decade to limit government’s ability to spy on us at will?

Work on this started almost immediately. In early 2014, the World Wide Web Consortium and the Internet Engineering Task Force teamed up on a workshop called Strengthening the Internet Against Pervasive Monitoring (STRINT). Observing the proceedings led me to compare the size of the task ahead to boiling the ocean. The mood of the workshop was united: the NSA’s actions as outlined by Snowden constituted an attack on the Internet and everyone’s privacy, a view later codified in RFC 7258, which declares pervasive monitoring an attack to be mitigated in the design of Internet protocols. The workshop also published an official report.

Digression for non-techies: “RFC” stands for “Request for Comments”. The thousands of RFCs since 1969 include technical specifications for Internet protocols, applications, services, and policies. The title conveys the process: they are published first as drafts and incorporate comments before being finalized.

The crucial point is that the discussion was about *passive* monitoring, the automatic, ubiquitous, and suspicionless collection of Internet data “just in case”. As has been said so many times about backdoors in encryption, the consequence of poking holes in security is to make everyone much more vulnerable to attacks by criminals and other bad actors.

So a lot of that workshop was about finding ways to make passive monitoring harder. Obviously, one method is to eliminate vulnerabilities, especially those the NSA planted. But it’s equally effective to make monitoring more expensive. Given the law of truly large numbers, even a tiny extra cost per user creates unaffordable friction. They called it a ten-year project, which takes us to…almost now.
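A toy illustration of that arithmetic, with numbers invented purely for the sake of the example:

# Illustrative only: the per-user cost is made up; the point is the scale effect.
users_monitored = 3_000_000_000            # order of magnitude of Internet users
extra_cost_per_user_per_day = 0.001        # a tenth of a cent of added compute/handling
annual_cost = users_monitored * extra_cost_per_user_per_day * 365
print(f"${annual_cost:,.0f} per year")     # $1,095,000,000 - a tiny per-user cost becomes billions

Even a fraction of a cent per user per day, multiplied across the Internet’s population, turns suspicionless collection into a billion-dollar line item.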

Some things have definitely improved, largely through the expanded use of encryption to protect data in transit. On the web, Let’s Encrypt, launched in 2014, makes it easy and free to obtain a certificate for any website. Search engines contribute by favoring encrypted (that is, HTTPS) web links over unencrypted ones (HTTP). Traffic between email servers has gone from being transmitted in cleartext to being almost all encrypted. Mainstream services like WhatsApp have added end-to-end encryption to the messaging used by billions. Other efforts have sought to reduce the use of fixed long-term identifiers, such as MAC addresses, that can make tracking individuals easier.

At the same time, even where there are data protection laws, corporate surveillance has expanded dramatically. And, as has long been obvious, governments, especially democratic governments, have little motivation to stop it. Data collection by corporate third parties does not appear in the public budget, does not expose the government to public outrage, and is available via subpoena any time government officials want. If you are a law enforcement or security service person, this is all win-win; the only data you can’t get is the data that isn’t collected.

In an essay reporting on the results of the work STRINT began, currently circulating in draft as part of the ten-year assessment, STRINT convenor Stephen Farrell writes: “So while we got a lot right in our reaction to Snowden’s revelations, currently, we have a ‘worse’ Internet.”

Illustrations: Edward Snowden, speaking to Glenn Greenwald in a screenshot from Laura Poitras’ film Prism from Praxis Films (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

Review: A Hacker’s Mind

A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend Them Back
by Bruce Schneier
Norton
ISBN: 978-0-393-86666-7

One of the lessons of the Trump presidency has been how much of the US government runs on norms that have developed organically over the republic’s 247-year history. Trump felt no compunction about breaking those norms. In computer security parlance, he hacked the system by breaking those norms in ways few foresaw or thought possible.

This is the kind of global systemic hacking Bruce Schneier explores in his latest book, A Hacker’s Mind. Where most books on this topic limit their focus to hacking computers, Schneier opts to start with computer hacking, use it to illustrate the hacker’s habit of mind, and then find that mindset in much larger and more consequential systemic abuses. In his array of hacks by the rich and powerful, Trump is a distinctly minor player.

First, however, Schneier introduces computer hacking from the 1980s onward. In this case, “hacking” is defined in the old way: active subversion of a system to make it do things its designers never intended. In the 1980s, “hacker” was a term of respect applied to you by others admiring your cleverness. It was only in the 1990s that common usage equated hacking with committing crimes with a computer. In his 1984 book Hackers, Steven Levy showed this culture in action at MIT. It’s safe to say that without hacks we wouldn’t have the Internet.

The hacker’s habit of mind can be applied to far more than just technology. It can – and is today being used to – subvert laws, social norms, financial systems, politics, and democracy itself. This is Schneier’s main point. You can draw a straight line from technological cleverness to Silicon Valley’s “disrupt” to the aphorism coined by Georgetown law professor Julie Cohen, whom Schneier quotes: “Power interprets regulation as damage, and routes around it”.

In the first parts of the book he discusses the impact of system vulnerabilities and the basic types of response. In a compact amount of space, he covers patching, hardening, and simplifying systems, evaluating threat models as they change, and limiting the damage a hack can cause. Or, the hack may be normalized, becoming part of our everyday landscape.

Then he gets serious. In the bulk of the book, he explores applications: hacking financial, legal, political, cognitive, and AI systems. Specialized AI – Schneier wisely avoids the entirely speculative hype and fear around artificial general intelligence – is both exceptionally vulnerable to hacks and an exceptional vector for them. Anthropomorphic robots especially can be designed to hack our emotional responses.

“The rich are better at hacking,” he observes. They have greater resources, more powerful allies, and better access. If the good side of hacking is innovation, the bad side is societal damage, increasing unfairness and inequality, and the subversion of the systems we used to trust. Schneier believes all of this will get worse because today’s winners have so much ability to hack what’s left. Hacking, he says, is an existential threat. Nonetheless, he has hope: we *can* build resilient governance structures. We must hack hacking.

Microsurveillance

“I have to take a photo,” the courier said, raising his mobile phone to snap a shot of the package on the stoop in front of my open doorway.

This has been the new thing. I guess the spoken reason is to ensure that the package recipient can’t claim that it was never delivered, protecting all three of the courier, the courier company, and the shipper from fraud. But it feels like the unspoken reason is to check that the delivery guy has faithfully completed his task and continued on his appointed round without wasting time. It feels, in other words, like the delivery guy is helping the company monitor him.

I said this, and he agreed. I had, in accordance with the demands of a different courier, pinned a note to my door authorizing the deliverer to leave the package on the doorstep in my absence. “I’d have to photograph the note,” he said.

I mentioned American truck drivers, who are pushing back against in-cab cameras and electronic monitors. “They want to do that here, too,” he said. “They want to put in dashboard cameras.” Since then, in at least some cases – for example, Amazon – they have.

Workplace monitoring was growing in any case, but, as noted in 2021, the explosion in remote working brought by the pandemic normalized a level of employer intrusion that might have been more thoroughly debated in less fraught times. The Trades Union Congress reported in 2022 that 60% of employees had experienced being tracked in the previous year. And once in place, the habit of surveillance is very hard to undo.

When I was first thinking about this piece in 2021, many of these technologies were just being installed. Two years later, there’s been time for a fight back. One such story comes from the France-based company Teleperformance, one of those obscure, behind-the-scenes suppliers to the companies we’ve all heard of. In this case, the company in the shadows supplies remote customer service workers to clients that include, in the UK alone, the government’s health and education departments, NHS Digital, the RAF and Royal Navy, and the Student Loans Company, as well as Vodafone, eBay, Aviva, Volkswagen, and the Guardian itself; some of Teleperformance’s Albanian workers provide service to Apple UK.

In 2021, Teleperformance demanded that remote workers in Colombia install in-home monitoring and included a contract clause requiring them to accept AI-powered cameras with voice analytics in their homes and allowing the company to store data on all members of the worker’s family. An earlier attempt at the same thing in Albania failed when the Information and Data Protection Commissioner stepped in.

Teleperformance tried this in the UK, where the unions warned about the normalization of surveillance. The company responded that the cameras would only be used for meetings, training, and scheduled video calls so that supervisors could check that workers’ desks were free of devices deemed to pose a risk to data security. Even so, in August 2021 Teleperformance told Test and Trace staff to limit breaks to ten minutes in a six-hour shift and to select “comfort break” on their computers (so they wouldn’t be paid for that time).

Other stories from the pandemic’s early days show office workers being forced to log in with cameras on for a daily morning meeting or stay active on Slack. Amazon has plans to use collected mouse movements and keystrokes to create worker profiles to prevent impersonation. In India, the government itself demanded that its accredited social health activists install an app that tracks their movements via GPS and monitors their uses of other apps.

More recently, Politico reports that Uber drivers must sign in with a selfie; they will be banned if the facial recognition verification software fails to find a match.

This week, at the Guardian, Clea Skopoleti surveyed the current state of workplace surveillance. In one of her examples, monitoring software calculates “activity scores” based on typing and mouse movements – so participating in Zoom meetings, watching work-related video clips, and thinking don’t count. Young people, women, and minority workers are more likely to be surveilled.
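To make concrete why meetings and thinking register as idleness, here is a purely hypothetical sketch in Python of how such a score might be computed; the formula and weights are my invention, not any vendor’s actual algorithm:

# Hypothetical "activity score": only keyboard and mouse events count, so
# anything that generates no input events is indistinguishable from idling.
def activity_score(keystrokes_per_min: float, mouse_events_per_min: float) -> float:
    return min(100.0, keystrokes_per_min + 0.5 * mouse_events_per_min)

print(activity_score(60, 40))  # typing steadily: 80.0
print(activity_score(0, 0))    # in a Zoom meeting, or thinking: 0.0

Whatever a real vendor ships will be more elaborate, but the blind spot is structural: if the software only sees input events, everything else looks like slacking.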

One employee Skopoleti interviews takes unpaid breaks to carve out breathing space in which to work; another reports having to explain the length of his toilet breaks. A third, an English worker in social housing, reports that his vehicle is tracked so closely that a manager phones if they think he’s not in the right place or is taking too long.

This is a surveillance-breeds-distrust-breeds-more-surveillance cycle. As Ellen Ullman long ago observed, systems infect their owners with the desire to do more and more with them. It will take time for employers to understand the costs in worker burnout, staff turnover, and absenteeism.

One way out is through enforcing the law: in 2020, the ICO investigated Barclays Bank, which was accused of spying on staff via software that tracked how they spent their time; the bank dropped it. In many of these stories, however, the surveillance suppliers say they operate within the law.

The more important way out is worker empowerment. In Colombia, Teleperformance has just guaranteed its 40,000 workers the right to form a union.

First, crucially, we need to remember that surveillance is not normal.

Illustrations: The boss tells Charlie Chaplin to get back to work in Modern Times (1936).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.