The science fiction author Charles Stross had a moment of excitement on Mastodon this week: WRITER CHALLENGE!
Stross challenged writers to use the word “esquivalience” in their work. The basic idea: turn this Pinocchio word into a “real” word.
Esquivalience is the linguistic equivalent of a man-made lake. The creator, editor Christine Lindberg, invented it for the 2001 edition of the New Oxford American Dictionary and defined it as “the willful avoidance of one’s official responsibilities; the shirking of duties”. It was a trap to catch anyone republishing the dictionary rather than developing their own (a job I have actually done). This is a common tactic for protecting large compilations where it’s hard to prove copying – fake streets are added to maps, for example, and the people who rent out mailing lists add ringers whose use will alert them if the list is used outside the bounds of the contractual agreement.
There is, however, something peculiarly distasteful about fake entries in supposedly authoritative dictionaries, even though I agree with Lindberg that “esquivalience” is a pretty useful addition to the language. It’s perfect – perhaps in the obvious adjectival form “esquivalient” – for numerous contemporary politicians, though here be dragons: “willful” risks libel actions.
Probably most writers have wanted to make up words, and many have, from playwright and drama critic George S. Kaufman, often credited for coining, among other things, “underwhelmed”, to Anthony Burgess, who invented an entire futurist street language for A Clockwork Orange. Some have gone so far as to create enough words to publish dictionaries – such as the humorist Gelett Burgess, whose Burgess Unabridged (free ebook!) compiles “words you’ve always needed”. From that collection, I have always been particularly fond of Burgess’s “wox”, defined as “a state of placid enjoyment; sluggish satisfaction”. It seems particularly apt in the hours immediately following Thanksgiving dinner.
In these cases, though, the context lets you know the language is made up. The dictionary is supposed to be authoritative, admitting words only after they are well-established. The presence of fake words feels damaging in a way that a fake place on a map doesn’t. It’s comparatively easy to check whether a place exists by going there, but at some point down the echoing corridors of time *every* word was used for the first time. Pinpointing exactly when is hard unless someone ‘fesses up. I don’t like the idea that my dictionary is lying to me. Better if NOAD had planted two fake words and had them recursively point at each other for their definitions.
I had been avoiding the ChatGPT hoopla, but it seemed plausible to ask it: “Is ‘esquivalience’ a real word?” Its response started well enough: “‘Esquivalience’ is not recognized as a standard word in the English language. It is a made-up word…” And then cuckoo land arrived: “…that was created by a writer named Adam Jacot de Boinod for his book “The Meaning of Tingo”.” Pause to research. The book in question was written in 2006. The word “esquivalience” does not, from a quick text search, appear in it. Huh? I went on to suggest Christine Lindberg’s name to ChatGPT, and after a digression attributing the word to the singer-songwriter Christine Lavin, it appeared to find references to Lindberg’s “claim” in its corpus of data. But, it continued to warn in every response, “it is still not recognized as a standard word in the English language”. It’s a bot. It’s not being stern. It doesn’t know what it’s saying. Getting it to agree on Christine Lindberg as the original source isn’t winning the argument. It’s just giving it a different prompt.
I ask if it has ever encountered the word “wox”. “As an AI language model, I have certainly come across the word ‘wox’.” A human reads lightly insulted pride into that. Resist. It’s a bot. It has no pride. The bot went on to speculate on possible origins (“it may be a neologism…”). I ask if it’s heard of Gelett Burgess. Oh, yes, followed by a short biography. Then, when told Burgess invented “wox”: “Gelett Burgess did indeed invent the word…” It goes on to cite the correct book…but then continues that Burgess defined it as “to make fun of, to poke fun at”, which is absolutely not what Burgess says, and I know this because I have the original 1914 book right here, and the definition I cited above is right there on p112. The bot does “apologize” every time you point out a mistake, though.
This isn’t much of a sample, but based on it, I find ChatGPT quite alarming as an extraordinarily efficient way of undermining factual knowledge. The responses sound authoritative, but every point must be fact-checked. It could not be worse-suited for today’s world, where everyone wants fast answers. Coupled with search, it turns the algorithms that give us answers into even more obscure and less trustworthy black boxes. Wikipedia has many flaws, but its single biggest strength is its sourcing and curation; how every page has been changed and shaped over the years is open for inspection.
So when ChatGPT went on to say that Gelett Burgess is widely credited with coining the term “blurb”, Wikipedia is where I turned. Wikipedia agrees (when asked for its source, ChatGPT cites the Oxford English Dictionary). Burgess FTW.
Illustrations: Gelett Burgess’s 1914 Burgess Unabridged, a dictionary of made-up words.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.
Years ago—in the teeth of a bookshelf crisis—I took all of my encyclopaedias and all but a select few dictionaries to the local Samaritans charity shop thinking they could use the money, someone else could use the books, and I could always use t’internet. I’d not questioned that decision before now…
I know, right?
wg
Update: Today I tried asking it the same question again, “Is ‘esquivalience’ a real word?”, and got an entirely new answer, attributing it to comedian Adam Rifkin. When prompted, it agreed, as before, that this was a mistake. But getting the right answer out of it appears to be entirely a matter of giving it the right prompt. I can’t think of anything more dangerous to tie to a search engine.