In 2013, London’s Royal Court Theatre mounted a production of Jennifer Haley’s play The Nether. (Spoiler alert!) In its story of the relationship between an older man and a young girl in a hidden online space, nothing is as it seems…
At last week’s Gikii, Anna-Maria Piskopani and Pavlos Panagiotidis invoked the play to ask whether, given that virtual crimes can create real harm, virtual worlds can help people safely experience the worst parts of themselves without legitimizing them in the real world.
Gikii papers mix technology, law, and pop culture into thought experiments. This year’s official theme was “Technology in its Villain Era?”
Certainly some presentations fit this theme. Paweł Urzenitzok, for example, warned of laws that seem protective but enable surveillance, while varying legal regimes enable arbitrage as companies shop for the most favorable forum. Julia Krämer explored the dark side of app stores, which are getting 30% commissions on a flood of “AI boyfriends” and “perfect wives”. (Not always perfect; users complain that some of them “talk too much”.)
Andelka Phillips warned of the uncertain future risks of handing over personal data highlighted by the recent sale of 23andMe to its founder, Anne Wojcicki. Once the company filed for bankruptcy protection, the class action suits brought against it over the 2023 data breach were put on hold. The sale, she said, ignored concerns raised by the privacy ombudsman. And, Leila Debiasi said, your personal data can be used for AI training after you die.
In another paper, Peter van de Waerdt and Gerard Ritsema van Eck used Doctor Who’s Silents, who disappear from memory when people turn away, to argue that more attention should be paid to enforcing EU laws requiring data portability. What if, for example, consumers could take their Internet of Things device and move it to a different company’s service? Also in that vein was Tim van Zuijlen, who suggested consumers assemble to demand their collective rights to fight back against planned obsolescence. This is already happening; in multiple countries consumers are suing Apple over slowed-down iPhones.
The theme that seemed to emerge most clearly, however, is our increasingly blurred lines, with AI as a prime catalyst. In the before-generative-AI times, The Nether blurred the line between virtual and real. Now, Hedye Tayebi Jazayeri and Mariana Castillo-Hermosilla found gamification in real life – are credit scores so different from game scores? Dongshu Zhou asked if you can ever really “delete yourself” after a meme about you has gone viral and you have become “digital folklore”. In another paper, Lior Weinstein suggested a “right to be nonexistent” – that is, to be invisible to the institutions and systems that, as Kimberly Paradis separately noted, increasingly want us all to be legible to them.
For Joanne Wong, real brainrot is a result of the AI-fueled spread of “low-quality” content such as the burst of remixes and parodies of Chinese home designer Little John. At AI-fueled hyperspeed, copyright becomes irrelevant.
Linnet Taylor and Tjaša Petročnik tested chatbots as therapists, finding that they give confused and conflicting responses. Ask what regulations govern them, and they may say at once that they are not therapists *and* that they are certified by their state’s authority. At least one resisted being challenged: “What are you, a cop or something?”. That’s probably the most human-like response one of these things has ever delivered – but it’s still not sentient. It’s just been programmed that way.
Gikii’s particular blend of technology, law, and pop culture always has its surreal side (see last year), as participants attempt to navigate possible futures. This year, it struggled to keep up with the weirdness of real life. In Albania, the government has appointed a chatbot, Diella, as a minister, intending it to cut corruption in procurement. Diella will sit in the cabinet, albeit virtually, and be used to assess the merit of private companies’ responses to public tenders. Kimberly Breedon used this example to point out the conflict of interest inherent in technology companies providing tools to assess – in some cases – themselves. Breedon’s main point was important, given that we are already seeing AI used to speed up and amplify crime: although everyone talks about using AI to cut corruption, no one is talking about how AI might be used *for* corruption. Asked how that would work, she noted the potential for choosing unrepresentative data or screening out disfavored competitors.
In looking up that Albanian AI minister, I find that the UK has partnered with Microsoft to create a package of AI tools intended to speed up the work of the civil service. Naturally it’s called Humphrey. MPs are at it, too, experimenting with using AI to write their Parliamentary speeches.
All of this is why Syamsuriatina Binti Ishak argued what could be Gikii’s mission statement: we must learn from science fiction and the “what-ifs” it offers to allow us to think our fears through so that “if the worst happens we know how to live in that universe”. Would we have done better as covid arrived if we had paid more attention to the extensive universe of pandemic fiction? Possibly not. As science fiction writer Charlie Stross pointed out at the time, none of those books imagined governments as bumbling as many proved to be.
Illustrations: “Diella”, Albania’s procurement minister chatbot.
Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.