Turn left at the robot


It’s easy to forget, now that so many computer interfaces seem to be getting more inscrutable, that in the early 1990s the shift from the command line to graphical interfaces, and the desire to reach a wider mass market, brought a new focus on usability. In classics like Don Norman’s The Design of Everyday Things, a common idea was that a really well-designed system should not require a manual because the design itself would tell the user how to operate it.

What is intuitive, though, is defined by what you’re used to. The rumor that Apple might replace the letters on keyboard keys with glyphs is a case in point. People who’ve grown up with smartphones might like the idea of glyphs that match those on their phones. But try doing technical support over the phone and describing a glyph; it’s far easier to spell out letters.

The interfaces standardized in those years relied on metaphors based on familiar physical objects: a floppy disk to save a file, an artist’s palette for color choices. In 1993, leading software companies like Microsoft and Lotus set up usability labs; it took watching user testing to convince developers that struggling to use their software was not a sign the users were stupid.

With that background, it was interesting to attend this year’s edition of the 20-year-old Human-Robot Interaction conference. Robots don’t need metaphors; they *are* the thing itself. Although: why shouldn’t a robot also have menu buttons for common functions?

In the paper I found most interesting and valuable, Lauren Wright examined the use of a speaking Misty robot to deliver social-emotional learning lessons. Wright’s group tested the value of deception – that is, having the robot speak in the first person about its “family”, experiences, and “emotions” – against a more truthful presentation, in which the robot is neutral, tells its stories in the third person, refers to its programmers, and professes no humanity. The researchers were testing the widely held assumption that kids engage more with robots programmed to appear more human. They found the opposite: while both versions significantly increased the children’s learning, the kids who used the factual robot showed more engagement and higher scores, in that they used more concepts from the lesson in their answers. This really shouldn’t be surprising. Children don’t in general respond well to deception. Really, who does?

The children’s personal reactions to the robots were at least as interesting. In Michael F. Xu’s paper, the researchers held co-design sessions and then installed a robot in eight family homes to act as a neutral third-party enforcer, issuing timely reminders on behalf of busy parents. Some of the families did indeed report that the robot’s reminders got stuff done more efficiently. On the other hand, the experiment was short – only four days – and you have to wonder whether that would still be true after the novelty wore off. There were hints of this from the kids, some of whom pushed back. One simply bypassed a robot reminding him of the limits on his TV viewing by taking the TV upstairs, where the robot couldn’t go. Another reacted as I would at any age and told the robot to “shut up”.

The fact-versus-fiction presentation included short video clips of some of the kids’ interactions with the robot tutor. In one, a boy placed his hands on either side of the robot’s “face” while it was talking and kept moving its head around, exploring the robot’s physical capabilities (or trying to take its head off?). The speaker ignored this entirely, but the sight hilariously made an important point: the robot’s physical form speaks even when the robot is silent.

We saw this at We Robot 2016, when a Jamaican lawyer asked Olivier Guilhem, from Aldebaran Robotics, which makes Pepper, “Why is the robot white?” His response: “It looks clean.” This week, one paper tried to tease out how “representation bias” – assumptions about gender, skin tone, dis/ability, accessibility, size, age – affects users’ reactions. In the dataset used to train an AI model, bias may be baked in through the historical record. With robots, bias can also present directly through the robot’s design, as Yolande Strengers and Jenny Kennedy showed in their 2020 book The Smart Wife. Despite its shiny, unmistakable whiteness, Pepper’s shape was ambiguous enough for its gender to be interpreted differently in different cultures. In the HRI paper, the researchers concluded that biases in robot design could perpetuate occupational stereotypes – “technological segregation”. They also found their participants consistently preferred non-skin tones – in their examples, silver and light teal.

“Who builds AI shapes what AI becomes,” said Ben Rosman, who outlined a burgeoning collaborative effort to build a machine learning community across Africa and redress its underrepresentation. The same is true with robots: many, many cultural norms affect how humans interact with them. That information is signal, not noise, he says, and should be captured to enable robots to operate across wide ranges of human context without relying on “brittle defaults” that interpret human variation as failures. “Turn left at the robot” makes perfect sense once you know that in South Africa traffic lights are called “robots”.

Illustrations: Rosey, the still-influential “old demo model” robot maid in The Jetsons (1962-1963).

Also this week:
At the Plutopia podcast, we interview Marc Abrahams, founder of the Ig Nobel awards.
At Skeptical Inquirer, the latest Letter to America finds David Clarke conducting the English folklore survey.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Author: Wendy M. Grossman

Covering computers, freedom, and privacy since 1991.