On the eve of We Robot 2026, here are links to my summaries of every year since 2012, the inaugural conference, except 2014, which I missed for family reasons. There was no conference in 2024 in order to move the event back to its original April schedule (covid caused its move to September in 2020). These are my personal impressions; nothing I say here should be taken as representing the conference, its founders, its speakers, or their institutions.
We Robot was co-founded by Michael Froomkin, Ryan Calo, and Ian Kerr to bring together lawyers and engineers to think early and long about the coming conflicts in robots, law, and policy.
2025 Predatory inclusion: In Windsor, Ontario, a few months into the new US administration, the sudden change in international relations highlights the power imbalances inherent in many of today’s AI systems. Catoptromancy: in workshops, we hear a librarian propose useful AI completely out of step with today’s corporate offerings, and mull how to apply existing laws to new scenarios.
2024 No conference.
2023 The end of cool: after struggling to design a drone delivery service that had benefits over today’s cycling couriers, we find ourselves less impressed by robots that can do somersaults but not anything obviously useful; the future may have seemed more exciting when it was imaginary.
2022 Insert a human: following a long-held conference theme about “humans in the loop”, “robots” are now “sociotechnical systems”. Coding ethics: where Asimov’s laws were just a story device, in workshops we try to work out how to design a real ethical robot.
2021 Plausible diversions: maybe any technology sufficiently advanced to seem like magic can be well enough understood that we can assign responsibility and liability? Is the juice worth the squeeze?: In workshops, we mull how to regulate delivery robots, which will likely have no user-serviceable parts. Title from Woody Hartzog.
2020 (virtual) The zero on the phone: AI exploitation and bias embedded in historical data become what one speaker calls “unregulated experimentation on humans…without oversight or control”.
2019 Math, monsters, and metaphors: We dissect the trolley problem and find the true danger on the immediate horizon is less robots, more the “pile of math that does some stuff” we call “AI”. The Algernon problem: in workshops, new disciplines joining the We Robot family remind us that robots/AI are carrying out the commands of distant owners.
2018 Deception. We return to the question of what makes robots different and revisit Madeleine Clare Elish’s moral crumple zones after the first pedestrian death by self-driving car. Late, noisy, and wrong: in workshops, engineers Bill Smart and Cindy Grimm explain why sensors never capture what you think they’re capturing and how AI systems use their data.
2017 Have robot, will legislate: Discussion of risks this year focuses on the intermediate situation, when automation and human norms must co-exist.
2016 Humans all the way down: Madeleine Clare Elish introduces “moral crumple zones” in a paper that will resonate through future years. The lab and the world: in workshops, Bill Smart uses conference attendees in formation to show why getting a robot to do anything is difficult.
2015 Multiplicity: When in the life of a technology is the right time for regulatory intervention?
2014 Missed conference.
2013 Cautiously apocalyptic: Diversity of approaches to regulation will be needed to handle the diversity of robots, and at the beginning of cloud robotics and full-scale data collection, we envision a pet robot dog that can beg its owner for an upgraded service subscription.
2012 A really fancy hammer with a gun: At the first We Robot, we try to answer the fundamental question: what difference do robots bring? Unsentimental engineer Bill Smart provides the title.