Seminars & Colloquia
Gordon Briggs
U.S. Naval Research Laboratory
"Ongoing Challenges in Explaining Command Rejection Decisions by Autonomous Agents"
Wednesday, April 26, 2023, 10:30 AM
Location: Room 3211, EB2, NCSU Centennial Campus
Abstract
A key capability for language-enabled autonomous agents is the ability to decide when it is appropriate to reject a received directive and to determine how to communicate an explanation of this decision (e.g., “I can’t turn left at the light; the road has been closed”). As autonomous robotic agents become more pervasive, situations will arise that necessitate the rejection of a command issued by a human operator (Briggs and Scheutz, 2017; Coman and Aha, 2018; Briggs et al., 2022). While the notion of an autonomous agent intentionally disobeying a command from a human operator is a controversial one, we argue it is necessary from both a practical and an ethical standpoint (e.g., Briggs and Scheutz, 2017). Early work on enabling language-enabled robotic agents to appropriately reject commands and provide explanations focused on grounding these decisions and explanations in multiple felicity (i.e., acceptance) conditions, including knowledge, capacity, goal priority, social role, and normative permissibility (Briggs and Scheutz, 2015). In this talk, we discuss this prior work and its limitations, highlighting ongoing challenges in enabling autonomous agents to appropriately communicate about command rejections. These challenges include undesirable pragmatic implicatures (e.g., Jackson and Williams, 2022), sociolinguistic miscalibration of responses to norm violations (Jackson et al., 2019), and possible overgeneralization of rejection scope. We discuss the negative effects these challenges may have on human-robot interaction quality and present possible solutions.
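To make the felicity-condition approach concrete, here is a minimal Python sketch (not code from the talk or from Briggs and Scheutz, 2015): the five condition names follow the abstract, but the Directive structure, the predicates, and the explanation templates are hypothetical placeholders. The agent checks each acceptance condition in turn and grounds its rejection explanation in the first condition that fails.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Directive:
    """A received command (hypothetical structure for illustration)."""
    action: str   # e.g., "turn left at the light"
    speaker: str  # who issued the command

# A felicity (acceptance) condition: a named check plus an explanation
# template used if the check fails. The five condition names follow
# Briggs and Scheutz (2015); the checks themselves are stand-ins.
Check = Callable[[Directive], bool]

def evaluate(directive: Directive,
             conditions: list[tuple[str, Check, str]]) -> Optional[str]:
    """Return None to accept the directive, or a rejection explanation
    grounded in the first felicity condition that fails."""
    for _name, holds, explanation in conditions:
        if not holds(directive):
            return f"I can't {directive.action}; {explanation}."
    return None  # all conditions hold: execute the directive

# Illustrative placeholder predicates; here a 'capacity' failure
# reproduces the kind of explanation quoted in the abstract.
CONDITIONS = [
    ("knowledge", lambda d: True, "I don't know how to do that"),
    ("capacity", lambda d: d.action != "turn left at the light",
     "the road has been closed"),
    ("goal priority", lambda d: True,
     "it conflicts with a higher-priority goal"),
    ("social role", lambda d: d.speaker == "operator",
     "you aren't authorized to ask that"),
    ("normative permissibility", lambda d: True,
     "it would violate a norm"),
]

print(evaluate(Directive("turn left at the light", "operator"), CONDITIONS))
# -> I can't turn left at the light; the road has been closed.
```

Under a scheme like this, the phrasing of the returned explanation is precisely where the challenges above arise: a rejection can carry unintended pragmatic implicatures or suggest a broader rejection scope than intended.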
References
Briggs, G. M., & Scheutz, M. (2015). “Sorry, I can’t do that”: Developing mechanisms to appropriately reject
directives in human-robot interactions. In 2015 AAAI Fall Symposium Series.
Briggs, G., & Scheutz, M. (2017). The case for robot disobedience. Scientific American, 316(1), 44-47.
Briggs, G., Williams, T., Jackson, R. B., & Scheutz, M. (2022). Why and how robots should say ‘no’. International
Journal of Social Robotics, 14(2), 323-339.
Coman, A., & Aha, D. W. (2018). AI rebel agents. AI Magazine, 39(3), 16-26.
Jackson, R. B., Wen, R., & Williams, T. (2019). Tact in noncompliance: The need for pragmatically apt responses to
unethical commands. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 499-505).
Jackson, R. B., & Williams, T. (2022). Enabling morally sensitive robotic clarification requests. ACM Transactions
on Human-Robot Interaction, 11(2), 1-18.
Dr. Briggs’ research involves developing a wide range of computational mechanisms and cognitive models that enable artificial agents to perceive their environment, reason, and communicate with people in human-like ways, both to facilitate human-machine interaction and to improve understanding of human perceptual, reasoning, and language faculties. His research also involves validating these models and novel human-robot interaction mechanisms through human-subject studies. This work has spanned topics such as natural language generation and pragmatics, human-robot dialogue, causal reasoning, numerical perception, robotic gaze behavior, and robot ethics.
Some of Dr. Briggs’ work on robot command rejection has received significant attention from both the research community and popular media. He co-wrote an invited contribution to Scientific American outlining this work, entitled “The Case for Robot Disobedience: How and Why Robots Should Say ‘No.’” Dr. Briggs is also a 2018 Laboratory University Collaborative Initiative (LUCI) Fellow, acting as a co-PI on a joint NRL/Army Research Laboratory project investigating how robots should ask questions to learn about the world through situated dialogue.
Host: John-Paul Ore, CSC