Lixiao Huang, Jared Freeman, Nancy J. Cooke, Myke C. Cohen, Xiaoyun Yin, Jeska Clark, Matt Wood, Verica Buchanan, Christopher Corral, Federico Scholcover, Anagha Mudigonda, Lovein Thomas, Aaron Teo & John Colonna-Romano
Abstract
Artificial social intelligence (ASI) agents have great potential to aid the success of individuals, human–human teams, and human–artificial intelligence teams. To develop helpful ASI agents, we created an urban search and rescue task environment in Minecraft and evaluated the agents' ability to infer participants' knowledge training conditions and to predict which type of victim each participant would rescue next. We evaluated ASI agents' capabilities in three ways: (a) comparison to ground truth, that is, the actual knowledge training condition and participant actions; (b) comparison among different ASI agents; and (c) comparison to a human observer criterion, whose accuracy served as a reference point. Human observers used video data, and ASI agents used timestamped event messages from the testbed, to make inferences about the same participants on the same topic (knowledge training condition) and about the same instances of participant actions (rescue of victims). Overall, the ASI agents performed better than the human observers in inferring knowledge training conditions and predicting actions. Refining the human observer criterion can guide the design and evaluation of ASI agents for complex task environments and team composition.