Artificial intelligence is everywhere. Whether it’s Siri, Alexa or Google Assistant, people use AI agents every day to communicate, remember things and make everyday life more convenient.

Now, University of Arizona researchers are looking to take AI to the next level. Like something from a science-fiction movie, they are hoping to create an AI agent that can understand social cues and human interactions and use that information to help teams achieve their goals.

The project, funded by a $7.5 million grant from the Defense Advanced Research Projects Agency, is part of DARPA’s Artificial Social Intelligence for Successful Teams program, which seeks to develop AI systems that demonstrate basic machine social skills.

“The goal ... is to develop artificial intelligence with a ‘theory of mind,’ and create AI that is a good teammate to humans,” said Adarsh Pyarelal, a research scientist in the Machine Learning for Artificial Intelligence Lab in the UA School of Information. Pyarelal is the principal investigator for the project.

According to Pyarelal, while popular AI agents do a good job of finding information on the internet, they aren’t great at reading social cues. For example, Siri can’t tell the difference between someone yelling at her and someone speaking to her politely. With this new project, the goal is for the system to be capable of inferring the beliefs, desires and intentions of human beings.

“The thing that makes a human a good teammate is having a good sense of what other people on their team are thinking, so they can predict what their teammates are going to do with some level of certainty,” Pyarelal said. “We’re trying to build AI with some social awareness and this kind of theory of mind.”

The UA project, called Theory of Mind-Based Cognitive Architecture for Teams, or ToMCAT, will develop and test AI agents in a Minecraft video game environment where they will be paired with up to four human players. The AI agent will collaborate with the players to complete custom-designed missions, collecting information about the individual players as well as their interactions with other human teammates along the way.

Through the use of webcams and microphones, the AI agent will observe the human players’ facial expressions, eye movements and voices. Each player will also be connected to an electrocardiogram machine to monitor the electrical activity of their heart as well as a head cap that measures brain activity.
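As a rough illustration of what gathering those signals might look like in code, the Python sketch below polls a set of stand-in sensor objects and bundles one time-aligned reading per player into a single record. The `StubSensor` class, its `read()` method and the feature names are hypothetical placeholders for illustration, not details of the actual ToMCAT pipeline.

```python
import random
import time
from dataclasses import dataclass

# Stub sensors standing in for the real devices (webcam, microphone,
# electrocardiogram, and the brain-activity head cap). Each returns a
# placeholder value; a real pipeline would use device-specific SDKs.
class StubSensor:
    def read(self) -> float:
        return random.random()

@dataclass
class SensorSnapshot:
    timestamp: float
    facial_expression: float  # e.g., an expression score from the webcam
    gaze: float               # eye-movement estimate from the webcam
    voice_energy: float       # loudness/tone feature from the microphone
    heart_rate: float         # beats per minute derived from the ECG
    brain_activity: float     # summary signal from the head cap

def poll_player(webcam, mic, ecg, headcap) -> SensorSnapshot:
    """Collect one time-aligned reading from every device for one player."""
    return SensorSnapshot(
        timestamp=time.time(),
        facial_expression=webcam.read(),
        gaze=webcam.read(),
        voice_energy=mic.read(),
        heart_rate=60 + 60 * ecg.read(),  # map the stub value into a plausible BPM range
        brain_activity=headcap.read(),
    )

snapshot = poll_player(StubSensor(), StubSensor(), StubSensor(), StubSensor())
print(snapshot)
```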

The ToMCAT agent will initially observe the players’ interactions and then, after gathering enough information about the players and their social dynamics, will collaborate with them and even offer suggestions to help the team meet its goals.

“The AI agent is going to provide helpful suggestions or check in from time to time about the situation,” Pyarelal said. “So, everything comes full circle – the agent observes, it learns, and then, if needed, it can intervene to help the team. For example, if person A and person B don’t get along, the AI agent may suggest that maybe you don’t want to put them on the same team.”
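In code, the observe-learn-intervene cycle Pyarelal describes might be sketched as follows. The per-player beliefs, the pairwise rapport score and the intervention rule are all toy placeholders chosen for illustration, not the project’s actual models.

```python
import random

def run_team_session(players, num_steps=50, rapport_threshold=0.3):
    """A toy observe -> learn -> intervene loop with placeholder numbers."""
    # The agent's rudimentary "theory of mind": a belief estimate per player.
    beliefs = {p: {"stress": 0.5} for p in players}
    # Pairwise rapport estimates between teammates.
    rapport = {(a, b): 0.5 for i, a in enumerate(players) for b in players[i + 1:]}

    for step in range(num_steps):
        # 1. Observe: stand-in for real multimodal sensor input per player.
        observations = {p: random.random() for p in players}

        # 2. Learn: move each belief a small step toward the new observation,
        # and update rapport from how similarly paired players trend (toy rule).
        for p in players:
            beliefs[p]["stress"] += 0.2 * (observations[p] - beliefs[p]["stress"])
        for (a, b) in rapport:
            agreement = 1.0 - abs(observations[a] - observations[b])
            rapport[(a, b)] += 0.2 * (agreement - rapport[(a, b)])

        # 3. Intervene: surface a suggestion only when beliefs warrant it.
        for (a, b), score in rapport.items():
            if score < rapport_threshold:
                print(f"Step {step}: consider not pairing {a} and {b}.")

run_team_session(["Alice", "Bob", "Carol", "Dana"])
```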

The team hopes this research will inform the development of AI agents that are more interactive and can even assist in difficult or high-stress situations.

The ToMCAT project is a collaboration among the UA’s School of Information, Department of Computer Science and Department of Family Studies and Human Development. The UA research team is one of 12 teams selected to receive funding through the ASIST program.


Contact reporter Jasmine Demers at jdemers@tucson.com

On Twitter: @JasmineADemers