PUBlic Professor Series | Dr. Matthew Tata

How to Talk to Your Robot: Using Cognitive Neuroscience to Make Robots That Can Hear

Dr. Matthew Tata, Department of Neuroscience (CCBN)

Robotics is a transformative technology, and within the coming decade we’ll likely have smart and helpful machines that exist alongside us at home and in the workplace. Those machines should, quite literally, do what we tell them to do. That is, we should be able to communicate our goals to robotic systems in the same way we interact with other people; we should be able to talk to robots. However, there’s a significant problem: most robots can’t hear.

Although our sense of hearing often feels subjectively secondary to our sense of vision, we use sound continuously to make sense of our world, even when our other senses fail us. For example, we can hear around corners, but we can’t see around corners. Sound conveys critical information from which we can often decode the identity, location, and intention of the people around us, and an impairment of hearing can be a severe disability. Indeed, people go to great lengths to restore or replace the functions of hearing, from cochlear implants to learning to talk with their hands. Robots have a different, entirely computational problem: they need advanced artificial intelligence software to understand the auditory world.

My Cognitive Robotics Lab in the Department of Neuroscience at the University of Lethbridge has been developing auditory AI for the past five years, and this talk will explore how we solve some of the problems that face all hearing systems, whether biological or machine. Auditory AI needs to solve these computational problems quickly and efficiently, so we turn to the human brain for inspiration in developing our algorithms. By studying how we localize sounds, understand speech, and focus our auditory attention, we not only achieve a better understanding of how the human brain works, but we can also translate these discoveries into algorithms for robots so they can behave more naturally in the auditory world.
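To give a flavour of the kind of problem such an algorithm must solve, the sketch below estimates where a sound is coming from using the time difference between two microphones, loosely analogous to the interaural time differences the brain exploits. It is purely illustrative and is not the lab’s actual method: the microphone spacing, sample rate, and simple cross-correlation approach are assumptions made for this example.

```python
# Illustrative sketch: estimate a sound source's direction from the lag
# between two microphones (an "interaural time difference" style cue).
# Assumed values, not taken from the talk: 0.2 m mic spacing, 16 kHz sampling.

import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, approximate speed of sound in air
MIC_SPACING = 0.2        # metres between the two microphones (assumed)
SAMPLE_RATE = 16_000     # samples per second (assumed)

def estimate_azimuth(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate the source azimuth in degrees from a stereo snippet.

    Cross-correlate the two channels to find the lag at which they best
    align, convert that lag to a time difference, and map the time
    difference to an arrival angle using the microphone spacing.
    """
    corr = np.correlate(left, right, mode="full")
    lag_samples = np.argmax(corr) - (len(right) - 1)

    delay = lag_samples / SAMPLE_RATE
    sin_theta = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

if __name__ == "__main__":
    # Simulate a noise burst that reaches the right microphone 5 samples
    # before the left microphone, then recover the implied angle.
    rng = np.random.default_rng(0)
    source = rng.standard_normal(SAMPLE_RATE)  # 1 second of broadband noise
    left = np.roll(source, 5)                  # left channel lags by 5 samples
    right = source
    print(f"Estimated azimuth: {estimate_azimuth(left, right):.1f} degrees")
```

A real robot, of course, must cope with reverberation, background noise, and several voices at once, which is where the more sophisticated, brain-inspired processing discussed in the talk comes in.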
