Artificial Intelligence Meets Neuroscience


April 19 @ 9:30 am - 12:30 pm

Free

Join us for an amazing series of talks on state-of-the-art research at the intersection of artificial intelligence and neuroscience at MIT, presented for the general public as part of the Cambridge Science Festival 2019.


Talks:

Time: 9:30 – 10:00

Talk: Artificial intelligence and its role in increasing our understanding of the brain

Speaker: Dr. Omar Costilla Reyes – Miller Lab, Brain and Cognitive Sciences, MIT

Abstract: Over the past few years, we have seen an impressive explosion in the development and deployment of artificial intelligence systems to solve tasks such as image recognition and autonomous driving. These computational advances are also contributing to our understanding of the brain. In this talk, I will present an overview of advances in artificial intelligence systems applied to neuroscience. I will then explain the computational building blocks that are still missing before we can make substantial progress in our understanding of the brain.


Time: 10:00 – 10:30

Talk: Steady As She Knows: Invariant Representations of Facial Emotion and Identity

Speaker: Kathryn C O’Nell – Saxe Lab, Brain and Cognitive Sciences, MIT

Abstract: Every day, your brain works constantly to help you perform the tasks vital to your life. Some are obvious and take conscious effort, like speaking and moving. However, there are also tons of computations going on inside your brain that often go unnoticed. I work on one such easily ignorable computation. You can, with relative ease, recognize whether someone else is happy or sad based on their facial expression, regardless of their identity or the angle from which you’re viewing their face. This means that your internal representation of emotion is invariant: it’s stable even when other aspects of the face you’re looking at change. In this talk, I’ll discuss how I use artificial intelligence to study how the brain creates invariant representations of facial emotions.


Time: 10:30 – 11:00

Talk: How does the brain make a prediction about the world?

Speaker: Dr. Andre Bastos – Miller Lab, Brain and Cognitive Sciences, MIT

Abstract: Every brain has to separate activity that is externally generated (the stuff happening “out there”) from internally generated activity. Sensory feedforward pathways carry information up the brain’s processing chain, while internal information is thought to be carried by feedback pathways. Machine learning has made great strides in understanding and implementing the feedforward stream but is only beginning to consider internal information and feedback. This internal activity represents our plans, goals, attentional state, and expectations, and it allows us to form useful predictions about the world that help guide perception and action. In this talk, I will introduce these concepts and discuss our current neuroscientific understanding of this internal activity. I will also discuss how machine learning and AI are beginning to tap into and model not only feedforward but also feedback processing.


Time: 11:00 – 11:30

Talk: The role of symbols in the mind: two perspectives on Artificial Intelligence

Speaker: Andres Campero – Tenenbaum Lab, Brain and Cognitive Sciences, MIT

Abstract: Two perspectives on the mind have long existed, with parallels in both Artificial Intelligence and Cognitive Science. The first is related to logic and is based on explicit symbols. The second is more like intuition, where learning happens not through deduction but through repetition. We will revisit this debate and then discuss a contemporary research direction that tries to combine the two.


Time: 11:30 – 12:00

Talk: Where in the brain are memories born?

Speaker: Dr. Diego Mendoza-Halliday – Desimone Lab, Brain and Cognitive Sciences, MIT

Abstract: Information first enters our brains via the sensory organs. The brain selects a fraction of all incoming sensory information for storage in short-term memory and eventually in long-term memory. We currently know that the brain processes sensory information in a series of stages along a highway of interconnected regions. However, how and in which stage this information is first transformed into memories has remained unclear. In this talk, I will summarize some of our recent studies, which have helped answer these questions, showing where in the brain short-term memories are first born. These findings provide insights into the general principles of functional organization in the brain.


Time: 12:00 – 12:30

Talk: Using AI to understand how brain regions “talk” to each other

Speaker: Mengting Fang – Anzellotti Lab, Department of Psychology, Boston College

Abstract: Most cognitive tasks are not completed by a single brain region, but by many regions working together. Thanks to their coordinated responses, we can interact with the complex world around us. How do different brain regions cooperate to make human behavior possible? New AI tools let us study this question in more powerful ways. I will talk about how artificial neural networks can help us understand the “language” that brain regions use to communicate with each other, and show examples of how brain regions that respond to faces and scenes interact with the rest of the brain (measured while participants watched Forrest Gump!). I will discuss some difficulties scientists encounter in this work, and some new directions for the future.

Cost: Free. Drop in.


Venue

Singleton Auditorium 46-3002
43 Vassar St
Cambridge, Massachusetts