Recording

https://www.bilibili.com/video/BV1G34y1g7Q6/?spm_id_from=333.999.0.0&vd_source=e9626f9767e6e22ece9d765f34ba01c5

Speaker

Matt Nassar

Bio

Matt Nassar is an Assistant Professor in the Department of Neuroscience at Brown University. He received his BA at Colgate University and his Doctorate from the University of Pennsylvania. He completed post-doctoral training at the University of Pennsylvania and Brown University before joining the faculty at Brown. His research examines how the brain prioritizes, segregates, and combines information collected in complex environments, and how this process differs across individuals, pathologies, and healthy aging. For example, why and how do people prioritize sensory information arriving at certain times or locations? How does this prioritization differ across individuals and change across healthy aging? How does the internal state of the brain affect ongoing cognition and sensory processing? What functions might these dynamic fluctuations serve in the real world?

Abstract

People flexibly adjust their use of information according to context. The same piece of information, for example the unexpected outcome of an action, might be highly influential on future behavior in one situation but utterly ignored in another. Bayesian models have provided insight into why people display this sort of behavior and have even identified potential neural mechanisms linked to behavior in specific tasks and environments, but to date they have fallen short of providing broader mechanistic insights that generalize across tasks or statistical environments. Here I'll examine the possibility that such broader insights might be gained through careful consideration of task structure. I'll show that a large number of sequential tasks can be thought of as posing the same inference problem, namely inferring the latent states of the world and the parameters of those states, with the primary distinctions within this class defined by transition structure. Then I'll talk about how a neural network that updates latent states according to a known transition structure and learns "parameters" of the world for each latent state can explain adaptive learning behavior across environments and provide the first insights into the neural correlates of adaptive learning across them. This model generates internal signals that identify the need for latent state updating, and these signals map onto previous observations of pupil dilation and P300 responses across different task environments. I will also discuss an experiment that we are currently setting up to test the idea that these signals reflect a latent state update signal, with a focus on relationships to learning and perception. Finally, I'll discuss how deviations from normative structure learning might give rise to aberrant belief updating in mental illness.
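
To make the class of problems described in the abstract concrete, here is a minimal sketch (not the speaker's actual model) of inference over a discrete set of latent states with a known transition matrix, where each state's "parameters" are learned online. The number of states, the transition probabilities, the Gaussian observation model, and all variable names are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states = 3
# Assumed transition structure: mostly self-transitions (a stable environment).
T = np.full((n_states, n_states), 0.05 / (n_states - 1))
np.fill_diagonal(T, 0.95)

obs_sd = 1.0   # assumed observation noise
lr = 0.1       # learning rate for per-state parameter updates
mu = np.zeros(n_states)                 # learned mean ("parameter") of each latent state
belief = np.ones(n_states) / n_states   # posterior over latent states

def step(x, belief, mu):
    """One trial: propagate beliefs through the transition structure,
    weigh them by the likelihood of the new observation, and update each
    state's parameter in proportion to its posterior probability."""
    prior = T.T @ belief                            # predicted state probabilities
    lik = np.exp(-0.5 * ((x - mu) / obs_sd) ** 2)   # Gaussian likelihood per state
    post = prior * lik
    post /= post.sum()
    # Surprise-like quantity: how much the state posterior shifted this trial.
    state_update = np.abs(post - prior).sum()
    # Delta-rule parameter learning, gated by the state posterior.
    mu = mu + lr * post * (x - mu)
    return post, mu, state_update

# Example: observations whose generating mean shifts partway through.
for t in range(40):
    x = rng.normal(0.0 if t < 20 else 5.0, obs_sd)
    belief, mu, update = step(x, belief, mu)
```

In this sketch, the `state_update` quantity stands in, loosely, for the kind of internal signal the abstract describes as flagging the need for latent state updating.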