Recording

https://www.bilibili.com/video/BV14i4y1e7Lc/?vd_source=8b926cc5cb9e7d8fb85957e534d96e47

Speaker

Michael J. Frank

Bio

Michael J. Frank is the Edgar L. Marston Professor of Cognitive, Linguistic & Psychological Sciences at Brown University. He directs the Center for Computational Brain Science within the Carney Institute for Brain Science. He received his PhD in Neuroscience and Psychology in 2004 from the University of Colorado, following undergraduate and master’s degrees in electrical engineering. Frank’s work focuses primarily on theoretical models of frontostriatal circuits and their modulation by dopamine, especially their cognitive functions and their implications for neurological and psychiatric disorders. The models are tested and refined with experiments across species, neural recording methods, and neuromodulation. His honors include the Troland Research Award from the National Academy of Sciences (2021), election as a Kavli Fellow (2016), the Cognitive Neuroscience Society Young Investigator Award (2011), and the Janet T. Spence Award for early-career transformative contributions (Association for Psychological Science, 2010). Dr. Frank is a senior editor for eLife.

Abstract

Humans are remarkably adept at generalizing knowledge across experiences in a way that remains difficult for computers. Previous computational models and data suggest that rather than learning about each individual context separately, humans build latent abstract structures and learn to link these structures to arbitrary contexts, which facilitates generalization at a cost to the efficiency of initial learning. In these models, task structures that are more popular across contexts are more likely to be reused in new contexts. Neural signatures of such structure learning predict, across individuals, the ability to transfer knowledge to new situations. However, in these models structures are either reused as a whole or created from scratch, precluding generalization of the constituent parts of learned structures. This contrasts with ecological settings, where task structures can be decomposed into constituent parts and reused compositionally. Moreover, in many situations people can transfer learned structures to entirely new situations by analogy, even when surface aspects of the transition and reward functions change. I will present novel computational models across levels of analysis (from neural networks to Bayesian formulations) that address how artificial agents and humans can learn and generalize such abstract and compositional structure.
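The claim that "task structures that are more popular across contexts are more likely to be reused in new contexts" is the signature of a Chinese Restaurant Process (CRP) style prior over latent structures, of the kind used in earlier latent-structure models in this line of work. Below is a minimal sketch of that reuse rule, assuming a simple count-based CRP formulation; the function name, the concentration parameter `alpha`, and the example counts are illustrative, not taken from the talk.

```python
import random

def sample_structure(context_counts, alpha=1.0):
    """Sample a task structure for a new context under a CRP prior.

    An existing structure is reused with probability proportional to
    the number of contexts already using it (popularity), and a fresh
    structure is created with probability proportional to alpha.

    context_counts: dict mapping structure id -> number of contexts
    currently assigned to it. Returns a structure id (a new integer
    id when a new structure is created from scratch).
    """
    total = sum(context_counts.values()) + alpha
    r = random.uniform(0.0, total)
    cumulative = 0.0
    for structure, count in context_counts.items():
        cumulative += count
        if r < cumulative:
            return structure  # reuse a popular existing structure
    # Otherwise, build a brand-new structure for this context.
    return max(context_counts, default=-1) + 1

# Example: three contexts share structure 0, one context uses structure 1,
# so a fourth context is most likely to reuse structure 0.
counts = {0: 3, 1: 1}
chosen = sample_structure(counts, alpha=0.5)
counts[chosen] = counts.get(chosen, 0) + 1
print(counts)
```

Note how this prior captures the trade-off stated in the abstract: reuse makes generalization to new contexts cheap, but the structure is only ever reused as a whole, which is exactly the limitation on compositional transfer that the talk's new models aim to address.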