Prof Bradley Love | Taming the neuroscience literature with predictive and explanatory models
- Location
- Gisbert Kapp N224, Hybrid Event, in person, Zoom - registration required
- Dates
- Tuesday 22 October 2024 (13:00-14:00)
- Contact
chbh@contacts.bham.ac.uk
This seminar is free to attend and is open to all, both within and outside the University. Attendance is possible both in person and on Zoom; details of Zoom registration and the physical location can be found above.
We are delighted to announce that the Centre for Human Brain Health (CHBH) will welcome Prof Bradley Love, Professor of Cognitive and Decision Sciences in Experimental Psychology at UCL, to present a hybrid CHBH Seminar, taking place on the date and time above. His full biography can be found below.
To arrange a 1:1 meeting with the speaker, please state your interest via the Zoom registration link above, or email chbh@contacts.bham.ac.uk.
CHBH Event Host
Dr Paul Muhle-Karbe
Abstract
Models can help scientists make sense of an exponentially growing literature. In the first part of the talk, I will discuss using models as predictive tools. In the BrainGPT.org project, we use large language models (LLMs) to order the scientific literature. On a benchmark, BrainBench, that involves predicting experimental results from methods, we find that LLMs exceed the capabilities of human experts. Because the confidence of LLMs is calibrated, they can team with neuroscientists to accelerate scientific discovery. In the second part of the talk, I focus on models that can provide explanations bridging behaviour and brain measures. Unlike predictive models, explanatory models can offer interpretations of key results. I'll discuss work suggesting that intuitive cell types (e.g., place, grid, and concept cells) are of limited scientific value and naturally arise in complex networks, including random networks. In this example, the explanatory model serves as a baseline that should be surpassed before strong scientific claims are made. I'll end by noting the complementary roles explanatory and predictive models play.
Speaker Biography
Brad Love is Professor of Cognitive and Decision Sciences at UCL and a fellow at the European Lab for Learning & Intelligent Systems (ELLIS). In the recent past, he focused on using brain measures to select between competing models of cognitive function, and on theory-driven analyses of naturalistic behaviour in large datasets. Currently, he focuses on deep learning and generative AI. One goal is to make deep learning and other complex models more human-like by aligning them with behaviour and brain response. A second goal, exemplified by the BrainGPT.org project, is to use large language models to accelerate scientific discovery.