Automated tuning of large-scale neuronal models
An interdisciplinary group of researchers based primarily at Carnegie Mellon University and the University of Pittsburgh presents a novel framework, Spiking Network Optimization using Population Statistics, that quickly and accurately customizes models so their activity mimics what's observed in the brain.
Developing large-scale neural network models that mimic the brain's activity is a major goal in the field of computational neuroscience. Existing models that accurately reproduce aspects of brain activity are notoriously complex, and fine-tuning model parameters often requires significant time, intuition, and expertise. Newly published research from an interdisciplinary group of researchers based primarily at Carnegie Mellon University and the University of Pittsburgh presents a solution that mitigates some of these challenges. The machine learning-driven framework, Spiking Network Optimization using Population Statistics (SNOPS), can quickly and accurately customize models so that their activity mimics what's observed in the brain.
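In broad strokes, this kind of automated customization treats model fitting as an optimization loop: simulate the network, summarize its activity with population statistics, measure the distance to the same statistics computed from neural recordings, and let an optimizer propose new parameters. The sketch below illustrates that loop with a toy model; the generator, the parameter names, and the plain random search (SNOPS itself uses Bayesian optimization) are stand-ins for illustration, not the paper's implementation.

```python
import random

def simulate(params, n_neurons=30, n_trials=100, rng=None):
    """Toy stand-in for a spiking network model: given parameters,
    produce per-trial activity for a population of neurons. In SNOPS the
    model is a recurrent spiking network; here a simple generator with a
    shared trial-to-trial fluctuation stands in."""
    rng = rng or random.Random(0)
    rate, gain = params["rate"], params["noise_gain"]
    counts = []
    for _ in range(n_trials):
        shared = rng.gauss(0.0, gain)  # shared fluctuation inflates variability
        counts.append([max(0.0, rate + shared + rng.gauss(0.0, 1.0))
                       for _ in range(n_neurons)])
    return counts

def population_statistics(counts):
    """Summarize activity with population statistics, e.g. the mean firing
    rate and the Fano factor (trial-to-trial variance / mean per neuron)."""
    n_trials, n_neurons = len(counts), len(counts[0])
    means, fanos = [], []
    for j in range(n_neurons):
        col = [counts[i][j] for i in range(n_trials)]
        m = sum(col) / n_trials
        v = sum((x - m) ** 2 for x in col) / (n_trials - 1)
        means.append(m)
        fanos.append(v / m if m > 0 else 0.0)
    return {"rate": sum(means) / n_neurons,
            "fano": sum(fanos) / n_neurons}

def cost(params, target):
    """Squared distance between the model's statistics and the target
    statistics computed from recorded activity."""
    stats = population_statistics(simulate(params))
    return sum((stats[k] - target[k]) ** 2 for k in target)

def fit(target, n_iter=150, seed=1):
    """Search parameter space for the lowest-cost configuration. A plain
    random search keeps this sketch dependency-free; a sample-efficient
    optimizer matters in practice because each simulation is costly."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        cand = {"rate": rng.uniform(1.0, 20.0),
                "noise_gain": rng.uniform(0.0, 5.0)}
        c = cost(cand, target)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost
```

In a real application, the target statistics would come from neural recordings rather than a known ground-truth model, and the simulator would be an expensive spiking-network simulation, which is why sample-efficient search is central to making this automatic and fast.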
“One way for neuroscientists to understand how the brain works is by constructing mathematical models of the brain to reproduce its activity,” explained Shenghao Wu, a former graduate student in neural computation and machine learning at Carnegie Mellon. “To date, building such models has been a manual process that usually requires a lot of energy and domain expertise. Our SNOPS method is not only faster and more powerful, but it also finds a wider range of model configurations that are consistent with the brain’s activity, all automatically.”
Chengcheng Huang, an assistant professor of neuroscience and mathematics at the University of Pittsburgh whose background is in circuit modeling, elaborated, “Before SNOPS, when a model got complicated and we wanted to explain a more complex phenomenon, it was difficult to find the right parameters to analyze the model’s full behavior. SNOPS is a useful tool to accelerate our progress and, ultimately, develop more realistic models of the brain.”
Published in Nature Computational Science, the group’s work to develop SNOPS uniquely combined the efforts of experimentalists, data-driven computational scientists, and modelers. “We have very different backgrounds and ways of approaching things, and that’s in the spirit of the neuroscience we do at Carnegie Mellon,” said Matt Smith, professor of biomedical engineering and the Neuroscience Institute and co-director of the Center for the Neural Basis of Cognition. “I’m excited about the way Shenghao combined all of our skills to build SNOPS, and about how we can apply it to better understand how different parts of the brain work together.”
SNOPS is now available as open-source software, and going forward it can guide the development of network models, with the aim of enabling deeper insight into how networks of neurons give rise to brain function.
“We started with network models that have been widely used over decades. There were certain aspects of the brain’s activity that we could not get the models to reproduce, no matter how we tuned them,” added Byron Yu, professor of biomedical engineering and electrical and computer engineering at Carnegie Mellon University. “With SNOPS, we can quickly find a configuration that captures all the needed aspects of the brain’s activity. It gives us a lot of hope for putting together the big picture.”
The group’s work was supported by the National Institutes of Health and the Simons Foundation. Additional study authors include Adam Snyder, assistant professor at the University of Rochester, and Brent Doiron, professor at the University of Chicago.