
Front Neuroinform. 2017 May 04;11:33. doi: 10.3389/fninf.2017.00033. eCollection 2017.

Automatic Optimization of the Computation Graph in the Nengo Neural Network Simulator.

Frontiers in Neuroinformatics

Jan Gosmann, Chris Eliasmith

Affiliations

  1. Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada.

PMID: 28522970 PMCID: PMC5415674 DOI: 10.3389/fninf.2017.00033

Abstract

One critical factor limiting the size of neural cognitive models is the time required to simulate such models. To reduce simulation time, specialized hardware is often used. However, such hardware can be costly, not readily available, or require specialized software implementations that are difficult to maintain. Here, we present an algorithm that optimizes the computational graph of the Nengo neural network simulator, allowing simulations to run more quickly on commodity hardware. This is achieved by merging identical operations into single operations and restructuring the accessed data into larger blocks of sequential memory. In this way, a speed-up of up to 6.8× is obtained. While this does not beat the specialized OpenCL implementation of Nengo, this optimization is available on any platform that can run Python. In contrast, the OpenCL implementation supports fewer platforms and can be difficult to install.
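The core idea can be illustrated with a minimal sketch. This is not the actual Nengo optimizer, and the names used here (run_unmerged, run_merged) are hypothetical; it only shows, under simplified assumptions, how many identical small operations on separate arrays can be replaced by one operation over a single contiguous block of memory.

    import numpy as np

    rng = np.random.default_rng(0)

    # Unoptimized: each small "operator" scales its own signal,
    # so the simulator performs many separate operations per time step.
    signals = [rng.standard_normal(50) for _ in range(1000)]
    gains = [rng.standard_normal(50) for _ in range(1000)]

    def run_unmerged():
        return [g * s for g, s in zip(gains, signals)]

    # Optimized: identical operators are merged and their data laid out
    # in one contiguous array, so a single vectorized operation replaces
    # the 1000 small ones and memory is accessed sequentially.
    merged_signals = np.concatenate(signals)
    merged_gains = np.concatenate(gains)

    def run_merged():
        return merged_gains * merged_signals

Timing the two functions (e.g., with the timeit module) typically shows the merged version running substantially faster; gains of this kind, applied automatically across the simulator's operator graph, are the source of the speed-up reported in the abstract.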

Keywords: Nengo; OpenCL; Python; computation graph; neural engineering framework; optimization
