A Universal Neuromorphic Computing Substrate for Six Different Networks

In this study, a highly configurable neuromorphic computing substrate is presented and then used to emulate several types of neural networks. At the heart of the system lies a mixed-signal chip with analog implementations of neurons and synapses and digital transmission of action potentials. The article below describes the substrate and the six networks emulated on it.


By nature, computational neuroscience has a high demand for powerful and efficient devices for simulating neural network models. In contrast to conventional general-purpose machines based on the von Neumann architecture, neuromorphic systems are, in a rather broad sense, a class of devices that implement particular features of biological neural networks in their physical circuit layout.

Several considerations motivate the neuromorphic approach. The arguably most characteristic feature of neuromorphic devices is their inherent parallelism, enabled by the fact that individual neural network components are physically implemented in silicon. Because of this parallelism, scaling up emulated network models does not imply slowdown, as is usually the case for conventional machines. The hard upper bound on network size (given by the number of available components on the neuromorphic device) can be overcome by scaling the devices themselves, e.g., through wafer-scale integration or massively interconnected chips.

Hardware Emulation of Neural Networks

Six neural network models have been emulated on the Spikey chip. Many of the emulation results are compared to those obtained from software simulations to verify each network's functionality and performance, using the tools NEST (Gewaltig and Diesmann, 2007) or NEURON (Carnevale and Hines, 2006).
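
Below is a minimal sketch of how such a reference simulation might be set up. It assumes the PyNN interface with the NEST backend, which is one common way to drive NEST from a script; the article itself names only the simulators, so this choice, along with all population sizes and parameters, is an illustrative assumption.

```python
# Minimal reference-simulation sketch: a small conductance-based LIF
# population driven by Poisson noise. All sizes and parameters are
# illustrative, not taken from the study.
import pyNN.nest as sim

sim.setup(timestep=0.1)  # simulation step in ms

stim = sim.Population(20, sim.SpikeSourcePoisson(rate=20.0))  # background
neurons = sim.Population(10, sim.IF_cond_exp())               # LIF neurons

sim.Projection(stim, neurons, sim.FixedProbabilityConnector(0.5),
               synapse_type=sim.StaticSynapse(weight=0.004, delay=1.0),
               receptor_type='excitatory')

neurons.record('spikes')
sim.run(1000.0)  # ms

spiketrains = neurons.get_data().segments[0].spiketrains
print([len(st) for st in spiketrains])  # spike count per neuron
sim.end()
```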

  • Synfire Chain with Feedforward Inhibition: Architectures with feedforward connectivity have been employed extensively as computational components and as models for the study of neuronal dynamics. Synfire chains are feedforward networks consisting of several neuron groups, where each neuron in a group projects to neurons in the succeeding group (a connectivity sketch follows after this list).
  • Balanced Random Network: BRNs consist of an inhibitory and an excitatory population of neurons, both receiving feedforward connections from two populations of Poisson processes mimicking background activity. Both neuron populations are recurrently connected, including connections within each population. All connections are realized as random, sparse connections with probability p. Synaptic weights for inhibitory connections are chosen stronger than for excitatory ones to keep the network balanced. In contrast to the original implementation using 12,500 neurons, the network is scaled down by a factor of 100 while preserving its firing behavior (see the wiring sketch after this list).
  • Soft Winner-Take-All Network: Soft winner-take-all (sWTA) computation is widely regarded as an underlying principle in models of cortical processing (Grossberg, 1973; Maass, 2000; Itti and Koch, 2001; Douglas and Martin, 2004; Oster et al., 2009; Lundqvist et al., 2010). An sWTA network consists of a ring-shaped layer of recurrently connected excitatory neurons and a common pool of inhibitory neurons, following the implementation by Neftci et al. (a ring-and-pool sketch follows after this list).
  • Cortical Layer 2/3 Attractor Model: Attractor networks modeling working memory in the cerebral cortex have gained support from both computer simulations and experimental data over the past decades. From a structural perspective, the most significant feature of the Layer 2/3 attractor memory network is its modularity. Faithful to its biological archetype, it implements a set of cortical hypercolumns, which are subdivided into multiple minicolumns. Each minicolumn comprises three cell populations: excitatory pyramidal cells, inhibitory basket cells, and inhibitory RSNP (regular spiking non-pyramidal) cells (a structural sketch follows after this list).
  • Insect Antennal Lobe Model: The high acceleration factor of the Spikey chip makes it an attractive platform for neuromorphic data processing. Preprocessing of multivariate data is a common problem in signal and data analysis. In conventional computing, the reduction of correlation between input channels is often the first step in the analysis of multidimensional data, achieved, e.g., by principal component analysis. In insect olfactory systems, odors are first encoded into neuronal signals by receptor neurons (RNs), which are situated on the antenna (a numerical illustration of the decorrelation idea follows after this list).
  • Liquid State Machine: Liquid state machines (LSMs), as proposed by Maass et al. (2002) and Jaeger (2001), provide a generic framework for computation on continuous input streams. The liquid, a recurrent network, projects an input into a high-dimensional space, which is subsequently read out. The LSM comprises two major components: the recurrent liquid network itself and a spike-based classifier. A general-purpose liquid needs to satisfy the separation property, which requires that different inputs be mapped to different outputs, for a wide range of possible inputs. To this end, a network topology similar to the one proposed by Bill et al. (2010) is used (a toy LSM sketch follows after this list).
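
As referenced in the synfire chain item above, the following is a minimal connectivity sketch, again assuming PyNN with the NEST backend. The group count, population sizes, connection probabilities, weights, and delays are illustrative placeholders rather than the study's parameters.

```python
# Connectivity sketch of a synfire chain with feedforward inhibition,
# assuming PyNN with the NEST backend. All numbers are illustrative.
import pyNN.nest as sim

sim.setup(timestep=0.1)

n_groups, n_exc, n_inh = 5, 100, 25
groups = []
for _ in range(n_groups):
    exc = sim.Population(n_exc, sim.IF_cond_exp())
    inh = sim.Population(n_inh, sim.IF_cond_exp())
    # Feedforward inhibition: local inhibitory cells project onto the
    # excitatory cells of their own group.
    sim.Projection(inh, exc, sim.FixedProbabilityConnector(0.6),
                   synapse_type=sim.StaticSynapse(weight=0.01, delay=1.0),
                   receptor_type='inhibitory')
    groups.append((exc, inh))

# Feedforward chain: excitatory cells of each group drive both
# populations of the succeeding group.
for (exc_pre, _), (exc_post, inh_post) in zip(groups[:-1], groups[1:]):
    for target in (exc_post, inh_post):
        sim.Projection(exc_pre, target, sim.FixedProbabilityConnector(0.6),
                       synapse_type=sim.StaticSynapse(weight=0.005, delay=1.0),
                       receptor_type='excitatory')
```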
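For the balanced random network, the sketch below wires up the two recurrently connected populations and their Poisson background, under the same PyNN/NEST assumption. The sizes, the connection probability p, and the inhibition-to-excitation weight ratio g are invented for illustration, not the paper's values.

```python
# Balanced random network sketch, assuming PyNN with the NEST backend.
# Sizes, probability p, and weight ratio g are illustrative placeholders.
import pyNN.nest as sim

sim.setup(timestep=0.1)

n_exc, n_inh, p, g = 100, 25, 0.1, 5.0  # g: inhibitory/excitatory weight ratio
w_exc = 0.002

exc = sim.Population(n_exc, sim.IF_cond_exp())
inh = sim.Population(n_inh, sim.IF_cond_exp())
noise_e = sim.Population(n_exc, sim.SpikeSourcePoisson(rate=8.0))
noise_i = sim.Population(n_inh, sim.SpikeSourcePoisson(rate=8.0))

# Sparse, random recurrent connectivity, including within-population
# connections; inhibitory weights are a factor g stronger.
conn = sim.FixedProbabilityConnector(p)
for pre, post, w, rtype in [(exc, exc, w_exc, 'excitatory'),
                            (exc, inh, w_exc, 'excitatory'),
                            (inh, exc, g * w_exc, 'inhibitory'),
                            (inh, inh, g * w_exc, 'inhibitory')]:
    sim.Projection(pre, post, conn,
                   synapse_type=sim.StaticSynapse(weight=w, delay=1.0),
                   receptor_type=rtype)

# Feedforward Poisson background onto both populations.
sim.Projection(noise_e, exc, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=w_exc, delay=1.0))
sim.Projection(noise_i, inh, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=w_exc, delay=1.0))
```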
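The sWTA item above refers to this ring-and-pool sketch: short-range recurrent excitation along a ring of excitatory neurons, plus a shared inhibitory pool that mediates global competition. The neighborhood radius k and all weights are assumptions made for illustration.

```python
# sWTA sketch, assuming PyNN with the NEST backend: a ring of excitatory
# neurons with short-range recurrent excitation and a common inhibitory
# pool. All sizes and weights are illustrative.
import pyNN.nest as sim

sim.setup(timestep=0.1)

n_exc, n_inh, k = 50, 16, 3  # k: excitatory neighborhood radius on the ring
exc = sim.Population(n_exc, sim.IF_cond_exp())
inh = sim.Population(n_inh, sim.IF_cond_exp())

# Short-range excitation along the ring (wrap-around via modulo).
ring_pairs = [(i, (i + d) % n_exc) for i in range(n_exc)
              for d in range(-k, k + 1) if d != 0]
sim.Projection(exc, exc, sim.FromListConnector(ring_pairs),
               synapse_type=sim.StaticSynapse(weight=0.004, delay=1.0),
               receptor_type='excitatory')

# Global competition: the whole ring drives the inhibitory pool, which
# in turn inhibits the whole ring.
sim.Projection(exc, inh, sim.AllToAllConnector(),
               synapse_type=sim.StaticSynapse(weight=0.002, delay=1.0))
sim.Projection(inh, exc, sim.AllToAllConnector(),
               synapse_type=sim.StaticSynapse(weight=0.01, delay=1.0),
               receptor_type='inhibitory')
```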
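The structural sketch promised in the attractor model item shows only the modular skeleton: hypercolumns of minicolumns, each with its three cell populations, plus local recurrent excitation and basket-cell competition. Counts, probabilities, and weights are placeholders, and the long-range RSNP wiring between hypercolumns is omitted for brevity.

```python
# Structural sketch of the modular Layer 2/3 attractor network, assuming
# PyNN with the NEST backend. All numbers are illustrative placeholders.
import pyNN.nest as sim

sim.setup(timestep=0.1)

N_HC, N_MC = 2, 4                # hypercolumns, minicolumns per hypercolumn
N_PYR, N_BAS, N_RSNP = 30, 8, 2  # cells per minicolumn

hypercolumns = []
for _ in range(N_HC):
    minicolumns = []
    for _ in range(N_MC):
        mc = {
            'pyr': sim.Population(N_PYR, sim.IF_cond_exp()),    # pyramidal
            'bas': sim.Population(N_BAS, sim.IF_cond_exp()),    # basket
            'rsnp': sim.Population(N_RSNP, sim.IF_cond_exp()),  # RSNP
        }
        # Recurrent excitation within a minicolumn sustains an attractor.
        sim.Projection(mc['pyr'], mc['pyr'],
                       sim.FixedProbabilityConnector(0.25),
                       synapse_type=sim.StaticSynapse(weight=0.003, delay=1.0))
        minicolumns.append(mc)
    # Basket cells mediate competition between the minicolumns of a
    # hypercolumn: pyramidal cells excite them, and they inhibit the
    # pyramidal cells of all minicolumns in the same hypercolumn.
    for mc in minicolumns:
        for other in minicolumns:
            sim.Projection(mc['pyr'], other['bas'],
                           sim.FixedProbabilityConnector(0.2),
                           synapse_type=sim.StaticSynapse(weight=0.002, delay=1.0))
            sim.Projection(other['bas'], mc['pyr'],
                           sim.FixedProbabilityConnector(0.2),
                           synapse_type=sim.StaticSynapse(weight=0.008, delay=1.0),
                           receptor_type='inhibitory')
    hypercolumns.append(minicolumns)
```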
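The antennal lobe item points to the numerical illustration below. It is a pure-NumPy, rate-based demonstration of the decorrelation idea, not the paper's spiking implementation: lateral inhibition subtracts a scaled average of the other channels from each channel, reducing pairwise correlation. The inhibition strength q and the noise model are invented for the example.

```python
# Rate-based illustration of decorrelation by lateral inhibition.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 5, 10000

# Correlated "receptor neuron" rates: shared component + private noise.
shared = rng.normal(size=n_samples)
rates = shared + 0.5 * rng.normal(size=(n_channels, n_samples))

def mean_abs_corr(x):
    """Mean absolute off-diagonal correlation between channels."""
    c = np.corrcoef(x)
    return np.abs(c[~np.eye(len(c), dtype=bool)]).mean()

# Lateral inhibition: each channel is suppressed by the mean of the others.
q = 0.9  # inhibition strength (illustrative)
others = (rates.sum(axis=0) - rates) / (n_channels - 1)
decorrelated = rates - q * others

print('mean |corr| before:', mean_abs_corr(rates))
print('mean |corr| after :', mean_abs_corr(decorrelated))
```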
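Finally, the toy LSM sketch referenced above: a random recurrent "liquid" (here a simple rate network rather than the paper's spiking liquid) projects input streams into a high-dimensional state, and a linear least-squares readout is trained on the final state (the paper instead uses a spike-based classifier). Everything here is illustrative.

```python
# Toy liquid state machine: random recurrent rate network + linear readout.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_liquid, T = 2, 100, 50

W_in = rng.normal(scale=0.5, size=(n_liquid, n_in))
W_rec = rng.normal(scale=1.0 / np.sqrt(n_liquid), size=(n_liquid, n_liquid))

def liquid_state(u):
    """Run the liquid on input u of shape (T, n_in); return final state."""
    x = np.zeros(n_liquid)
    for t in range(len(u)):
        x = np.tanh(W_in @ u[t] + W_rec @ x)
    return x

# Two input classes: rising vs. falling ramps plus noise.
def sample(label):
    ramp = np.linspace(0, 1, T) if label == 1 else np.linspace(1, 0, T)
    return np.stack([ramp, rng.normal(scale=0.1, size=T)], axis=1)

X = np.array([liquid_state(sample(label)) for label in (0, 1) * 50])
y = np.array([0, 1] * 50)

# Linear readout trained by least squares; different inputs must land on
# separable liquid states (the separation property) for this to work.
w, *_ = np.linalg.lstsq(X, y - 0.5, rcond=None)
accuracy = ((X @ w > 0) == (y == 1)).mean()
print('training accuracy:', accuracy)
```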
