Engineering Digital Neurons
The digital neuron is the fundamental building block of a digital, silicon-based brain. Regardless of whether the computing device is based on vector (GPGPU) technology, massively parallel computer technology, a RAM-based array (e.g., IBM TrueNorth), or FPGA technology, the sheer number of neurons to be simulated will demand a digital neuron that satisfies Einstein's maxim: it should be as simple as possible, but no simpler.
This talk will discuss a number of design options for digital neurons that implement the spiking integrate-and-fire model. For efficiency, design objectives include (1) the use of relatively short-precision numbers to represent potentials and conductances (synaptic weights), and (2) the use of a relatively small number of simple arithmetic and logic operations.
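To make these two objectives concrete, here is a minimal sketch of a discrete-time integrate-and-fire step using only small-integer arithmetic. All constants and names (threshold, leak, reset value) are illustrative assumptions, not values from the talk.

```python
THRESHOLD = 100   # firing threshold, within an 8-bit range (assumed value)
LEAK = 1          # integer leak subtracted each time step (assumed value)
V_RESET = 0       # membrane potential after a spike (assumed value)

def lif_step(potential, spikes, weights):
    """One time step of an integrate-and-fire neuron in short-precision
    integer arithmetic: integrate weighted input spikes, apply a leak,
    and test the threshold. Returns (new_potential, fired)."""
    # Integrate: add the synaptic weight of each input that spiked.
    for s, w in zip(spikes, weights):
        if s:
            potential += w
    # Leak toward rest, saturating at zero.
    potential = max(potential - LEAK, 0)
    # Fire and reset if the threshold is crossed.
    if potential >= THRESHOLD:
        return V_RESET, 1
    return potential, 0

# Usage: two coincident strong inputs push the neuron over threshold.
v, fired = lif_step(0, [1, 1, 0], [60, 50, 30])
print(v, fired)  # 0 1  (60 + 50 - 1 >= 100, so the neuron fires and resets)
```

Note that the only operations required are additions, a subtraction, and comparisons, so a design along these lines maps naturally onto simple fixed-point hardware.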
Informed by experimental data and guided by intuition, I will argue that delays (modeled at the dendrites) are a key part of a biologically accurate neuron, as is an output in which spike timing depends on the relative timing of input spikes and their associated synaptic weights. This neuron will be compared with two other candidate digital neurons: the one used by IBM in the TrueNorth project, and a digital version of a conventional perceptron-based neuron with a sigmoid output. It will be argued that these latter two digital neurons lack key elements of a biologically accurate neuron; i.e., they may not be capable of supporting the brain's computational paradigm. I am careful to say "may not" rather than "are not" because no one really knows how the brain computes. Consequently, these arguments are open to debate, which is a desired outcome of this workshop presentation. (Note, however, that this talk is not targeted at the larger question regarding the merits of digital neurons versus analog neurons.)
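The role of dendritic delays can be sketched by giving each synapse an integer delay, so that the output spike time depends on when inputs fire relative to one another. This is a hypothetical illustration, not the design presented in the talk; the delay and weight values are assumptions, and the leak is omitted for brevity.

```python
THRESHOLD = 100   # firing threshold (assumed value)

def first_spike_time(spike_times, weights, delays, n_steps):
    """Return the time step of the neuron's first output spike, or None.

    spike_times[i] is the step at which input i fires; its weight reaches
    the soma delays[i] steps later, so the output spike time depends on
    the relative timing of the inputs and their per-synapse delays.
    """
    potential = 0
    for t in range(n_steps):
        # Integrate every input whose delayed spike arrives at step t.
        for st, w, d in zip(spike_times, weights, delays):
            if st + d == t:
                potential += w
        if potential >= THRESHOLD:
            return t
    return None

# Inputs firing at steps 3 and 1 arrive together at the soma at step 5
# (3 + 2 and 1 + 4), so the neuron fires at step 5.
print(first_spike_time([3, 1], [60, 50], [2, 4], 10))  # 5
# Changing one dendritic delay shifts the output spike to step 7.
print(first_spike_time([3, 1], [60, 50], [4, 4], 10))  # 7
```

The point of the sketch is only that delay parameters, like weights, reshape the input-to-output timing relationship, which is the property argued to be missing from the TrueNorth and perceptron-style neurons.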