August 27, 2010 @ 11:30 : Jason Rolfe

Institute of Neuroinformatics
University of Zurich - ETH Zurich

Intrinsic gradient networks: A novel class of biologically plausible recurrent neural networks

Artificial neural networks are computationally powerful and exhibit brain-like dynamics. Unfortunately, the gradient-dependent learning algorithms used to train them are biologically implausible, because the calculation of the gradient in a traditional artificial neural network requires a complementary network of fast training signals that are dependent upon, but must not affect, the primary network activity. By contrast, the network of neurons in the cortex is highly recurrent and does not support such segregated groups of signals. We address this biological implausibility by introducing a novel class of recurrent neural networks, intrinsic gradient networks, in which the gradient of an error function with respect to the parameters is a simple function of the network state after convergence. These networks can be trained using only their intrinsic (local) signals, much like the network of neurons in the brain. We derive a simple equation that characterizes intrinsic gradient networks, and construct a broad set of networks that satisfy this characteristic equation. In particular, we construct intrinsic gradient networks with dynamics reminiscent of loopy belief propagation and hierarchical (deep) artificial neural networks. We also identify a broad subset of highly recurrent intrinsic gradient networks in which gradient descent corresponds to a nearly Hebbian synaptic weight update, and a number of other biologically motivated constraints are satisfied. Intrinsic gradient networks thus reconcile the computational power of gradient descent training on neural networks with biologically plausible learning.
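To illustrate the central claim schematically (this is a hypothetical sketch, not the construction presented in the talk): let x* denote the network state after the recurrent dynamics converge, E the error function, and W_ij the synaptic weight from unit j to unit i. In an intrinsic gradient network, the gradient is expressible directly in terms of x*, so the weight update needs only signals local to the synapse. The specific Hebbian form on the right is an assumed example of what "nearly Hebbian" could look like:

\[
  \Delta W_{ij} \;\propto\; -\frac{\partial E}{\partial W_{ij}} \;=\; g_{ij}(x^{*}),
  \qquad\text{e.g.}\qquad
  \Delta W_{ij} \;\propto\; x_i^{*}\, x_j^{*} .
\]

By contrast, backpropagation computes \(\partial E / \partial W_{ij}\) from separately propagated error signals rather than from the converged activity itself, which is the biologically implausible ingredient the talk aims to remove.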
