Sequence learning in the Bayesian Confidence Propagation Neural Network

Abstract: This thesis examines sequence learning in the Bayesian Confidence Propagation Neural Network (BCPNN). The methodology used throughout this work is computational and analytical in nature, and the contributions presented here can be understood along the following four major themes: 1) This work starts by revisiting the properties of the BCPNN as an attractor neural network and then provides a novel formalization of some of those properties: first, a Bayesian theoretical framework for the lower bounds in the BCPNN; second, a differential formulation of the BCPNN plasticity rule that highlights its relationship to similar rules in the learning literature; and third, closed-form analytical results for the BCPNN training process. 2) This work then describes how the addition of an adaptation process to the BCPNN enables its sequence recall capabilities. The specific mechanisms of sequence learning are studied in detail, as are the properties of sequence recall, such as the persistence time (how long the network remains in a specific state during sequence recall) and its robustness to noise. 3) This work also shows how the BCPNN can be enhanced with memory traces of the activity (z-traces) to provide the network with disambiguation capabilities. 4) Finally, this work provides a computational study to quantify the number of sequences that the BCPNN can store successfully. Alongside these central themes, results concerning robustness, stability, and the relationship between the learned patterns and the input statistics are presented in either computational or analytical form. The thesis concludes with a discussion of the sequence learning capabilities of the BCPNN in the context of the wider literature and describes both its advantages and disadvantages with respect to other attractor neural networks.
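To make the ingredients named above concrete, the following is a minimal sketch (not taken from the thesis itself) of the standard BCPNN estimate, in which the weight between units i and j is the log ratio of their co-activation probability to the product of their marginal activation probabilities, w_ij = log(p_ij / (p_i p_j)), with bias b_i = log(p_i), together with an exponentially filtered activity trace of the kind the z-traces refer to. The epsilon floor, time constant, and pattern statistics are illustrative assumptions, and the eps floor merely stands in for the lower bounds the abstract mentions.

```python
import numpy as np

def bcpnn_weights(patterns, eps=1e-4):
    """Estimate BCPNN-style weights/biases from binary activity patterns.

    patterns: (n_patterns, n_units) array of 0/1 activations.
    eps is an illustrative floor that keeps the logarithms finite.
    """
    p = patterns.mean(axis=0)                          # marginals p_i
    p_joint = (patterns.T @ patterns) / len(patterns)  # co-activations p_ij
    p = np.clip(p, eps, 1.0)
    p_joint = np.clip(p_joint, eps ** 2, 1.0)
    w = np.log(p_joint / np.outer(p, p))               # w_ij = log(p_ij / p_i p_j)
    b = np.log(p)                                      # b_i = log(p_i)
    return w, b

def z_trace(activity, tau=5.0, dt=1.0):
    """Exponentially filtered activity (z-trace): dz/dt = (s - z) / tau."""
    z = np.zeros(activity.shape[1])
    out = []
    for s in activity:
        z += (dt / tau) * (s - z)
        out.append(z.copy())
    return np.array(out)

rng = np.random.default_rng(0)
pats = (rng.random((200, 8)) < 0.3).astype(float)  # sparse random patterns
w, b = bcpnn_weights(pats)
z = z_trace(pats)
print(w.shape, b.shape, z.shape)
```

Because p_ij is symmetric, the resulting weight matrix is symmetric as well; the adaptation and z-trace mechanisms the thesis studies are what break this symmetry in time and allow one attractor to hand activity over to the next during sequence recall.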

The full dissertation is available for download in PDF format.