SPAF-network with Saturating Pretraining Neurons

Abstract

In this work, various aspects of neural networks pre-trained with denoising autoencoders (DAE) are explored. To saturate neurons more quickly during feature learning in the DAE, an activation function that offers higher gradients is introduced. The application of sparsity functions to the hidden-layer representations is also studied. Most importantly, a technique that swaps the activation functions of a fully trained DAE for logistic functions is investigated; networks trained with this technique are referred to as SPAF-networks. For evaluation, the popular MNIST dataset as well as all three sub-datasets of the Chars74k dataset are used for classification. The SPAF-network is also analyzed for the features it learns with a logistic, a ReLU, and a custom activation function. Lastly, a roadmap of future enhancements to the SPAF-network is proposed.
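
As a rough illustration of the SPAF idea sketched in the abstract, the following minimal PyTorch example pretrains a denoising autoencoder with a stand-in high-gradient saturating activation and then swaps that activation for the logistic function. The activation, network sizes, noise level, and data below are hypothetical placeholders, not the thesis's actual choices.

    # Minimal, hypothetical sketch of the SPAF swap; spaf_act is a
    # stand-in for the thesis's custom high-gradient activation.
    import torch
    import torch.nn as nn

    def spaf_act(x):
        # Steeper tanh: saturates sooner and offers larger gradients
        # near the origin (illustrative only).
        return torch.tanh(2.0 * x)

    class DAE(nn.Module):
        def __init__(self, n_in=784, n_hid=256, act=spaf_act):
            super().__init__()
            self.enc = nn.Linear(n_in, n_hid)
            self.dec = nn.Linear(n_hid, n_in)
            self.act = act  # kept swappable: this enables the SPAF step

        def forward(self, x, noise_std=0.3):
            # Denoising autoencoder: corrupt the input, reconstruct the original.
            x_noisy = x + noise_std * torch.randn_like(x)
            h = self.act(self.enc(x_noisy))
            return torch.sigmoid(self.dec(h))

    # 1) Pretrain with the saturating activation (one illustrative step
    #    on a dummy batch standing in for MNIST images).
    model = DAE()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(32, 784)
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()

    # 2) SPAF swap: replace the activation with the logistic function
    #    before supervised fine-tuning of the pretrained weights.
    model.act = torch.sigmoid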

Author Keywords: Artificial Neural Network, AutoEncoder, Machine Learning, Neural Networks, SPAF network, Unsupervised Learning

    Item Description
    Type
    Thesis
    Contributors
    Creator (cre): Burhani, Hasham
    Thesis advisor (ths): Feng, Wenying
    Degree committee member (dgc): Hurley, Richard
    Degree committee member (dgc): Abdella, Kenzu
    Degree granting institution (dgg): Trent University
    Date Issued
    2016
    Place Published
    Peterborough, ON
    Language
    English
    Extent
    144 pages
    Rights
    Copyright is held by the author, with all rights reserved, unless otherwise noted.
    Local Identifier
    TC-OPET-10315
    Publisher
    Trent University
    Degree