Representation Learning with Restorative Autoencoders for Transfer Learning

Abstract

Deep Neural Networks (DNNs) have reached human-level performance in numerous computer vision tasks. DNNs are effective for both classification and the more complex task of image segmentation. These networks are typically trained on thousands of images, often hand-labelled by domain experts. This labelling bottleneck motivates a promising research area: training accurate segmentation networks with fewer labelled samples.

This thesis explores effective methods for learning deep representations from unlabelled images. We train a Restorative Autoencoder Network (RAN) to denoise synthetically corrupted images. The weights of the RAN are then fine-tuned on a labelled dataset from the same domain for image segmentation.
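The core idea above, corrupt images synthetically, fit a restorative model to undo the corruption, then reuse its weights, can be illustrated with a deliberately simplified sketch. The snippet below is not the thesis's actual RAN: a closed-form linear map stands in for the deep network, the corruption scheme (Gaussian noise plus dropped pixels) is an assumption, and all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(images, noise_std=0.3, drop_frac=0.2, rng=rng):
    """Synthetic corruption: additive Gaussian noise plus randomly zeroed pixels."""
    noisy = images + rng.normal(0.0, noise_std, images.shape)
    noisy[rng.random(images.shape) < drop_frac] = 0.0
    return noisy

# Toy "clean" data: 200 flattened 4x4 images lying on a 3-dimensional
# subspace, so a restorative mapping has structure to exploit.
n, d, k = 200, 16, 3
basis = rng.normal(size=(k, d))
clean = rng.normal(size=(n, k)) @ basis

corrupted = corrupt(clean)

# Stand-in for the restorative network: a single linear map fitted by
# ridge regression (closed form) to predict clean pixels from corrupted ones.
lam = 1e-3
W = np.linalg.solve(corrupted.T @ corrupted + lam * np.eye(d),
                    corrupted.T @ clean)
restored = corrupted @ W

mse_corrupted = np.mean((corrupted - clean) ** 2)
mse_restored = np.mean((restored - clean) ** 2)
print(f"corrupted MSE: {mse_corrupted:.3f}, restored MSE: {mse_restored:.3f}")
```

In the thesis's setting, the linear map would be replaced by a deep encoder-decoder trained on the same restoration objective, and the learned weights would then initialize a segmentation network for fine-tuning on labelled data.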

We evaluate our methods on three different segmentation datasets. Our experiments demonstrate that, with our methods, only a fraction of the labelled data is required to match the accuracy of a network trained on a large labelled dataset.

Author Keywords: deep learning, image segmentation, representation learning, transfer learning

    Item Description
    Contributors
    Creator (cre): Fichuk, Dexter Lamont
    Thesis advisor (ths): McConnell, Sabine
    Degree committee member (dgc): Hurley, Richard
    Degree granting institution (dgg): Trent University
    Date Issued
    2020
    Place Published
    Peterborough, ON
    Extent
    81 pages
    Rights
    Copyright is held by the author, with all rights reserved, unless otherwise noted.
    Local Identifier
    TC-OPET-10743
    Publisher
    Trent University
    Degree