Building Convolutional Neural Networks with Tensorflow

In the past year I have worked with Deep Learning techniques, and I would like to share with you how to build and train a Convolutional Neural Network from scratch, using Tensorflow. Later on we can use this knowledge as a building block for interesting Deep Learning applications.

The contents of this blog post are as follows:

  1. Tensorflow basics:
    • 1.1 Constants and Variables
    • 1.2 Tensorflow Graphs and Sessions
    • 1.3 Placeholders and feed_dicts
  2. Neural Networks in Tensorflow
    • 2.1 Introduction
    • 2.2 Loading in the data
    • 2.3 Creating a (simple) 1-layer Neural Network
    • 2.4 The many faces of Tensorflow
    • 2.5 Creating the LeNet5 CNN
    • 2.6 How the parameters affect the output size of a layer
    • 2.7 Adjusting the LeNet5 architecture
    • 2.8 Impact of Learning Rate and Optimizer
  3. Deep Neural Networks in Tensorflow
    • 3.1 AlexNet
    • 3.2 VGG Net-16
    • 3.3 AlexNet Performance
  4. Final words

 

1. Tensorflow basics:

Here I will give a short introduction to Tensorflow for people who have never worked with it before. If you want to start building Neural Networks immediately, or you are already familiar with Tensorflow, you can go ahead and skip to section 2. If you would like to know more about Tensorflow, you can also have a look at this repository, or the notes of lecture 1 and lecture 2 of Stanford’s CS20SI course.

1.1 Constants and Variables

The most basic units within Tensorflow are Constants, Variables and Placeholders.

The difference between a tf.constant() and a tf.Variable() should be clear: a constant has a fixed value, and once you set it, it cannot be changed. The value of a Variable can be changed after it has been set, but its type and shape cannot be changed.