Part 1: Insight into Convolutional Neural Networks
Qi Wang
Introduction
Have you ever wondered how computers can recognize images? Image recognition, object detection, semantic segmentation, and neural style transfer are just a few of the tasks that can be accomplished with Convolutional Neural Networks (CNNs). Many of these tasks are central to autonomous vehicles, among a variety of other applications. In this post, I will go through the intuition behind CNNs and how they work. It will help if you already have some Neural Network knowledge coming in.
The link to the paper on CNNs is attached here.
Why CNNs
For those who are familiar with regular Neural Networks, you might be wondering why we can't do image recognition tasks with Deep Neural Networks. The answer is that you can, but there are some disadvantages. Take a small image, say $64 \times 64$ pixels, for example. Assume there are three channels representing each pixel: R, G, and B. This means there are $64 \times 64 \times 3 = 12288$ input parameters. At first glance, this may seem like a fairly low number, but such an image has an extremely low resolution. If you were using higher-resolution images such as $1000 \times 1000$, you would find that you have $1000 \times 1000 \times 3 = 3000000$ input parameters. With this many parameters in the input layer, the network becomes far too big, resulting in extremely slow training and low accuracy.
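To make these counts concrete, here is a quick back-of-the-envelope calculation, assuming the illustrative $64 \times 64$ and $1000 \times 1000$ sizes used above:

```python
# Input values a fully connected network would see per image:
# width * height * channels.
small_inputs = 64 * 64 * 3          # 12,288 input values
large_inputs = 1000 * 1000 * 3      # 3,000,000 input values

# A single fully connected hidden layer with 1,000 units on the larger
# image would already require ~3 billion weights.
first_layer_weights = large_inputs * 1000

print(small_inputs, large_inputs, first_layer_weights)
# 12288 3000000 3000000000
```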
Filters / Kernels
During learning, the computer must extract features from the image to accurately perform its tasks. It turns out we can use filters (sometimes referred to as kernels, but I will be using the term filters) to extract these features from the image. A filter is a matrix that is layered on a patch of the image; the element-wise products of the filter and the patch are summed to produce one entry of the resultant feature matrix. Below is an example to visualize the concept.
One common filter for finding vertical edges in images is:

$$\begin{bmatrix} 1 & 0 & -1 \\ 1 & 0 & -1 \\ 1 & 0 & -1 \end{bmatrix}$$

Now let's take a $6 \times 6$ image matrix:

$$\begin{bmatrix} 10 & 10 & 10 & 0 & 0 & 0 \\ 10 & 10 & 10 & 0 & 0 & 0 \\ 10 & 10 & 10 & 0 & 0 & 0 \\ 10 & 10 & 10 & 0 & 0 & 0 \\ 10 & 10 & 10 & 0 & 0 & 0 \\ 10 & 10 & 10 & 0 & 0 & 0 \end{bmatrix}$$
We will start the filter at the top-left corner of the image and shift it right one pixel at a time. When the edge of the filter reaches the edge of the image, we shift the filter down one row and repeat the steps until the whole image matrix is covered. Each time a patch of the image matrix and the filter are multiplied element-wise and summed into an entry, the operation is called a convolution. Hence the name Convolutional Neural Networks. Applying the convolutions across the full image above, the image matrix becomes:

$$\begin{bmatrix} 0 & 30 & 30 & 0 \\ 0 & 30 & 30 & 0 \\ 0 & 30 & 30 & 0 \\ 0 & 30 & 30 & 0 \end{bmatrix}$$
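To make the operation concrete, here is a minimal NumPy sketch of this convolution (stride 1, no padding). The `convolve2d` function and its assumptions (square image, square filter) are my own illustration, not a library routine:

```python
import numpy as np

def convolve2d(image, kernel, stride=1):
    """Slide the kernel over the image; each output entry is the sum of the
    element-wise products between the kernel and the image patch under it."""
    n, f = image.shape[0], kernel.shape[0]          # assumes square inputs
    out = (n - f) // stride + 1
    result = np.zeros((out, out))
    for i in range(out):
        for j in range(out):
            patch = image[i*stride:i*stride+f, j*stride:j*stride+f]
            result[i, j] = np.sum(patch * kernel)
    return result

image = np.array([[10, 10, 10, 0, 0, 0]] * 6)   # the 6x6 image above
kernel = np.array([[1, 0, -1]] * 3)             # the vertical edge filter
print(convolve2d(image, kernel))
# [[ 0. 30. 30.  0.]
#  [ 0. 30. 30.  0.]
#  [ 0. 30. 30.  0.]
#  [ 0. 30. 30.  0.]]
```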
As the resultant matrix shows, there is a clear indication of a vertical edge between its 2nd and 3rd columns. We refer to the distance we shift the filter each time as the stride. In the above example, we had a stride of $1$. Another common technique when applying filters is to pad the image. As seen above, the image size was reduced after the convolutions, and a more drastic reduction would occur if a larger stride were used. Therefore, you can choose to pad the dimensions of the image to maintain the same size after applying the filters. If we wanted to maintain the same size, we would need an additional padding of $p = \frac{(n-1)s - n + f}{2}$, where $n$ is the image dimension, $s$ is the stride, $f$ is the filter dimension, and $p$ is the padding needed. For our above example, we would need a padding of $p = \frac{(6-1) \cdot 1 - 6 + 3}{2} = 1$ for a same-sized matrix after extracting the features. Below is the image with $p = 1$:

$$\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 10 & 10 & 10 & 0 & 0 & 0 & 0 \\ 0 & 10 & 10 & 10 & 0 & 0 & 0 & 0 \\ 0 & 10 & 10 & 10 & 0 & 0 & 0 & 0 \\ 0 & 10 & 10 & 10 & 0 & 0 & 0 & 0 \\ 0 & 10 & 10 & 10 & 0 & 0 & 0 & 0 \\ 0 & 10 & 10 & 10 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$
The final size after all the convolution operations for an $n \times n$ image would be $\left( \left\lfloor \frac{n + 2p - f}{s} \right\rfloor + 1 \right) \times \left( \left\lfloor \frac{n + 2p - f}{s} \right\rfloor + 1 \right)$.
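As a quick sanity check of this formula, here is a tiny helper (my own illustration, not a library function):

```python
import math

def conv_output_size(n, f, s, p):
    """Output dimension of a convolution: floor((n + 2p - f) / s) + 1."""
    return math.floor((n + 2 * p - f) / s) + 1

print(conv_output_size(n=6, f=3, s=1, p=0))  # 4 -> the 4x4 result above
print(conv_output_size(n=6, f=3, s=1, p=1))  # 6 -> "same" padding
```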
Pooling
Another important building block of CNNs is the pooling layer. Pooling layers down-size the representation, reducing the number of parameters that need to be trained, by summarizing regions of the matrix. Two common types of pooling layers are Max Pooling and Average Pooling. These are quite intuitive to understand: max pooling takes the maximum value over the region you pool, and average pooling takes the average value over the region you pool. Below is an example of what pooling would do to this matrix:

$$\begin{bmatrix} 1 & 3 & 2 & 1 \\ 2 & 9 & 1 & 1 \\ 1 & 3 & 2 & 3 \\ 5 & 6 & 1 & 2 \end{bmatrix}$$
Max Pooling with a size of $2 \times 2$ and a stride of $2$:

$$\begin{bmatrix} 9 & 2 \\ 6 & 3 \end{bmatrix}$$
Average Pooling with a size of $2 \times 2$ and a stride of $2$:

$$\begin{bmatrix} 3.75 & 1.25 \\ 3.75 & 2 \end{bmatrix}$$
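Here is a small NumPy sketch of both pooling operations on the matrix above; the `pool2d` helper and its `mode` parameter are my own illustration:

```python
import numpy as np

def pool2d(image, size=2, stride=2, mode="max"):
    """Summarize each size x size window of the image with its max or mean."""
    n = image.shape[0]                              # assumes a square input
    out = (n - size) // stride + 1
    reduce_fn = np.max if mode == "max" else np.mean
    result = np.zeros((out, out))
    for i in range(out):
        for j in range(out):
            window = image[i*stride:i*stride+size, j*stride:j*stride+size]
            result[i, j] = reduce_fn(window)
    return result

matrix = np.array([[1, 3, 2, 1],
                   [2, 9, 1, 1],
                   [1, 3, 2, 3],
                   [5, 6, 1, 2]])
print(pool2d(matrix, mode="max"))   # [[9. 2.]
                                    #  [6. 3.]]
print(pool2d(matrix, mode="avg"))   # [[3.75 1.25]
                                    #  [3.75 2.  ]]
```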
What's Next?
Now that you know how the building blocks of a CNN work, we will next cover methods to incorporate filters and pooling layers together into a full-blown Convolutional Neural Network. See you there!