A gentle introduction to Image Recognition by Convolutional Neural Networks

When people hear the words “Artificial intelligence” they most often think of either robots or automated systems with the ability to distinguish between objects. The latter is what the field of analytics knows as image recognition, a specific branch of a much wider set of techniques known as Deep Learning. When curious minds undertake the task of digging deep into the hows and whys of these techniques, they are generally overwhelmed by the mathematics involved and are too often confronted with tedious and far too technical explanations. Also, the number of examples available on the internet is rather limited (mostly because this branch of analytics is an emerging one). It is however possible to describe how image recognition works, or at least give a pretty good idea of it, without using overly complicated examples. I will make an attempt at giving a clear and simple explanation of how deep learning by convolutional neural networks works and end with an example. There is nothing revolutionary about the example I have chosen. I will simply train a model to distinguish between airplanes and birds. The exercise has been done before with cats, dogs, tables, chairs and so on, but it is a neat one.

My readers know by now that I do not particularly like to apply methods without really grasping the way they work and understanding the math behind them. I also dislike writing about these techniques without giving something that enlightens my audience, and I will therefore give a very gentle introduction to all the concepts involved in convolutional neural networks. To do so, I will limit my example to how the technique is used to distinguish two types of 2-dimensional images, namely those of a C and a K. The ideas can thereafter be generalized to the recognition of any letter or digit, and they are the same techniques used in the now so common parking lots in which image recognition software reads your registration number, however dirty your license plates may be.

Let’s start with our two images, C and K:

As you can see, all the relevant information in these two pictures is colored black, and there is no other way to distinguish them but by comparing where this information is distributed. Now, for a computer, these two images are not colored in the sense a human would understand color; rather, each is a 2-dimensional grid of pixels, a table in which each cell is assigned a specific number. We only have two colors here, so let’s say, for the sake of the argument, that the color white is -1 and the color black is 1. Our computer thus translates the images as:

Now, when comparing two pictures, if even one single pixel is different then they will be considered different, even though they might represent exactly the same object but with different backgrounds, or the image may have been somehow altered. Consider for instance the task of comparing handwritten letters (two individuals have different ways of writing an A, for instance). This poses a serious problem if the aim is recognition of a letter or an object. So, from a machine’s point of view it will not be feasible to compare entire images. What we are looking for instead are details or features that are common to both pictures. Doing this by hand is fairly easy, but tedious. There is however a mathematical way to perform this task, a method that even surpasses us in classifying any set of objects, both in performance and in speed. This very practical mathematical tool is called convolution. We will not go into the mathematical details of convolution but will instead show how it is used in image recognition. In the picture below, we have identified one of the features specific to the letter C.

featuresC

When performing a convolution we let the identified feature (the 3 × 3 image above) sweep over the entire picture (the 10 × 11 image) and apply the following rule wherever the feature covers a part of the image: multiply the value of each pixel of the feature with the corresponding pixel of the image that is covered. If both pixels have the same value (both 1 or both -1), the result is 1; otherwise the result is -1.

convcalc
The first operation represents the convolution of the feature with the upper left-hand corner of the image, while the second is the convolution of the feature with the sub-part of the image starting in row 1 and column 2. The red 1s and -1s are the results of the multiplications of the corresponding pixel values.

The next step is to add the results of the operation above and to divide the result by the total number of pixels in the feature.

convolutionexample 1

The same operation then needs to be repeated for each feature of the image, the result being an array of filtered images, each one corresponding to a specific filter (feature). The set of all these operations is what we call a convolutional layer.
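To make the sweep concrete, here is a minimal sketch of this convolution rule, written in Python with NumPy purely for illustration. The 3 × 3 feature and the toy 4 × 4 image below are invented for the example, not the C from the figures; white is coded -1 and black 1, as above.

```python
import numpy as np

# A hypothetical 3x3 feature and a 4x4 toy image, coded as in the text:
# white = -1, black = 1. (These arrays are made up for illustration.)
feature = np.array([[ 1, -1, -1],
                    [-1,  1, -1],
                    [-1, -1,  1]])
image = np.array([[ 1, -1, -1, -1],
                  [-1,  1, -1, -1],
                  [-1, -1,  1, -1],
                  [-1, -1, -1,  1]])

def convolve(image, feature):
    """Sweep the feature over the image; each output pixel is the mean of
    the element-wise products of the feature and the covered patch."""
    fh, fw = feature.shape
    ih, iw = image.shape
    out = np.zeros((ih - fh + 1, iw - fw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            patch = image[r:r + fh, c:c + fw]
            out[r, c] = np.mean(patch * feature)
    return out

filtered = convolve(image, feature)
print(filtered)  # 1.0 where the feature matches the patch exactly
```

Where the feature lines up perfectly with a patch, every product is 1 and the mean is 1; mismatching patches give values below 1, exactly as in the hand calculation above.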

A word of caution here: this is one of many convolution methods. Most often one encounters what is known as kernel convolution, where it is the center of the sub-part of the image that is under consideration. Kernel convolution is used to sharpen, blur or transform an image. I recommend the following page to learn more about these methods: AI Shack

As you might have realized by now, the number of operations needed to perform convolutions on all images of a training set has a tendency to grow very rapidly. There are however two things one can do to limit this to something that can be handled. One, the obvious solution, is to reduce the size of the pictures to be treated. The other is what is called max pooling, which is nothing else than a downsampling of the image or, more precisely, a sample-based discretization process.

In the example I will give later on in this post I chose to reduce the size of the images as well as to pool them after convolution. So, how does pooling work? As the name indicates, it’s all about keeping the essential information in the data. How is it done? Well, there are many ways of pooling data from an image, depending on how much you want to reduce it. In the picture below I give two examples: one in which the input is a 6 by 6 image reduced to a 5 by 5 image using a 2 by 2, stride 1 pooling, and a second in which the pooling is done using a 2 by 2, stride 2 sweep, giving an output image of size 3 by 3.

POOLING

If we use a 2 × 2, stride 2 pooling on our example above, the resulting reduced image is given by

pooling on convolution

Having done so for all features of the convolutional layer, we are left with a pooling layer, a collection of heavily reduced images that are more easily treated.
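The max-pooling step can be sketched in the same way (Python/NumPy, again purely for illustration; the filtered values below are invented, not taken from the figures):

```python
import numpy as np

def max_pool(img, size=2, stride=2):
    """2x2, stride-2 max pooling: keep the largest value in each
    non-overlapping window of the filtered image."""
    h, w = img.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    out = np.zeros((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            window = img[r*stride:r*stride+size, c*stride:c*stride+size]
            out[r, c] = window.max()
    return out

# A made-up 4x4 filtered image, as might come out of a convolution
filtered = np.array([[ 1.00, -0.11,  0.33,  0.55],
                     [-0.11,  1.00, -0.11,  0.33],
                     [ 0.33, -0.11,  1.00, -0.11],
                     [ 0.55,  0.33, -0.11,  1.00]])
print(max_pool(filtered))  # each window collapses to its maximum
```

Each 2 × 2 window collapses to a single number, so a 4 × 4 image becomes 2 × 2: a quarter of the data, but the strongest feature responses survive.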

The title of this post suggests the use of neural networks or neurons, and until now we haven’t said anything about them. In our brain, thinking or the need to perform a specific task activates different neurons and pathways, sending signals to nerves that instruct muscles and other tissues to contract in a controlled manner. The neuron in a neural network is nothing else than a set of inputs, a set of weights, and an activation function. The activation function determines how strongly the neuron responds to its inputs. There are many different types of activation functions. The most common ones are the sigmoid function (most often used for classification problems of 0s and 1s), the hyperbolic tangent (used to rescale values between -1 and 1) and the Rectified Linear Unit (ReLU) function (setting all negative values to 0). Which activation function to use depends entirely on the situation one is in, and we will not go into the details in this post. To complete the example we have been working on until now, the picture below shows the result of ReLU on the pooled and convoluted image:
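The three activation functions just mentioned can be written in a few lines (Python for illustration; the sample inputs are arbitrary):

```python
import numpy as np

def sigmoid(z):
    # Squashes any value into (0, 1); common for 0/1 classification
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Rescales values into (-1, 1)
    return np.tanh(z)

def relu(z):
    # Sets all negative values to 0, keeps positives unchanged
    return np.maximum(0.0, z)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(z))  # negatives clipped to 0, positives untouched
```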

pooling on convolution and ReLU

The process of convolving, pooling and activating can be repeated many times in a neural network, and in this case we talk about layers. First of all we have the input layer, our pictures, and last an output layer. In between, there can be one or several so-called hidden layers. The output from the convolutional layers represents high-level features in the data. While that output could be flattened and connected to the output layer, adding a fully connected layer is a (usually) cheap way of learning non-linear combinations of these features. Essentially, the convolutional layers provide a meaningful, low-dimensional, and somewhat invariant feature space, and the fully connected layer learns a (possibly non-linear) function in that space. In the early days of neural networks, models only included the input and output spaces (with some transformation of the data in between) and the results were found to be mediocre. It was quickly discovered that the presence of just one or two fully connected hidden layers improved the results. The entire process can be visualized as follows.

process
This describes the entire process in a convolutional neural network. Convolutions, activations and pooling can be performed a multitude of times. Fully connected layers, the learning processes, can be added at will. The output is a probability that the object is a particular one. In this case, the probability that the input represents a “bird” is 0.79.

You now know everything there is to know about convolutional neural networks, but you still lack the knowledge to make them work in practice. Let’s remedy this! As I mentioned at the beginning of this post, there is nothing revolutionary about this example. It’s been done before with different objects and has been applied IRL in many different everyday applications. As I am a big fan of R, I have done the entire exercise using it, but there are many other ways to do this. There are a few considerations to take into account before setting up your own convolutional neural network. One is to have enough pictures of the objects you wish to compare; in my example I have around 10 000 pictures of birds and airplanes. Another, more technical one, which relates to the packages used, is to have a recent version of R. Many users of R still run version 3.3.3 and have not updated it, since they always use the same sets of packages and have no need for updates. Install at least R 3.4.0.

The packages used in the code below are mxnet, DiagrammeR and EBImage, which need to be installed.

require(devtools)
install_version("DiagrammeR", version = "0.9.0", repos = "http://cran.us.r-project.org")
install.packages("drat", repos = "https://cran.rstudio.com")
drat:::addRepo("dmlc")

# Add the MXNet repository to the list of repos, then install
cran = getOption("repos")
cran["dmlc"] = "https://s3-us-west-2.amazonaws.com/apache-mxnet/R/CRAN/"
options(repos = cran)
install.packages("mxnet")
library(mxnet)

# EBImage is distributed via Bioconductor
source("https://bioconductor.org/biocLite.R")
biocLite("EBImage")

Once the installation is complete, the fun begins. You need to define your working directory and point to your image repository. Once this is done, you may display images in RStudio.

image_dir = "C:\\Users\\sergos\\Desktop\\EGET\\Blogg\\Image recognition\\BirdPlane"

example_Bird = readImage(file.path(image_dir, "bird 1.jpg"))
display(example_Bird)
example_plane = readImage(file.path(image_dir, "plane 100.jpg"))
display(example_plane)

image display.png

How to make life easy for oneself! Most of you probably don’t have a database of pictures of the objects you wish to present to your CNN, and you will most probably scrape pages in order to build a sufficient supply of images. Once you have done this, my advice is to rename all pictures. Doing this from Windows Explorer is easy, but the result contains parentheses: all bird pictures will be called bird (i), where i ranges over the pictures you have. Use this code (saved as a *.bat file) to have the pictures renamed bird 1, bird 2, ….

cd C:\save
setlocal enabledelayedexpansion
for %%a in (bird*.jpg) do (
  set "f=%%a"
  set f=!f:^(=!
  set f=!f:^)=!
  ren "%%a" "!f!"
)

As we mentioned earlier, images can take up a lot of space because of their size and colors. The amount of data to be handled by the neural network can then make the whole process rather slow. It is therefore a good idea to resize the pictures and to transform them into greyscale photos (unless color is something that has to be recognized as well). Here is a function that does a very good job: it greyscales the photos and reduces them to 28×28 pixels.

width = 28
height = 28

library(pbapply)   # provides pblapply (lapply with a progress bar)
library(EBImage)   # provides readImage, resize and channel

extract_feature = function(dir_path, width, height, is_bird = TRUE, add_label = TRUE) {
  img_size = width * height
  images_names = list.files(dir_path)
  if (add_label) {
    ## Select only bird or plane images
    images_names = images_names[grepl(ifelse(is_bird, "bird", "plane"), images_names)]
    label = ifelse(is_bird, 0, 1)
  }
  print(paste("Start processing", length(images_names), "images"))
  feature_list = pblapply(images_names, function(imgname) {
    img = readImage(file.path(dir_path, imgname))
    img_resized = resize(img, w = width, h = height)
    grayimg = channel(img_resized, "gray")
    img_matrix = grayimg@.Data
    img_vector = as.vector(t(img_matrix))
    return(img_vector)
  })
  feature_matrix = do.call(rbind, feature_list)
  feature_matrix = as.data.frame(feature_matrix)
  names(feature_matrix) = paste0("pixel", c(1:img_size))
  if (add_label) {
    feature_matrix = cbind(label = label, feature_matrix)
  }
  return(feature_matrix)
}
BIRDS = extract_feature(dir_path = image_dir, width = width, height = height)
PLANES = extract_feature(dir_path = image_dir, width = width, height = height, is_bird = FALSE)

saveRDS(BIRDS, "birds.rds")
saveRDS(PLANES, "planes.rds")

We now have a library of pictures representing birds and airplanes, all of the same size and color scale. The process above can take a while depending on the number of images to be treated, but it only needs to be done once.

The next step is to determine the training and test sets. I chose to take 60% of my photos as the training set.

library(mxnet)
library(caret)   # provides createDataPartition
complete_set = rbind(BIRDS, PLANES)

training_index = createDataPartition(complete_set$label, p = .6, times = 1)
training_index = unlist(training_index)
train_set = complete_set[training_index,]
dim(train_set)

train_data = data.matrix(train_set)
train_x = t(train_data[, -1])
train_y = train_data[,1]
train_array = train_x
dim(train_array) = c(28, 28, 1, ncol(train_x))

test_set = complete_set[-training_index,]
dim(test_set)

test_data = data.matrix(test_set)
test_x = t(test_data[, -1])
test_y = test_data[, 1]
test_array = test_x
dim(test_array) = c(28, 28, 1, ncol(test_x))

Until now, we have just been preparing the data. We have everything set to construct our model. As described earlier, there are several steps and ways to construct a CNN. In this example I chose to build the model using two convolutional layers as well as two fully connected layers. Layers can be added at will depending on the purpose, but one needs to keep in mind that a model with many layers can be computationally heavy to run and that the marginal gain of each added layer may not be sufficient to justify it.

First Convolutional Layer

In the introduction, we described how convolution could be used to identify features in an image. This was done by letting a grid sweep over the original image and performing some calculations. Furthermore, activation of neurons could be performed using some activation function, and a final step was the pooling of the resulting dataset.

In this convolutional layer I use 20 5×5 grids (also called kernels) for the convolution. In the example I described above with the letters C and K, the activation function was ReLU for the simple reason that it is 1) easy to explain and 2) the image to be classified was two-dimensional and not particularly complicated. However, ReLU presents some issues in this case, and a far more useful and efficient activation function is the hyperbolic tangent:

\tanh(z) = \frac{\sinh(z)}{\cosh(z)} = \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}

For an excellent motivation of why the tanh activation is ideal, I suggest reading the following blog: Derivation: Derivatives for Common Neural Network Activation Functions.

The pooling is done by taking the maximum value of the results of the activation, letting a 2×2 square sweep over the data with strides of 2 both sideways and down (hence, not overlapping).

The code for the first convolutional layer (convolution, activation, pooling) is then given by

mx_data = mx.symbol.Variable('data')

conv_1 = mx.symbol.Convolution(data = mx_data, kernel = c(5, 5), num_filter = 20)
tanh_1 = mx.symbol.Activation(data = conv_1, act_type = "tanh")
pool_1 = mx.symbol.Pooling(data = tanh_1, pool_type = "max", kernel = c(2, 2), stride = c(2,2))

Second Convolution Layer

As I described before, one can perform multiple convolutions in order to reduce the work needed by the model, and depending on the complexity of the images fed to it, this might be a good idea. In our second layer we refine the filtering of the image to recognize more features. Note that it is the result of the pooling in the first convolutional layer that is now used for the convolution.

conv_2 = mx.symbol.Convolution(data = pool_1, kernel = c(5,5), num_filter = 50)
tanh_2 = mx.symbol.Activation(data = conv_2, act_type = "tanh")
pool_2 = mx.symbol.Pooling(data = tanh_2, pool_type = "max", kernel = c(2, 2), stride = c(2,2))

First and second Fully Connected Layers

The first fully connected layer acts on the result of the second convolutional layer. The first step is a flattening of the results of the pooling process: converting all the resulting two-dimensional arrays into a single long continuous linear vector.

flat = mx.symbol.Flatten(data = pool_2)
fcl_1 = mx.symbol.FullyConnected(data = flat, num_hidden = 50)
tanh_3 = mx.symbol.Activation(data = fcl_1, act_type = "tanh")
fcl_2 = mx.symbol.FullyConnected(data = tanh_3, num_hidden = 2)
NN_model = mx.symbol.SoftmaxOutput(data = fcl_2)

The last step is just an activation process that assigns probabilities (i.e. values between 0 and 1) to the results of the second fully connected layer.
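That final activation is the softmax function used by mx.symbol.SoftmaxOutput. Its core can be sketched as follows (Python for illustration; the two raw scores are invented):

```python
import numpy as np

def softmax(z):
    """Convert raw scores from the last fully connected layer into
    probabilities that sum to 1 (subtracting the max for numerical
    stability before exponentiating)."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Hypothetical raw scores for the two classes (bird, plane)
scores = np.array([1.3, -0.2])
probs = softmax(scores)
print(probs)  # two probabilities summing to 1; the larger score wins
```

Whichever class receives the larger raw score also receives the larger probability, which is what the model finally reports.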

The Model

Further up we determined a training set of pictures and defined two convolutional layers and two fully connected layers. We have everything set to train our model. The mxnet package has a very good function for this, namely mx.model.FeedForward.create, in which a number of parameters such as which device to use, how many rounds the model has to perform, the learning rate and so forth can be set. For very good documentation of the package, do read MXNet: A Scalable Deep Learning Framework.

device = mx.cpu()

model = mx.model.FeedForward.create(NN_model, X = train_array, y = train_y,
 ctx = device,
 num.round = 40,
 array.batch.size = 100,
 learning.rate = 0.05,
 momentum = 0.9,
 wd = 0.00001,
 eval.metric = mx.metric.accuracy,
 epoch.end.callback = mx.callback.log.train.metric(100))

model

In our case, the model accuracy after 40 rounds was a staggering 98.5%. For recognizing birds and airplanes, that is not a bad result. There are probably areas in which this would be too low, but let’s just say that we are fairly happy with it.

We can even provide a confusion matrix for the results.

confusion

As you can see, it misclassified some pictures. Just for the fun of it, here are four of the misclassified images:

These pictures are somewhat ambiguous. Many other pictures that were misclassified had the same problem.

In conclusion, CNNs are not very complicated, and all the operations needed are fairly easy to understand. This may be one of the reasons we see such widespread use of the technique.

Serge
