Deep Convolutional Neural Networks (DCNN) Explained

A deep convolutional neural network (DCNN) is a convolutional neural network (CNN) with multiple layers that is commonly used to analyze images. Here’s what to know.

Published on Jun. 11, 2024

A deep convolutional neural network (DCNN) is a convolutional neural network (CNN) with multiple layers. It's a class of artificial neural networks most commonly applied to analyzing images. CNNs are also known as shift-invariant or space-invariant artificial neural networks (SIANN), based on the shared-weight architecture of the convolution kernels, or filters, that slide along input features and produce translation-equivariant responses known as feature maps.

What Is a DCNN?

A deep convolutional neural network (DCNN) is a class of artificial neural networks most commonly used to analyze images by producing feature maps. It includes five components: a convolutional and rectified linear unit (ReLU) layer, a pooling layer, a fully connected layer, a dropout layer and an activation functions layer.

DCNN architecture includes five components: 

  1. Convolutional and rectified linear unit (ReLU) layer
  2. Pooling layer
  3. Fully connected layer
  4. Dropout layer
  5. Activation functions layer

 

What Is the DCNN Architecture?

A deep convolutional neural network is composed of five layers. The convolutional and ReLU layer extracts features from the input images. The pooling layer reduces the computational cost. The fully connected layer initiates the classification stage. The dropout layer tackles overfitting by randomly dropping a few neurons from the model. And finally, the activation functions layer is where the model connects the dots among the data points and learns.

Here’s how each of them works:

1. Convolutional Layer and ReLU

This is the first layer, used to extract the various features from the input images. In this layer, the mathematical operation of convolution is performed between the input image and a filter of a particular size, M x M. The filter slides over the input image, and the dot product is taken between the filter and each M x M patch of the input image.

The output is termed the feature map, which gives us information about the image, such as its corners and edges. This feature map is later fed to other layers to learn several other features of the input image.
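The sliding dot product described above can be sketched in a few lines of NumPy. This is a minimal illustration (strictly speaking, CNN frameworks compute cross-correlation rather than flipped convolution); the image, filter and their sizes are toy examples:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide an MxM kernel over the image and take the dot product
    at each position (valid padding, stride 1)."""
    m = kernel.shape[0]
    h, w = image.shape
    out = np.zeros((h - m + 1, w - m + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + m, j:j + m]
            out[i, j] = np.sum(patch * kernel)
    return out

# A toy 5x5 image with a vertical edge, and a vertical-edge filter.
image = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)
kernel = np.array([
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
], dtype=float)

feature_map = convolve2d(image, kernel)
print(feature_map.shape)  # (3, 3)
```

The strong responses in the feature map line up with the edge in the image, which is exactly the "information about corners and edges" the text describes.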

2. Pooling Layer

In most cases, a convolutional layer is followed by a pooling layer. The primary aim of this layer is to decrease the size of the convolved feature map to reduce computational costs. This is performed by decreasing the connections between layers and independently operating on each feature map. Depending upon the method used, there are several types of pooling operations.

In max pooling, the largest element is taken from each section of the feature map. Average pooling calculates the average of the elements in a predefined image section, and sum pooling computes their total. The pooling layer usually serves as a bridge between the convolutional layer and the fully connected layer.

3. Fully Connected Layer

The fully connected (FC) layer consists of the weights and biases along with the neurons and is used to connect the neurons between two different layers. These layers are usually placed before the output layer and form the last few layers of a CNN architecture.

In this, the input image from the previous layers is flattened and fed to the FC layer. The flattened vector then undergoes a few more FC layers where mathematical operations usually take place. In this stage, the classification process begins to take place.
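Flattening followed by a fully connected layer can be sketched in NumPy. The shapes here (eight 4x4 pooled maps, ten output neurons) are hypothetical, and the weights are random rather than learned:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pooled feature maps: 8 maps of size 4x4.
pooled = rng.random((8, 4, 4))

# Flatten into a single vector, since a fully connected
# layer expects one-dimensional input.
x = pooled.reshape(-1)            # shape (128,)

# One fully connected layer: weights and biases per neuron.
W = rng.standard_normal((10, x.size)) * 0.01   # 10 output neurons
b = np.zeros(10)
logits = W @ x + b

print(logits.shape)  # (10,)
```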

4. Dropout Layer

Usually, when all the features are connected to the FC layer, it can cause overfitting on the training data set. Overfitting occurs when a model performs so well on the training data that its performance suffers when it’s used on new data.

To overcome this problem, a dropout layer is utilized, wherein a few neurons are randomly dropped from the neural network during training, resulting in a reduced effective model size. With a dropout rate of 0.3, 30 percent of the nodes are dropped out randomly from the neural network.
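A minimal NumPy sketch of dropout follows. It uses the common "inverted dropout" variant (a detail beyond the description above), which rescales surviving activations so their expected value is unchanged at test time:

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(x, rate=0.3, training=True):
    """Randomly zero out roughly `rate` of the activations during
    training, scaling survivors by 1/(1-rate) (inverted dropout)."""
    if not training:
        return x  # dropout is disabled at inference time
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.ones(10)
y = dropout(x, rate=0.3)
print(y)  # roughly 30 percent zeros; survivors scaled up
```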

5. Activation Functions Layer

Finally, one of the most important parameters of the CNN model is the activation function. Activation functions are used to learn and approximate any kind of continuous and complex relationship between the variables of the network. In simple terms, they decide which information should be passed forward through the network and which should not.

It adds non-linearity to the network. There are several commonly used activation functions, such as the ReLU, softmax, tanh and sigmoid functions. Each of these functions has a specific usage: for a binary classification CNN model, sigmoid and softmax functions are preferred, and for multiclass classification, softmax is used.
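The three most commonly mentioned activations can each be written in one or two lines of NumPy:

```python
import numpy as np

def relu(z):
    # Replace every negative value with zero.
    return np.maximum(0.0, z)

def sigmoid(z):
    # Squash each value into (0, 1); used for binary classification.
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Turn a vector into probabilities; used for multiclass output.
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

z = np.array([-1.0, 0.0, 2.0])
print(relu(z))                          # [0. 0. 2.]
print(sigmoid(np.array([0.0])))         # [0.5]
print(round(softmax(z).sum(), 6))       # 1.0
```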


 

How Does a DCNN Work?

A CNN compares images piece by piece. The pieces it looks for are called features: small M x M matrices of numbers. To a computer, an image itself is nothing but a matrix of pixel values. By finding rough feature matches in roughly the same positions in two images, CNNs get much better at seeing similarities than whole-image matching schemes.

When presented with a new image, however, the CNN doesn’t know exactly where these features will match, so it tries them everywhere, in every possible position, shifting by a defined number of pixels (the stride) at each step. Calculating the match to a feature across the whole image turns that feature into a filter. The math used to do this is called convolution, from which convolutional neural networks take their name.

The next step is to repeat the convolution process in its entirety for each of the other features. The result is a set of filtered images, one for each of our filters. It’s convenient to think of this whole collection of convolution operations as a single processing step.

Now comes the step where we introduce non-linearity into the model so that it can learn non-linear boundaries. A common way to do this is with a non-linear function such as ReLU or GELU. The most popular is ReLU, which performs a simple operation: wherever a negative number occurs, swap it out for a zero. This helps the CNN stay mathematically healthy by keeping learned values from getting stuck near zero or blowing up toward infinity. Note that convolution and ReLU operations may create massive feature maps, so it’s crucial to reduce the feature map size while keeping the identified features intact.

Pooling is a way to take large images and shrink them down while preserving the most important information in them. It consists of stepping a small window across an image and taking the maximum value from the window at each step. In practice, a window of two or three pixels on a side and steps of two pixels work well. A pooling layer is just the operation of performing pooling on an image or a collection of images. The output will have the same number of images, but they will each have fewer pixels. This is also helpful in managing the computational load.

Once the desired number of convolution operations has been performed, depending on the designed model, it’s time to harness the power of deep neural networks to make full use of the earlier stages. But before we pass the pooled feature maps to the network for learning, we need to flatten the matrices, because fully connected layers only accept one-dimensional input. So we stack them like Lego bricks into a single vector. In the end, raw images get filtered, rectified and pooled into a set of shrunken, feature-filtered images, ready to go into the world of neurons: a neural network.

The fully connected layers in the neural network take the high-level filtered images (the flattened, rectified, pooled feature maps) and translate them into votes, or signals. These votes are expressed as weights, or connection strengths, between each value and each category. When a new image is presented to the CNN, it percolates through the lower layers until it reaches the fully connected layers at the end. Then an election is held: the answer with the most votes wins and is declared the category of the input.
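The whole pipeline described in this section (convolve, rectify, pool, flatten, vote) can be sketched as one toy forward pass in NumPy. The weights here are random and untrained, and the image size, filter count and class count are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def convolve2d(image, kernel):
    """Valid, stride-1 sliding dot product."""
    m = kernel.shape[0]
    h, w = image.shape
    out = np.zeros((h - m + 1, w - m + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + m, j:j + m] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling via a reshape trick."""
    h, w = fmap.shape
    return fmap[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy setup: one 8x8 "image", two 3x3 filters, 3 categories.
image = rng.random((8, 8))
filters = rng.standard_normal((2, 3, 3))

maps = [np.maximum(0.0, convolve2d(image, f)) for f in filters]  # conv + ReLU
pooled = [max_pool(m) for m in maps]                             # two 3x3 maps
flat = np.concatenate([p.reshape(-1) for p in pooled])           # flatten: (18,)

W = rng.standard_normal((3, flat.size)) * 0.1  # fully connected "votes"
probs = softmax(W @ flat)                      # one probability per category
print(probs.shape)  # (3,)
```

With trained rather than random weights, `np.argmax(probs)` would be the "winning vote", the predicted category.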

And that is how a DCNN works. 

A tutorial on deep learning with convolutional neural networks. | Video: Computerphile


 

How to Design a DCNN Model

Unfortunately, not every aspect of CNNs can be learned in such a straightforward manner. There is still a long list of decisions that a CNN designer must make.

  • For each convolution layer, how many features will you include? How many pixels in each feature?
  • For each pooling layer, what should be the window size? What stride?
  • What activation function should you use? How many epochs? Any early stopping?
  • For each extra fully connected layer, how many hidden neurons?

In addition to these, there are also higher-level architectural decisions to make, like how many of each layer to include and in what order. There are lots of tweaks to try, such as new layer types, more complex ways to connect layers to each other, or simply increasing the number of epochs or changing the activation function.

And the best way to decide is to do and see it for yourself.
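One common way to explore these choices is a simple grid sweep: enumerate every combination and compare validation scores. Below is a minimal sketch; the parameter names and the commented-out `build_model`/`evaluate` helpers are hypothetical placeholders, not a real API:

```python
import itertools

# A hypothetical grid of design choices to sweep.
grid = {
    "filter_size": [3, 5],
    "num_filters": [16, 32],
    "pool_window": [2, 3],
    "activation": ["relu", "tanh"],
}

# Every combination of the options above.
combos = [dict(zip(grid, values))
          for values in itertools.product(*grid.values())]
print(len(combos))  # 16 combinations to train and compare

# for params in combos:
#     model = build_model(**params)   # hypothetical model builder
#     score = evaluate(model)         # e.g. validation accuracy
```

The grid grows multiplicatively with each new option, which is why keeping image and batch sizes modest matters when every combination has to be trained.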

A simple notebook experiment will show you what this looks like in practice and how to settle on the best CNN hyperparameter combination. It may be computationally heavy, so optimizing your image and batch sizes might be essential. And that’s everything you need to know about DCNNs.

Frequently Asked Questions

A deep convolutional neural network is a convolutional neural network (CNN) composed of multiple layers and used to analyze images for machine learning.

A DCNN passes an image through five layers. The convolutional and ReLU layer extracts features from the input images. The pooling layer reduces the computational cost. The fully connected layer initiates the classification stage. The dropout layer drops a few neurons from the model to reduce overfitting. When the model reaches the activation layer, it connects the dots among the data points and learns.

A deep convolutional neural network architecture includes five components: 

1. Convolutional layer and rectified linear unit (ReLU)
2. Pooling layer
3. Fully connected layer
4. Dropout
5. Activation functions
