Reference : https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/
Different Layers in a CNN:
A classic CNN has the following layers: Input -> Conv -> ReLU -> Pool -> Conv -> ReLU -> Pool -> Fully Connected
Convolution Layer:
The first layer in a convolutional neural network is always a convolution layer. A convolution can be thought of as a filter (also called a neuron or kernel): imagine a torch light shining on a small patch of the image and sliding across it. The torch light is just an array of numbers. As the filter slides over the image, the numbers in the image patch are multiplied element-wise with the numbers in the filter and summed. This is how the filter learns features of the image. The first convolution layers learn basic features such as edges and curves; deeper layers learn more complex features such as paws, the legs of a dog, or a particular colour. The more convolution layers there are, the richer the features the network can learn.
The matrix we get by multiplying the filter element-wise with each receptive field (and summing each result) is called the activation map, or feature map.
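The sliding-filter idea above can be sketched in plain NumPy. The image and kernel values here are made up for illustration; the kernel is a simple vertical-edge detector.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image; at each position, multiply the
    receptive field element-wise with the kernel and sum. The matrix of
    all these responses is the activation map."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            receptive_field = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(receptive_field * kernel)
    return out

# A vertical-edge filter responds where pixel values change left to right
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)
activation_map = conv2d(image, kernel)  # shape (2, 2)
```

A 4x4 image convolved with a 3x3 filter gives a 2x2 activation map; every entry here is -3 because the edge runs through every receptive field.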
ReLU: a non-linear activation function, f(x) = max(0, x), applied element-wise to the activation map.
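ReLU is simple enough to write in one line; it zeroes out negative responses while passing positive ones through unchanged:

```python
import numpy as np

def relu(x):
    # Element-wise max(0, x): negatives become 0, positives pass through
    return np.maximum(0, x)

out = relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0]))  # [0, 0, 0, 1.5, 3.0]
```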
Pooling layers: e.g. max pooling, which keeps only the maximum value in each window of the activation map. This reduces the number of weights in later layers and also helps reduce overfitting.
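A minimal max-pooling sketch, with a made-up 4x4 feature map: each non-overlapping 2x2 window is replaced by its maximum, halving each spatial dimension.

```python
import numpy as np

def max_pool(feature_map, size=2, stride=2):
    """Keep only the largest value in each size x size window, shrinking
    the map (fewer weights downstream, less overfitting)."""
    h = (feature_map.shape[0] - size) // stride + 1
    w = (feature_map.shape[1] - size) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            window = feature_map[i * stride:i * stride + size,
                                 j * stride:j * stride + size]
            out[i, j] = window.max()
    return out

fm = np.array([[1, 3, 2, 1],
               [4, 6, 5, 0],
               [7, 2, 9, 8],
               [1, 0, 3, 4]], dtype=float)
pooled = max_pool(fm)  # [[6, 5], [7, 9]]
```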
Dropout layers: these help avoid overfitting. During training, a dropout layer randomly zeroes out some of the activations in a layer, so the network cannot rely too heavily on any single feature. Even if some features are missing, the network still works, which makes the model more robust.
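A sketch of "inverted" dropout, one common way this is implemented (the scaling by 1/(1-p) is an assumption of that variant, not stated in the notes): each activation is dropped with probability p during training, and the survivors are scaled up so the expected magnitude stays the same at test time.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, training=True):
    """Inverted dropout: during training, zero each activation with
    probability p and scale survivors by 1/(1-p); at test time, pass
    activations through unchanged."""
    if not training:
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

a = np.ones(10)
dropped = dropout(a, p=0.5)            # each entry is either 0.0 or 2.0
passed = dropout(a, training=False)    # unchanged at test time
```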
Fully connected layers: these map the output of the previous activation layers onto the output classes. That is, they either assign the input to one of the classes or give the probability that an image belongs to each class.
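The class-probability mapping can be sketched as a weight matrix applied to the flattened features, followed by a softmax. The feature size (8) and class count (3) here are arbitrary, and the random weights stand in for learned ones.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def fully_connected(features, weights, bias):
    """Map flattened features to one score per class; softmax turns the
    scores into class probabilities that sum to 1."""
    return softmax(weights @ features + bias)

rng = np.random.default_rng(1)
features = rng.random(8)         # flattened activations (hypothetical size)
weights = rng.random((3, 8))     # 3 output classes (hypothetical)
bias = np.zeros(3)
probs = fully_connected(features, weights, bias)
predicted_class = int(np.argmax(probs))
```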