
CNN Layers

Reference: https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/

Different Layers in a CNN:

A classic CNN has the following layers: Input -> Conv -> ReLU -> Pool -> Conv -> ReLU -> Pool -> Fully Connected

Convolution Layer:
The first layer in a convolutional neural network is always a convolution layer. A convolution can be thought of as a filter (also called a neuron or kernel). Imagine a flashlight shining on a small patch of the image and sliding across it; the flashlight itself is an array of numbers. As the filter moves over the image, the numbers in the image patch are multiplied element-wise with the numbers in the filter. This filter helps the network learn the features of an image. The first convolution layers learn basic features like edges and curves; deeper layers learn more complex features, like the paws or legs of a dog, or a patch of pink. The more convolution layers there are, the more aspects of the image the network can learn.

The matrix we get by sliding the filter over the image and multiplying it element-wise with each receptive field is called the activation map (or feature map).
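The flashlight analogy above can be sketched in a few lines of NumPy. This is a minimal single-channel convolution with stride 1 and no padding; the image and the vertical-edge filter below are made-up examples, not from the referenced guide.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel (the 'flashlight') over the image; at each
    position, multiply element-wise with the receptive field and sum."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            receptive_field = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(receptive_field * kernel)
    return out  # the activation map

# A toy vertical-edge filter applied to a tiny image with one edge
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[1, -1],
                   [1, -1]], dtype=float)
print(conv2d(image, kernel))
```

The output is large in magnitude only where the filter sits on the edge between the dark and bright columns, which is exactly how a convolution layer "detects" a feature.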


ReLU: a non-linear activation function, f(x) = max(0, x). It zeroes out negative values and leaves positive values unchanged.
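ReLU is simple enough to show in one line:

```python
import numpy as np

def relu(x):
    # Element-wise max(0, x): negatives become 0, positives pass through
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 3.0])))  # [0. 0. 0. 3.]
```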

Pooling layers: e.g. max pooling, which keeps only the maximum value within each window of the input. This reduces the number of weights in later layers and also helps reduce overfitting.
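A minimal sketch of 2x2 max pooling with stride 2, assuming the input's height and width are divisible by the window size:

```python
import numpy as np

def max_pool2d(x, size=2):
    """Non-overlapping max pooling: split the input into size x size
    windows and keep only the maximum of each window."""
    h, w = x.shape
    windows = x.reshape(h // size, size, w // size, size)
    return windows.max(axis=(1, 3))

x = np.array([[1, 3, 2, 4],
              [5, 6, 7, 8],
              [3, 2, 1, 0],
              [1, 2, 3, 4]], dtype=float)
print(max_pool2d(x))  # [[6. 8.]
                      #  [3. 4.]]
```

Each 2x2 window collapses to one number, so a 4x4 map becomes 2x2: a 4x reduction in the data the next layer has to process.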

Dropout layers: this layer helps to avoid overfitting in the neural network model. During training it randomly drops (zeroes out) some of the activations in the layer, so the model cannot become overly dependent on any particular feature. Even if some features are missing, the network still works fine.
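One common way to implement this is "inverted" dropout, sketched below under the assumption of a drop probability p = 0.5; surviving activations are scaled by 1/(1-p) so their expected value is unchanged, and the layer does nothing at test time:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, training=True):
    """Randomly zero each activation with probability p during training.
    Survivors are scaled by 1/(1-p) (inverted dropout); at test time
    the layer is a no-op."""
    if not training:
        return x
    mask = rng.random(x.shape) >= p   # True = keep this activation
    return x * mask / (1 - p)

activations = np.ones(8)
print(dropout(activations))                    # each value is 0.0 or 2.0
print(dropout(activations, training=False))    # unchanged at test time
```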

Fully Connected Layers: these layers map the output of the last activation/pooling layer onto the output classes. That is, they either assign the image to a class directly or give the probability that the image belongs to each class.
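A fully connected layer is just a matrix multiply plus a bias, usually followed by softmax to turn the scores into class probabilities. A minimal sketch, with made-up sizes (8 flattened features, 3 classes) and random weights standing in for trained ones:

```python
import numpy as np

def softmax(z):
    z = z - z.max()        # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def fully_connected(features, W, b):
    # Every output unit is connected to every input feature: W @ x + b
    return W @ features + b

rng = np.random.default_rng(0)
features = rng.random(8)            # flattened activations from the last pool layer
W = rng.standard_normal((3, 8))     # hypothetical trained weights, 3 classes
b = np.zeros(3)

probs = softmax(fully_connected(features, W, b))
print(probs)        # one probability per class, summing to 1
```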

