Posts

Showing posts from November, 2017

Coursera Deep Learning Course 2

Training/Dev/Test Sets
What are the training set, dev (validation) set, and test set? In traditional methodology, when we have a small dataset we can use a 60-20-20 ratio for the training, dev, and test sets. When we have big data, it is fine for the dev or test set to be less than 10 or 20 percent of the data; even a 98-1-1 ratio is acceptable. One rule of thumb: the test set and dev set should come from the same distribution.

Bias and Variance
Bias means a high error rate on the training set. It may be due to underfitting. To reduce it we can change the neural network architecture, for example the network size, or train for more iterations. Variance means a high error rate on the dev set. This may be due to overfitting the data, and can be reduced by adding more data or using regularization. The bias-variance trade-off means reducing one without increasing the other.

Regularization is used to reduce variance. It may hurt bias, and bias may increase a little, but not by much if we have a bigger network...
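As a small illustrative sketch (not taken from the course itself), the variance-reducing effect of L2 regularization can be seen in closed-form ridge regression, where the penalty term shrinks the learned weights:

```python
import numpy as np

def ridge_weights(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y.
    lam = 0 gives ordinary least squares; larger lam shrinks the
    weights, trading a little extra bias for lower variance."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=20)

w_unreg = ridge_weights(X, y, 0.0)
w_reg = ridge_weights(X, y, 10.0)
# The L2 penalty pulls the weight vector toward zero:
print(np.linalg.norm(w_reg) < np.linalg.norm(w_unreg))  # True
```

The same idea carries over to neural networks: adding a lambda * ||W||^2 term to the cost keeps the weights small, which reduces overfitting on the dev set.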

Coursera Course 3 Structuring Machine Learning Projects

Week One, Video One - Why ML Strategy
Why should we care about ML strategy? When we try to improve the performance of a system, there are many things we could try:
- more data
- more diverse data
- train the algorithm longer with gradient descent
- use another optimization algorithm, like Adam
- use a bigger or smaller network, depending on the requirement
- use dropout
- add L2 regularization
- change network architecture parameters such as the number of hidden units, the activation function, etc.

Video Two - Orthogonalization
Orthogonalization means that in a deep learning network we can change/tune many things, e.g. hyperparameters, to get better performance. The most effective practitioners know exactly what to tune in order to achieve a particular effect. Every problem has its own solution; don't mix up the problems and the solutions. So first we should find out where the problem is, whether it is with training ...
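One of the knobs listed above, dropout, can be sketched in a few lines of NumPy. This is a generic illustration of inverted dropout, not code from the course:

```python
import numpy as np

def inverted_dropout(a, keep_prob, rng):
    """Inverted dropout: zero out each unit with probability
    (1 - keep_prob), then scale the survivors by 1/keep_prob so the
    expected activation is unchanged at training time."""
    mask = (rng.random(a.shape) < keep_prob).astype(a.dtype)
    return a * mask / keep_prob

rng = np.random.default_rng(0)
a = np.ones((1000,))
out = inverted_dropout(a, 0.8, rng)
# Roughly 80% of the units survive and are scaled by 1/0.8 = 1.25,
# so the mean activation stays close to 1.0.
```

Because different units are dropped on every pass, the network cannot rely on any single feature, which acts as a regularizer much like the L2 penalty.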

BENIGN or MALIGNANT Cancer Classification

import numpy as np
#import tensorflow as tf
import os
from random import shuffle
import cv2
import matlab
from tqdm import tqdm
import dicom as pdicom
from glob import glob
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
import scipy.ndimage
from skimage import morphology
from skimage import measure
from skimage.transform import resize
from sklearn.cluster import KMeans
from plotly import __version__
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
from plotly.tools import FigureFactory as FF
from plotly.graph_objs import *
import matplotlib.pyplot as plt

init_notebook_mode(connected=True)

TRAIN_DIR = '/home/naima.v/mc/CancerImages/Calc_Labelled_Train1'
TEST_DIR = '/home/naima.v/mc/CancerImages/Calc_Labelled_Test1'
IMG_SIZE = 50
LR = 1e-3
MODEL_NAME = 'CANCERDET-{}-{}.model2'.format(LR, '6conv-basic')

def readDCMImg(path):
    g = glob(path + '/*.dcm')
    #print ("Total of %d DICOM images.\nFir...
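The excerpt above cuts off before the preprocessing that squeezes each slice down to IMG_SIZE. A hedged, self-contained sketch of that kind of step is below; it uses plain NumPy nearest-neighbour indexing as a stand-in for cv2.resize / skimage resize, and the function names here are illustrative, not from the original script:

```python
import numpy as np

IMG_SIZE = 50  # same target size as in the script above

def resize_nearest(img, size=IMG_SIZE):
    """Nearest-neighbour resize of a 2-D grayscale array to (size, size).
    A minimal stand-in for cv2.resize or skimage.transform.resize."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def normalize(img):
    """Scale pixel intensities to [0, 1] before feeding the network."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

# A fake 100x120 DICOM slice of uint16 intensities:
slice_px = np.arange(100 * 120, dtype=np.uint16).reshape(100, 120)
prepped = normalize(resize_nearest(slice_px))
print(prepped.shape)  # (50, 50)
```

Each preprocessed slice then becomes one 50x50 float array in the training set.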

CNN Layers

Reference: https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/ Different Layers in a CNN: A classic CNN has the following layers: Input -> Conv -> ReLU -> Pool -> Conv -> ReLU -> Pool -> Fully Connected. Convolution Layer: The first layer in a convolutional neural network is always a convolution layer. We can think of a convolution as a filter/neuron/kernel. A convolution can be pictured as a flashlight shining on part of an image: the lit region keeps sliding over the image, and the flashlight itself is represented as an array of numbers. As the filter slides over the image, the numbers in the image are multiplied elementwise with the numbers in the flashlight, or FILTER. This filter helps us learn the features of an image. The first convolution layers learn basic features like edges and curves. Deeper layers learn more complex features, like the paws or legs of a dog, or a pink color, etc....
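The sliding elementwise multiply-and-sum described above can be written out directly. This is a generic naive sketch (a "valid" cross-correlation, which is what most deep-learning frameworks call convolution), with a hand-made vertical-edge filter to show how an early conv layer detects edges:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation): slide the
    kernel over the image and take the elementwise product-sum at
    each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# Dark left half, bright right half: a vertical edge at column 3.
image = np.zeros((5, 5))
image[:, 3:] = 1.0
edge_kernel = np.array([[-1.0, 1.0]])  # responds to left-to-right change
response = conv2d_valid(image, edge_kernel)
print(response[:, 2])  # the column at the edge lights up: [1. 1. 1. 1. 1.]
```

In a trained network the kernel values are learned, not hand-set, but the mechanics of the sliding window are exactly this.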

Converting DICOM images into JPG Format in Centos

I wanted to work on medical image classification using deep learning. The image dataset was in .dcm format, so the following steps were performed to convert the images to JPG format, using the ImageMagick software. http://www.ofzenandcomputing.com/batch-convert-image-formats-imagemagick/ ImageMagick was installed in CentOS by downloading the RPMs and installing its libraries:

rpm -Uvh ImageMagick-libs-7.0.7-10.x86_64.rpm
rpm -Uvh ImageMagick-7.0.7-10.x86_64.rpm

After installation, change into the directory containing the images to be converted and execute:

mogrify -format jpg *.dcm

Now the .dcm images are converted to JPG format.

Cat Vs Dog Classification Using TensorFlow CNN

This code is a copy of the work at https://www.kaggle.com/sentdex/full-classification-example-with-convnet/notebook. I have tried this code and executed it successfully. Description: The dataset (training and test) is taken from the Kaggle Dogs vs. Cats competition: https://www.kaggle.com/c/dogs-vs-cats-redux-kernels-edition/data. The training data is encoded into an array containing the features of each image plus its corresponding label; the functions used for this are label_img() and create_training_data(). Similar preprocessing is done for the test dataset: the test images are stored as a NumPy array with their corresponding IDs, using the function process_test_data(). NumPy is used for preprocessing the image data into arrays. Using TensorFlow/tflearn, a DNN model is created. It has six convolutional layers, each followed by a max-pooling layer. The activation function used is ReLU. Then there is one fully connected layer with dropout, and then softmax regression is also ...
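A hedged sketch of what label_img() likely does (the exact implementation is in the linked Kaggle kernel; the filename convention 'cat.123.jpg' / 'dog.456.jpg' and the one-hot ordering [cat, dog] are assumptions here):

```python
import numpy as np

def label_img(img_name):
    """Map a Kaggle-style filename like 'cat.0.jpg' or 'dog.12.jpg'
    to a one-hot label: [1, 0] = cat, [0, 1] = dog."""
    word = img_name.split('.')[0]
    if word == 'cat':
        return np.array([1, 0])
    elif word == 'dog':
        return np.array([0, 1])
    raise ValueError('unexpected filename: ' + img_name)

print(label_img('cat.3.jpg'))  # [1 0]
print(label_img('dog.7.jpg'))  # [0 1]
```

create_training_data() then pairs each resized image array with this one-hot label, which is the format the softmax output layer of the tflearn DNN expects.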