This code is a copy of the work at https://www.kaggle.com/sentdex/full-classification-example-with-convnet/notebook. I have run this code and it executed successfully.
Description:
The data sets (training and test) are taken from the Kaggle Dogs vs. Cats competition: https://www.kaggle.com/c/dogs-vs-cats-redux-kernels-edition/data
The training data is encoded into an array containing the features of each image along with its corresponding label.
The functions used for this are label_img() and create_training_data(). Similar preprocessing is done for the test data set.
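The labelling step is just a one-hot encoding derived from the file name; a minimal sketch of how label_img() works (file names in the Kaggle training set look like 'cat.123.jpg'):

```python
def label_img(img):
    # The class name is the third-from-last dot-separated field, e.g. 'cat.123.jpg'.
    word_label = img.split('.')[-3]
    if word_label == 'cat':
        return [1, 0]   # one-hot: cat
    elif word_label == 'dog':
        return [0, 1]   # one-hot: dog

print(label_img('cat.123.jpg'))  # [1, 0]
print(label_img('dog.7.jpg'))    # [0, 1]
```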
The test images are stored as NumPy arrays along with their corresponding IDs; the function used is process_test_data().
NumPy is used to preprocess the image data into arrays.
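The reshaping NumPy does before the data reaches the network can be sketched with dummy images (np.zeros stands in for the grayscale arrays cv2 would return):

```python
import numpy as np

IMG_SIZE = 50

# Dummy stand-ins for the resized grayscale images from cv2.
images = [np.zeros((IMG_SIZE, IMG_SIZE), dtype=np.uint8) for _ in range(3)]

# Stack into one array and add the single-channel axis the network expects.
X = np.array(images).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
print(X.shape)  # (3, 50, 50, 1)
```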
Using TensorFlow via tflearn, a DNN model is created. It has six convolutional layers, each followed by a max-pooling layer; the activation function used is ReLU. Then comes one fully connected layer with dropout, followed by a softmax output layer.
Once the model is defined, we can split the training data into a training set and a validation set.
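The split itself is plain list slicing on the already-shuffled data (the kernel holds out the last 500 samples); a sketch with stand-in data:

```python
# Stand-in for the shuffled list of (image, label) pairs.
train_data = list(range(2000))

# Last 500 samples become the validation set.
train = train_data[:-500]
val = train_data[-500:]

print(len(train), len(val))  # 1500 500
```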
Then the model is fitted on these data sets. The number of epochs can be altered as needed; an epoch is one full pass through the training data.
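As a rough sanity check on training time, the number of optimiser steps scales with the number of epochs. The sample count below is illustrative, and the batch size of 64 is an assumption (tflearn's default when none is specified):

```python
import math

num_samples = 24500   # illustrative: 25,000 images minus a 500-image validation split
batch_size = 64       # assumed tflearn default
n_epoch = 4

steps_per_epoch = math.ceil(num_samples / batch_size)
total_steps = steps_per_epoch * n_epoch
print(steps_per_epoch, total_steps)  # 383 1532
```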
Once the model is trained, we can test it on the unseen test data. The corresponding images can be visualised using matplotlib.
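Turning the softmax output into a class name is a single argmax. The to_label() helper below is hypothetical (the kernel does this inline), but it mirrors the same logic on the [P(cat), P(dog)] vector returned by model.predict:

```python
import numpy as np

def to_label(model_out):
    # model_out is [P(cat), P(dog)]; index 1 winning means 'Dog'.
    return 'Dog' if np.argmax(model_out) == 1 else 'Cat'

print(to_label([0.2, 0.8]))  # Dog
print(to_label([0.9, 0.1]))  # Cat
```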
The model should be saved so it can be reloaded and improved later.
Output Obtained: Visualization
import numpy as np
# import tensorflow as tf
import os
from random import shuffle
import cv2
from tqdm import tqdm

TRAIN_DIR = '/home/naima.v/mc/CancerImages/train'
TEST_DIR = '/home/naima.v/mc/CancerImages/test'
IMG_SIZE = 50
LR = 1e-3
MODEL_NAME = 'DOGSVSCATS-{}-{}.model'.format(LR, '2conv-basic')

def label_img(img):
    # File names look like 'cat.0.jpg' / 'dog.0.jpg'; one-hot encode the class.
    word_label = img.split('.')[-3]
    if word_label == 'cat':
        return [1, 0]
    elif word_label == 'dog':
        return [0, 1]

def create_training_data():
    training_data = []
    for img in tqdm(os.listdir(TRAIN_DIR)):
        label = label_img(img)
        path = os.path.join(TRAIN_DIR, img)
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
        training_data.append([np.array(img), np.array(label)])
    shuffle(training_data)
    np.save('training_data1.npy', training_data)
    return training_data

def process_test_data():
    testing_data = []
    for img in tqdm(os.listdir(TEST_DIR)):
        path = os.path.join(TEST_DIR, img)
        img_num = img.split('.')[0]
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
        testing_data.append([np.array(img), np.array(img_num)])
    shuffle(testing_data)
    np.save('test_data.npy', testing_data)
    return testing_data

train_data = create_training_data()

import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression

convnet = input_data(shape=[None, IMG_SIZE, IMG_SIZE, 1], name='input')

# Six conv layers, each followed by max pooling.
convnet = conv_2d(convnet, 32, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 64, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 32, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 64, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 32, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 64, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)

convnet = fully_connected(convnet, 1024, activation='relu')
convnet = dropout(convnet, 0.8)
convnet = fully_connected(convnet, 2, activation='softmax')
convnet = regression(convnet, optimizer='adam', learning_rate=LR,
                     loss='categorical_crossentropy', name='targets')

model = tflearn.DNN(convnet, tensorboard_dir='log')

if os.path.exists('{}.meta'.format(MODEL_NAME)):
    model.load(MODEL_NAME)
    print('Model loaded')

# Hold out the last 500 shuffled samples as a validation set.
train = train_data[:-500]
test = train_data[-500:]

X = np.array([i[0] for i in train]).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
Y = [i[1] for i in train]
test_x = np.array([i[0] for i in test]).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
test_y = [i[1] for i in test]

model.fit({'input': X}, {'targets': Y}, n_epoch=4,
          validation_set=({'input': test_x}, {'targets': test_y}),
          snapshot_step=50000, show_metric=True, run_id=MODEL_NAME)
model.save(MODEL_NAME)

import matplotlib.pyplot as plt

test_data = process_test_data()

fig = plt.figure()
for num, data in enumerate(test_data[:12]):
    img_num = data[1]
    img_data = data[0]
    y = fig.add_subplot(3, 4, num + 1)
    orig = img_data
    data = img_data.reshape(IMG_SIZE, IMG_SIZE, 1)
    model_out = model.predict([data])[0]
    if np.argmax(model_out) == 1:
        str_label = 'Dog'
    else:
        str_label = 'Cat'
    y.imshow(orig, cmap='gray')
    plt.title(str_label)
    y.axes.get_xaxis().set_visible(False)
    y.axes.get_yaxis().set_visible(False)
plt.show()
