반응형
In [1]:
import gc
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# 교차검증 lib
from sklearn.model_selection import StratifiedKFold,train_test_split
from tqdm import tqdm_notebook
from sklearn.metrics import accuracy_score, roc_auc_score
#모델 lib
from keras.datasets import mnist
from keras.utils.np_utils import to_categorical
from keras.preprocessing.image import ImageDataGenerator, load_img
from keras.models import Sequential, Model
from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
from keras.layers import Dense, Dropout, Flatten, Activation, Conv2D, AveragePooling2D,BatchNormalization, MaxPooling2D
from keras import layers
from keras.optimizers import Adam,RMSprop, SGD
#모델
from keras.applications import VGG16, VGG19, resnet50
#경고메세지 무시
import warnings
warnings.filterwarnings(action='ignore')
import os
import gc
AlexNet¶
이번에는 AlexNet에 대하여 알아보겠습니다. AlexNet 12년에 등장해서 딥러닝을 부흥시켰습니다. 그이유는 최초로 ImageNet대회에서 CNN으로 우승했기 때문입니다. 구조는 LeNet-5와 비슷하고 레이어만 증가했습니다.
AlexNex이 개발되었을 시대에는 컴퓨팅파워가 낮아 3GB의 GPU밖에 없어 AlexNet으로 ImageNet을 학습시킬 수 없었습니다. 그래서 model parallel을 통해 두개의 GPU을 연결해 병렬로 처리했습니다. 그래서 자세히 그림을보시면 conv1에서 depth가 96이아니라 48입니다.
Data load¶
- AlexNet을 통해 개와 고양이를 분류하는 이진분류 문제를 풀어보겠습니다.
In [2]:
# https://www.kaggle.com/bulentsiyah/dogs-vs-cats-classification-vgg16-fine-tuning
filenames = os.listdir("../input/dogs-vs-cats/train/train")
categories = []
for filename in filenames:
category = filename.split('.')[0]
if category == 'dog':
categories.append(1)
else:
categories.append(0)
train = pd.DataFrame({
'filename': filenames,
'category': categories
})
train.head()
Out[2]:
Visualization¶
In [3]:
#Visualizing the data
sample = filenames[2]
image = load_img("../input/dogs-vs-cats/train/train/"+sample)
plt.imshow(image)
plt.show()
train/test data Split¶
In [4]:
train["category"] = train["category"].astype('str')
its = np.arange(train.shape[0])
train_idx, test_idx = train_test_split(its, train_size = 0.8, random_state=42)
df_train = train.iloc[train_idx, :]
X_test = train.iloc[test_idx, :]
its = np.arange(df_train.shape[0])
train_idx, val_idx = train_test_split(its, train_size = 0.8, random_state=42)
X_train = df_train.iloc[train_idx, :]
X_val = df_train.iloc[val_idx, :]
print(X_train.shape)
print(X_val.shape)
print(X_test.shape)
X_train['category'].value_counts()
Out[4]:
AlexNet- Details/Retrospectives¶
- 최초의 ReLU를 사용했으며
- local response normalization사용하였는데 현재는 사용하지 않습니다. 현재는 Batch Normalizetion을 사용합니다.
- heavy data augmentation
- dropout 0.5
- batch size 128
- SGD Momentum 0.9
- Learning rate은 1e-2시작하며 val accuracy가 증가하지 않으면 10으로 나누어 줍니다.
- L2 weight decay 5e-4
In [5]:
# Parameter
image_size = 227
img_size = (image_size, image_size)
nb_train_samples = len(X_train)
nb_validation_samples = len(X_val)
nb_test_samples = len(X_test)
epochs = 20
#batch size 128
batch_size =128
# Define Generator config
train_datagen =ImageDataGenerator(
rescale=1./255,
rotation_range=10,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True
)
val_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
In [6]:
#generator
train_generator = train_datagen.flow_from_dataframe(
dataframe=X_train,
directory="../input/dogs-vs-cats/train/train",
x_col = 'filename',
y_col = 'category',
target_size = img_size,
color_mode='rgb',
class_mode='binary',
batch_size=batch_size,
seed=42
)
validation_generator = val_datagen.flow_from_dataframe(
dataframe=X_val,
directory="../input/dogs-vs-cats/train/train",
x_col = 'filename',
y_col = 'category',
target_size = img_size,
color_mode='rgb',
class_mode='binary',
batch_size=batch_size,
)
test_generator = test_datagen.flow_from_dataframe(
dataframe=X_test,
directory="../input/dogs-vs-cats/train/train",
x_col = 'filename',
y_col=None,
target_size= img_size,
color_mode='rgb',
class_mode=None,
batch_size=batch_size,
shuffle=False
)
Model - AlexNet¶
In [7]:
#INPUT
input_shape = (227, 227, 3)
model = Sequential()
#CONV1
model.add(Conv2D(96, (11, 11), strides=4,padding='valid', input_shape=input_shape))
#MAX POOL1
model.add(MaxPooling2D(pool_size=(3, 3), strides=2))
#NORM1 Local response normalization 사용하였는데 현재는 사용하지 않습니다. 현재는 Batch Normalizetion을 사용합니다.
model.add(BatchNormalization())
#CONV2
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
#MAX POOL1
model.add(MaxPooling2D(pool_size=(3, 3), strides=2))
#NORM2
model.add(BatchNormalization())
#CONV3
model.add(Conv2D(384, (3, 3),strides=1, activation='relu', padding='same'))
#CONV4
model.add(Conv2D(384, (3, 3),strides=1, activation='relu', padding='same'))
#CONV5
model.add(Conv2D(256, (3, 3),strides=1, activation='relu', padding='same'))
#MAX POOL3
model.add(MaxPooling2D(pool_size=(3, 3), strides=2))
model.add(Flatten())
#FC6 예측 class가 적어 FC layer을 조정했습니다.
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
#FC7
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
#FC8 이진 분류이기 때문에 sigmoid
model.add(Dense(1, activation='sigmoid'))
# SGD Momentum 0.9, L2 weight decay 5e-4
optimizer = SGD(lr=0.01, decay=5e-4, momentum=0.9)
model.compile(loss='binary_crossentropy',
optimizer=optimizer, metrics=['accuracy'])
model.summary()
Train / predict¶
Train¶
In [8]:
def get_steps(num_samples, batch_size):
if (num_samples % batch_size) > 0 :
return (num_samples // batch_size) + 1
else :
return num_samples // batch_size
In [9]:
%%time
from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
#model path
MODEL_SAVE_FOLDER_PATH = './model/'
if not os.path.exists(MODEL_SAVE_FOLDER_PATH):
os.mkdir(MODEL_SAVE_FOLDER_PATH)
model_path = MODEL_SAVE_FOLDER_PATH + 'AlexNet.hdf5'
patient = 5
callbacks_list = [
# Learning rate 1e-2, reduced by 10 manually when val accuracy plateaus
ReduceLROnPlateau(
monitor = 'val_accuracy',
#콜백 호출시 학습률(lr)을 10으로 나누어줌
factor = 0.1,
#5epoch 동안 val_accuracy가 상승하지 않으면 lr 조정
patience = patient,
#최소학습률
min_lr=0.00001,
verbose=1,
mode='min'
),
ModelCheckpoint(
filepath=model_path,
monitor ='val_accuracy',
# val_loss가 좋지 않으면 모델파일을 덮어쓰지 않는다
save_best_only = True,
verbose=1,
mode='min') ]
In [10]:
history = model.fit_generator(
train_generator,
steps_per_epoch = get_steps(nb_train_samples, batch_size),
epochs=epochs,
validation_data = validation_generator,
validation_steps = get_steps(nb_validation_samples, batch_size),
callbacks = callbacks_list
)
gc.collect()
Out[10]:
predict¶
In [11]:
%%time
test_generator.reset()
prediction = model.predict_generator(
generator = test_generator,
steps = get_steps(nb_test_samples, batch_size),
verbose=1
)
print('Test accuracy : ', roc_auc_score(X_test['category'].astype('int'), prediction, average='macro'))
acc / loss plot¶
In [12]:
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
epochs = range(len(acc))
plt.plot(epochs, acc, label='Training acc')
plt.plot(epochs, val_acc, label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.ylim(0.5,1)
plt.show()
In [13]:
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.plot(epochs, loss, label='Training loss')
plt.plot(epochs, val_loss, label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.ylim(0,0.5)
plt.show()
Result¶
- testset의 정확도는 96.6%나왔습니다.
- loss는 점점 떨어지지만 조금 불안정해보입니다
In [1]:
from IPython.core.display import display, HTML
display(HTML("<style>.container {width:90% !important;}</style>"))
반응형
'데이터분석 > vision' 카테고리의 다른 글
VGGNet using keras (0) | 2019.11.07 |
---|---|
LRN(Local Response Normalization) 이란 무엇인가?(feat. AlexNet) (0) | 2019.11.07 |
LeNet-5 using keras (0) | 2019.10.30 |
Lecture 9: CNN Architectures (0) | 2019.10.27 |
Lecture 7: Training Neural Networks, part I (0) | 2019.10.17 |