You can get the dataset from Kaggle.
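If you have the Kaggle CLI installed and an API token configured, the download can be scripted roughly as follows; note that "username/gender-dataset" is a placeholder slug, not the real dataset identifier, so substitute the actual path from the dataset's Kaggle page:
# minimal sketch, assuming the Kaggle CLI is set up; the slug below is a placeholder
!kaggle datasets download -d username/gender-dataset
!unzip -q gender-dataset.zip -d Gender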
Importing the Needed Libraries
import os
from tensorflow.keras import layers
from tensorflow.keras import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import tensorflow as tf
import matplotlib.pyplot as plt
import PIL
Getting the Data Ready
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=40,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True,
                                   fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1.0/255)

train_generator = train_datagen.flow_from_directory('Gender/Train',
                                                    batch_size=256,
                                                    class_mode='binary',
                                                    target_size=(64, 64))

validation_generator = test_datagen.flow_from_directory('Gender/Validation',
                                                        batch_size=256,
                                                        class_mode='binary',
                                                        target_size=(64, 64))
Found 160000 images belonging to 2 classes.
Found 22598 images belonging to 2 classes.
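Because class_mode='binary', flow_from_directory assigns the two folders integer labels in alphabetical order. It is worth printing the mapping once so the 0/1 predictions later can be interpreted correctly:
# confirm which folder maps to 0 and which to 1
print(train_generator.class_indices)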
from tensorflow.keras.optimizers import Adam
model = tf.keras.models.Sequential([
    # 1st conv
    tf.keras.layers.Conv2D(96, (11, 11), strides=(4, 4), activation='relu', input_shape=(64, 64, 3)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(2, strides=(2, 2)),
    # 2nd conv
    tf.keras.layers.Conv2D(256, (11, 11), strides=(1, 1), activation='relu', padding="same"),
    tf.keras.layers.BatchNormalization(),
    # 3rd conv
    tf.keras.layers.Conv2D(384, (3, 3), strides=(1, 1), activation='relu', padding="same"),
    tf.keras.layers.BatchNormalization(),
    # 4th conv
    tf.keras.layers.Conv2D(384, (3, 3), strides=(1, 1), activation='relu', padding="same"),
    tf.keras.layers.BatchNormalization(),
    # 5th conv
    tf.keras.layers.Conv2D(256, (3, 3), strides=(1, 1), activation='relu', padding="same"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(2, strides=(2, 2)),
    # flatten
    tf.keras.layers.Flatten(),
    # FC layer 1
    tf.keras.layers.Dense(4096, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    # FC layer 2
    tf.keras.layers.Dense(4096, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    # output layer
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(
    optimizer=Adam(learning_rate=0.001),
    loss='binary_crossentropy',
    metrics=['accuracy']
)
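Before training, model.summary() is a quick sanity check on the output shapes and parameter count; in this architecture most of the weights sit in the two 4096-unit dense layers:
model.summary()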
hist = model.fit(train_generator,
                 validation_data=validation_generator,
                 steps_per_epoch=256,
                 validation_steps=50,
                 epochs=20)
Epoch 1/20
256/256 [==============================] - 1314s 5s/step - loss: 0.2968 - accuracy: 0.8741 - val_loss: 0.3489 - val_accuracy: 0.8210
Epoch 2/20
256/256 [==============================] - 1093s 4s/step - loss: 0.2798 - accuracy: 0.8811 - val_loss: 0.2943 - val_accuracy: 0.8783
Epoch 3/20
256/256 [==============================] - 1097s 4s/step - loss: 0.2710 - accuracy: 0.8875 - val_loss: 0.2179 - val_accuracy: 0.9102
Epoch 4/20
256/256 [==============================] - 1130s 4s/step - loss: 0.2650 - accuracy: 0.8893 - val_loss: 0.3015 - val_accuracy: 0.8678
Epoch 5/20
256/256 [==============================] - 1258s 5s/step - loss: 0.2559 - accuracy: 0.8936 - val_loss: 0.2607 - val_accuracy: 0.8902
Epoch 6/20
256/256 [==============================] - 1112s 4s/step - loss: 0.2493 - accuracy: 0.8971 - val_loss: 0.2665 - val_accuracy: 0.9080
Epoch 7/20
256/256 [==============================] - 1210s 5s/step - loss: 0.2477 - accuracy: 0.8955 - val_loss: 0.2039 - val_accuracy: 0.9232
Epoch 8/20
256/256 [==============================] - 1287s 5s/step - loss: 0.2406 - accuracy: 0.8992 - val_loss: 0.4048 - val_accuracy: 0.8284
Epoch 9/20
256/256 [==============================] - 1405s 5s/step - loss: 0.2344 - accuracy: 0.9031 - val_loss: 0.4894 - val_accuracy: 0.7862
Epoch 10/20
256/256 [==============================] - 1446s 6s/step - loss: 0.2321 - accuracy: 0.9033 - val_loss: 0.3033 - val_accuracy: 0.8535
Epoch 11/20
256/256 [==============================] - 1229s 5s/step - loss: 0.2307 - accuracy: 0.9046 - val_loss: 0.2363 - val_accuracy: 0.8936
Epoch 12/20
256/256 [==============================] - 1292s 5s/step - loss: 0.2294 - accuracy: 0.9057 - val_loss: 0.3867 - val_accuracy: 0.8329
Epoch 13/20
256/256 [==============================] - 1517s 6s/step - loss: 0.2258 - accuracy: 0.9075 - val_loss: 0.2036 - val_accuracy: 0.9230
Epoch 14/20
256/256 [==============================] - 1228s 5s/step - loss: 0.2202 - accuracy: 0.9086 - val_loss: 0.2143 - val_accuracy: 0.9089
Epoch 15/20
256/256 [==============================] - 1466s 6s/step - loss: 0.2150 - accuracy: 0.9110 - val_loss: 0.2121 - val_accuracy: 0.9166
Epoch 16/20
256/256 [==============================] - 1215s 5s/step - loss: 0.2120 - accuracy: 0.9134 - val_loss: 0.1996 - val_accuracy: 0.9265
Epoch 17/20
256/256 [==============================] - 1215s 5s/step - loss: 0.2110 - accuracy: 0.9141 - val_loss: 0.2726 - val_accuracy: 0.8954
Epoch 18/20
256/256 [==============================] - 1212s 5s/step - loss: 0.2123 - accuracy: 0.9126 - val_loss: 0.1670 - val_accuracy: 0.9328
Epoch 19/20
256/256 [==============================] - 1246s 5s/step - loss: 0.2066 - accuracy: 0.9164 - val_loss: 0.1860 - val_accuracy: 0.9166
Epoch 20/20
256/256 [==============================] - 1258s 5s/step - loss: 0.2052 - accuracy: 0.9153 - val_loss: 0.1808 - val_accuracy: 0.9268
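The History object returned above keeps the per-epoch metrics, so the learning curves can be plotted with the matplotlib import from earlier; a minimal sketch:
# plot training vs. validation accuracy across the 20 epochs
plt.plot(hist.history['accuracy'], label='train accuracy')
plt.plot(hist.history['val_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()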
Testing the Model
import numpy as np
from tensorflow.keras.preprocessing import image

# predicting images
path = "test2.jpg"
img = image.load_img(path, target_size=(64, 64))
x = image.img_to_array(img) / 255.0  # apply the same rescaling used during training
x = np.expand_dims(x, axis=0)
classes = model.predict(x, batch_size=1)
print(classes)
if classes[0][0] > 0.5:
    print("is a male")
else:
    print("is a female")
plt.imshow(img)
[[1.]]
is a male
import numpy as np
from tensorflow.keras.preprocessing import image

# predicting images
path = "img/test7.jpg"
img = image.load_img(path, target_size=(64, 64))
x = image.img_to_array(img) / 255.0  # apply the same rescaling used during training
x = np.expand_dims(x, axis=0)
classes = model.predict(x, batch_size=1)
print(classes)
if classes[0][0] > 0.5:
    print("is a male")
else:
    print("is a female")
plt.imshow(img)
[[0.]]
is a female
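To reuse the trained weights without repeating the 20-epoch run, the model can be saved to disk and loaded back later; a minimal sketch (the file name is arbitrary):
# save the trained model and reload it in a later session
model.save('gender_classifier.h5')
loaded_model = tf.keras.models.load_model('gender_classifier.h5')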