Keras stops learning at some point
I have a classification model with 3 classes; each class has about 300 real images. I trained a model on 2,000 augmented samples for 10 epochs. The model is okay, but it produces a lot of false negative predictions.
To improve the model, I increased the number of generated samples to 20,000 (the number of real images unchanged). In the 6th epoch the accuracy begins to deteriorate and eventually drops to 0.2:
339/666 [==============>...............] - ETA: 52s - loss: 0.2762 - acc: 0.9012
340/666 [==============>...............] - ETA: 52s - loss: 0.2757 - acc: 0.9014
341/666 [==============>...............] - ETA: 52s - loss: 0.2754 - acc: 0.9015
342/666 [==============>...............] - ETA: 52s - loss: nan - acc: 0.9014
343/666 [==============>...............] - ETA: 52s - loss: nan - acc: 0.8995
344/666 [==============>...............] - ETA: 52s - loss: nan - acc: 0.8976
345/666 [==============>...............] - ETA: 51s - loss: nan - acc: 0.8955
Is this overfitting?
Can I somehow stop it in real time without having to restart training from scratch? For example, is it possible to save the model after each epoch and keep the best one?
Or, at the very least, can Keras break off training early?
Solution
Your model is certainly not overfitting here.
After a certain number of iterations, your model stops learning (the accuracy curve flattens out).
To overcome this problem, you can:
- Add more data
- Adjust hyperparameters
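Note that your log shows the loss becoming `nan` mid-epoch, which usually points to numerical instability (e.g. exploding gradients) rather than a plateau. One common hyperparameter adjustment is lowering the learning rate and clipping gradients. Below is a minimal, self-contained sketch using `tf.keras`; the tiny Dense model, the input size of 32, and the learning rate of 1e-4 are illustrative placeholders, not tuned values:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.optimizers import Adam

# Stand-in model with 3 output classes, as in the question;
# substitute your real architecture here.
model = Sequential([
    Input(shape=(32,)),
    Dense(16, activation='relu'),
    Dense(3, activation='softmax'),
])

# A NaN loss mid-training often means exploding gradients. A smaller
# learning rate (1e-4 here, purely illustrative) combined with gradient
# clipping (clipnorm) usually stabilizes training.
model.compile(optimizer=Adam(learning_rate=1e-4, clipnorm=1.0),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

If the loss still diverges, halving the learning rate again is a cheap next experiment before changing the architecture.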
The keras library provides checkpointing functionality through its callback API. The ModelCheckpoint callback class lets you define where the model weight checkpoints are saved, how the files are named, and under what circumstances a checkpoint is created.
Using it, you can keep the best model seen across all epochs.
from keras.callbacks import ModelCheckpoint
"""
Your Code
"""
# 'weights.best.hdf5' is an example filename; in newer tf.keras versions
# the metric is logged as 'val_accuracy' rather than 'val_acc'.
filepath = 'weights.best.hdf5'
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1,
                             save_best_only=True, mode='max')
# Pass it to training: model.fit(..., callbacks=[checkpoint])
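To your last question: yes, Keras can also stop training early on its own, via the EarlyStopping callback from the same callback API. Below is a self-contained sketch using `tf.keras`; the tiny Dense model, random data, and `patience=2` are placeholders standing in for your real model, generators, and tuning:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

# Stand-in model and random data so the snippet runs on its own.
model = Sequential([
    Input(shape=(32,)),
    Dense(16, activation='relu'),
    Dense(3, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

x = np.random.rand(120, 32).astype('float32')
y = np.eye(3)[np.random.randint(0, 3, 120)]  # one-hot labels, 3 classes

callbacks = [
    # Overwrite the file only when validation accuracy improves,
    # so the best epoch's weights are always on disk.
    ModelCheckpoint('best_model.keras', monitor='val_accuracy',
                    save_best_only=True, mode='max'),
    # Stop training once val_accuracy has not improved for `patience`
    # epochs, and restore the best weights seen so far.
    EarlyStopping(monitor='val_accuracy', patience=2,
                  restore_best_weights=True),
]

history = model.fit(x, y, validation_split=0.2, epochs=5,
                    callbacks=callbacks, verbose=0)
```

Combining the two callbacks gives you both behaviors you asked about: the best model per epoch is saved, and training is cut short automatically instead of degrading for many more epochs.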