Different Validation Accuracy after run re-start - goes from 0.81 to 0.98

I have trained the following DL model:


Here are the training curves for the first training run:

When I restart training from the end of the previous run, the validation accuracy jumps from 0.81 to 0.98:

AFAIU this is a big problem. It does not make sense to me in terms of the training algorithm.

This is for project WEBINAR - Transfert Learning - VGG16 - VegData, Runs 53 and 55. VGG16 is set fully trainable (Trainable = 100).

Do you have shuffle on in the data configuration?

If shuffle was on, it is possible that the validation set in the next run (55) contains mostly data that was in the training set of the earlier run (53). In that case, the validation accuracy will jump to close to the training accuracy.
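Here is a minimal sketch (not DLS code, just an illustration with made-up sizes and seeds) of why reshuffling before a restarted run inflates validation accuracy with a 95%-5% split:

```python
# Simulate two runs that each reshuffle the dataset before splitting 95%/5%.
import random

samples = list(range(10_000))          # stand-in for the dataset rows
split = int(0.95 * len(samples))       # 95% train / 5% validation

def shuffled_split(seed):
    rows = samples[:]
    random.Random(seed).shuffle(rows)  # a fresh shuffle each run
    return set(rows[:split]), set(rows[split:])

train_53, _ = shuffled_split(seed=53)        # first run
_, val_55 = shuffled_split(seed=55)          # restarted run, new shuffle

leaked = len(val_55 & train_53) / len(val_55)
print(f"{leaked:.0%} of run-55 validation samples were trained on in run 53")
# Expect ~95%: the restarted run is "validating" mostly on already-seen data,
# so validation accuracy climbs toward training accuracy.
```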

OK, I have Shuffle Data on and use a 95%-5% split of the dataset. Isn’t there a way to restart the training with the same “shuffled” training/validation split?

I wish I could use K-fold cross-validation but there’s a bug in it.

If you want to keep the same split across multiple runs, then the shuffle option should be off in the data configuration. You can shuffle your dataset outside of Deep Learning Studio before uploading it (or just reshuffle train.csv in a Jupyter notebook and overwrite it), as in the sketch below.
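A minimal sketch of that one-off reshuffle using pandas, assuming the file is named train.csv as mentioned above; the seed value is arbitrary:

```python
# Shuffle train.csv once, outside DLS (e.g. in a Jupyter notebook), then keep
# the shuffle option off so the 95%-5% split stays identical across runs.
import pandas as pd

df = pd.read_csv("train.csv")
df = df.sample(frac=1, random_state=42).reset_index(drop=True)  # shuffle all rows once
df.to_csv("train.csv", index=False)                             # overwrite in place
```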

OK, I know about that possibility, but it’s not too handy to have to do it each time it’s needed.
AFAIU there will be a feature in the next version of DLS that will allow this?