INFERENCE - Dataset Inference - Which model is used? From which epoch?

When performing inference with a specific Run, which model is actually used if “Save Weights” was set to “end of epoch” during training? Is there a way to know at which epoch the model used for inference was saved, and what its performance was on the training and validation sets?

There is only one weight file per run. When “Save Weights” is set to “end of epoch”, the weight file is overwritten at every epoch. So if your training ran for X epochs, inference uses the weights saved at epoch X.
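To make those semantics concrete, here is a minimal, self-contained PyTorch sketch of what “end of epoch” saving amounts to; the toy model, data, and the run_weights.pt filename are made up for illustration and are not the tool’s actual internals:

```python
import torch
import torch.nn as nn

# Toy model and data, just to keep the sketch self-contained.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(32, 4), torch.randint(0, 2, (32,))

for epoch in range(10):
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
    # "end of epoch" semantics: a single weight file per run,
    # overwritten every epoch, so after training only the weights
    # from the last epoch remain on disk.
    torch.save(model.state_dict(), "run_weights.pt")  # hypothetical filename
```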

If you want to see the training and validation performance (and the last epoch number of a run), you can go to the Results tab and select that run.

OK, so the Best Accuracy option saves the weights of the model with the best validation accuracy seen during the whole run? If that is the case, is there a way to know at which epoch that model was saved, and what its performance was on the training and validation sets? Granted, if one knows the epoch one can find the performance values, but it would be nicer to find them in the Results and/or Inference tab.

Maybe the default value of “Save Weights on” should be “best accuracy” and not “end of epoch”. In most DL programs it is set to best accuracy, because people want to keep the model with the best possible predictive performance.

Yes, Best Accuracy saves the weights of the model with the best validation accuracy seen during the whole run.

The only way to find the epoch number is to look at the validation accuracy graph in the Results tab for that run. The epoch where validation accuracy is highest is the one you are looking for.
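For reference, here is a minimal sketch of what “Best Accuracy” saving amounts to, again with a made-up model, data, and filename; tracking the save epoch in your own code gives you directly what the Results graph gives you indirectly:

```python
import torch
import torch.nn as nn

# Toy setup, just to keep the sketch self-contained.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
x_tr, y_tr = torch.randn(64, 4), torch.randint(0, 2, (64,))
x_va, y_va = torch.randn(16, 4), torch.randint(0, 2, (16,))

best_acc, best_epoch = -1.0, -1  # start below any accuracy so epoch 0 saves
for epoch in range(10):
    optimizer.zero_grad()
    loss_fn(model(x_tr), y_tr).backward()
    optimizer.step()

    with torch.no_grad():
        val_acc = (model(x_va).argmax(dim=1) == y_va).float().mean().item()

    # "Best Accuracy" semantics: the single weight file is only
    # overwritten when validation accuracy improves.
    if val_acc > best_acc:
        best_acc, best_epoch = val_acc, epoch
        torch.save(model.state_dict(), "run_weights.pt")  # hypothetical filename

print(f"Saved weights come from epoch {best_epoch} (val acc {best_acc:.3f})")
```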

Regarding the default value, we thought about it, and there are cases where validation accuracy is not available (no validation dataset) or may not make sense (if the validation set is small).

We may revisit this default value in the future if more users recommend the change.

OK.

Right, in the case where there is no validation set, “end of epoch” can be the default; “best accuracy” shouldn’t actually be selectable then.

AFAIK, the most likely case is that there is a validation dataset that actually guides the training: it lets one test several values of the hyper-parameters and network architecture. If the validation set is too small, I am not sure it makes sense to have one at all; one might be better off putting all the samples in the training set?