Hello rajendra & others,
I’ve made some nice progress with the help of DLS. Along the way, some questions came up, which I have compiled below; hopefully you can answer them without much trouble.
-
Which variable is plotted on the X-axis in the results tab plots? I assume it is ‘batch’ for training plots and ‘epoch’ for validation plots. Is that right?
-
Should I randomize my train.csv, or is the ‘shuffle data’ function in the data tab enough?
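(For context: in case the built-in option isn’t enough, this is roughly how I would shuffle the CSV myself before importing it. A plain-Python sketch, assuming the first row of train.csv is a header; the function name is my own.)

```python
import csv
import random

def shuffle_csv(in_path, out_path, seed=42):
    """Shuffle the data rows of a CSV file, keeping the header row first."""
    with open(in_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)        # preserve the header
        rows = list(reader)          # read all data rows into memory
    random.Random(seed).shuffle(rows)  # seeded, so the shuffle is reproducible
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
```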
-
Do you have a hint on how to deal with unbalanced data? (I have one class where samples are very hard to make, while for the second class I can make virtually unlimited samples.)
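(To illustrate what I mean: one standard approach I know of is inverse-frequency class weights, so the rare class contributes as much to the loss as the common one. A minimal sketch, independent of DLS; the helper name is my own.)

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights: weight(c) = n_samples / (n_classes * count(c)).
    The rarer a class, the larger its weight in the loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}
```

For example, with three samples of class 0 and one of class 1, class 1 gets three times the weight of class 0.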
-
Does DLS reduce the learning rate automatically? Is there a way to reduce the LR according to the loss in DLS?
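(What I have in mind is reduce-on-plateau behaviour: cut the LR when the validation loss stops improving. A rough sketch of the logic, outside DLS; the class name and default values are my own choices.)

```python
class ReduceLROnPlateau:
    """Multiply the learning rate by `factor` after `patience` epochs
    without improvement in the monitored loss."""
    def __init__(self, lr, factor=0.5, patience=3, min_lr=1e-6):
        self.lr, self.factor = lr, factor
        self.patience, self.min_lr = patience, min_lr
        self.best = float("inf")  # best loss seen so far
        self.wait = 0             # epochs since last improvement

    def step(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr
```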
-
How can I decide whether to save weights on lowest loss or on best accuracy?
-
With my dataset (2 classes, 11k samples), my results usually show quite extreme spikes every time a new epoch starts. Is this normal behaviour? (See image below.)
-
In general, does this look like a successful training run? Can I be sure it isn’t overfitting?
-
This was achieved with Inception preprocessing and a model created by the AutoML function. Does it make sense to now try using an actual Inception or ResNet layer inside the model?
-
I heard that DLS is not going to get another free version and that there will be a professional version instead. Is that correct?
-
Is there official documentation for DLS?
-
How can I stay up to date on DLS development? The blog on Deep Cognition seems dead to me (the last update there was for DLS 2.2.0).
Thank you for your answers.
As a feature request: I would like to be able to pan and zoom all four plots in the results tab simultaneously, instead of just one at a time.