Trouble using a Custom Pre-Trained model


#1

DLS allows adding a custom model to the list of pre-trained models. I added one that was saved with DLS using Download Trained Model in the Inference/Deploy - Download tab. The download worked fine, and loading the custom model worked well: I see it listed among the pre-trained model layers available in DLS.
[screenshot of the custom model in the pre-trained layer list]

When I start a new project that uses the custom pre-trained model, I can add the layer to the model without problems. But then I get a model validation error:
[screenshot of the model validation error]


#2

Could you share this custom model’s .yaml file here?


#3

The files were produced by Download Trained Model in the Inference/Deploy tab, for Run16 of the project EFE_Leaf_AutoML.

Here’s the content of the .yaml file:

data:
  dataset: {name: Leaf_96x96, samples: 990, type: private}
  datasetLoadOption: batch
  kfold: 1
  mapping:
    id:
      options: {}
      port: ''
      shape: ''
      type: Categorical
    image:
      options: {Augmentation: true, Height: 28, Normalization: false, Resize: false,
        Scaling: '255', Width: 28, height_shift_range: '0', horizontal_flip: true,
        pretrained: None, rotation_range: '180', shear_range: '0', vertical_flip: false,
        width_shift_range: '0'}
      port: InputPort0
      shape: ''
      type: Image
    species:
      options: {}
      port: OutputPort0
      shape: ''
      type: Categorical
  numPorts: 1
  samples: {split: 1, test: 0, training: 792, validation: 198}
  shuffle: false
model:
  connections:
  - {source: convolution2d_426, target: maxpooling2d_221}
  - {source: activation_112, target: dropout_209}
  - {source: dense_185, target: activation_110}
  - {source: convolution2d_430, target: maxpooling2d_222}
  - {source: convolution2d_415, target: maxpooling2d_218}
  - {source: batchnormalization_267, target: convolution2d_423}
  - {source: dense_187, target: activation_112}
  - {source: convolution2d_423, target: maxpooling2d_219}
  - {source: Input_0, target: convolution2d_415}
  - {source: maxpooling2d_221, target: convolution2d_430}
  - {source: flatten, target: dense_185}
  - {source: dropout_209, target: dense_188}
  - {source: maxpooling2d_218, target: batchnormalization_267}
  - {source: batchnormalization_268, target: convolution2d_426}
  - {source: dense_188, target: Output_0}
  - {source: activation_110, target: dense_187}
  - {source: maxpooling2d_222, target: flatten}
  - {source: maxpooling2d_219, target: batchnormalization_268}
  layers:
  - args: {}
    class: Input
    name: Input_0
    x: 60
    y: 60
  - args: {activation: relu, border_mode: same, dim_ordering: th, nb_col: 2, nb_filter: '64',
      nb_row: 2}
    class: Convolution2D
    name: convolution2d_415
    x: 60
    y: 180
  - args: {dim_ordering: th, strides: '(2, 2)'}
    class: MaxPooling2D
    name: maxpooling2d_218
    x: 60
    y: 302
  - args: {}
    class: BatchNormalization
    name: batchnormalization_267
    x: 57
    y: 477
  - args: {dim_ordering: th, strides: '(2, 2)'}
    class: MaxPooling2D
    name: maxpooling2d_219
    x: 59
    y: 741
  - args: {activation: relu, border_mode: same, dim_ordering: th, nb_col: 2, nb_filter: 32,
      nb_row: 2}
    class: Convolution2D
    name: convolution2d_423
    x: 60
    y: 632
  - args: {}
    class: BatchNormalization
    name: batchnormalization_268
    x: 416
    y: 63
  - args: {activation: relu, border_mode: same, dim_ordering: th, nb_col: 2, nb_filter: 32,
      nb_row: 2}
    class: Convolution2D
    name: convolution2d_426
    x: 413
    y: 271
  - args: {dim_ordering: th, strides: '(2, 2)'}
    class: MaxPooling2D
    name: maxpooling2d_221
    x: 413
    y: 397
  - args: {activation: relu, border_mode: same, dim_ordering: th, nb_col: 2, nb_filter: 32,
      nb_row: 2}
    class: Convolution2D
    name: convolution2d_430
    x: 413
    y: 604
  - args: {dim_ordering: th, strides: '(2, 2)'}
    class: MaxPooling2D
    name: maxpooling2d_222
    x: 413
    y: 734
  - args: {}
    class: Flatten
    name: flatten
    x: 760
    y: 60
  - args: {activation: linear, output_dim: '1024'}
    class: Dense
    name: dense_185
    x: 760
    y: 212
  - args: {activation: relu}
    class: Activation
    name: activation_110
    x: 760
    y: 300
  - args: {p: 0.4}
    class: Dropout
    name: dropout_209
    x: 760
    y: 656
  - args: {activation: linear, output_dim: '512'}
    class: Dense
    name: dense_187
    x: 759
    y: 453
  - args: {activation: relu}
    class: Activation
    name: activation_112
    x: 759
    y: 545
  - args: {activation: softmax, output_dim: 99}
    class: Dense
    name: dense_188
    x: 1204
    y: 335
  - args: {}
    class: Output
    name: Output_0
    x: 1208
    y: 502
  params:
    advance_params: true
    batch_size: 256
    is_custom_loss: false
    loss_func: categorical_crossentropy
    num_epoch: 300
    optimizer: {lr: '0.1', name: Adadelta}
project: EFE_Leaf_AutoML

#4

Any news on this subject?


#5

Sorry for the delay on this. Actually, I was requesting the .yaml file you upload as the custom pre-trained model.

The YAML configuration you shared is the project configuration (that could also have been usable, but it seems copy-pasting garbled its indentation).

We tried to reproduce the issue with one of our projects: after downloading the trained model, we converted model.h5 to model.yaml and uploaded it as a custom pre-trained model, and it works.
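For anyone pasting YAML into a post: flattened indentation does not necessarily raise a parse error, it can silently change the structure, which is probably what happened above. A minimal sketch with PyYAML (assuming it is installed; the fragment keys are taken from the configuration in this thread):

```python
import yaml  # PyYAML

# Correctly indented fragment: 'dataset' and 'kfold' are nested under 'data'.
good = """\
data:
  dataset: {name: Leaf_96x96, samples: 990}
  kfold: 1
"""

# The same fragment with its indentation lost in a copy-paste: it still
# parses, but 'data' becomes empty and its children turn into top-level keys.
flat = """\
data:
dataset: {name: Leaf_96x96, samples: 990}
kfold: 1
"""

print(yaml.safe_load(good))  # {'data': {'dataset': {...}, 'kfold': 1}}
print(yaml.safe_load(flat))  # {'data': None, 'dataset': {...}, 'kfold': 1}
```

So any tool that then looks up keys under `data` fails on the flattened version even though the file "looks like" YAML.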


#6

Hello,

I have the same problem as described above. I uploaded the .yaml file from the downloaded model, but then I get the error “KeyError: ‘class_name’” when I try to use it.
I do not know how to convert my .h5 model to YAML so that it works.
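For what it’s worth, “KeyError: ‘class_name’” suggests the loader expects a Keras-style model YAML, whose top level has `class_name`/`config` keys (the shape `model.to_yaml()` produced in older Keras releases; the method was removed in later versions), rather than the DLS project YAML with `data`/`model` keys shown earlier in this thread. A hedged sketch of the difference, using PyYAML and placeholder layer contents:

```python
import yaml

# Roughly the top-level shape model.to_yaml() emits in older Keras
# (exact fields vary by version; 'layers' here is just a placeholder):
keras_style = """\
class_name: Model
config:
  name: leaf_classifier
  layers: []
"""

# Top-level shape of the DLS *project* YAML from earlier in the thread:
project_style = """\
data:
  dataset: {name: Leaf_96x96}
model:
  connections: []
"""

print(yaml.safe_load(keras_style)["class_name"])      # indexing succeeds
print("class_name" in yaml.safe_load(project_style))  # False: indexing would raise KeyError
```

If the uploaded file has the second shape, a loader that indexes `['class_name']` fails with exactly this KeyError.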

Could you help me?

Thank you.
