Difference between Version 2.5 and 3.0?

Hi,

I’ve built the same network in both versions of DLS (2.5 and 3.0), and I don’t get the same result.

I use the MNIST database.

Here’s the result for DLS version 2.5:


pretty good!

and the result for DLS version 3.0:
(Sorry, but because I’m a new user I can’t post more than one image.)

Here are the two YAML files (sorry, but I can’t attach them):

First:

```yaml
data:
  dataset: {name: mnist, samples: 70000, type: public}
  datasetLoadOption: full
  kfold: 1
  mapping:
    Digit Label:
      options: {}
      port: OutputPort0
      shape: ''
      type: Categorical
    Image:
      options: {Augmentation: false, Height: 28, Normalization: false, Resize: false,
        Scaling: 1, Width: 28, height_shift_range: 0, horizontal_flip: false, pretrained: None,
        rotation_range: 0, shear_range: 0, vertical_flip: false, width_shift_range: 0}
      port: InputPort0
      shape: ''
      type: Image
  numPorts: 1
  samples: {split: 4, test: 7000, training: 56000, validation: 7000}
  shuffle: true
model:
  connections:
  - {source: Dense_2, target: Dropout_2}
  - {source: Dense_1, target: Dropout_1}
  - {source: Flatten_1, target: Dense_1}
  - {source: Input_1, target: Flatten_1}
  - {source: Dropout_1, target: Dense_2}
  - {source: Dropout_2, target: Dense_3}
  - {source: Dense_3, target: Output_1}
  layers:
  - args: {}
    class: Input
    name: Input_1
    x: 98
    y: 15
  - args: {}
    class: Output
    name: Output_1
    x: 583
    y: 482
  - args: {}
    class: Flatten
    name: Flatten_1
    x: 114
    y: 154
  - args: {activation: relu, output_dim: '512'}
    class: Dense
    name: Dense_1
    x: 138
    y: 294
  - args: {p: '0.3'}
    class: Dropout
    name: Dropout_1
    x: 167
    y: 428
  - args: {activation: relu, output_dim: '512'}
    class: Dense
    name: Dense_2
    x: 565
    y: 27
  - args: {p: '0.3'}
    class: Dropout
    name: Dropout_2
    x: 564
    y: 168
  - args: {activation: softmax, output_dim: '10'}
    class: Dense
    name: Dense_3
    x: 576
    y: 324
params:
  advance_params: true
  batch_size: 1024
  is_custom_loss: false
  loss_func: categorical_crossentropy
  num_epoch: 10
  optimizer: {name: Adadelta}
project: mnist_1
```

Second:

```yaml
data:
  dataset:
    name: mnist
    samples: 70000
    type: public
  datasetLoadOption: full
  kfold: 1
  mapping:
    Digit Label:
      options: {}
      port: OutputPort0
      shape: ''
      type: Categorical
    Image:
      options:
        Augmentation: false
        Height: 28
        Normalization: false
        Resize: false
        Scaling: 1
        Width: 28
        height_shift_range: 0
        horizontal_flip: false
        pretrained: None
        rotation_range: 0
        shear_range: 0
        vertical_flip: false
        width_shift_range: 0
      port: InputPort0
      shape: ''
      type: Image
  numPorts: 1
  samples:
    split: 4
    test: 7000
    training: 56000
    validation: 7000
  shuffle: true
model:
  connections:
  - source: Dropout_1
    target: Dense_2
  - source: Dense_1
    target: Dropout_1
  - source: Dense_2
    target: Dropout_2
  - source: Dense_3
    target: Output_1
  - source: Flatten_1
    target: Dense_1
  - source: Input_1
    target: Flatten_1
  - source: Dropout_2
    target: Dense_3
  layers:
  - args: {}
    class: Input
    name: Input_1
    x: 4
    y: 26
  - args: {}
    class: Output
    name: Output_1
    x: 532
    y: 456
  - args: {}
    class: Flatten
    name: Flatten_1
    x: 36
    y: 164
  - args:
      activation: relu
      units: '512'
    class: Dense
    name: Dense_1
    x: 53
    y: 278
  - args:
      rate: '0.3'
    class: Dropout
    name: Dropout_1
    x: 64
    y: 413
  - args:
      activation: relu
      units: '512'
    class: Dense
    name: Dense_2
    x: 488
    y: 56
  - args:
      rate: '0.3'
    class: Dropout
    name: Dropout_2
    x: 493
    y: 158
  - args:
      activation: softmax
      units: '10'
    class: Dense
    name: Dense_3
    x: 500
    y: 272
params:
  advance_params: true
  batch_size: 1024
  is_custom_loss: false
  loss_func: categorical_crossentropy
  num_epoch: 10
  optimizer:
    epsilon: 1e-08
    name: Adadelta
project: mnist_2
```
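As far as I can tell, the only substantive differences between the two files are the layer argument names (`output_dim`/`p` in the first, `units`/`rate` in the second, matching the Keras 1 → Keras 2 renames) and the optimizer `epsilon`. A small sketch of that mapping (the helper name and table are my own, not part of DLS):

```python
# Keras 1 -> Keras 2 argument renames visible in the two YAML files above.
# The helper and the RENAMES table are illustrative only, not DLS API.
RENAMES = {
    "Dense": {"output_dim": "units"},
    "Dropout": {"p": "rate"},
}

def upgrade_args(layer_class, args):
    """Return a copy of `args` with old Keras 1 names replaced by Keras 2 names."""
    table = RENAMES.get(layer_class, {})
    return {table.get(key, key): value for key, value in args.items()}

print(upgrade_args("Dense", {"activation": "relu", "output_dim": "512"}))
# {'activation': 'relu', 'units': '512'}
```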

Why these differences?

Regards,

Philippe

Hi Philippe,

DLS <= 2.5 was built on the MXNet backend, while DLS 3.0 is built on the TensorFlow backend. This change can affect the accuracy you get with the same configuration.

We recommend reviewing the new layer configuration.
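For reference, here is a minimal tf.keras reconstruction of the network described in the YAML files above, which may help when reviewing the configuration. This is my own sketch from the layer sizes in the configs, not DLS-generated code:

```python
# Sketch of the YAML network (784 -> 512 -> dropout -> 512 -> dropout -> 10)
# rebuilt directly in tf.keras; layer comments map back to the config names.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(28, 28)),                   # Input_1: 28x28 MNIST image
    keras.layers.Flatten(),                        # Flatten_1
    keras.layers.Dense(512, activation="relu"),    # Dense_1 ('units', formerly 'output_dim')
    keras.layers.Dropout(0.3),                     # Dropout_1 ('rate', formerly 'p')
    keras.layers.Dense(512, activation="relu"),    # Dense_2
    keras.layers.Dropout(0.3),                     # Dropout_2
    keras.layers.Dense(10, activation="softmax"),  # Dense_3
])
model.compile(optimizer="adadelta",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```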

Thanks