Setting more weight for the color parameter

Hello,

For my model, color is an important feature for distinguishing the classes, so I would like to give it more weight.
Is this possible?
Thank you

Lyranis

Lyranis, are you using the dataset you mentioned in your earlier email?

If you mean that color in the image is important for determining its class and you want to feed the model this information directly, that is not possible.

Secondly, this may not even be required. Deep learning networks are good at figuring out what is important in an image to predict the right class.

What accuracy are you getting with your existing model? Can you share your model configuration?

Yes, it is the same dataset (but with more samples).
Of course, here is my model config code:

import keras
from keras.layers.normalization import BatchNormalization
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
from keras.layers.core import Dropout
from keras.layers.core import Flatten
from keras.layers.core import Dense
from keras.layers import Input
from keras.models import Model
from keras.regularizers import *

def get_model():
    aliases = {}
    Input_17 = Input(shape=(3, 64, 64), name='Input_17')
    BatchNormalization_2 = BatchNormalization(name='BatchNormalization_2', axis=1)(Input_17)
    Convolution2D_5 = Convolution2D(name='Convolution2D_5', border_mode='same', nb_row=7, nb_filter=64, nb_col=7, activation='relu')(BatchNormalization_2)
    Convolution2D_6 = Convolution2D(name='Convolution2D_6', nb_row=7, nb_filter=64, nb_col=7, activation='relu')(Convolution2D_5)
    MaxPooling2D_4 = MaxPooling2D(name='MaxPooling2D_4')(Convolution2D_6)
    Dropout_2 = Dropout(name='Dropout_2', p=0.25)(MaxPooling2D_4)
    Convolution2D_7 = Convolution2D(name='Convolution2D_7', border_mode='same', nb_row=7, nb_filter=128, nb_col=7, activation='relu')(Dropout_2)
    Convolution2D_8 = Convolution2D(name='Convolution2D_8', nb_row=7, nb_filter=128, nb_col=7, activation='relu')(Convolution2D_7)
    MaxPooling2D_5 = MaxPooling2D(name='MaxPooling2D_5')(Convolution2D_8)
    Dropout_3 = Dropout(name='Dropout_3', p=0.25)(MaxPooling2D_5)
    Flatten_2 = Flatten(name='Flatten_2')(Dropout_3)
    Dense_1 = Dense(name='Dense_1', activation='relu', output_dim=1024)(Flatten_2)
    Dropout_4 = Dropout(name='Dropout_4', p=0.5)(Dense_1)
    Dense_2 = Dense(name='Dense_2', activation='softmax', output_dim=14)(Dropout_4)

    model = Model([Input_17], [Dense_2])
    return aliases, model

from keras.optimizers import *

def get_optimizer():
    return SGD(decay=1e-6, nesterov=True, momentum=0.9)

def is_custom_loss_function():
    return False

def get_loss_function():
    return 'categorical_crossentropy'

def get_batch_size():
    return 32

def get_num_epoch():
    return 20

def get_data_config():
    return '{"datasetLoadOption": "full", "dataset": {"samples": 362, "type": "private", "name": "ChampiDataset"}, "numPorts": 1, "kfold": 1, "shuffle": true, "samples": {"split": 1, "training": 289, "validation": 72, "test": 0}, "mapping": {"Image": {"options": {"Resize": true, "horizontal_flip": true, "rotation_range": "180", "Scaling": "0.7", "pretrained": "None", "shear_range": "0", "width_shift_range": "0", "height_shift_range": "0", "Augmentation": true, "Height": "64", "vertical_flip": true, "Width": "64", "Normalization": true}, "type": "Image", "port": "InputPort0", "shape": ""}, "Rating": {"options": {}, "type": "Categorical", "port": "OutputPort0", "shape": ""}}}'
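To sanity-check the split encoded in that config string, it can be parsed with the standard json module. The snippet below uses a trimmed-down copy of the string for illustration; 289 training + 72 validation + 0 test covers 361 of the 362 samples, the leftover sample presumably being a rounding effect of the split.

```python
import json

# Trimmed-down copy of the data config above, for illustration only.
config = json.loads(
    '{"datasetLoadOption": "full", '
    '"dataset": {"samples": 362, "type": "private", "name": "ChampiDataset"}, '
    '"shuffle": true, '
    '"samples": {"split": 1, "training": 289, "validation": 72, "test": 0}}'
)

splits = config["samples"]
total = config["dataset"]["samples"]
used = splits["training"] + splits["validation"] + splits["test"]
print(used, "of", total, "samples assigned")  # 361 of 362 samples assigned
```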

Here are the training results:

I get good results in general, but in some cases the predictions are wrong for images with completely different colors.
My use case is identifying mushrooms. A mushroom species often has a specific color range (for example white or brown). I got predictions with high probability that a mushroom was of species "A" (a white one), when in fact the mushroom was brown.

Thank you :-)

I realize that the problem could come from my data augmentation parameters. I get better results without data augmentation. Probably I'm not using it as it should be used (I'm currently working in trial-and-error mode).

We have found an issue with the augmentation. We have already fixed it and it will be available in the next release.
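In the meantime, milder augmentation than rotation_range 180 and 0.7 scaling usually behaves better on 64x64 photos. The sketch below is only a suggested starting point, not a tested recipe: the keyword names match keras.preprocessing.image.ImageDataGenerator, the values are guesses for this use case, and the small hflip helper just illustrates the safest transform in the channel-first layout the model uses.

```python
import numpy as np

# Suggested, untested starting values; keyword names follow
# keras.preprocessing.image.ImageDataGenerator.
augmentation_params = {
    "rotation_range": 20,       # small rotations instead of a full 180 degrees
    "width_shift_range": 0.1,
    "height_shift_range": 0.1,
    "zoom_range": 0.1,          # instead of heavy 0.7 scaling
    "horizontal_flip": True,
    "vertical_flip": False,     # upside-down mushrooms are unusual
}

# Horizontal flip is the least risky transform; in channel-first
# (channels, height, width) layout it just reverses the last axis.
def hflip(image):
    return image[:, :, ::-1]

print(hflip(np.array([[[1, 2]]])))  # [[[2 1]]]
```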

Your dataset is small (362 samples), which is not enough for a network to do a good job. You can try a pre-trained model, which has been trained on millions of images, and fine-tune its last layers on your dataset.

A pre-trained model has a parameter called trainable, which you can set to 10 to mark the last 10% of its layers as trainable.

You may need to run it for more epochs while the pre-trained model adapts to your dataset.
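In plain Python, the effect of setting trainable to 10 can be sketched like this. The helper name trainable_flags is my own, not part of the product or of Keras; with a real pre-trained network you would apply the flags to model.layers before compiling.

```python
# Sketch of "trainable = 10": freeze all but the last 10% of layers.
def trainable_flags(num_layers, pct_trainable):
    """Return one bool per layer: False = frozen, True = fine-tuned."""
    num_trainable = max(1, int(round(num_layers * pct_trainable / 100.0)))
    return [i >= num_layers - num_trainable for i in range(num_layers)]

flags = trainable_flags(100, 10)
print(flags.count(True))      # 10 layers stay trainable
print(flags[:2], flags[-2:])  # [False, False] [True, True]

# With a real Keras model it would look roughly like:
# for layer, flag in zip(base_model.layers,
#                        trainable_flags(len(base_model.layers), 10)):
#     layer.trainable = flag
```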

Thanks for your reply. You are totally right, and I'm aware that my dataset is not optimal. But for now I want to get the most out of it, for testing purposes and to understand the basics of deep learning.
Do you have an example (code or screenshot) of a model that uses a pre-trained model? I'm not an expert and don't know exactly where and how to place the pre-trained model, or how it should interact with the other layers.
Thank you very much.

Lyranis, here is one model using InceptionV3. You can choose another one from the list later.

One thing to keep in mind is that pre-trained models work with specific image sizes, so you need to resize your input to the pre-trained model's input size. For InceptionV3 it is 299x299. You can set it from the data tab (click on your image column to see the resize option).
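If you are curious what the resize step does, here is a minimal nearest-neighbour version in NumPy for the channel-first (3, h, w) layout used above. This is only an illustration; the platform and Keras use proper interpolation (e.g. bilinear), and the function name is mine.

```python
import numpy as np

# Minimal nearest-neighbour resize for channel-first (channels, h, w) images.
def resize_nearest(image, out_h, out_w):
    channels, in_h, in_w = image.shape
    rows = np.arange(out_h) * in_h // out_h  # source row for each output row
    cols = np.arange(out_w) * in_w // out_w  # source column for each output column
    return image[:, rows[:, None], cols[None, :]]

img = np.random.rand(3, 64, 64)          # a 64x64 input like the dataset's
big = resize_nearest(img, 299, 299)      # InceptionV3's expected input size
print(big.shape)  # (3, 299, 299)
```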

Thank you! Top support! I'll try it right now and will give you feedback after some fine-tuning tests :-).

Just to add to Rajendra's comment on the pre-trained model example, here is a video example of a custom project in which a pre-trained model (InceptionV3) is used: