Key error: Image


On cloud, I get the following error in my model: “KeyError: 'Image'”.
The first column of my dataset is an image (the images are shown), but it is not auto-recognized as an Image column (it is detected as Categorical). So I changed the type to Image, but then I get the error in the model.
I tested exactly the same dataset and the same model on my local machine with DLS and do not get this error.

Do you have an idea?

Thank you


I checked the dataset on cloud. I saw one issue with meta.json (generated at dataset upload): it had a Unicode character before the Image column name (i.e. ‘\ufeffImage’).
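For anyone hitting the same symptom, a minimal sketch of where ‘\ufeffImage’ comes from: a CSV saved as “UTF-8 with BOM” starts with the bytes EF BB BF, and decoding with the plain utf-8 codec leaves the BOM attached to the first header. (The sample data below is illustrative, not the actual dataset.)

```python
import csv
import io

# A CSV saved as "UTF-8 with BOM" starts with the byte sequence EF BB BF.
data = b"\xef\xbb\xbfImage,Rating\nimg/1.jpg,good\n"

# Plain utf-8 decoding keeps the BOM, so it leaks into the first column name,
# and lookups with the key "Image" then fail with a KeyError.
header = next(csv.reader(io.TextIOWrapper(io.BytesIO(data), encoding="utf-8")))
print(repr(header[0]))  # '\ufeffImage'

# The utf-8-sig codec strips the BOM if present.
header = next(csv.reader(io.TextIOWrapper(io.BytesIO(data), encoding="utf-8-sig")))
print(repr(header[0]))  # 'Image'
```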

I don't know how the dataset was uploaded to cloud. Did you upload via dataset upload (<1 GB), via the file browser, or by zipping the folder (containing meta.json) from DLS desktop?

I have fixed the meta.json and hopefully you will be able to use it in the cloud now.

Hello rajendra,
Thank you for your reply.
I uploaded the dataset via dataset upload (because it is < 1 GB) as a zip folder containing train.csv and my image folders.
I think it could be a BOM problem. My train.csv is actually UTF-8 with BOM. I will convert it to UTF-8 without BOM to see if that solves the problem.
For now, the problem still persists.
I will give you feedback soon.


Hi rajendra,

Ok, it seems it was a BOM problem. I converted train.csv to UTF-8 without BOM, and now my model is OK!
Thank you for your help.


Good to hear that you have resolved your problem.

Is it possible to share your original train.csv (by emailing it to support) for us to investigate, so we can see whether we could support BOM in the future without requiring conversion? You don't need to include the images.


Of course, I will do it.
I now have another problem when training the model (on cloud). The process stops with this error:
The training process sometimes starts for a few seconds, then stops with the message. Other times it stops right at the beginning. I think it has to do with the data (the images).
Is it possible that the resize function on cloud does not run as expected?
My model:
import keras
from keras.layers.normalization import BatchNormalization
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
from keras.layers.core import Dropout
from keras.layers.core import Dense
from keras.layers.core import Flatten
from keras.layers import Input
from keras.models import Model
from keras.regularizers import *

def get_model():
    aliases = {}
    Input_9 = Input(shape=(3, 28, 28), name='Input_9')
    BatchNormalization_3 = BatchNormalization(name='BatchNormalization_3', axis=1)(Input_9)
    Convolution2D_9 = Convolution2D(name='Convolution2D_9', nb_row=3, border_mode='same', activation='relu', nb_col=3, nb_filter=28)(BatchNormalization_3)
    Convolution2D_10 = Convolution2D(name='Convolution2D_10', nb_row=3, activation='relu', nb_col=3, nb_filter=28)(Convolution2D_9)
    MaxPooling2D_5 = MaxPooling2D(name='MaxPooling2D_5')(Convolution2D_10)
    Dropout_7 = Dropout(name='Dropout_7', p=0.25)(MaxPooling2D_5)
    Convolution2D_11 = Convolution2D(name='Convolution2D_11', nb_row=3, border_mode='same', activation='relu', nb_col=3, nb_filter=56)(Dropout_7)
    Convolution2D_12 = Convolution2D(name='Convolution2D_12', nb_row=3, activation='relu', nb_col=3, nb_filter=56)(Convolution2D_11)
    MaxPooling2D_6 = MaxPooling2D(name='MaxPooling2D_6')(Convolution2D_12)
    Dropout_8 = Dropout(name='Dropout_8', p=0.25)(MaxPooling2D_6)
    Flatten_3 = Flatten(name='Flatten_3')(Dropout_8)
    Dense_5 = Dense(name='Dense_5', output_dim=128, activation='relu')(Flatten_3)
    Dropout_9 = Dropout(name='Dropout_9', p=0.5)(Dense_5)
    Dense_6 = Dense(name='Dense_6', output_dim=15, activation='softmax')(Dropout_9)

    model = Model([Input_9], [Dense_6])
    return aliases, model

from keras.optimizers import *

def get_optimizer():
    return SGD(decay=0.0000001, nesterov=True, momentum=0.9)

def is_custom_loss_function():
    return False

def get_loss_function():
    return 'categorical_crossentropy'

def get_batch_size():
    return 32

def get_num_epoch():
    return 10

def get_data_config():
    return '{"mapping": {"Rating": {"port": "OutputPort0", "options": {}, "shape": "", "type": "Categorical"}, "Image": {"port": "InputPort0", "options": {"Augmentation": false, "Scaling": 1, "Normalization": true, "Height": 28, "height_shift_range": 0, "Resize": true, "width_shift_range": 0, "Width": "28", "horizontal_flip": false, "rotation_range": 0, "pretrained": "None", "vertical_flip": false, "shear_range": 0}, "shape": "", "type": "Image"}}, "dataset": {"type": "private", "samples": 3450, "name": "ChampiDataset"}, "samples": {"split": 4, "training": 2760, "validation": 345, "test": 345}, "datasetLoadOption": "full", "kfold": 1, "numPorts": 1, "shuffle": true}'

Thank you

Could you start training with “Load Full Dataset” set to False? That should cause the system to send an additional error message with more information.

I set “Load dataset in Memory” to “One batch at a time”. I get no error, but training stops in the background: if I switch to another tab in the application and go back to the “Training” tab, the “Start Training” button is active again.
I'm on Firefox (sorry, I do not have Chrome for now), and I only see these messages in the console: {"run_name": "Run10", "run_id": 10, "status": "OK", "session_id": "e4TYoX4BoCjnpPzruMHZholw6fAE3dch"}
main.c4f60b8e3ea79522e115.bundle.js:1:563254 {"instance_status": "RUNNING", "status": "OK", "train_server_ip": "", "compute_type": "CPU-XEON E5-8GB", "train_server_id": "i-0a8c01b8801f493d9", "hours": 0.83, "token": "4GKoe73c"}
main.c4f60b8e3ea79522e115.bundle.js:1:563254 {"instance_status": "RUNNING", "status": "OK", "train_server_ip": "", "compute_type": "CPU-XEON E5-8GB", "train_server_id": "i-0a8c01b8801f493d9", "hours": 0.83, "token": "4GKoe73c"}

I don't know if this is of any help for you…

Ok, I installed Chrome (for you :-)). And now I see the following error: “FileNotFoundError: [Errno 2] No such file or directory”.
This tells me that somewhere in my train.csv some picture paths are wrong. I will check this and give you feedback.

Ok, I tested the following:

First, I can confirm that the train.csv file has to be UTF-8 without BOM. Excel saves the file with a BOM by default, so this could be a problem for other users. Users have to convert the file to UTF-8 without BOM, or perhaps you can convert it dynamically after upload (the first bytes of a UTF-8 BOM file are well known, so it is possible to remove them).
Then, I tried changing my image paths. I added “./” (my PC = Windows, Cloud = Linux?). But this did not solve the problem.
After that, I tried loading a small dataset (< 20 MB; the first one was about 250 MB). With the small one, no problem. It works fine. No error.

I do not find any error in my “big” dataset (the 250 MB one). Perhaps the files were not completely uploaded. I tried twice, and both times I got the error.

I will check the train file in more detail, but I guess the problem is somewhere else. If you find something…
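The dynamic BOM removal suggested in the first point above could be sketched like this; a hedged one-off version of the same idea also works locally (the path is an example, adjust to your dataset):

```python
def strip_bom(path):
    """Remove a leading UTF-8 BOM (EF BB BF) from a file, in place.

    Safe to call on files without a BOM: they are left untouched.
    """
    with open(path, "rb") as f:
        raw = f.read()
    if raw.startswith(b"\xef\xbb\xbf"):  # the well-known UTF-8 BOM bytes
        with open(path, "wb") as f:
            f.write(raw[3:])

# Example usage:
# strip_bom("train.csv")
```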


In the server logs, the following file was found missing:


Please check whether this file exists in your dataset. There could be more occurrences like this in train.csv, so you may want to check all the file paths with a script.
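Such a path-checking script could look like the sketch below. The column name “Image” matches the data config earlier in the thread; the dataset root is an assumption and should point at the folder train.csv lives in.

```python
import csv
import os

def missing_images(csv_path, root="."):
    """Return every image path in the CSV that does not exist under root.

    Uses utf-8-sig so a leading BOM cannot corrupt the "Image" header.
    """
    with open(csv_path, newline="", encoding="utf-8-sig") as f:
        return [row["Image"] for row in csv.DictReader(f)
                if not os.path.exists(os.path.join(root, row["Image"]))]

# Example usage:
# print(missing_images("train.csv", root="."))
```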

Right! Some of my pictures have the extension JPG (uppercase)!
That is of course the problem! On Windows, no problem, but on Linux… :-).
Thank you Rajendra!