Error while training

The following error occurs while training:


The hyperparameters are set to:

When setting the batch size to 256, it trains fine.

This generally indicates an out-of-memory (OOM) scenario. Some DL frameworks exit abruptly (without returning control to Python) on out-of-memory or other fatal errors. Because of this, it is not trivial to report the actual error to the user.
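To illustrate why only a generic message can be shown, here is a minimal sketch (the wrapper logic and the script name `train.py` are hypothetical, not the library's actual implementation): when the training process is killed at the native level, for example by the OS OOM killer, the parent process sees only a non-zero exit code and no Python traceback to relay.

```python
import subprocess
import sys

# Hypothetical wrapper: launch training in a child process.
# If the DL framework aborts at the native level (e.g., the OS
# OOM killer sends SIGKILL), no Python exception propagates back;
# all the parent can observe is the exit status.
result = subprocess.run([sys.executable, "train.py"])

if result.returncode != 0:
    # On POSIX, a negative return code means the child was killed
    # by a signal (e.g., -9 for SIGKILL), which is typical of OOM.
    # There is no traceback to show, hence the generic message.
    print(
        f"Training process exited unexpectedly (code {result.returncode}). "
        "This often indicates an out-of-memory condition."
    )
```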

This is the generic error message that is displayed when the training process exits this way.
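The usual fix is to lower the batch size, as you found with 256. If the framework is PyTorch (an assumption; the post does not name it), a recoverable CUDA OOM can sometimes be caught in Python and the batch size reduced, as in the sketch below. Note that a hard native abort of the kind described above cannot be intercepted this way; the model and data here are placeholders for illustration.

```python
import torch

# Placeholder model and loss for illustration; requires a CUDA device.
model = torch.nn.Linear(1024, 10).cuda()
loss_fn = torch.nn.CrossEntropyLoss()

def try_train_step(batch_size: int) -> bool:
    """Attempt one training step; return False on a recoverable CUDA OOM."""
    try:
        x = torch.randn(batch_size, 1024, device="cuda")
        y = torch.randint(0, 10, (batch_size,), device="cuda")
        loss = loss_fn(model(x), y)
        loss.backward()
        return True
    except torch.cuda.OutOfMemoryError:
        # Only catchable when PyTorch itself raises the error;
        # a hard native abort cannot be caught here.
        torch.cuda.empty_cache()
        return False

# Halve the batch size until one training step fits in memory.
batch_size = 1024
while batch_size >= 1 and not try_train_step(batch_size):
    batch_size //= 2
print(f"Largest batch size that fit: {batch_size}")
```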