AutoML - Constructs a model that can't be run on the current GPU instance

In the integration environment, I used AutoML on a DogBreed dataset with 120 classes. The DL model it constructed apparently can’t be run because there isn’t enough memory on the current GPU. Maybe AutoML should check the amount of memory available on the current GPU and adapt the generated DL model so that it can run. If that isn’t possible, there should at least be a warning about the memory shortage.
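For context, when AutoML and training do share a device, the free-memory check itself is cheap. A minimal sketch using PyTorch (the `mem_get_info` call is standard PyTorch; the warning helper and the `estimated_bytes` input are hypothetical, just to illustrate the idea):

```python
import torch

def free_gpu_memory_bytes(device: int = 0) -> int:
    """Free memory on the given CUDA device, in bytes."""
    free, _total = torch.cuda.mem_get_info(device)
    return free

def warn_if_too_large(estimated_bytes: int, device: int = 0) -> None:
    """Hypothetical pre-training check: estimated_bytes would come from
    an AutoML-side estimate of the generated model's footprint."""
    free = free_gpu_memory_bytes(device)
    if estimated_bytes > free:
        print(f"Warning: model may need {estimated_bytes / 2**30:.1f} GiB "
              f"but only {free / 2**30:.1f} GiB is free on GPU {device}.")
```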

This will not be possible, at least in our cloud version, as AutoML runs on one system and training is done on another (rented on the fly).

Well then, maybe there should be a “target memory size” parameter to indicate the amount of memory available on the target system that will train the DL model generated by AutoML. That way a usable/feasible/trainable model can be built automatically for the user. It might not be as good as the “normal” one, but it could still be useful and would remain automatic. OK, so this is more of a suggestion now; see the sketch below.
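Something like the following, where `target_memory_bytes` is the proposed parameter and `rough_training_footprint_bytes` is a made-up estimator, not part of any existing API, only there to show how the search could filter candidates:

```python
import torch.nn as nn

def rough_training_footprint_bytes(model: nn.Module,
                                   activation_factor: float = 3.0) -> int:
    """Very rough estimate: fp32 weights + gradients + two Adam moment
    buffers = 4 copies of the parameters, scaled by a crude fudge factor
    to leave room for activations. An illustration, not a measurement."""
    n_params = sum(p.numel() for p in model.parameters())
    bytes_per_copy = n_params * 4  # fp32
    return int(bytes_per_copy * 4 * activation_factor)

def fits_target(model: nn.Module, target_memory_bytes: int) -> bool:
    """True if the candidate's estimated footprint fits the user's budget."""
    return rough_training_footprint_bytes(model) <= target_memory_bytes
```

AutoML could then discard or scale down any candidate architecture for which `fits_target` returns False, so whatever model it finally emits is at least trainable on the declared target GPU.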