Inference takes a long time to run

For a DL model that trains at a speed of 32 seconds for 6636 images, inference takes more than 10 minutes on a validation set of 1659 images. AFAIU, that should take just a few seconds.

Currently inference is run on CPU. So if your training ran on GPU, that would explain this difference.

We plan to make inference run on the GPU in an upcoming release.
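
For reference, a minimal sketch of what GPU inference looks like, assuming a PyTorch model (the framework here is an assumption; the tool's actual implementation may differ). The key point is that both the model and each batch have to be moved to the GPU, otherwise execution silently falls back to the (much slower) CPU path:

```python
import torch

def run_inference(model, dataloader):
    # Pick the GPU if one is visible, otherwise fall back to CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device).eval()

    outputs = []
    with torch.no_grad():  # no gradients are needed for inference
        for batch in dataloader:  # assumes each batch is a single input tensor
            batch = batch.to(device)
            outputs.append(model(batch).cpu())
    return torch.cat(outputs)
```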

Ok, that explains it.

When do you think the next release will be?

Any updates on this feature? My testing runs are taking very long.
The test set is about 700,000 lines (10% of the data), so the speedup would be most welcome.

Hi, this feature is already included in the latest release. Inference now runs on the GPU.

Thank you for your reply! However, when I ran inference, the Docker container used only one CPU core for the calculations. It did finish eventually, and while I did not monitor the PC the whole time, when I did check with htop and nvidia-smi from the command line, they showed 100% on one CPU thread and barely 1% on the GPU. Any idea why it behaved like that?
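
A quick sanity check for whether the container can see the GPU at all (the sketch below assumes PyTorch; the underlying framework may differ):

```python
import torch

# Run this inside the same container that does the inference.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))

# If you can get a handle to the loaded model, check where its weights live:
# print("Model device:", next(model.parameters()).device)
```

If `torch.cuda.is_available()` returns False inside the container, the container itself probably has no GPU access, for example because it was started without `--gpus all` (or the NVIDIA container runtime), which would match seeing one busy CPU thread and an idle GPU.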