Retrieve results after deploying a model

I have an issue: I have deployed the model, but predictions happen one record at a time over the REST APIs. Now I want predictions for a whole file, not just one record. Is there any way to do this programmatically? It is not feasible to pass the data one record at a time; there should be a function to pass more than one record at once for predictions.

Yes, if you want results on your complete dataset, you can do that through the Inference tab.
There is an upload option for the dataset; upload it the same way you did in DLS, but name the CSV file test.csv this time. Just make sure the column names are the same as in the Train.csv used during training.
After that, you can download your results as a CSV file.


No, you got me wrong. I know the above process, but I have data other than the test, train, and validation sets, and that data keeps changing; it is this data we have to make predictions on.

Secondly, I want to do the above from an application frontend with the help of the model deployment REST APIs. The problem is that I still have to send each row of data to get a prediction for that row. If I have to make predictions for, say, 1000 rows at once, what is the process?


Can someone reply, please?

Hi Akash,

The REST API only supports a single record at a time for now. We have taken your feedback and will discuss with the team enabling multiple records in future versions.
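Until batch support lands, one client-side workaround is to loop over the rows of a CSV and call the single-record endpoint once per row. A minimal sketch, assuming a hypothetical endpoint URL and a JSON-in/JSON-out payload (the real URL and request schema come from your deployment page, so treat both as placeholders):

```python
import csv
import io
import json
from urllib import request


def predict_rows(csv_text, endpoint_url, post=None):
    """Send each CSV row to a single-record prediction endpoint.

    The endpoint URL and the JSON payload/response shape are assumptions;
    check your deployment's documentation for the actual schema. `post` is
    injectable so the loop can be exercised without a live server.
    """
    if post is None:
        def post(url, payload):
            # Default transport: POST the row as a JSON body with urllib.
            req = request.Request(
                url,
                data=json.dumps(payload).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            with request.urlopen(req) as resp:
                return json.load(resp)

    results = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        # One request per record, since the API is single-record only.
        results.append(post(endpoint_url, row))
    return results
```

For 1000 rows this still means 1000 requests; a persistent connection or a small thread pool can reduce the overhead, but the per-record limit stands until the API adds batching.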

In the meantime, you can download the trained model and implement a REST API server yourself to do so. The trained model download includes reference Python code that takes a CSV file (so multiple records are supported) and generates a CSV file with the results.
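The CSV-in/CSV-out flow described above could be wrapped in a small helper like the sketch below. The `predict` callable here is a hypothetical stand-in for the downloaded model's prediction function (the actual name and signature are in the reference code shipped with the model download), and the `prediction` column name is likewise an assumption:

```python
import csv


def predict_csv(in_path, out_path, predict):
    """Run batch predictions over a whole CSV file.

    Reads every row of `in_path`, calls `predict` (a stand-in for the
    model's real prediction function) on each row dict, and writes the
    original columns plus an added `prediction` column to `out_path`.
    """
    with open(in_path, newline="") as f:
        rows = list(csv.DictReader(f))

    fieldnames = (list(rows[0].keys()) if rows else []) + ["prediction"]
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for row in rows:
            writer.writerow({**row, "prediction": predict(row)})
```

A REST server built this way would accept an uploaded CSV, call the helper, and return the results file, sidestepping the one-record-per-request limit entirely.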


Thanks Rajendra, yes, that can always be done.
The single-click deployment is what I liked in Deep Cognition.