Question on curl

Hi,

I had a question about using the curl command on Windows. After deploying the form inference into an app on my local server, I took the template curl command and tried to run it on the command line:

  curl -X POST "http://127.0.0.1:8881/models/faewrfaw/v1/predict" -H "Content-Type: multipart/form-data" -F 'data={"key": "Path"};type=application/json' -F "Path=@/path/to/image.png"

My question is: if I have .jpg images I want to run inference on, do I just replace /path/to/image.png with the full file path of my image and then run it on the command line? Or do I need to modify "key" as well? I also noticed that this curl command seems to be written in a Linux format; do I need to convert it to a Windows format? I ask because, using that format, I got "read function returned funny value" when setting the path.

Yes, you only need to modify "Path=@/path/to/image.png", with either a relative or a full path to the image. You can use a Windows-style path on Windows, or whatever format your curl build understands (depending on whether you are running inside Cygwin or Command Prompt).
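Note that in Command Prompt the single-quote style from the Linux template will not work, since cmd.exe only treats double quotes as quoting characters, so the inner JSON quotes usually need backslash escapes. A minimal sketch (the C:\images\photo.jpg path is just a placeholder for your own file):

  curl -X POST "http://127.0.0.1:8881/models/faewrfaw/v1/predict" ^
    -H "Content-Type: multipart/form-data" ^
    -F "data={\"key\": \"Path\"};type=application/json" ^
    -F "Path=@C:\images\photo.jpg"

The ^ at the end of each line is cmd's line continuation; you can also put the whole command on one line.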

Thanks for the help; I was able to run the curl command on the command line (and also in Python using the requests library). I do have a question about speed: when I run the curl command on Windows, it takes roughly 4 seconds to get the prediction response for a single image (whether through cmd or through Python). I believe the latency is due to the server communication back and forth, since I am able to download the .h5 file of the model and batch-predict 6000 150x150-pixel images in 20 seconds using Python. Do you know of any ways to improve the processing speed to around a millisecond per image? For example, in Python I can read the data through io.StringIO instead of opening the file path, but that only cuts the time down to around 1-2 seconds.

Can you provide the following details:

  1. Which version of Deep Learning Studio are you using (Cloud/Linux/Windows)?

  2. In your Python code, which backend are you using (TensorFlow/MXNet)? Please provide the full package name and version number.

  3. Is prediction done on the CPU or the GPU?

If you are using the MXNet backend with inference on the CPU, please make sure to install the MKL build of MXNet. Its optimizations improve inference speed significantly.
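If you installed MXNet from PyPI, the MKL build is typically a separate package; a minimal sketch, assuming a pip-based install (the exact package name can vary by MXNet version):

  pip uninstall mxnet
  pip install mxnet-mkl

It can also help to see where the 4 seconds actually go. curl can print its own timing breakdown with -w; if time_starttransfer dominates, the time is being spent in the server (model load and inference) rather than in the network or in curl itself. A sketch, reusing the placeholder path from the example above:

  curl -o NUL -w "connect: %{time_connect}s  start-transfer: %{time_starttransfer}s  total: %{time_total}s\n" ^
    -X POST "http://127.0.0.1:8881/models/faewrfaw/v1/predict" ^
    -H "Content-Type: multipart/form-data" ^
    -F "data={\"key\": \"Path\"};type=application/json" ^
    -F "Path=@C:\images\photo.jpg"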