Export and launch REST API code

I have been digging around trying to figure out how I can launch a REST API version of the code, where I can submit one file at a time.
I found a tutorial for running validation here.

Is there a guide for how to download a model and implement it with a REST API on another server?

There is no guide showing how to implement the REST API service using the downloaded model. This is something the developer needs to implement, or they can install DLS on the REST API server and use the DLS deployment feature.

Thank you for the reply, Rajendra. Do you know where I can find information about DLS deployment?

I will do my own research; I just wanted to see if you know of any articles that have been written that would help.

This is an amazing tool, but there is just no documentation on crucial steps in the process of making practical use of the models you create.

So far I have downloaded a copy of the model and attempted to run it in test mode; there were issues with making it run. Once I resolve them, I assume I will be able to run predictions by placing data in a folder and submitting a batch file.

The next step would be to write a script that takes requests, puts each file in the proper folder, and adds a record to the test file. I just thought there would be a better way to deploy this.
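For what it's worth, that stopgap script can be sketched roughly like this. The folder layout, file names, and test-file format below are assumptions for illustration, not DLS conventions — adjust them to whatever your test-mode setup actually expects:

```python
import csv
import shutil
from pathlib import Path

# Hypothetical locations -- point these at wherever your DLS
# test-mode run reads its input data and test file from.
DATA_DIR = Path("dls_project/data")
TEST_FILE = Path("dls_project/test.csv")

def queue_request(incoming_file: str) -> str:
    """Copy an incoming file into the data folder and append a row
    to the test file so the next batch run picks it up."""
    DATA_DIR.mkdir(parents=True, exist_ok=True)
    dest = DATA_DIR / Path(incoming_file).name
    shutil.copy(incoming_file, dest)
    with TEST_FILE.open("a", newline="") as f:
        csv.writer(f).writerow([dest.name])
    return str(dest)
```

A caller would invoke `queue_request("/tmp/upload_123.png")` for each incoming request and then kick off (or wait for) the batch run.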

Unfortunately, there is not much information available about DLS deployment, but we will be happy to help with any questions you have here.

The deployment feature lets you evaluate your model on a single input record. There are two interfaces:

  1. WebApp: the WebApp interface lets you enter a single input record using an HTML form.

  2. REST API: the REST interface lets you send a single input record programmatically from your app. We provide an example curl command showing which options and field names need to be sent. The curl command also prints the output received, so you can see the format of the returned output and parse it from your app.
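The same single-record call can also be made from Python instead of curl. This is only a sketch: the URL and field names below are placeholders, so substitute the real ones from the example curl command shown on your deployment page:

```python
import json
import urllib.request

# Placeholder endpoint -- copy the real URL and field names from the
# example curl command on the DLS deployment page.
DLS_URL = "http://localhost:8880/model/predict"

def build_request(url: str, record: dict) -> urllib.request.Request:
    """Package a single input record as a JSON POST request."""
    body = json.dumps(record).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

def send_record(url: str, record: dict) -> dict:
    """Send the record and parse the JSON the service returns."""
    with urllib.request.urlopen(build_request(url, record)) as resp:
        return json.loads(resp.read())
```

`send_record(DLS_URL, {"feature_1": 0.5})` would then play the role of the curl command inside your own app.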

I want to make real-world use of the model I created, and for that I need to be able to send single requests to it on demand using curl. I am trying to understand how to go about configuring this on, let's say, an Ubuntu server after I get the model.

I am happy to pay for support on this if needed, but in my mind this type of explanation is critical to making the system usable.

At this point I just had to pay a developer to manually recreate the model in TF and deploy it for me. But this kind of defeats the purpose of DeepCognition; as it stands now, it is a tool you play with to validate your idea, and then you have to hire an AI developer to actually develop the model separately from Deep Cognition and deploy it.