Semantic image segmentation example


I’m just learning about semantic image segmentation and was wondering if this platform can perform this type of learning. If so, would it be possible to get an example project/video showing how to implement it?



Hi Charles. Here is a good resource.

For generative image modeling, this is the way to go: arxiv.org/pdf/1611.08408.pdf (train the segmentation model with an adversarial network).

If you’re up for a challenge, recreate this model. It sounds awesome, but it’s just an OCR, super straightforward. They kind of stopped at the model definition, though. You’ll want to pull the model, then optimize the topology and hyperparameters through a simple GA. Search ‘TPOT GitHub’. I’m rewriting this, but it’s a good way to program/architect an AutoML. Simple is better sometimes.
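To make the GA suggestion concrete, here is a minimal sketch of evolving hyperparameters, in the same spirit as TPOT but in plain Python. The search space, the scoring function, and all names here are made up for illustration; in practice the fitness would come from training and cross-validating the real model for each candidate.

```python
import random

# Hypothetical hyperparameter search space (assumption, not from the thread).
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "num_layers": [2, 4, 8],
    "dropout": [0.0, 0.25, 0.5],
}

def score(candidate):
    # Stand-in fitness function: peaks at lr=1e-3, num_layers=8, dropout=0.25.
    # Replace with cross-validated model accuracy in a real run.
    s = 1.0 if candidate["learning_rate"] == 1e-3 else 0.0
    s += candidate["num_layers"] / 8.0
    s += 1.0 - abs(candidate["dropout"] - 0.25)
    return s

def random_candidate(rng):
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(candidate, rng):
    # Re-sample one randomly chosen hyperparameter.
    child = dict(candidate)
    key = rng.choice(list(SEARCH_SPACE))
    child[key] = rng.choice(SEARCH_SPACE[key])
    return child

def evolve(generations=20, population_size=10, seed=0):
    rng = random.Random(seed)
    population = [random_candidate(rng) for _ in range(population_size)]
    for _ in range(generations):
        # Keep the fitter half, refill with mutated copies of survivors.
        population.sort(key=score, reverse=True)
        survivors = population[: population_size // 2]
        population = survivors + [
            mutate(rng.choice(survivors), rng)
            for _ in range(population_size - len(survivors))
        ]
    return max(population, key=score)

best = evolve()
```

TPOT does essentially this over whole scikit-learn pipelines instead of a flat dict, but the select/mutate loop is the same idea.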



I still need more tips on how to upload the labelled data (e.g. a CSV file), and I still can’t picture the challenge; I didn’t understand it very well.

If the output is an image, you can provide an input image file and an output image file for each record. If the output has more than 3 dimensions, you can encode it in NumPy format.
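For the NumPy route, a per-pixel label mask can be saved as one .npy file per record. This is just a sketch; the shape, class IDs (0 = background, 1 = soda bottle), and filename are assumptions for illustration.

```python
import numpy as np

# Build a tiny per-pixel label mask: 0 = background, 1 = soda bottle
# (class IDs are hypothetical, chosen for this example).
height, width = 4, 6
mask = np.zeros((height, width), dtype=np.uint8)
mask[1:3, 2:5] = 1  # mark a rectangular region as "soda bottle"

# One .npy file per training record; pair it with the input image file.
np.save("mask.npy", mask)

loaded = np.load("mask.npy")
assert (loaded == mask).all()
```

For multi-class segmentation you would either keep integer class IDs in a 2-D mask like this, or one-hot encode into a (height, width, num_classes) array before saving.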

I’m sorry, semantic segmentation may not be possible here, because you need to manually annotate the images in the training dataset, stating which pixels are soda bottles and which are not!


But what if we use Mask R-CNN for the pixel labeling? =)