There is an enormous amount of visual information in the world, and the hardest task is finding exactly what you need. Beyond general-purpose Google image search, there are many specialized tagging and indexing tasks that deal with a limited number of object types in images.
Let’s look at an example of using computer vision technologies to detect wedding accessories and classify them by the characteristics listed in the table below.
| Bride dress | Bride hairstyle | Bridesmaid dress | Flowers | Desserts | Stationery | Jewelry | Groom shoes | Bride shoes |
|---|---|---|---|---|---|---|---|---|
| Sleeve length | Hair length | Sleeve length | Arrangement type | Type | Type | Type | Type | Type |
| Neckline | Hair style | Neck | Flower type | Color | Color | | | |
The first step of this project was data labeling. The customer provided a dataset of wedding photos without any labels, so we used the Amazon Mechanical Turk (MTurk) service to create them.
The Amazon MTurk pipeline consists of:
1. An HTML template that workers use to draw bounding boxes and select labels.
2. A Python script that checks the similarity of labels and approves or rejects each assignment.
3. A Python script that generates the final labeling for training the neural network.
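The agreement check in step 2 can be sketched as follows. This is a minimal illustration, not the original script: it assumes two workers annotated the same image, and approves the assignment when their labels match and their bounding boxes overlap above an IoU threshold (the threshold value here is hypothetical).

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def review_assignment(boxes, labels, iou_threshold=0.5):
    """Approve only when both workers drew overlapping boxes with the same label."""
    (box1, box2), (label1, label2) = boxes, labels
    if label1 == label2 and iou(box1, box2) >= iou_threshold:
        return "approve"
    return "reject"
```

In a real pipeline the resulting decision would be sent back through the MTurk API to approve or reject the worker's assignment.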
The second step was to train a neural network for category detection. We tested several architectures, such as MobileNet, NASNet, ResNet, and YOLO, to find the best accuracy/speed trade-off. Finally, YOLOv2 was trained on the labeled dataset.
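One concrete detail of preparing labels for YOLO training (step 3 of the pipeline above) is the annotation format: YOLO/Darknet expects each box as center coordinates and size, normalized by the image dimensions, rather than pixel corner coordinates. A small conversion sketch:

```python
def to_yolo(box, img_w, img_h):
    """Convert a pixel box (xmin, ymin, xmax, ymax) to YOLO's
    normalized (cx, cy, w, h) format, each value in [0, 1]."""
    xmin, ymin, xmax, ymax = box
    return ((xmin + xmax) / 2 / img_w,   # box center x
            (ymin + ymax) / 2 / img_h,   # box center y
            (xmax - xmin) / img_w,       # box width
            (ymax - ymin) / img_h)       # box height
```

For example, the box (100, 200, 300, 400) in a 640x480 image becomes roughly (0.3125, 0.625, 0.3125, 0.4167).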
The third step was to train an ensemble of models to classify the detected categories by tags. We fine-tuned pre-trained VGG16 models from PyTorch on the labeled dataset.
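The ensemble's dispatch logic can be sketched without the heavy model code. The idea, following the characteristics table above, is that each detected category has its own set of applicable tag classifiers, so a detection is routed only to the relevant models. The mapping and function names below are illustrative, not the project's actual code:

```python
# Which tag classifiers apply to each detected category,
# mirroring the characteristics table above (assumed mapping).
ATTRIBUTES = {
    "bride_dress":      ["sleeve_length", "neckline"],
    "bride_hairstyle":  ["hair_length", "hair_style"],
    "bridesmaid_dress": ["sleeve_length", "neck"],
    "flowers":          ["arrangement_type", "flower_type"],
    "desserts":         ["type", "color"],
    "stationery":       ["type", "color"],
    "jewelry":          ["type"],
    "groom_shoes":      ["type"],
    "bride_shoes":      ["type"],
}

def classify_detection(category, crop, classifiers):
    """Run only the tag classifiers relevant to the detected category.

    `classifiers` maps (category, attribute) to a callable (e.g. a
    fine-tuned VGG16 model) that takes the cropped image region and
    returns a tag value.
    """
    return {attr: classifiers[(category, attr)](crop)
            for attr in ATTRIBUTES[category]}
```

In the real system each callable would be a fine-tuned VGG16 model applied to the region cropped out by the YOLOv2 detector.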
The demonstrated approach can be used in eCommerce to automate image tagging. It is also applicable to quickly searching for goods with similar characteristics.