Einstein Object Detection

Let’s say you’re a developer who works for Alpine, a company that produces and sells cereal. Alpine wants to monitor which products are found on store shelves and where those products are located. So they have sales reps that visit various markets and take photos of shelves in the cereal aisle.

Your job is to create a model that identifies Alpine cereal boxes in an image. The model returns the type of cereal and the coordinates of where that box is located in the image.

Use the Einstein Object Detection API (Beta) to train deep-learning models to recognize and count multiple, distinct objects within an image. The API identifies objects within an image and provides details, like the size and location of each object.

For each object or set of objects identified in an image, the API returns the coordinates for the object’s bounding box and a class label. It also returns the probability of the object matching the class label.

  1. Open the same Einstein Platform app in LEX. Download the https://einstein.ai/images/alpine.zip file and examine annotations.csv and the sample images.

Look at the first image, named 20171030_133845.jpg:

This image contains three Alpine cereal boxes: two Oat Cereal boxes and one Corn Flakes box.

Their bounding boxes are specified in the first row of annotations.csv like this:
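As a rough sketch of how such a row can be read programmatically — the column layout (image name followed by one JSON object per labeled box) and the coordinate values below are illustrative assumptions, not copied from the real file:

```python
import csv
import io
import json

# Hypothetical annotations.csv row: image name, then one JSON object per box.
# Check the file from alpine.zip for the actual format and values.
sample = (
    '20171030_133845.jpg,'
    '"{""label"": ""Alpine - Oat Cereal"", ""x"": 2160, ""y"": 974, ""width"": 768, ""height"": 999}",'
    '"{""label"": ""Alpine - Corn Flakes"", ""x"": 678, ""y"": 927, ""width"": 833, ""height"": 914}"'
)

row = next(csv.reader(io.StringIO(sample)))
image_name, boxes = row[0], [json.loads(cell) for cell in row[1:]]
for box in boxes:
    print(image_name, box["label"], box["x"], box["y"], box["width"], box["height"])
```

Each box is described by its label plus a top-left corner and pixel dimensions, which is enough for the service to learn where each product sits on the shelf.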

  2. Create the dataset:

As with image classification, create the dataset from the URL specified in step 1.
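The same step can be done over REST. This is a minimal sketch that only builds the request: the endpoint path and the `image-detection` dataset type are assumptions to verify against the API docs, and the live API expects these fields as multipart form data (curl's `-F`), simplified here to a urlencoded form:

```python
from urllib import parse, request

ACCESS_TOKEN = "<your Einstein Platform access token>"  # placeholder, obtained via the OAuth JWT flow

# Assumed endpoint and field names; verify against the Einstein Vision docs.
payload = parse.urlencode({
    "path": "https://einstein.ai/images/alpine.zip",  # the dataset zip from step 1
    "type": "image-detection",                        # assumed type for object-detection datasets
}).encode()

req = request.Request(
    "https://api.einstein.ai/v2/vision/datasets/upload",
    data=payload,
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
)
print(req.full_url)
# To actually send (requires a valid token):
# with request.urlopen(req) as resp:
#     print(resp.read().decode())
```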

  3. Train the dataset:

Once you create the dataset, you can view its details. As shown in the image below, there are three labels and a few examples learned from the uploaded file. Click “Train” to create a model.
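Training can likewise be kicked off over REST. Another request-building sketch, assuming a train endpoint with `datasetId` and `name` fields (the dataset id and endpoint path are hypothetical; check the API docs before relying on them):

```python
from urllib import parse, request

ACCESS_TOKEN = "<your Einstein Platform access token>"  # placeholder
DATASET_ID = 12345  # hypothetical: use the id returned when your dataset was created

payload = parse.urlencode({
    "datasetId": DATASET_ID,
    "name": "Alpine Boxes",  # any model name you like
}).encode()

req = request.Request(
    "https://api.einstein.ai/v2/vision/train",  # assumed endpoint; verify in the docs
    data=payload,
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
)
print(req.full_url)
# To actually send (requires a valid token):
# with request.urlopen(req) as resp:
#     print(resp.read().decode())
```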

  4. Make a prediction:

Once your model is ready (100% progress with a success status), switch to the PREDICTION subtab. Select the alpine model and paste the https://einstein.ai/images/alpine.jpg URL into the input box. Click “Send” to see the formatted response:

As the output image above shows, all the objects in the image are detected correctly.
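The same prediction can be made over REST. A sketch assuming a detect endpoint with `modelId` and `sampleLocation` fields (the model id is hypothetical, and the live API expects multipart form data, simplified here to a urlencoded form):

```python
from urllib import parse, request

ACCESS_TOKEN = "<your Einstein Platform access token>"  # placeholder
MODEL_ID = "YOUR_MODEL_ID"  # hypothetical: shown on the model detail page once training succeeds

payload = parse.urlencode({
    "modelId": MODEL_ID,
    "sampleLocation": "https://einstein.ai/images/alpine.jpg",  # the image to run detection on
}).encode()

req = request.Request(
    "https://api.einstein.ai/v2/vision/detect",  # assumed endpoint; verify in the docs
    data=payload,
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
)
print(req.full_url)
# To actually send (requires a valid token):
# with request.urlopen(req) as resp:
#     print(resp.read().decode())
```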

Raw response:

{
    "probabilities": [
        {
            "boundingBox": {
                "maxX": 2928,
                "maxY": 1973,
                "minX": 2160,
                "minY": 974
            },
            "label": "Alpine - Oat Cereal",
            "probability": 0.9846202
        },
        {
            "boundingBox": {
                "maxX": 1511,
                "maxY": 1841,
                "minX": 678,
                "minY": 927
            },
            "label": "Alpine - Corn Flakes",
            "probability": 0.9972818
        },
        {
            "boundingBox": {
                "maxX": 2201,
                "maxY": 1945,
                "minX": 1483,
                "minY": 1059
            },
            "label": "Alpine - Bran Cereal",
            "probability": 0.99779534
        },
        {
            "boundingBox": {
                "maxX": 3791,
                "maxY": 1978,
                "minX": 2869,
                "minY": 981
            },
            "label": "Alpine - Bran Cereal",
            "probability": 0.9950105
        }
    ]
}
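To work with this response in code, you can compute each box's pixel dimensions from its corners. The snippet below reproduces the first two entries of the raw response above verbatim:

```python
import json

# The first two entries of the raw response shown above.
raw = '''
{
    "probabilities": [
        {"boundingBox": {"maxX": 2928, "maxY": 1973, "minX": 2160, "minY": 974},
         "label": "Alpine - Oat Cereal", "probability": 0.9846202},
        {"boundingBox": {"maxX": 1511, "maxY": 1841, "minX": 678, "minY": 927},
         "label": "Alpine - Corn Flakes", "probability": 0.9972818}
    ]
}
'''

sizes = {}
for hit in json.loads(raw)["probabilities"]:
    box = hit["boundingBox"]
    # Width and height follow from the min/max corner coordinates.
    width, height = box["maxX"] - box["minX"], box["maxY"] - box["minY"]
    sizes[hit["label"]] = (width, height)
    print(f'{hit["label"]}: {width}x{height} px, p={hit["probability"]:.2%}')
    # → Alpine - Oat Cereal: 768x999 px, p=98.46%
```

From here it is straightforward to, say, count boxes per label or drop detections below a probability threshold.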

To dig deeper into the technical details and API endpoint functionality, check https://metamind.readme.io/v2/docs.

You can use curl or any tool you’re comfortable with to explore the endpoints.

References:

https://trailhead.salesforce.com/trails/get_smart_einstein/projects/predictive_vision_apex

https://trailhead.salesforce.com/projects/build-a-cat-rescue-app-that-recognizes-cat-breeds

https://github.com/muenzpraeger/salesforce-einstein-platform-apex