
Serverless Image Recognition with KNIME and Amazon AWS Rekognition


Harness the power of web services with a bunch of nodes

Building an accurate image recognition engine with deep learning tools can be a difficult undertaking. You may require tens of thousands (or millions) of curated, tagged images to get an accurate model and a powerful enough server to run the deep learning model.

The approach I take in this tutorial uses Amazon, but it can be adapted to any of the services above. At the end of the day, an image is uploaded to cloud storage, an API call is issued, and a JSON output is returned with the image's classification.

This is a quick and dirty tutorial; I'm sure you could improve this workflow somewhat if you're going to place it into a production environment.

Once installed we need to configure the CLI with our Amazon credentials and default region.

Adding a new IAM user for the Amazon CLI.

5. We can skip tags and then create the user. Once created, you're provided with an Access key ID and a Secret access key. We'll configure the CLI tool with these credentials.

We do this by running:
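The standard AWS CLI configuration command is:

```shell
# Launches an interactive prompt that stores your credentials
# and default region in ~/.aws/credentials and ~/.aws/config
aws configure
```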

This will prompt us for our AWS Access Key ID, AWS Secret Access Key, Default region name, and Default output format. Enter the credentials from the previous step along with your default region; you can leave Default output format blank.

You can find a list of your local AWS regions here.

6. Install the AWS Python library boto3. You can do this by running:
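The library installs from PyPI with pip:

```shell
# Install the AWS SDK for Python into the environment KNIME uses
pip install boto3
```

Make sure you install it into the same Python environment that your KNIME Python integration points at, or the Python nodes won't find it.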

KNIME workflow incorporating AWS Rekognition via Python and boto3.

To configure the KNIME workflow to work with your data:

The Python code looks like this:
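The original snippet isn't reproduced here, so the following is a minimal sketch of what the Python node might contain, based on the description below: it calls `boto3.client('rekognition')`, sends an image, and flattens the JSON response into rows for a table. The function names and the `max_labels` parameter are my own illustrative choices, not from the article.

```python
def detect_labels(image_path, max_labels=10):
    """Send a local image to Amazon Rekognition and return its JSON response."""
    # boto3 is imported lazily so the parsing helper below
    # can be used without AWS access configured.
    import boto3
    client = boto3.client('rekognition')
    with open(image_path, 'rb') as f:
        image_bytes = f.read()
    # detect_labels accepts raw image bytes for small images
    return client.detect_labels(Image={'Bytes': image_bytes},
                                MaxLabels=max_labels)

def labels_to_rows(response):
    """Flatten the Rekognition JSON response into (label, confidence) rows."""
    return [(label['Name'], label['Confidence'])
            for label in response['Labels']]
```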

By importing boto3, we can then call boto3.client(‘rekognition’) to start interacting with the Rekognition service.

This data is then output in JSON format and turned into a table.

Because Rekognition provides a confidence value for each label, in the “Filter Confidence” metanode we pivot the data, remove unnecessary columns, and filter out results that fall below a certain confidence threshold. This helps improve the quality of the results displayed.
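The metanode's threshold filter can be expressed as a one-liner in plain Python. The 80% threshold here is an assumed value for illustration; the article doesn't state the one used.

```python
THRESHOLD = 80.0  # assumed cut-off; tune to your data

def filter_confident(rows, threshold=THRESHOLD):
    """Keep only (label, confidence) rows at or above the threshold."""
    return [(name, conf) for name, conf in rows if conf >= threshold]
```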

In the final step, we join the AWS Rekognition output with the original images for presentation, as you can see below:

If you have any questions or ideas about this workflow, or any of my other tutorials, please feel free to reach out to me on Twitter or LinkedIn. I love the helpful KNIME community, and I hope you can take this, improve on it, and use it in your next project.
