
Data Science

Machine learning & data science for beginners and experts alike.
SusanCS
Alteryx Alumni (Retired)

[Animated GIF: the Simpson family. Image via GIPHY]

 

 

If you’ve been around American pop culture for even a little while, you can probably name the characters in the image above. With over 30 years on TV, The Simpsons has provided plenty of “training data,” so we can recognize many of the show’s characters. Plus, as humans, we’re just pretty good at interpreting images.

 

But to recognize images, computers need not just training data, but also a method for understanding images and predicting who or what is within them. Fortunately, with the release of Alteryx Designer 21.3, the new Image Recognition Tool in the Intelligence Suite provides exactly that. This tool helps you build an image classification model trained on a set of labeled images with two or more classes (multiclass classification). You can then use the Predict Tool from the Machine Learning tool group to label new images. 

 

Let’s take a look at how we can use Image Recognition to train a model that can identify the members of the Simpson family almost as well as you and I can.  




 



Don’t Have a Cow, Man: Prepping and Inputting Images

This dataset of images from The Simpsons is available on Kaggle, and it includes over 16,000 still images from the show, organized into training and test directories that represent 19 different characters. For simplicity, I’m using 5,637 images, each of which shows one of just four characters: Homer, Marge, Bart and Lisa. The model will try to “classify” each image as the character it thinks is shown, essentially “recognizing” that character.

 

I put 70% of the images into a training dataset, 20% in a validation set, and 10% in a holdout set. (For more on why this division matters, check out this post.) The Directory Tool makes it easy to bring in the full training and validation directories, plus their subdirectories organized by each of the four Simpsons characters. I used a Formula Tool to extract the label name (the Simpsons character shown in the image) from the name of each subdirectory and put it in a new field called “Class.” With the images organized and labels established, everything is ready for the Image Input Tool to bring in the actual image files.
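If you’re curious what this prep step looks like outside of Designer, here’s a minimal Python sketch of the split-and-label logic. The folder layout, character directory names, and file extension are assumptions based on how the Kaggle dataset is organized:

```python
import random
from pathlib import Path

random.seed(42)  # make the split reproducible

# Hypothetical local path to the Kaggle images, with one
# subdirectory per character (the class label)
DATA_DIR = Path("simpsons_dataset")
CHARACTERS = ["homer_simpson", "marge_simpson", "bart_simpson", "lisa_simpson"]

splits = {"train": [], "validation": [], "holdout": []}

for character in CHARACTERS:
    images = sorted((DATA_DIR / character).glob("*.jpg"))
    random.shuffle(images)
    n_train = int(0.7 * len(images))  # 70% training
    n_val = int(0.2 * len(images))    # 20% validation; the rest is holdout
    # The subdirectory name doubles as the label, mirroring the
    # Formula Tool step that fills the "Class" field
    splits["train"] += [(p, character) for p in images[:n_train]]
    splits["validation"] += [(p, character) for p in images[n_train:n_train + n_val]]
    splits["holdout"] += [(p, character) for p in images[n_train + n_val:]]

for name, items in splits.items():
    print(f"{name}: {len(items)} images")
```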



[Screenshot: the input and prep portion of the workflow]

 


But — “d’oh!” as Homer would say — we aren’t ready to build the model just yet. It’s important to process the images first for consistency and for compatibility with the model-building process. In particular, you may need to use the Image Processing Tool — discussed in this blog post — to make the images consistent in size and apply any other transformations they need. (However, don’t convert them to grayscale, as that’s not compatible with the Image Recognition Tool.) Choose a uniform size for your images; the smaller they are, the faster your model will train, but accuracy may suffer. The minimum size depends on the pre-trained model you choose (more on that in a moment). A good starting point is 128 x 128, and that’s what I used here. You can experiment to see which dimensions give you the best results. Whatever you choose, it’s important to apply the same processing to your training, validation and test images.
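In Designer this is all point-and-click, but if you want a feel for what consistent processing means, here’s a small sketch using the Pillow library. The folder names and target size mirror this example; they’re not requirements:

```python
from pathlib import Path

from PIL import Image  # pip install pillow

TARGET_SIZE = (128, 128)  # must meet the chosen pre-trained model's minimum

def preprocess(src: Path, dst: Path, size=TARGET_SIZE) -> None:
    """Resize an image and force RGB, since grayscale isn't compatible
    with the Image Recognition Tool."""
    img = Image.open(src).convert("RGB")  # "RGB", not "L" (grayscale)
    img = img.resize(size, Image.LANCZOS)
    dst.parent.mkdir(parents=True, exist_ok=True)
    img.save(dst)

# Apply identical processing to every split
for split in ("train", "validation", "holdout"):
    for src in Path(split).rglob("*.jpg"):
        preprocess(src, Path(f"{split}_128") / src.relative_to(split))
```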




 

 

Woo-Hoo! Configuring Image Recognition

The Image Recognition Tool comes next in the workflow, and as shown below, it requires some configuration choices. We need to specify where the training and validation images are coming from and which field in each data stream contains labels for the images. 



[Screenshot: Image Recognition Tool configuration options]

 

 

We also have options for Epochs and Batch Size. A batch here is a subset of our training data; the default in the tool is 32 images. Those 32 images are sent through the neural network that’s under construction, the error is calculated based on that group, and the model’s parameters are updated to try to reduce that error.

 

The number of epochs represents the number of times you want all the training data (in one or more batches) to be sent through the model-in-progress. The default here is 10, but you can experiment with different values to see what works best for your dataset. The idea is to run the data through the model enough times to find parameters that minimize error.

 

The model’s parameters are updated once per batch, so the total number of updates is the number of batches per epoch multiplied by the number of epochs; keep in mind that your workflow will take longer as that number grows. Adding more epochs can also lead to your model overfitting. Again, you can experiment with these options to see which combination produces the best results.
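To make that arithmetic concrete, here’s a quick back-of-the-envelope calculation using the training set size from the experiment later in this post:

```python
import math

n_train = 3946   # images in the training set
batch_size = 32  # tool default
epochs = 10      # tool default

updates_per_epoch = math.ceil(n_train / batch_size)  # one update per batch
total_updates = updates_per_epoch * epochs
print(updates_per_epoch, total_updates)  # 124 batches per epoch, 1,240 updates
```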




 



Mmm, Models: Selecting a Pre-Trained Model

Finally, you have a list of choices for Pre-Trained Model. The Image Recognition Tool doesn’t build a deep convolutional neural network model for image classification completely from scratch. You probably wouldn’t want it to, unless you have a lot of computing power and time on your hands! Instead, it uses pre-trained models built by experts who did have those resources, plus millions of images for training and refining their models. The pre-trained models contain existing “knowledge,” so to speak, about image features, and they can “transfer” that knowledge to analyzing your images (hence the term “transfer learning,” which is a method also used with other types of data, such as in natural language processing).  

 

The default here is InceptionV3, but you can also choose VGG16, InceptionResNetV2, or ResNet50V2. As the Image Recognition Tool documentation explains, each has its own advantages and disadvantages in terms of accuracy, speed and computational expense, and you’ll need to prioritize those criteria for your use case. Again, you can easily try multiple options here and see how each version of your model performs. If your images are small, be sure to note the minimum sizes required by the pre-trained models; VGG16 and ResNet50V2 require 32 x 32, and InceptionV3 and InceptionResNetV2 require 75 x 75.
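To demystify what transfer learning looks like in code, here’s a minimal Keras sketch of the general technique: load a pre-trained network without its original classifier, freeze its layers, and attach a small new classification head for your own classes. This illustrates the approach, not the tool’s exact internals:

```python
import tensorflow as tf

# Load InceptionV3 with its pre-trained "knowledge" of image features
base = tf.keras.applications.InceptionV3(
    weights="imagenet",
    include_top=False,          # drop the original 1,000-class classifier
    input_shape=(128, 128, 3),  # above InceptionV3's 75 x 75 minimum
)
base.trainable = False          # freeze the pre-trained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),  # Homer, Marge, Bart, Lisa
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```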

 

Finally, in order to use your trained model for prediction on new, unlabeled images, you can include your new data and a prediction process in the same workflow, perhaps using containers to separate the parts of the process. Alternatively, you can save the model for later use in a separate workflow. To save the model, add an Output Tool after the Image Recognition Tool to put the model in a .yxdb file.
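In the Keras sketch above, the analogous save-and-reload step would look something like this (the file name is arbitrary):

```python
# Save the trained model to disk...
model.save("simpsons_classifier.keras")

# ...and reload it later in a separate script
import tensorflow as tf
reloaded = tf.keras.models.load_model("simpsons_classifier.keras")
```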




 



Cowabunga! Training the Model and Making Predictions

When you run your workflow, you can watch your epochs play out in the Results window. You’ll see the progress and, for each epoch, the evolving model’s performance on both your training and validation data. Often, though not always, you’ll see accuracy mostly increasing and loss mostly decreasing as the epochs proceed. This pattern shows that your model’s ability to predict the image label is improving as it repeatedly looks at the data and makes adjustments to its own parameters. 
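In the Keras sketch, the equivalent is the per-epoch log that training prints. Here train_ds and val_ds are assumed to be datasets of the processed images, already batched at 32:

```python
# Train and report accuracy and loss for both sets after each epoch,
# much like the tool's Results window
history = model.fit(train_ds, validation_data=val_ds, epochs=10)

# Accuracy should mostly rise (and loss mostly fall) across epochs
print(history.history["accuracy"])      # training accuracy per epoch
print(history.history["val_accuracy"])  # validation accuracy per epoch
```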



[Screenshot: per-epoch accuracy and loss for training and validation data in the Results window]

 

Want to see how all the pre-trained model options performed? Open the spoiler tag below.

 

Spoiler
I tested each of the four pre-trained model options on this dataset to see how it performed and how long it took to train on my MacBook Pro with a Windows virtual machine. Here’s what I found: ResNet50V2 offered the best accuracy and reasonable speed for this particular dataset, but YMMV with different hardware, images, image sizes and hyperparameters.

 

| Pre-Trained Model | Time to Train (3,946 images in training set; 1,128 in validation set) | Time to Predict (563 images in holdout set) | Overall Accuracy on Holdout Set |
|---|---|---|---|
| InceptionV3 | 46 min | 1 min 44 sec | 80.46% |
| InceptionResNetV2 | 1 hr 12 min | 1 min 28 sec | 83.84% |
| VGG16 | 1 hr 54 min | 3 min 6 sec | 88.28% |
| ResNet50V2 | 57 min | 1 min 28 sec | 89.88% |


Now you can use your model for prediction, either directly within the same workflow or in a separate workflow. Whichever option you choose, your new images should be processed in the same way as your original images before you predict their labels; in my workflow, I resized my new, unlabeled images to 128 x 128 as well.
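Sticking with the Keras analogy, prediction means applying identical preprocessing to the new image and then calling the model. The pixel scaling shown here is an assumption; whatever scaling was used in training must be used again at prediction time:

```python
import numpy as np
from PIL import Image

# Same 128 x 128 RGB processing as the training images
img = Image.open("new_image.jpg").convert("RGB").resize((128, 128))
x = np.expand_dims(np.asarray(img, dtype="float32") / 255.0, axis=0)

probs = reloaded.predict(x)[0]       # one probability per character class
print(probs.argmax(), probs.max())   # predicted class index and confidence
```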



[Screenshot: the prediction portion of the workflow]

 

 

I set up a second half of my workflow that brings in the saved Image Recognition model through an Input Tool. I connected the model to the M input anchor and my holdout data to the D anchor on the Predict Tool. 

 

I also added a bit of data tidying and analysis after the Predict Tool to assess how well my model performed on the holdout images. I added a variable to mark whether the prediction matched the original label on the image, which let me quickly see with a Filter Tool which images’ labels were predicted correctly and which weren’t. 

 

Finally, I used a Contingency Table Tool to display what’s essentially a confusion matrix, comparing the actual and predicted labels for the images. This visualization can give you quick insight into what your model is learning and where it’s making mistakes.  
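If you were doing this same evaluation in Python, a few lines of pandas cover both the match flag and the contingency table. The labels below are made up purely for illustration:

```python
import pandas as pd

# Hypothetical holdout results: actual labels vs. model predictions
results = pd.DataFrame({
    "actual":    ["homer", "marge", "bart", "lisa", "bart", "homer"],
    "predicted": ["homer", "marge", "lisa", "lisa", "bart", "homer"],
})

# Flag whether each prediction matched the label (the Formula + Filter step)
results["correct"] = results["actual"] == results["predicted"]
print(results["correct"].mean())  # overall accuracy

# Compare actual and predicted labels (the Contingency Table Tool step)
print(pd.crosstab(results["actual"], results["predicted"]))
```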



[Screenshot: contingency table comparing actual and predicted character labels]

 

 

My best-performing model used the ResNet50V2 pre-trained model, and it achieved 90% overall accuracy across all the classes (characters, in this case). It did slightly better identifying Lisa and Marge than Bart and Homer. I could experiment further with image and batch sizes, as well as the number of epochs, to see if I could get even better results. Nice work, Image Recognition! 






Live Action: Image Classification Applications

What images do you have among your collections of interesting data? You may need to hand-label some images to get started, but with enough prepared examples, you can generate predictions on new data. Remember also that there are important ethical considerations to keep in mind with the use of images, especially those of people; here’s a concise overview of some issues to consider, and there are many more resources out there.

 

Data generated from your images can be mixed and mingled in your workflows with other data sources, as usual — which means you’ve expanded your data possibilities once again. Or, as they’d say in Springfield, “embiggened” them.



Do you still have questions about Image Recognition? Which other tools or data science concepts would you like to see addressed here on the blog? Let me know with a comment below, and subscribe to the blog to get future articles.




Update: By request, I've attached this workflow to the post! Please note you'll need to download the original dataset yourself, update the file paths included in the workflow, and have access to Intelligence Suite tools. Have fun!

 

Blog teaser photo by Joanna Kosinska on Unsplash

Susan Currie Sivek
Senior Data Science Journalist

Susan Currie Sivek, Ph.D., is the data science journalist for the Alteryx Community. She explores data science concepts with a global audience through blog posts and the Data Science Mixer podcast. Her background in academia and social science informs her approach to investigating data and communicating complex ideas — with a dash of creativity from her training in journalism. Susan also loves getting outdoors with her dog and relaxing with some good science fiction. Twitter: @susansivek


Comments
Emil_Kos
17 - Castor

Hi @SusanCS,

 

Thank you for pointing out this article to me. This is a great resource, and it was a great choice to pick The Simpsons as an example. I love the Super Mario Effect (here's a link to a TED talk explaining what it is: https://www.youtube.com/watch?v=9vJRopau0g0).

 

Do you think you could attach the workflows to this article? I think it would be fun to explore it myself, but unfortunately I don't have enough time to take a deep dive into it now.

 

Once more, thank you for sharing. It is a great article!

 

SusanCS
Alteryx Alumni (Retired)

Hi @Emil_Kos! I'm glad you enjoyed the article. I love the Super Mario Effect video! Thank you for sharing that. So funny and great insights, too!

 

I'll add the workflow to the article; I didn't attach it originally because the user will have to go download the original dataset, and the workflow includes tools that require the Intelligence Suite to run. Once you've got the dataset, you'll need to update the file paths in the workflow. Hope you'll enjoy tinkering with it! I'm really enjoying experimenting with the Computer Vision tools.

Emil_Kos
17 - Castor

Hi @SusanCS,

 

I am happy that you enjoyed the TED talk. I like the part about throwing darts 😀

Thank you for the workflow! I've got the data already, and I will run the model myself. I didn't have a chance to experiment with image recognition tools before.
I am looking forward to it!

SusanCS
Alteryx Alumni (Retired)

@Emil_Kos, that's awesome! Enjoy the new tools and let us know how your experiments go! 

Emil_Kos
17 - Castor

Hi @SusanCS,

 

Thank you for your workflow. I learned a lot from playing with those tools.

 

One question: did you resize the photos only because of the size prerequisite?

 

Once more, thank you for this precious knowledge. I believe articles like this are fantastic catalysts for upskilling people like me, people who would like to learn more about advanced analytics.

 

I spent some time creating a recording of this workflow for internal purposes (I wanted to share this use case with others, but I need to polish my recording 😀).

I will let you know once I share it with a broader audience.

SusanCS
Alteryx Alumni (Retired)

@Emil_Kos, this comment made my day! I'm so happy to hear this had been useful to you. 

 

As for resizing: Here I resized primarily for consistency, since the size of my original images varied a lot. It's typically good for images to be a uniform size. The "right" size to use is an open question, though. You want your images to be at least the size required by the pre-trained model you choose, but beyond that, it's up to you. Larger images may improve your results, but the tradeoff is the time required to process those larger images. You can experiment a bit to find the right balance between image size and accuracy for your particular use case and dataset.

 

I'd love to see whatever you do with the workflow, if you're able to share! Have fun!

Kevin_VANCAPPEL
10 - Fireball

Hello,

 

  thank you for this amazing new tool.

 

  After using it for a few days, I think you could improve it:

  > by adding the dimensional constraints next to each of the pre-trained models,

  > by dividing the training data correctly (e.g., a Sample tool, 80%, grouped by the label),

  > finally, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it requires RGB images).

 

  Question: will you in the future allow the user to choose between CPU and GPU usage?

 

  In any case, thank you again for this new tool. It certainly has room for improvement, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.

 

  Thank you again.

 

  Kévin VANCAPPEL

 

  

SusanCS
Alteryx Alumni (Retired)

@Kevin_VANCAPPEL, so glad you had a chance to try out the tool, and thank you for sharing this great feedback. We'd greatly appreciate it if you posted your thoughts about your experience in our Designer Ideas forum so we can share your feedback with those developing the tool further.

 

I agree, there's lots of potential here, and we're excited that you're testing out the tool and offering helpful ideas for making it even better! 🌟