Has anyone leveraged the Natural Language Toolkit dictionaries (developed for Python) as part of a Naive Bayes classification for text sentiment analysis in Alteryx? Looking for how to set up those dictionaries for the training set.
I have not tried that exactly, but wanted to alert you to a sentiment analysis tool available on the gallery here. It uses a Microsoft API which is free up to 10,000 transactions per month (and the macro sends batches of 1,000 records, so you might be able to get away with 10,000,000 records per month).
Thanks again, Neil. Wanted to let you know I tried it out and it worked like a charm. I used the Twitter search connector to download all the #data15 tweets from the Tableau conference and ran them through the Azure ML macro.
It was a total of 17,000 tweets, and the AzureML text macro ran and scored them in just 21 seconds. Amazing!
The only thing I'm confused about is the 'batching'. You mentioned that the macro would batch records in groups of 1,000, so I would have expected to use 17 of my 10,000 free calls for the month. But when I checked this morning, my 10,000 calls are totally depleted and I have to wait until next month (or buy more).
This is my first foray into Azure, so appreciate any explanation on what happened there.
The Microsoft data plan page currently states: "A transaction is one request that returns one page of results. Retrieving multiple pages will result in multiple transactions executed." I mistakenly interpreted that to mean a batch of 1,000 records would count as a single transaction.
I reached out to Microsoft - they quickly responded that each record counts as a single transaction and they will update their documentation to make this clearer. Go Microsoft support!
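For anyone else hitting this, the arithmetic behind the confusion is easy to sketch. The sketch below assumes the 10,000 free-transaction limit and 1,000-record batch size mentioned above; the variable names are just for illustration, not from the macro itself.

```python
# Quota math for the sentiment service, contrasting the two readings
# of "transaction" discussed above. Limits are from this thread.
FREE_TRANSACTIONS_PER_MONTH = 10_000
BATCH_SIZE = 1_000
records = 17_000  # the #data15 tweet count

# Expected cost IF a 1,000-record batch counted as one transaction:
# ceiling division gives the number of batches sent by the macro.
batches = -(-records // BATCH_SIZE)  # -> 17

# Actual cost per Microsoft's clarification: one transaction per record,
# so 17,000 records blow through the 10,000 free calls in a single run.
actual_transactions = records

print(batches)                # 17
print(actual_transactions)    # 17000
print(actual_transactions > FREE_TRANSACTIONS_PER_MONTH)  # True
```

So under the batch interpretation the run would have cost 17 calls, but under per-record billing it consumed the entire monthly allowance with 7,000 records unprocessed or billed as overage.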