The DataRobot Tools Team are big proponents of the Python SDK. We were able to quickly and easily convert our macro- and R-based tool into one that uses the HTML and Python SDKs. For Alteryx users, the tool is only useful if you currently have access to a DataRobot account, and it looks as follows:
However, for Python tool developers, we have developed some useful helper classes for making it easier to work with the Alteryx Python SDK that we encourage folks to have a look at. Some of the features include:
For now, developers will need to open up the YXI file and examine the code. Eventually we would like to split out our common code and publish it to PyPI as a stand-alone library, so it can easily be included in other tools via an addition to a tool's requirements.txt file. You can grab our YXI package from our posting on the Alteryx Gallery.
@nick612haylund have a gold star.
Dad Joke: Why did all the cows get Gold Stars?
A: Because they were out standing in their fields!
I should get one of those keyboards just for that amazing joke.
Ever needed a workflow to just stop processing because of a lack of stuff to process? Like no files for a Dynamic Input tool? If you don't want the error from it, or you want a minimum number of files before it's worth processing, or any number of other scenarios where having Alteryx pass 0 records through the workflow is not good, then I have the tool for you.
I give you Terminator. There is one configuration value: the minimum number of records required to continue. If at least the minimum number of records is received on the input connector, the records proceed unchanged on the output. If the minimum is not met, the output is never initialized and no downstream tools process. The 'n' output always outputs the total number of input records received, even if 0...unlike the Summarize tool.
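The gating behavior described above can be sketched in plain Python (this is an illustrative sketch, not the tool's actual SDK code; the function name `terminate` is made up for this example):

```python
# Hedged sketch of the Terminator gating logic: records pass through
# unchanged only if the input count meets the configured minimum, while
# the record count ('n' output) is always reported -- even when it is 0.
def terminate(records, min_records):
    records = list(records)
    n = len(records)  # the 'n' output: always produced
    # Gate the main output: below the minimum, emit nothing downstream.
    passed = records if n >= min_records else []
    return passed, n


# Example: three records with a minimum of 2 pass through...
print(terminate([{"id": 1}, {"id": 2}, {"id": 3}], 2))
# ...but a single record with a minimum of 2 stops the flow,
# while n is still reported as 1.
print(terminate([{"id": 1}], 2))
```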
I wanted to build out a simple proof of concept of what a predictive tool built with the Python SDK might look like. The tricky part about building a predictive tool with the Python SDK is that the SDK is designed to pass records through for processing one by one, whereas most predictive models require access to all input records at once. To work around this, I followed the lead of @NeilR and @Chriszou outlined in this Developer Discussion Thread. I adapted code from the Example Output Python Tool, using ii_push_record to create a temporary table of data, and then performed the clustering analysis in ii_close.
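The buffer-then-process pattern above can be sketched like this (a minimal stand-in, not the actual Alteryx Python SDK classes; the class name, the `analyze` callback, and the record representation are all assumptions made for illustration):

```python
# Hedged sketch of the pattern described above: ii_push_record buffers
# each incoming record into a temporary table, and ii_close runs the
# whole-dataset analysis (e.g. clustering) once every record has arrived.
class BufferingIncomingInterface:
    def __init__(self, analyze):
        self.rows = []          # the temporary table of buffered records
        self.analyze = analyze  # function run on all rows at close time

    def ii_push_record(self, record):
        # Copy the record's values: the SDK hands you a reused buffer,
        # so storing the record itself would keep only the last row.
        self.rows.append(list(record))
        return True

    def ii_close(self):
        # The stream is done; the full dataset is now available at once.
        return self.analyze(self.rows)


# Example: buffer two records, then "analyze" by counting them.
ii = BufferingIncomingInterface(analyze=len)
ii.ii_push_record([1.0, 2.0])
ii.ii_push_record([3.0, 4.0])
print(ii.ii_close())  # 2
```

In the real tool, `analyze` would be the clustering step, but the shape of the workaround is the same: accumulate in `ii_push_record`, compute in `ii_close`.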
The tool will show an error message before the workflow is run. I believe this is related to how the workflow/tool is updated. The tool should run without issue and create clusters for your input data.
There is definitely a lot of room for refinement in the code and tool. I am really excited to take it in new directions, expand its abilities, and run some comparisons against our R-based tools. I just wanted to post this tool to demonstrate how something like this can be done with the SDK, and to make sure I got one of the super cool Python Badges 🙂
@SydneyF I did something similar for my tool. It wasn't immediately obvious that you couldn't simply stack the input records, because each record is merely a pointer and every incoming record shares the same pointer.
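The pitfall being described can be shown in plain Python (a stand-in illustration: the list `buffer` plays the role of the SDK's reused record pointer):

```python
# Every incoming record arrives through the same reused buffer, so
# "stacking" the raw references keeps N copies of the LAST row only.
buffer = [0, 0]          # stands in for the SDK's shared record pointer
naive, copied = [], []

for value in (1, 2, 3):
    buffer[0] = value            # the SDK overwrites the same buffer
    naive.append(buffer)         # wrong: stores the shared reference
    copied.append(list(buffer))  # right: snapshot the record's values

print(naive)   # [[3, 0], [3, 0], [3, 0]] -- all rows collapsed to the last
print(copied)  # [[1, 0], [2, 0], [3, 0]] -- distinct rows preserved
```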
What Python Badge?