I want to embed my Python script in an Alteryx workflow for deployment. I wonder if there is a way to pass the Alteryx results as parameters that can later be used by Python. I attached a simple example here where I want to pass three parameters: ID = "AAA", Target = "BBB", and Region = "CCC". In my Python script, I have three corresponding variables named "ID", "Target", and "Region". How do I automatically set these three variables in Python to the values produced by the upstream workflow? I would prefer to keep everything inside the Alteryx workflow rather than saving the Alteryx table to an external file (e.g., a CSV) before loading it into Python. Is there a decent solution? I would expect this to work similarly to passing command-line arguments to Python.
It shouldn't matter what the file is named; what's important is the connection name. If you have only one connection into the Python Tool, as in your screenshot, then the table should be named '#1' in Python.
EDIT: Sorry, I misunderstood the screenshot: you're running a script using the Run Command tool, not the Python Tool. If you're on Alteryx 2018.3, you should try out the new Python Tool.
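For reference, here's a minimal sketch of what the Python Tool code could look like. The `ayx` module and its `Alteryx.read`/`Alteryx.write` calls are the ones the tool provides; the column names (ID, Target, Region) are assumed from your example:

```python
# Inside the Alteryx Python Tool (2018.3+), with one incoming connection
from ayx import Alteryx

# Read incoming connection '#1' as a pandas DataFrame
df = Alteryx.read("#1")

# Pull the three parameters from the first row
# (column names assumed from the example in the question)
ID = df["ID"].iloc[0]
Target = df["Target"].iloc[0]
Region = df["Region"].iloc[0]

# ... run the rest of the script using these variables ...

# Optionally push results back out through output anchor 1
Alteryx.write(df, 1)
```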
Otherwise, I don't think Python has support for .yxdb files. I think your best bet would be saving to a .csv file or a SQL table; however, if this script will be run many times, that could certainly become a bottleneck, and I can see why you're trying to avoid it.
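If you do go the CSV route with the Run Command tool, the script side is straightforward. Here's a minimal sketch, assuming the workflow writes a one-row table to a file such as `params.csv` (the path and argument names here are hypothetical), or alternatively passes the three values as command-line arguments:

```python
import argparse

import pandas as pd

parser = argparse.ArgumentParser()
# Hypothetical flags the Run Command tool could supply, e.g.:
#   python script.py --id AAA --target BBB --region CCC
parser.add_argument("--id", dest="ID")
parser.add_argument("--target", dest="Target")
parser.add_argument("--region", dest="Region")
# Fallback: a CSV written by an upstream Output Data tool (hypothetical path)
parser.add_argument("--csv", default="params.csv")
args = parser.parse_args()

if args.ID is not None:
    # Values came in as command-line arguments
    ID, Target, Region = args.ID, args.Target, args.Region
else:
    # One-row table with columns ID, Target, Region, as in the example
    row = pd.read_csv(args.csv).iloc[0]
    ID, Target, Region = row["ID"], row["Target"], row["Region"]

print(ID, Target, Region)
```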
Let me know if you have access to 2018.3; otherwise, I'll try to find more information about .yxdb files and Python.
I see; in that case, I don't think you'll be able to avoid first writing the data to a file that Python can read.
Either that, or you could try turning your script into an Alteryx macro using the Python SDK.
The only other solution that comes to mind would be reading the data into R, writing it out as something like a binary .feather file, which has very fast I/O, and then reading that feather file into Python. However, that doesn't help much to avoid the bottleneck that comes with reading from and writing to disk.
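On the Python side, picking up that file would look something like the sketch below. The path is hypothetical, and this assumes the Alteryx R tool upstream has already written the table out with something like feather::write_feather:

```python
import pandas as pd

# Hypothetical path written by the R tool upstream, e.g. in R:
#   feather::write_feather(read.Alteryx("#1", mode = "data.frame"),
#                          "params.feather")
df = pd.read_feather("params.feather")

# Same one-row parameter table as in the example
ID = df["ID"].iloc[0]
Target = df["Target"].iloc[0]
Region = df["Region"].iloc[0]
```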