I am trying to build a centralized repository that pulls the exact same data from 30+ different remote databases. The solution only needs about 15 tables from each instance, but 15 tables times 30+ instances is a lot of duplication. I am new to the tool, but I'm looking for a way to build just one workflow and have it run against each instance of the remote databases. Is there a way to accomplish this?
Hi @chris_oneslager,
Yes, there are ways to accomplish this with Alteryx. Without knowing what type of database you are pulling from: if the list of databases and tables is static and the data truly has no schema differences (i.e., a field that is a string in one table isn't numeric in another), then the list of sources can be entered into a Text Input tool and brought into Alteryx with a Dynamic Input tool. If there are schema differences but the field names are the same, then a batch macro can solve the issue as well. Either way, all 450+ tables are combined and one Alteryx workflow can manage the data all at once.
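For concreteness, the Text Input list might look something like this (the server, database, and table names below are made-up placeholders); the Dynamic Input tool then re-issues the same template query once per row:

```
server        database   table
ora-east-01   SALESDB    CUSTOMERS
ora-west-01   SALESDB    CUSTOMERS
ora-cent-01   SALESDB    CUSTOMERS
```

With 30+ servers and 15 tables each, this list would have 450+ rows, but the workflow itself stays a single stream.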
I am pulling data from Oracle, and the schemas are identical in both source and destination. The only things that need to change are the server name and the database name. It would be nice to control the scheduling of each extract separately, or to be able to re-run an individual extract if needed. Could I also store this information in a table and provide it to the Dynamic Input tool instead of having it static in a Text Input tool?
I could not figure out how to change the "template," but in Oracle you can fully qualify a SELECT statement to pull from different databases, assuming the same credentials. Is there a way to swap out the template?
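In Oracle, the usual way to fully qualify a query across databases is a database link, so that only the `@link` suffix changes per source. A minimal sketch (the schema, table, and link names here are hypothetical, and each link must already exist and carry its own credentials):

```sql
-- Query the same table on two different remote instances.
-- sales_owner, orders, east_region_db, and west_region_db are placeholders.
SELECT order_id, order_date, amount
FROM   sales_owner.orders@east_region_db;

SELECT order_id, order_date, amount
FROM   sales_owner.orders@west_region_db;
```

If the link name were a column in the control list, the Dynamic Input tool or a batch macro could substitute it into the template query for each row.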