Hello Alteryx Community! I am working on loading a dataset with 20 million records and 150 columns. The data is being read from a text file and written to a table in SQL Server.
I have two questions -
1) It is taking about 1 hour on average to load 1 million rows. Is there a way to reduce this time, even when reading from a text file?
2) Is it possible to load the records on a continual basis instead of all at once? Currently, no records show up until the job is complete. I would like to see rows appearing in the table as the job runs.
Thank you!
If it's coming from a text file, you can try using two workflows: one to load the text file into a YXDB with the AMP engine (check the AMP box on the Configuration pane), and a second to load the YXDB into the SQL Server table. Beyond that, I would look for something in the data that provides a natural grouping, such as branch number, store number, or state/region: anything that breaks the records up into smaller chunks. Then have a batch macro loop through the groups and load them one at a time. I do this myself quite often.
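To make the "load in chunks, commit per chunk" idea concrete outside of Alteryx, here is a minimal sketch in plain Python. It uses SQLite in place of SQL Server purely for illustration, and the table name, columns, sample data, and batch size are all hypothetical; the point is that committing after each batch makes rows visible in the target table while the job is still running, instead of only at the end.

```python
import csv
import io
import sqlite3

# Hypothetical sample data standing in for the 20M-row text file.
SAMPLE = "store,amount\nA,10\nA,20\nB,30\nB,40\nC,50\n"

# SQLite stands in for the SQL Server target in this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (store TEXT, amount INTEGER)")

BATCH_SIZE = 2  # tiny for the demo; thousands or more in practice


def load_in_batches(reader, conn, batch_size):
    """Insert rows in batches, committing after each batch so
    partially loaded data is already queryable mid-run."""
    batch, loaded = [], 0
    for row in reader:
        batch.append((row["store"], int(row["amount"])))
        if len(batch) >= batch_size:
            conn.executemany("INSERT INTO sales VALUES (?, ?)", batch)
            conn.commit()  # rows become visible here, not at job end
            loaded += len(batch)
            batch = []
    if batch:  # flush the final partial batch
        conn.executemany("INSERT INTO sales VALUES (?, ?)", batch)
        conn.commit()
        loaded += len(batch)
    return loaded


reader = csv.DictReader(io.StringIO(SAMPLE))
total = load_in_batches(reader, conn, BATCH_SIZE)
print(total)  # 5
```

With a real driver against SQL Server, the same structure applies; you would just tune the batch size, since very small batches trade load speed for visibility.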