I've run into this problem a lot when dynamically reading in .xlsx files. Often I found that one of my suppliers had opened an Excel file, filled in a column that wasn't used, and deleted its contents later on. On other occasions some files had a malformed date somewhere (e.g. DD.MM.YYYY instead of DD-MM-YYYY), which also caused read-in issues with the field type. Either way, this breaks the dynamic read-in for me.
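If you ever want to sanity-check the files outside the workflow, here's a minimal sketch of the kind of clean-up I mean. It assumes pandas is available (e.g. in Alteryx's Python tool), and the "OrderDate" column name is a made-up placeholder, not anything from your data:

import pandas as pd

def clean_sheet(path: str) -> pd.DataFrame:
    # Read everything as text first so Excel's type guessing can't interfere
    df = pd.read_excel(path, dtype=str)
    # Drop "phantom" columns that were filled in and later cleared
    df = df.dropna(axis="columns", how="all")
    if "OrderDate" in df.columns:  # hypothetical date column
        # Accept both DD.MM.YYYY and DD-MM-YYYY by normalising the separator
        normalized = df["OrderDate"].str.replace(".", "-", regex=False)
        df["OrderDate"] = pd.to_datetime(normalized, format="%d-%m-%Y", errors="coerce")
    return df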
From your description it seems you might be running into something similar. You say you split your data source into 17 smaller files in workflow 1. Do you do this with 17 different Output Data tools, or with just one using the "Take File/Table Name From Field" option? If it's the former, I'd recommend trying the latter.
Secondly, have you tried writing to .csv files and reading those in dynamically in your second workflow? This usually solved the problem for me, since .csv avoids Excel's per-cell type inference entirely.
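For what it's worth, the same detour can be scripted if you ever need it outside Alteryx. This is just a hedged sketch assuming pandas (with openpyxl) is installed; the folder names are invented:

from pathlib import Path
import pandas as pd

src = Path("input_xlsx")   # hypothetical folder holding the 17 split files
dst = Path("output_csv")
dst.mkdir(exist_ok=True)

for xlsx in sorted(src.glob("*.xlsx")):
    # Keep everything as text so all CSVs come out with a uniform schema
    df = pd.read_excel(xlsx, dtype=str)
    df.to_csv(dst / (xlsx.stem + ".csv"), index=False)

Workflow 2 can then wildcard-read output_csv/*.csv without the type-guessing surprises.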
Let me know if this helps, or if I can do anything else to help you.
Thanks for the answer, but the schema is the same, not different: the same data types with the same column names, even in the same order. It's quite irritating to write time-consuming batch macros, dynamic renames, etc. when Alteryx is supposed to be easy and fast.