Use case: a workflow that loads data from CSV files into SQL Server using the SQL Bulk Loader. The CSV files contain date columns formatted with to_char(datecolumn, 'mm/dd/yyyy hh24:mi:ss') when the data is read from the Oracle source systems.
Issue: the same data that loads cleanly through a normal (non-bulk) insert produces problems when loaded through the SQL Bulk Loader (see the note below).
Note: when we use a normal (non-bulk) SQL Server insert via the OLE DB driver, we get no errors and the data inserts without fail. We know the data is good and the target table has no issues, since we send this data elsewhere and it loads there with no issues; we are trying to mimic that process on our end to test.
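For reference, date strings in that exact to_char format do parse cleanly on the SQL Server side. A minimal T-SQL sanity check (the values are made up; TRY_CONVERT returns NULL instead of raising an error when a value doesn't parse):

  SET DATEFORMAT mdy;  -- interpret string dates as month-first
  SELECT TRY_CONVERT(datetime, '03/15/2024 14:30:00') AS parses_ok,   -- a valid datetime
         TRY_CONVERT(datetime, '15/03/2024 14:30:00') AS parses_null; -- NULL: day-first value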
So this is a macro that can take in many different CSVs (one at a time) with completely different column structures and names, so I'm not sure how to specifically identify the datetime columns for each CSV that comes in, since every field from a CSV arrives as a string. Currently I'm simply using the Multi-Field tool to increase the size of each field from 256 to the maximum a string allows, but each field is still a string both before and after the Multi-Field tool.
Every field is a string when I do a simple CSV input to SQL Server output (non-bulk), and it works properly: the data gets converted from strings to the proper data types that the SQL Server table defines. So I'm wondering why this doesn't happen with the bulk loader. Keep in mind that I don't need the Multi-Field tool at all for the non-bulk loader.
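My understanding (an assumption on my part, not something I've confirmed in the driver docs) is that a plain INSERT lets SQL Server implicitly convert each incoming string to the target column's type, which would explain the non-bulk behavior. A minimal T-SQL sketch with a made-up table:

  SET DATEFORMAT mdy;
  CREATE TABLE #demo (evt_date datetime);                       -- hypothetical target column
  INSERT INTO #demo (evt_date) VALUES ('03/15/2024 14:30:00');  -- string implicitly converted
  SELECT evt_date FROM #demo;                                   -- returns an actual datetime
  DROP TABLE #demo;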
@narayr
Interesting.
1) Firstly:
Can I ask: in the Multi-Field tool, are you using V_WString or V_String for your fields, and not String?
V_Strings are variable-length strings: you supply a maximum length, but each cell scales down to the actual size of its contents if they're shorter than that maximum. This is as opposed to the String datatype, where every cell is forced to the size you set, irrespective of the actual length of the cell's contents.
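If it helps, the distinction is roughly analogous to CHAR vs VARCHAR on the SQL Server side. A minimal T-SQL sketch:

  DECLARE @fixed char(10) = 'ab', @variable varchar(10) = 'ab';
  SELECT DATALENGTH(@fixed)    AS fixed_bytes,     -- 10: padded to the declared width
         DATALENGTH(@variable) AS variable_bytes;  -- 2: only the actual contents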
2) Secondly:
Your use of the Multi-Field Formula tool is valid, but generally speaking it's used to apply a formula/expression to multiple fields simultaneously, not solely to change the datatypes of fields. If all you want to do is change datatypes, the Select tool will usually fit your needs. In your case this is fine, though.
3) Thirdly (to answer your query):
3.1) Are all the datatypes of your target SQL Server table set to strings (or equivalents like VARCHAR, however SQL Server defines them)? It seems like you're just uploading strings, given that you're reading in a CSV and not changing the datatypes. Please confirm; you can check directly with something like the sketch below.
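A minimal T-SQL sketch for that check (the table name is a placeholder; swap in your own):

  SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
  FROM INFORMATION_SCHEMA.COLUMNS
  WHERE TABLE_NAME = 'YourTargetTable'   -- hypothetical name
  ORDER BY ORDINAL_POSITION;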
3.2) I think the first step to resolving this is checking whether your data types in Alteryx match SQL Server before you reach the upload tool (check the metadata, i.e. the column data types).
If you check this article, you can see that the OLE DB driver seems to auto-convert Alteryx datatypes to unexpected datatypes in SQL Server. However, the Bulk Loader (which you want to use) goes through the ODBC driver instead. Based on that article, the ODBC driver does a better job of retaining the Alteryx datatype in SQL Server. So, again, this could be a datatype mismatch between the two programs.
I can't find the exact Alteryx-to-SQL-Server datatype conversion table, but I did find this article (scroll down to the Write section). It's for a slightly different Alteryx product, but I'd guess it's roughly the same: https://help.alteryx.com/aws/en/trifacta-application/reference/type-conversions/sql-server-data-type...
Hopefully this helps.
All the best,
BS