
Alteryx Designer Desktop Discussions

SOLVED

Error SQLFetch: [Snowflake][Support] (50310) Unrecognized ICU conversion error

knozawa
11 - Bolide

Hello,

 

I got the following error message when I tried to read data using an Alteryx Input Data tool with an ODBC driver.

Error: Input Data (3): Error SQLFetch: [Snowflake][Support] (50310) Unrecognized ICU conversion error.

I understand that this error message is related to Unicode values.  I referred to this link, which asks exactly the same question; however, the answer there is just "contact support".

I could load the data with Unicode values into Snowflake, but I'm not sure why I cannot read the Unicode values back from Snowflake.

 

Does anyone know how to solve this issue?  My data contains Chinese characters and/or emoji, so I would prefer not to decompose the Unicode characters if possible.

 

Sincerely,

knozawa

3 REPLIES
MichaelF
Alteryx Alumni (Retired)

Hi @knozawa,

 

This seems to be a driver issue, so you could try different drivers, or Snowflake may release one in the future that handles these characters. For troubleshooting, you could test which characters are throwing the error and remove them. For reading into Alteryx, there's no real workaround when connecting directly to Snowflake, since the connection depends on the driver. Alteryx itself handles Chinese characters and emoji, so if there's another way to read from Snowflake without losing those characters, they can definitely be read into Alteryx.
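
To narrow down which rows are at fault, a rough Python sketch like this (assuming your extract is a UTF-8 CSV called data.csv; the file name is just a placeholder) would flag lines whose bytes aren't valid UTF-8, which is one common source of ICU conversion errors:

# Report CSV lines whose raw bytes are not valid UTF-8; these are
# likely candidates for the characters the driver cannot convert.
with open("data.csv", "rb") as f:
    for lineno, raw in enumerate(f, start=1):
        try:
            raw.decode("utf-8")
        except UnicodeDecodeError as err:
            print(f"line {lineno}: {err}")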

 

Cheers,

Mike

knozawa
11 - Bolide

@MichaelF

 

It seems like the � character (the U+FFFD replacement character) causes the issue when I read from Snowflake.

 

I replaced the � using the Multi-Field Formula tool in Alteryx:

Replace([_CurrentField_],'�','')

 

After uploading the data into Snowflake and reading it back, I got the same error message.

 

I noticed that Alteryx doesn't recognize � after the Multi-Field Formula tool, but the .csv file that was created for Snowflake bulk loading still appears to contain many �.

 

Since Alteryx doesn't recognize � when I read the .csv file back into Alteryx, I'm wondering if there is a way to clean up the .csv file before loading it into Snowflake, without cleaning it up manually.
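
Something like this Python sketch is what I have in mind (the data.csv / data_clean.csv file names are just placeholders):

import csv

# Strip the U+FFFD replacement character from every field
# before bulk loading.
with open("data.csv", encoding="utf-8", newline="") as src, \
     open("data_clean.csv", "w", encoding="utf-8", newline="") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        writer.writerow([field.replace("\ufffd", "") for field in row])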

 

Sincerely,

Kazumi

knozawa
11 - Bolide

@MichaelF and @HenrietteH found a solution!

 

We needed to remove the delimiter character used in the CSV file for bulk loading from the data itself.

 

You can load the data either way; leave it as is, but make sure to remove the delimiter character used in the CSV file from your data:
If you're using the regular Output Data tool with Snowflake bulk loading, the delimiter is the 0x01 character; remove it with: regex_replace([_CurrentField_],"\x01","")
If you're using the bulk loading macro, the delimiter is whatever you picked (I was using "|"), so it would be: regex_replace([_CurrentField_],"\|","") (the pipe must be escaped in a regex; see the sketch after this list)
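
A quick Python illustration of why the pipe needs escaping (Python's re module follows the same Perl-style alternation rules as, to my knowledge, Alteryx's REGEX_Replace):

import re

# Unescaped, "|" is alternation between two empty patterns, so it
# matches the empty string and removes nothing:
print(re.sub(r"|", "", "a|b"))   # prints a|b  (no-op)
# Escaped, it matches a literal pipe character:
print(re.sub(r"\|", "", "a|b"))  # prints ab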

Then, to read the data back, convert the column to binary in the query:
SELECT TO_BINARY(<database>."PUBLIC"."<Table>"."<Column>", 'UTF-8') AS Content_Binary
FROM <database>."PUBLIC"."<Table>"

Then use the Blob Convert tool to convert the binary field back to text. 
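
For anyone who wants to sanity-check the round trip outside Alteryx, here is a minimal sketch using the snowflake-connector-python package (the connection details and the MY_DB / MY_TABLE / MY_COLUMN names are placeholders, not from this thread):

import snowflake.connector

# Fill in your own account and object names; all placeholders.
conn = snowflake.connector.connect(
    user="<user>", password="<password>", account="<account>"
)
try:
    cur = conn.cursor()
    cur.execute(
        'SELECT TO_BINARY("MY_COLUMN", \'UTF-8\') AS Content_Binary '
        'FROM MY_DB."PUBLIC"."MY_TABLE"'
    )
    for (blob,) in cur:
        # BINARY columns arrive as Python bytes; decoding restores
        # the original text, Chinese characters and emoji included.
        print(blob.decode("utf-8"))
finally:
    conn.close()

Decoding the bytes as UTF-8 is the same step the Blob Convert tool performs inside Alteryx.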

Thank you very much for your help, @MichaelF and @HenrietteH!

 

Sincerely,

knozawa
