I have a zipped CSV file stored in an Amazon S3 bucket.
End goal: Copy CSV file into table in Snowflake Database.
I understand an easy solution is to download the ZIP file locally from S3, unzip it, upload it back into my S3 bucket, and then copy it into the table using Snowflake SQL. However, the data I want copied into Snowflake is extremely large, so I am looking for a more time-efficient option.
I am hoping to do one of the following using Alteryx:
1) Convert the ZIP file to GZIP (since Snowflake supports the decompression of GZIP files), or
2) Unzip the file while it remains in S3, to avoid the time-consuming download/upload of the file to my local computer.
Does anybody know any solution to either one of these options? Thank you!
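For reference, here is a minimal Python sketch of option 1: stream the ZIP out of S3, re-compress the CSV as GZIP, and write it straight back to the bucket. Run it from a small EC2 instance in the same region as the bucket so nothing passes through a local workstation. The bucket name, object keys, and the single-CSV-per-archive assumption are all placeholders, not anything from the original setup.

```python
import gzip
import io
import zipfile

import boto3

s3 = boto3.client("s3")

BUCKET = "my-bucket"             # placeholder: your S3 bucket
ZIP_KEY = "exports/data.zip"     # placeholder: the zipped CSV object
GZ_KEY = "exports/data.csv.gz"   # placeholder: the gzip file Snowflake will load

# ZIP needs random access (its central directory sits at the end of the
# file), so buffer the archive in memory; for very large archives you
# would spool to an attached volume instead of io.BytesIO.
zip_buf = io.BytesIO()
s3.download_fileobj(BUCKET, ZIP_KEY, zip_buf)
zip_buf.seek(0)

gz_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf) as zf:
    member = zf.namelist()[0]    # assumption: one CSV inside the archive
    with zf.open(member) as src, gzip.GzipFile(fileobj=gz_buf, mode="wb") as dst:
        # Re-compress in chunks so the uncompressed CSV is never held
        # in memory all at once.
        while True:
            chunk = src.read(1024 * 1024)
            if not chunk:
                break
            dst.write(chunk)

gz_buf.seek(0)
s3.upload_fileobj(gz_buf, BUCKET, GZ_KEY)
```

Run from the same region as the bucket, the transfer stays inside AWS, which addresses the concern behind option 2 even though the bytes technically leave S3 for the duration of the job.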
Have you found the solution yet?
I've just faced a similar problem, looking for a way to read a .gz archive on S3 with Alteryx.
Hello @nshields and @evgenypolitov,
Please find the attached macro. I assume you already have the zip file created and saved somewhere. The attached macro will let you rename zip files (or any other files) as needed.
Please note that the first variable is the full path to the file that needs to be renamed, and the second is just the new file name with its extension.
Let me know if it was useful!
Best of luck!
Niky
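Once an actual gzip-compressed .csv.gz is sitting in the bucket, Snowflake can load it directly from an external stage with COPY INTO, which covers the original end goal. Below is a minimal sketch using the Snowflake Python connector; the stage name my_s3_stage, the table my_table, and all connection parameters are placeholders, so it assumes the stage already points at the bucket and the table already exists.

```python
import snowflake.connector

# Placeholder credentials and identifiers; substitute your own.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="my_wh",
    database="my_db",
    schema="public",
)

try:
    # Assumes an external stage @my_s3_stage already points at the bucket
    # and that my_table's columns match the CSV.
    conn.cursor().execute(
        """
        COPY INTO my_table
        FROM @my_s3_stage/exports/data.csv.gz
        FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP SKIP_HEADER = 1)
        """
    )
finally:
    conn.close()
```

SKIP_HEADER = 1 assumes the CSV has a header row; drop it if the file does not.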