Hi experts,
I may need some help, as we have been scratching our heads around this issue for three days now.
We have created a DSN-less connection in the Alteryx Gallery.
When we configure the Output Data tool to write through it, we get the following error:
Error: Output Data (7): Error SQLDriverConnect: [Snowflake][Snowflake] (4)
REST request for URL
https://blabla898.snowflakecomputing.com:443/session/v1/login-request?requestId=e8aad26c-71d0-45eb-b... failed: HTTP error (http error) - code=404.
The connection is configured as follows:
snowbl:Driver= {SnowflakeDSIIDriver};Server={blabla898.snowflakecomputing.com};Database={ALTERYX_SS}; Warehouse={ALTERYX_W};schema={SCHEMATEST};UID={ALTERYXSERVICE};PWD=__EncPwd1__
We have tried the exact same connection string with an ODBC DSN installed on a laptop, and it worked.
It is just the DSN-less connection that we are struggling with.
Let me know if there is any documentation on this, or if it is some simple issue we are missing.
If you have any further questions, please do let me know.
Thanks
Luca
There are some threads on this (DSN-less bulk loading for Snowflake) --- @SeanAdams is the expert. I believe the consensus was that this could not be done via a DSN-less connection.
If running on a server I would:
1) make sure my server was an AWS EC2 instance.
2) make sure the EC2 instance role had access to an S3 bucket.
3) set up a Snowflake external storage connection (external stage) to the S3 bucket - see the sketch after this list.
4) write the Alteryx outputs to a temp location.
5) use the AWS CLI to copy them to S3.
6) ingest from S3 -> Snowflake via whatever you want (Snowpark, external tables, or other external staging methods).
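For step 3, the Snowflake side would look something like the below. This is only a rough sketch - the integration, stage, bucket and role ARN names are all placeholders, and your Snowflake admin may already have a standard pattern for this:

-- one-time setup: trust relationship between Snowflake and the S3 bucket (names are placeholders)
CREATE STORAGE INTEGRATION s3_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my_snowflake_role'
  STORAGE_ALLOWED_LOCATIONS = ('s3://my-alteryx-bucket/landing/');

-- external stage that the workflow's files will land in
CREATE STAGE ALTERYX_SS.SCHEMATEST.ALTERYX_S3_STAGE
  URL = 's3://my-alteryx-bucket/landing/'
  STORAGE_INTEGRATION = s3_int
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);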
Thanks @apathetichell for the swift reply.
It is not great news that this cannot be done via a DSN-less connection. We can successfully read data in via this connection, and our aim was to use the same approach for bulk uploads.
Just a couple of quick questions:
When you say "4) write the Alteryx outputs to a temp location", what is the temp location, and what are the steps to achieve this?
Also, on point 5), how do we achieve this? Do we need to use the Python tool, or is there a specific tool for it?
I think we satisfy point 1.
Alteryx workflows use a temp directory. In the workflow you can use an Output Data tool to write to a .csv (or, on 2024, parquet files). If you are using a Formula tool to create this path (which you should be), you can then use another Formula tool to create a .bat script file, and run the Run Command tool to execute the .bat file.
Your .bat file includes:
aws s3 cp {{your path - in quotes if needed}} {{your s3 bucket}} --- plus other commands as needed here.
You may want to run a delete command afterwards --- even though Alteryx temp files (especially on Server) SHOULD auto-delete. SHOULD is the key word here, and some junk may be left over.
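Pulled together, the generated .bat might look something like this. The local path, bucket, and file name are just placeholders for whatever strings your Formula tool assembles:

REM copy the workflow output up to the S3 landing area (path and bucket are placeholders)
aws s3 cp "C:\Temp\alteryx_output.csv" s3://my-alteryx-bucket/landing/alteryx_output.csv
REM optional clean-up so nothing is left behind locally
del "C:\Temp\alteryx_output.csv"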
Once your files are landed, Snowflake can auto-ingest from a shared external stage/storage location, either via Snowpipe, external tables (which you'd use to stage the data and then create a task to auto-update your analytic tables), or numerous other methods. That part really depends on what your company uses for standardized Snowflake external storage/table modeling.
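For a flavor of that last step, two common patterns, again only a sketch: the target table and pipe names are assumptions on my part, and the stage is the placeholder one from the earlier sketch.

-- option 1: a COPY you run or schedule yourself
COPY INTO ALTERYX_SS.SCHEMATEST.MY_TARGET_TABLE
  FROM @ALTERYX_SS.SCHEMATEST.ALTERYX_S3_STAGE
  PATTERN = '.*[.]csv';

-- option 2: Snowpipe, so new files auto-ingest as they land in the bucket
-- (AUTO_INGEST also needs S3 event notifications pointed at the pipe's queue)
CREATE PIPE ALTERYX_SS.SCHEMATEST.ALTERYX_PIPE AUTO_INGEST = TRUE AS
  COPY INTO ALTERYX_SS.SCHEMATEST.MY_TARGET_TABLE
  FROM @ALTERYX_SS.SCHEMATEST.ALTERYX_S3_STAGE;

Which of these makes sense really depends on the company standards mentioned above.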