Alteryx Designer Desktop Discussions

AWS S3 upload error

JohnLight
8 - Asteroid

I am getting this error message when running a workflow that uploads information to an AWS S3 bucket:

 

Error: Amazon S3 Upload (130): Error from AWS: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.

 

I have run this workflow in the past without error; this only started happening recently. I don't believe it's a network or AWS issue, as I've checked both of those possibilities.

 

Does anyone have experience dealing with this error?

16 REPLIES
apathetichell
18 - Pollux

@JohnLight - ok - just make sure you set up boto3 in your Alteryx Python venv in order to use the Aimpoint version.
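
For reference, here's a minimal sanity check you can run from the Alteryx Python tool - it just confirms boto3 is importable from the embedded venv and that credentials resolve through the default AWS credential chain (nothing here is specific to your bucket):

```python
# Minimal boto3 sanity check for the Alteryx embedded Python venv.
import boto3

print(boto3.__version__)             # confirms the package is importable
s3 = boto3.client("s3")              # uses the default credential chain
print(s3.list_buckets()["Buckets"])  # fails fast if credentials are missing
```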

alexnajm
16 - Nebula

I asked a few people, and they agree that the 8GB file size is probably the issue, so I'm unsure if our tool will solve it. Why isn't batching an option?

 

Either way, make sure boto3 is installed to try out the other version!
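
One thing worth trying if the 8GB single-stream upload is what trips the idle-socket timeout: boto3 can split the upload into multipart chunks via TransferConfig, so no single connection sits open for the whole file. A minimal sketch - the bucket, key, and chunk sizes are illustrative, not anything from your workflow:

```python
# Multipart upload sketch: boto3 splits the file into parts and
# retries parts independently, avoiding one long-lived idle socket.
import boto3
from boto3.s3.transfer import TransferConfig

config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MB
    multipart_chunksize=64 * 1024 * 1024,  # 64 MB parts
    max_concurrency=4,                     # upload parts in parallel
)

s3 = boto3.client("s3")
s3.upload_file("big_extract.csv", "my-bucket",
               "uploads/big_extract.csv", Config=config)
```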

JohnLight
8 - Asteroid

Boto3 is installed, and I have it working with a test Text Input tool. How would I go about doing it in batches? Unioning 5+ CSVs in QuickSight (which is where we're using the data after uploading it to S3) defeats the convenience of using the S3 Upload tool. I've also been able to upload files much larger than this (up to 10x the size) on similar network speeds.

apathetichell
18 - Pollux

OK - one more question - any chance you can run the workflow on an EC2 instance versus locally? That would prevent any kind of network timeout (assuming the AWS role assigned to the EC2 instance has direct access to the S3 bucket). This would require LICENSE SERVER. Not a SERVER LICENSE - LICENSE SERVER.

 

With these file sizes, I'd probably batch them and set up a Lambda job, triggered when a new file is added to the S3 bucket, to union the files. I haven't used QuickSight, so I'm not sure if you can use a Glue metadata store to build an artificial table off of the individual files - that would be another avenue.
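
A rough sketch of what that Lambda union job could look like, assuming every part file shares a header row and the combined data fits within Lambda's memory limits - the bucket, prefix, and output key are hypothetical:

```python
# Lambda sketch: on an S3 put event, concatenate every CSV under a
# prefix into one combined object (header kept from the first file).
import boto3

s3 = boto3.client("s3")
BUCKET = "my-data-bucket"          # hypothetical
PREFIX = "incremental/"            # hypothetical
OUTPUT_KEY = "combined/all_rows.csv"  # outside PREFIX to avoid self-inclusion

def lambda_handler(event, context):
    pieces = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
            lines = body.decode("utf-8").splitlines()
            # keep the header row only from the first file
            pieces.extend(lines if not pieces else lines[1:])
    s3.put_object(Bucket=BUCKET, Key=OUTPUT_KEY,
                  Body="\n".join(pieces).encode("utf-8"))
    return {"files_combined": True}
```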

JohnLight
8 - Asteroid

What is the benefit of the new tool over the old tool?

alexnajm
16 - Nebula

The main benefit is that it has the Amazon S3 List tool to grab a list of objects from your bucket - not sure if that's part of your process at all. But the main reason I suggested the alternate tools was that they might work without needing to batch.
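
For what it's worth, the boto3 equivalent of that list step looks like this (bucket and prefix names are illustrative):

```python
# List objects under a prefix - roughly what the Amazon S3 List tool does.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-bucket", Prefix="uploads/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])
```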

 

If you are still finding issues, I would follow @apathetichell's lead, since they seem to have more direct experience with S3 than I do.

apathetichell
18 - Pollux

I'm still a fan of batching, so I wanted to give you one more option:

 

QuickSight supports a manifest file (https://docs.aws.amazon.com/quicksight/latest/user/supported-manifest-file-format.html) - I'd recommend JSON. Store this in another S3 bucket. Update your manifest file to include your new incremental file, then upload the new manifest file to S3.
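
Per that doc page, a minimal JSON manifest looks roughly like this - the bucket and file names are illustrative:

```json
{
  "fileLocations": [
    {
      "URIs": [
        "s3://my-data-bucket/incremental/part_001.csv",
        "s3://my-data-bucket/incremental/part_002.csv"
      ]
    }
  ],
  "globalUploadSettings": {
    "format": "CSV",
    "delimiter": ",",
    "containsHeader": "true"
  }
}
```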

 

Upload the individual incremental file to S3. Voila - it's been unioned in QuickSight. Note: QuickSight needs access to both buckets - they do not need to be the same S3 bucket.

 

So my recommendation is three S3 tools: one to download the manifest, one to upload the updated manifest, and one to upload the incremental file.
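
If anyone wants to prototype that flow outside Designer first, here is a boto3 sketch of the same three steps - all bucket and key names are hypothetical:

```python
# Manifest-update flow: download manifest, append the new file's URI,
# re-upload the manifest, then upload the incremental data file.
import json
import boto3

s3 = boto3.client("s3")
MANIFEST_BUCKET = "my-manifest-bucket"    # hypothetical
MANIFEST_KEY = "quicksight/manifest.json" # hypothetical
DATA_BUCKET = "my-data-bucket"            # hypothetical

def add_incremental_file(local_path, data_key):
    # 1. download the current manifest
    raw = s3.get_object(Bucket=MANIFEST_BUCKET, Key=MANIFEST_KEY)["Body"].read()
    manifest = json.loads(raw)
    # 2. append the new file's URI and upload the updated manifest
    manifest["fileLocations"][0]["URIs"].append(f"s3://{DATA_BUCKET}/{data_key}")
    s3.put_object(
        Bucket=MANIFEST_BUCKET,
        Key=MANIFEST_KEY,
        Body=json.dumps(manifest, indent=2).encode("utf-8"),
    )
    # 3. upload the incremental data file itself
    s3.upload_file(local_path, DATA_BUCKET, data_key)
```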
