Hi,
I have a few YXDB files, each containing more than a million rows, that I want to load into a Redshift database via the S3 COPY command. So far only CSV files work with my COPY command, and CSV is struggling with the amount of data I have. Any suggestions on how to load YXDB files into the S3 bucket? I would appreciate example syntax for uploading YXDB files. Currently only the CSV COPY command below works for me:
copy schemaname.tablename
from 's3://s3 path/test.csv'
iam_role 'arn:aws:iam::xxxxxxxxxxx:role/Role-Redshift-xxxxxx-prod'
null as '\0'
csv
ignoreheader 1
delimiter ',' dateformat 'auto';
Thanks for your help.
Hi @pavi88
We do have an out-of-the-box connector that can help you upload YXDB files to an S3 bucket: the Amazon S3 Upload tool. Have you tried it? Cheers!
Hi, thank you for the suggestion. I am aware of that tool, but I don't have the access keys it requires. I managed to write everything to CSV and load it into the bucket. Thanks for your help!
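For anyone landing on this thread with the same problem: once the data is written out to CSV, the large-file pain can be reduced by splitting the extract into multiple gzipped parts before uploading. Redshift's COPY command accepts a key prefix and a GZIP option, and it loads multiple files under one prefix in parallel. Below is a minimal Python sketch of the splitting step; the function name, paths, and the default part size are illustrative assumptions, not anything from the original posts. Each part repeats the header row so the existing IGNOREHEADER 1 setting still applies per file.

```python
import csv
import gzip
import os

def split_csv_gzip(src_path, out_dir, rows_per_part=500_000):
    """Split a large CSV into gzipped parts.

    The header row is repeated at the top of every part so a Redshift
    COPY with IGNOREHEADER 1 skips it in each file. Returns the list
    of part file paths, in order.
    """
    os.makedirs(out_dir, exist_ok=True)
    parts = []
    out = None
    writer = None
    part_num = 0
    with open(src_path, newline="") as src:
        reader = csv.reader(src)
        header = next(reader)  # assumes the source CSV has a header row
        for i, row in enumerate(reader):
            if i % rows_per_part == 0:
                # Start a new gzipped part, beginning with the header.
                if out is not None:
                    out.close()
                part_num += 1
                path = os.path.join(out_dir, f"part_{part_num:04d}.csv.gz")
                out = gzip.open(path, "wt", newline="")
                writer = csv.writer(out)
                writer.writerow(header)
                parts.append(path)
            writer.writerow(row)
        if out is not None:
            out.close()
    return parts
```

After uploading the parts under a single prefix (for example with the AWS CLI: `aws s3 cp parts/ s3://your-bucket/your-prefix/ --recursive`, where the bucket and prefix are placeholders), the COPY command from the first post would point at the prefix instead of a single file, with `gzip` added to the option list. Compressing before upload also cuts the transfer time to S3 considerably for multi-million-row extracts.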