Can you describe the workflow in more detail? Are you writing inside a macro or anything like that? Or, more generally, can you recreate the problem with a Block Until Done tool, where the first output does all the processing and the second output does your database write in one big barrage? That should isolate the output step from any prior processing.
So I ended up writing to a .csv and then writing to the database. It is still taking ~5 minutes to write ~3,600 records, even after using the Block Until Done tool.
Statistics:
Info: Output Data (158): ODBC Driver version: 03.51
Info: Output Data (158): 3423 records have been successfully committed.
Info: Output Data (158): 3423 records were written to odbc:DSN=Marketing (digital.adwords_campaign)
Info: Output Data (158): Profile Time: 287746.09ms, 99.97%  <== Database Write
Info: Input Data (169): Profile Time: 86.31ms, 0.03%
Info: Block Until Done (170): Profile Time: 4.25ms, 0.00%
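A profile like this, with 99.97% of the time in the database write, often points at the driver committing row by row rather than in batches. As a minimal sketch of the batching idea (using Python's sqlite3 as a stand-in for the MySQL/ODBC connection; the column layout here is hypothetical):

```python
import sqlite3

# Hypothetical stand-in data: ~3,600 (id, name) records.
rows = [(i, f"campaign_{i}") for i in range(3600)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE adwords_campaign (id INTEGER, name TEXT)")

# Batched write: stream all rows through one prepared statement and
# commit once. Committing per record instead forces a round trip (and,
# on a real server, a disk flush) for every row, which is where the
# minutes go on a remote connection.
conn.executemany("INSERT INTO adwords_campaign VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM adwords_campaign").fetchone()[0]
print(count)  # 3600
conn.close()
```

In the Output Data tool, the analogous knob is the transaction/commit size; a commit size of 1 over a remote ODBC link would produce exactly this kind of profile.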
I'm fairly certain it's a database issue, because I have another workflow that connects to a WordPress database managed by WP Engine, and I push 1k - 4k records weekly in seconds. Maybe I should just contact support and ask for their database config, since they also use MySQL :D
I can say that I've experienced this, and my workaround is to push all the data to a local database and then sqldump it to the remote server. I regularly upload 100MM+ records to my remote servers, and this is the only way I've been able to manage that transfer. I'm thinking it's something with the drivers, but I'm really not a database architect.
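For anyone wanting to try the same workaround, the local-load-then-dump transfer looks roughly like this. The digital database and adwords_campaign table come from the log above; the host name and etl_user account are placeholders for your own environment:

```shell
# 1) Point the workflow's Output Data tool at a LOCAL MySQL instance,
#    so the slow row-by-row ODBC write happens over loopback instead
#    of the WAN.

# 2) Dump the finished table from the local server...
mysqldump -u etl_user -p digital adwords_campaign > adwords_campaign.sql

# 3) ...and replay the dump on the remote server in one shot.
mysql -h remote-db.example.com -u etl_user -p digital < adwords_campaign.sql
```

The dump replays as large multi-row INSERT statements, which is why the remote load finishes in seconds where per-row ODBC writes take minutes.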