I have about 1,500 rows in an Alteryx flow. I need to look through an SQL table of 15M rows, find those 1,500 rows, and then update them in the SQL table. The problem is that there is no primary key; the only way to identify a unique row is by the combination of customer number and date. So a snippet of what I need to do:
Use this table:
To look for matches on CustomerNumber and Date (i.e. CustomerNumber = 2222 and Date = 05-31-2018)
And then update those matches with the new values so the table becomes:
You could just join on both of those columns, or, if that is cumbersome (it means pulling all 15M rows into Alteryx just for the join), you could instead use the Data Stream In tool to push your 1,500 rows into the database for an In-DB join.
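The update itself is just a WHERE clause on the composite key. Here is a minimal sketch using Python's stdlib sqlite3 module against an in-memory database; the `Sales` table name and `Amount` column are assumptions for illustration, and your real table and columns will differ:

```python
import sqlite3

# Hypothetical table standing in for the 15M-row SQL table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Sales (CustomerNumber INTEGER, Date TEXT, Amount REAL)")
con.executemany("INSERT INTO Sales VALUES (?, ?, ?)", [
    (1111, "05-31-2018", 10.0),
    (2222, "05-31-2018", 20.0),
    (2222, "06-30-2018", 30.0),
])

# The 1,500 updates, each keyed by (CustomerNumber, Date) --
# the composite key takes the place of a missing primary key.
updates = [(99.0, 2222, "05-31-2018")]
con.executemany(
    "UPDATE Sales SET Amount = ? WHERE CustomerNumber = ? AND Date = ?",
    updates,
)

row = con.execute(
    "SELECT Amount FROM Sales WHERE CustomerNumber = 2222 AND Date = '05-31-2018'"
).fetchone()
print(row[0])  # 99.0
```

Without an index on (CustomerNumber, Date), each such UPDATE scans the table, so on 15M rows an in-database join (the In-DB approach above) will be much faster than 1,500 separate statements.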
Is this a one time update? If so, I'd suggest something like this:
1. Union the two together (your original SQL table and the 1,500 updates you need to make), making sure to set a specific output order (check box at the bottom of the Union tool) and putting your updates on top.
2. Then add a Unique tool on CustomerNumber and Date. The "U" output at this point is what you want your table to look like.
3. So in the Output tool, rather than doing an update, do a Delete and Append.

You're replacing all the records, but the end result will "appear" to just update the 1,500 records.
Because of your total record count, I'm not sure I'd use this as a repeatable process (it will likely take a while), but if you just need to do it once, then this is a quick and dirty way to get it done without a primary key.
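The Union → Unique → Delete-and-Append steps above can be sketched in plain Python; the column names and values here are made up for illustration:

```python
# The 1,500 updates (placed "on top" in the union, like the Union
# tool's output order) and the original table rows.
updates = [
    {"CustomerNumber": 2222, "Date": "05-31-2018", "Amount": 99.0},
]
original = [
    {"CustomerNumber": 1111, "Date": "05-31-2018", "Amount": 10.0},
    {"CustomerNumber": 2222, "Date": "05-31-2018", "Amount": 20.0},
    {"CustomerNumber": 2222, "Date": "06-30-2018", "Amount": 30.0},
]

# Unique on (CustomerNumber, Date): keep only the FIRST row seen per
# key. Because updates come first, they win over the original rows --
# this is the Unique tool's "U" output.
seen = {}
for row in updates + original:
    key = (row["CustomerNumber"], row["Date"])
    if key not in seen:
        seen[key] = row

# Delete and Append: the result replaces the whole table.
result = list(seen.values())
print(len(result), result[0]["Amount"])  # 3 99.0
```

The order of the union is the whole trick: whichever copy of a duplicate key arrives first survives, so the updates must be on top.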
Hope it helps! Feel free to reach out with questions.