Prevent dirty reads in Alteryx
- In many cases, workflows are executed in an extract, transform, load order to avoid concurrency problems. However, that is not possible here, because the workflows run on different schedules with overlapping runtimes.
- The database runs on MSSQL.
- I've tried changing the isolation level on the database, but Alteryx ignores it (see the sketch after this list).
- I've tried taking locks in a pre-SQL statement, but they are released right before the writing session starts.
- I've tried the SQL Server bulk loader protocol, but dirty reads still occur.
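For reference, the kind of isolation-level change I mean looks roughly like this on SQL Server (a sketch only; the database name is a placeholder, and the session-level statement only affects the connection that runs it, which is presumably why the connections Alteryx opens ignore it):

```sql
-- Database-level: readers see the last committed version of a row
-- instead of blocking on, or dirty-reading, in-flight writes.
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- Session-level: applies only to the connection that runs it, so it
-- does nothing for the separate connections Alteryx opens itself.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```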
Hi @Morten,
I'm not aware of a direct solution for that.
What I've done before is create, via the pre- and post-SQL statements, a status table with two columns: start (written by the pre-SQL) and finish (written by the post-SQL). That way, if finish is null, I know the table is currently being updated.
On the reading side, you can then either build a counter to wait, or simply stop the process.
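Something like this, as a minimal sketch (LoadStatus and the column names are just placeholders for whatever you create in your pre-SQL):

```sql
-- Pre-SQL of the writing workflow: mark the load as started.
INSERT INTO LoadStatus (TableName, StartTime, FinishTime)
VALUES ('Table1', GETDATE(), NULL);

-- Post-SQL of the writing workflow: mark the load as finished.
UPDATE LoadStatus
SET FinishTime = GETDATE()
WHERE TableName = 'Table1' AND FinishTime IS NULL;

-- Reading workflow: if any load is still open, wait and retry,
-- or simply stop the process.
SELECT COUNT(*) AS OpenLoads
FROM LoadStatus
WHERE TableName = 'Table1' AND FinishTime IS NULL;
```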
It is not a pretty solution, but it does work.
Best,
Fernando Vizcaino
Hi Fernando Vizcaino,
Thank you for sharing your solution. I actually worked with the same idea, but went with something else, since it requires too much configuration every time you need to input or output data.
Instead I use the In-DB tools:
First I use a Data Stream In tool to create a temp table.
Then I bulk upload to the target table using Write Data In-DB.
This prevents dirty reads and forces a native error instead, which I can catch with a try-catch list-runner I made. However, it doesn't support pre- and post-SQL statements or the "Update: Insert if New" output mode, and the data is not sorted.
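As far as I can tell, this amounts to the classic staging pattern; in plain T-SQL it would look roughly like this (a sketch only; table and column names are placeholders, and it assumes readers use the default READ COMMITTED isolation):

```sql
-- Stage the new rows in a table that readers never query.
-- (The Data Stream In temp table plays this role.)
INSERT INTO Table1_Staging (Col1, Col2)
SELECT Col1, Col2 FROM SourceData;

-- Swap the staged rows into the live table in one transaction, so
-- readers see the old data or the new data, never a partial mix.
BEGIN TRANSACTION;
    DELETE FROM Table1;
    INSERT INTO Table1 (Col1, Col2)
    SELECT Col1, Col2 FROM Table1_Staging;
COMMIT TRANSACTION;
```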
So I'm still looking for a better solution.
Thank you for sharing your solution; it is always nice to know that other people have thought along similar lines. I really appreciate it.
Best regards,
Morten
I came across this same situation. Here was my solution:
1) When writing my data to a table in Workflow 1, I include a TotalRowCount column, so each row carries the total number of records I'm writing.
2) When pulling data from the table in Workflow 2, I include in the WHERE clause language such as: WHERE Table1.TotalRowCount = (SELECT COUNT(*) FROM Table1)
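If a write is still in flight, the live row count won't match the stamped TotalRowCount, so the query returns nothing rather than a partial read. As a sketch of both sides (table and column names are placeholders):

```sql
-- Workflow 1: stamp every row with the total row count of the load.
INSERT INTO Table1 (Col1, TotalRowCount)
SELECT Col1, (SELECT COUNT(*) FROM SourceData)
FROM SourceData;

-- Workflow 2: rows only come back once the full load has landed.
SELECT Col1
FROM Table1
WHERE Table1.TotalRowCount = (SELECT COUNT(*) FROM Table1);
```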
