Alteryx Designer Desktop Discussions

SOLVED

SQL Extraction slow with CReW Workflow Alteryx 11.5

Sieuwkje
7 - Meteor

Hey there,

 

I was wondering if more people have experienced this.

For about a week now, the extraction phase in Alteryx has been taking a lot more time than before. We haven't made any changes to the workflow, and the database is the same. I can't think of any change on our side that could have caused this issue.

When we extract manually from the SQL database everything seems normal, and we don't think it's an issue with the drivers.

I was wondering whether this might be an issue with the CReW Runner tools. In the past we sometimes noticed that when the extract workflow containing these runners ran, the coupled workflows were not ended properly in Windows and slowed down the system by staying active. Rebooting would help in that case.

Does this make sense?

And if it does, is there a solution besides rebooting the system?

 

Thanks a lot in advance!

 

4 REPLIES
MichalM
Alteryx

@Sieuwkje 

 

Are you using the CReW Runner on the server?

Sieuwkje
7 - Meteor

Yes, we are running it on the server

MichalM
Alteryx

In that case the most likely explanation is that there are additional processes running on the infrastructure, all fighting for the server's resources. As powerful as the CReW Runner macros are (and I've used them a lot in my time), I'd tread with caution when it comes to using them on the Server, and as such wouldn't recommend it.

 

The Runner macros trigger workflows on the Server "behind the back" of the Controller. So if your Server is running on a 4-core machine and is configured to run 2 workflows simultaneously, the Runner macros could trigger another 10 jobs in the background without the Controller knowing. Instead of 2 jobs running in parallel, there are 12 jobs fighting for the limited resources available, which could very well explain the behaviour you're seeing.
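
One quick way to sanity-check this explanation is to count how many engine processes are actually running on the Server and compare that with the number of simultaneous workflows the Controller is configured for. The sketch below is only illustrative: it assumes the psutil Python package is available on the machine, that engine runs show up as AlteryxEngineCmd.exe (the process name can differ between versions), and the limit of 2 and the 4-hour threshold are just example figures.

# Illustrative sketch: list Alteryx engine processes and flag possible orphans.
# Assumes psutil is installed and the engine process is "AlteryxEngineCmd.exe".
import time
import psutil

CONFIGURED_LIMIT = 2       # what the Controller is set to run in parallel
STALE_AFTER_HOURS = 4      # hypothetical threshold for a "stuck" run

now = time.time()
engines = [p.info for p in psutil.process_iter(['pid', 'name', 'create_time'])
           if p.info['name'] == 'AlteryxEngineCmd.exe']

print(f"{len(engines)} engine processes running (Controller limit: {CONFIGURED_LIMIT})")
for info in engines:
    age_hours = (now - info['create_time']) / 3600
    note = '  <-- possibly orphaned' if age_hours > STALE_AFTER_HOURS else ''
    print(f"  PID {info['pid']}: running for {age_hours:.1f} h{note}")

if len(engines) > CONFIGURED_LIMIT:
    print("More engine processes than the Controller allows - jobs are likely "
          "being started outside the scheduler (e.g. by Runner macros).")

Seeing more engine processes than the configured limit, or very old ones still alive, would support the resource-contention explanation, and terminating the stale ones directly would also avoid a full reboot.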

Sieuwkje
7 - Meteor

That actually does make sense. Maybe this is something we can solve by updating our run schedule to avoid the overlap as much as possible. The schedule might have changed since last week, but I need to check. Thank you so much for your reply!
