
Alteryx Server Discussions


Alteryx Issue: Schedule not running as per next schedule time

Sachit
5 - Atom

Alteryx Issue: Schedule not running as per the next scheduled time (it's showing State = Completed, Next Run = Never)

 

We have Alteryx jobs scheduled to run every 10 minutes. After a few iterations, following the last run the Next Run suddenly changed to Never and the State changed to Completed.

 

[Screenshot: clipboard_image_0.png]

[Screenshot: clipboard_image_1.png]

 

We tried setting an end date on the schedule, but it still has the same issue.

[Screenshot: clipboard_image_3.png]

 

Please advise why this issue is occurring.

 

 

10 REPLIES
jrgo
14 - Magnetar

Hi @Sachit 

 

I don't know if Server is still designed to do this, but I remember that the scheduler, before it was moved into the Gallery, would "Complete" a schedule if the job returned certain errors.

 

Are you able to confirm that the last job for that workflow did complete without errors?

 

And this may not be the case for you, but it's hard to say without troubleshooting the behavior more. If the problem continues, I'd suggest contacting support.

 

Regards,


Jimmy
Teknion Data Solutions

Paul_Holden
9 - Comet

I have been investigating the same issue on one of our Alteryx servers, where four schedules failed on the same day.

 

The last scheduled run was not correctly recorded; at least, it does not show when I run the Alteryx Server Usage Report, where only the penultimate run appears.

 

Checking the server's Event Viewer, I can see the following errors, which coincide with the times of the failures (the last run time on the Scheduler view).

 

I can't match the IDs up with the schedules or workflows, but this would appear to be the root cause of the issue.

I have advised our BI team to recreate the schedules.

 


Error - There was a corrupt record: 5c371e9bebf3d113bc06e879 - PersistenceContainer_MongoDBImpl_Get_Error: Record identifier is invalid <5e7cf832ebf3d110a402c70a> collection <AS_Queue>
Error - There was a corrupt record: 5c18ed4febf3d113bc0356bd - PersistenceContainer_MongoDBImpl_Get_Error: Record identifier is invalid <5e7cfc04ebf3d110a402c754> collection <AS_Queue>
Error - There was a corrupt record: 5bfed44debf3d113bc015e5f - PersistenceContainer_MongoDBImpl_Get_Error: Record identifier is invalid <5e7cfcf6ebf3d110a402c76f> collection <AS_Queue>
Error - There was a corrupt record: 5c18edb1ebf3d113bc0356cc - PersistenceContainer_MongoDBImpl_Get_Error: Record identifier is invalid <5e7d033aebf3d110a402c7c3> collection <AS_Queue>
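
If anyone wants to check the same thing on their own server, here is a minimal sketch (Python with pymongo) of how you could test whether the record identifiers reported as invalid still exist in the AS_Queue collection of the Server's persistence MongoDB. The host, port, database name, and credentials below are assumptions/placeholders; take the real values from Alteryx System Settings.

    # Hypothetical check: do the record IDs from the event log still exist in AS_Queue?
    from bson import ObjectId
    from pymongo import MongoClient

    # Assumed connection details -- substitute the host, port, and non-admin
    # credentials shown in Alteryx System Settings before running this.
    client = MongoClient("localhost", 27018, username="user", password="<non-admin password>")
    queue = client["AlteryxService"]["AS_Queue"]

    # The identifiers reported as invalid in the Event Viewer errors above.
    suspect_ids = [
        "5e7cf832ebf3d110a402c70a",
        "5e7cfc04ebf3d110a402c754",
        "5e7cfcf6ebf3d110a402c76f",
        "5e7d033aebf3d110a402c7c3",
    ]

    for raw_id in suspect_ids:
        found = queue.find_one({"_id": ObjectId(raw_id)}) is not None
        print(raw_id, "present in AS_Queue" if found else "missing from AS_Queue")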

 

 

Versions

  • Client: 2018.3.51585
  • Server: 2018.3.4.51585
  • Server Binaries: 2018.3.4.51585
  • Service Layer
    • Master: 2018.3.4.51585

 

preeti121288
6 - Meteoroid

I am also facing the same issue with my workflow schedules. Do we have any solution for this? Is this a known issue to Alteryx?

 

Paul_Holden
9 - Comet

What version of Alteryx server are you using?

I know of some issues relating to 2018.x but I don't know if they apply to later versions.

preeti121288
6 - Meteoroid

We are using a 2019.x version. Please tell me about the issue related to 2018.x.

 

Thanks

Paul_Holden
9 - Comet

We had the same issue with open ended schedules being marked as complete.

 

We did not work out exactly what was happening. There were a few errors in the server Event Log, but we could not correlate those to the failures at the last time the workflows ran, or were supposed to run.

 

We ended up creating replacement schedules and removing the old ones. At about the same time we also increased the RAM on the server. Although Windows was not reporting any particular memory exhaustion, this did seem to provide some performance improvement, and we saw less queuing of jobs, which might be relevant to our case (but won't be the same in 2020.x); see below.

 

I was expecting to catch the cause the next time it happened but we have not had a recurrence since we set up the new schedules and increased the host server RAM.

 

The known issue on 2018.x is that the Scheduler seems to "lose track" once there are a certain number of jobs queued. This results in jobs being processed in LIFO order, as opposed to the FIFO order that should be happening, and thus jobs can get stuck at the front of the queue for a long time.

My understanding is that this specific issue was fixed in 2019 and subsequent versions.
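
In case the ordering behaviour is not obvious, here is a small, self-contained illustration (plain Python, not Alteryx code) of why LIFO processing leaves the oldest scheduled jobs stranded, while FIFO drains them in submission order:

    from collections import deque

    # Jobs submitted every 10 minutes, oldest first.
    jobs = ["job_08:00", "job_08:10", "job_08:20", "job_08:30"]

    # FIFO: the intended behaviour -- the oldest job runs first.
    fifo = deque(jobs)
    print("FIFO order:", [fifo.popleft() for _ in range(len(jobs))])

    # LIFO: the 2018.x misbehaviour -- the newest jobs jump the queue,
    # so job_08:00 only runs once nothing newer is waiting.
    lifo = list(jobs)
    print("LIFO order:", [lifo.pop() for _ in range(len(jobs))])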

 

The jobs we had schedule problems with run frequently (every 10 minutes) and so were hitting the stuck-queue issue more than others. It's not clear whether that then caused the problem with the schedules, but I suspect they might be related, since both issues went away at the same time.

 

sameerpremji
8 - Asteroid

Apparently, this issue is back in Server version 2021.1, as we're encountering it a few days after upgrading from version 2020.3.

 

@lepome @TonyaS 

TonyaS
Alteryx

@sameerpremji 

This looks like a Server issue. I'm the TPM on Engine.

 

I will send the link to the Server folks. 

Tonya Smith
Sr. Technical Product Manager, cloud App Builder
KevinP
Alteryx Alumni (Retired)

@sameerpremji Please ensure you are running the latest release of Server 2021.1 (2021.1.4.26400), as there was an issue with the initial release that would cause jobs using Gallery data connections to fail or get stuck, which could be contributing to your issue. If you are still seeing schedules move to a Completed state unexpectedly after updating to this version, please open a support request to investigate what is happening in your environment.