Hi,
I'm getting intermittent clusters of errors across unrelated workflows, all referring to memory pressure.
Looking at a memory profile for the host server suggests there is a memory leak of some kind.
Searching the community posts hasn't turned up very much for me, so I wondered if anyone had any insight into what might be causing this, or any good ways to check whether workflows are not releasing memory correctly?
The server has 32 GB of RAM.
The Worker settings are as follows: [screenshot]
The Engine settings are as follows: [screenshot]
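In case it helps with diagnosis, this is roughly how I've been thinking of tracking it down: a minimal sketch (assuming the psutil package is available, and the usual Alteryx Server process names, which may differ per install) that logs per-process memory use to a CSV so a steady climb that never comes back down stands out:

```python
# Minimal sketch: sample the resident memory of Alteryx-related processes
# every minute and append to a CSV. Process names below are assumptions
# based on a typical Alteryx Server install; adjust to match yours.
import csv
import time
from datetime import datetime

import psutil

WATCHED = {"AlteryxService.exe", "AlteryxEngineCmd.exe", "mongod.exe"}

def sample(writer):
    now = datetime.now().isoformat(timespec="seconds")
    for proc in psutil.process_iter(["name", "memory_info"]):
        name = proc.info["name"]
        mem = proc.info["memory_info"]
        if name in WATCHED and mem is not None:
            rss_mb = mem.rss / (1024 * 1024)  # resident set size in MB
            writer.writerow([now, proc.pid, name, round(rss_mb, 1)])

with open("alteryx_memory_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        sample(writer)
        f.flush()
        time.sleep(60)
```

Run over a few days spanning several jobs: if AlteryxEngineCmd.exe returns to baseline after each workflow but AlteryxService.exe or mongod.exe keeps growing, that would point at the service side rather than any particular workflow.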
Hey @Paul_Holden,
I don't have an answer for that. I would suggest contacting Alteryx support and sharing all the logs with them so they can analyze what is happening there.
Best,
Fernando Vizcaino
Thanks @fmvizcaino
Agreed, I was mostly checking there wasn't a known issue with the engine before I do that, and that there aren't any particular tools that are known to exhibit this behaviour.
I always get nervous when I search and don't find anything 🙂
Currently I've got one reference to the CReW Macros, and that's it.
Hello @Paul_Holden ,
There have been some versions known to exhibit this kind of behavior (it happened to me once). To be honest, I don't know what caused the issue, but I was able to solve it by cancelling all current jobs on that machine and restarting it. I know it is not what you wanted to hear, but it's all I've got 😛
Of course, there is always the option of installing a different version and giving that a chance, but I know it is not that easy.
Regards
To get a better understanding of what you are using:
Is it a standalone machine or a multi-node setup? Are you using the default MongoDB or a user-managed one? What version are you on?
Anything you can share would be helpful.
Regards
Hi @afv2688
As per the thread title, this is a server running Gallery and Worker on version 2020.4.
There are no other workers; this is a stand-alone host.
We are using the default MongoDB.
The server was restarted in May and September for patching; in the memory profile you can see usage dropping to zero and then steadily climbing again.
The server was last restarted on 21 November, which ended the current batch of memory errors.
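For reference, this is roughly how I'm quantifying that climb between restarts: a minimal sketch that fits a least-squares slope to the hypothetical CSV log from my first post (the file name and column layout are my own, not anything Alteryx ships):

```python
# Minimal sketch: estimate the memory growth rate (MB/day) per process
# from the CSV written by the hypothetical logger above.
import csv
from collections import defaultdict
from datetime import datetime

samples = defaultdict(list)  # process name -> [(timestamp, rss_mb)]
with open("alteryx_memory_log.csv") as f:
    for ts, _pid, name, rss_mb in csv.reader(f):
        samples[name].append((datetime.fromisoformat(ts), float(rss_mb)))

for name, points in samples.items():
    points.sort()
    t0 = points[0][0]
    xs = [(t - t0).total_seconds() / 86400 for t, _ in points]  # elapsed days
    ys = [m for _, m in points]
    n = len(xs)
    if n < 2:
        continue
    # Least-squares slope of memory vs. time, in MB per day.
    mx, my = sum(xs) / n, sum(ys) / n
    denom = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom if denom else 0.0
    print(f"{name}: {slope:+.1f} MB/day over {n} samples")
```

A slope near zero between restarts would point away from a service-level leak and toward transient per-job spikes instead.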