Hi there,
I am seeking to deepen my understanding of how the AMP engine works so that workflows run consistently across the different computers on my team. I understand that the AMP engine uses parallel processing to split data into chunks, so that multiple chunks can be processed simultaneously.
What I am particularly interested in is how records are assigned to these chunks. When I run the workflow on my computer, it consistently splits the data into the same chunks, producing the same output order each time. However, I am curious whether running the same workflow on a different computer, especially one with different processing capabilities or a different version of Alteryx, could lead to different groupings of data into chunks.
I am trying to establish best practices for my team around tools that rely on a specific ordering of the data. Any insights into this process would be greatly appreciated, as we have noticed that occasionally another team member will receive a different output order when running the same workflow.