Hi all,

I have a macro with a loop structure, and I've noticed a case in my final output that violates one of my conditions. After a thorough investigation, once I was certain that all my conditions were right, I started running this macro step by step, saving output after output, until the case I was investigating met all the conditions and was taken out of the loop. All good so far (attached file "Macro output - right values").

But when I looked at the macro output in the main workflow for this same case, the values were already different (attached file "Macro output on the main workflow - different values"). The strangest thing is that those values are mostly random and weren't there even from the beginning. There isn't a single case in my workflow where those numbers could be under 4, and these are even lower.

Is this an error? Somebody help, please 🙂
Can you put a Browse tool on the tool just before the macro, and also take a screenshot of the test data that the macro processes, to confirm that they are identical? If they're not identical, please try testing your macro using the data observed in the live app, and then see if the results match.
Yes, I checked the data before it went into the macro (it still met all the minimum conditions). It differs both from the final result of the macro (processed manually in the live app until all conditions were met) and from the actual output of the macro in the workflow (which is the random result).

Sorry for any bad English. If I wasn't clear, please say so.
For debugging purposes, I would temporarily save the data stream that feeds into the macro (give it a filename that includes a timestamp, so that each test run outputs a new file). Run it a few times and see whether the data differs between runs... that could explain any discrepancies from one run to the next.
Similarly, take any given set of output and run it through the macro manually; once satisfied that it's working, create a new workflow that has nothing but that same file directly feeding into the macro... still a fail, or a success? If a fail, it suggests you may be overlooking something while manually iterating through the steps. Unfortunately, I can't assess that possibility without a concrete workflow example in hand. (If a success, it suggests that the first paragraph here may have been generating different input between runs?)
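Outside of Alteryx itself, the snapshot-and-compare idea can be sketched in Python. This is just an illustration of the technique; the function name, directory name, and file layout here are my own invention, not anything Alteryx provides:

```python
import csv
import hashlib
import time
from pathlib import Path

def snapshot(rows, out_dir="snapshots"):
    """Save the rows feeding into the macro to a timestamped CSV and
    return a content hash, so successive runs can be compared at a glance:
    identical hashes mean the macro received identical input both times."""
    Path(out_dir).mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    path = Path(out_dir) / f"macro_input_{stamp}.csv"
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return path, digest
```

If two test runs produce different digests, the discrepancy is upstream of the macro; if the digests match but the outputs still differ, the problem is inside the macro itself.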
I've made a more thorough test today and now I'm even more worried.
I saved the data just before it went into the macro, ran the macro on that data, saved the output that was 'OK', and reran the part that wasn't, until I had all the possible iterations done.
After that, I compiled all the data I had run manually through the macro and compared it with what was being output in the main workflow. The result: only 33% of the data had no differences.
Just out of curiosity, I ran a comparison between the data before entering the macro and after, to see whether some error was preventing any changes from happening. That wasn't the problem: 20% of the data was equal, which is expected.
To be really thorough, I ran 2 more comparisons:
1. One between the 'OK' output that I produced manually with the macro and the data output by the macro in the main workflow;
2. One between the 'not OK' output (not all the conditions are always respected and I can't get an 'OK' result on all the data, but this is expected and fine) that I produced manually with the macro and the data output by the macro in the main workflow.
Both of those had a lot of differences as well.
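For anyone reproducing this kind of comparison outside of Alteryx, the match-rate calculation above can be sketched in Python. This is a hypothetical helper of my own, assuming both datasets share a key column that identifies each record:

```python
def match_rate(expected, actual, key_index=0):
    """Compare two row sets keyed on one column and return the fraction
    of shared keys whose rows are identical in both sets."""
    exp = {row[key_index]: row for row in expected}
    act = {row[key_index]: row for row in actual}
    shared = exp.keys() & act.keys()
    if not shared:
        return 0.0
    same = sum(1 for k in shared if exp[k] == act[k])
    return same / len(shared)
```

A result of 0.33 would correspond to the "only 33% had no differences" finding above; running the manual output against itself should of course return 1.0.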
In summary: this workflow has been working correctly for months. I just changed a few conditions in both the main workflow and the macro, and only now started getting these errors.
Like the Beatles would say... HELP, I need somebody.
This is curious to say the least. Can you do a quick check to ensure that the macro you're editing is in fact the same instance as the macro being consumed by the application? (e.g. the app isn't pointing at a different copy of the macro somewhere?)
That would explain things getting progressively scarier as you make changes and keep getting results further out of whack with the app.
But this is more of a wild hope... I can't otherwise explain why you might be seeing what you're seeing.
Does either of you have an update with this? I ran into the same issue today and a search on Community led me to this page. Essentially, within the macro itself, the calculation is correct all the way up to the Macro Output tool. However, when I embed the macro in a regular workflow, and use the same test data, an entirely different calculation outputs from the macro.
Apparently, macros in Alteryx aren't always able to reliably distinguish similar variable names.
My problem was that I had two variables entering the workflow with the names "X" and "X Sold".
Somewhere in the macro these would get mixed up and generate all the errors I was having. All I did was rewrite the second variable name with an underscore, making it "X_sold". This was enough to get the flow working correctly.
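As a general illustration (not Alteryx-specific), the rename fix and a check for the risky pattern (one field name being a prefix of another) can be sketched in Python. Both function names are mine, purely for the example:

```python
import re

def sanitize(names):
    """Replace whitespace in field names with underscores, so that
    names like "X Sold" can't be partially matched against "X"."""
    return [re.sub(r"\s+", "_", n.strip()) for n in names]

def risky_pairs(names):
    """Flag field-name pairs where one name is a prefix of another
    (case-insensitive): the pattern behind the mix-up described above."""
    pairs = []
    lowered = [n.lower() for n in names]
    for i, a in enumerate(lowered):
        for j, b in enumerate(lowered):
            if i != j and b.startswith(a):
                pairs.append((names[i], names[j]))
    return pairs
```

Running `risky_pairs(["X", "X Sold"])` flags that pair, and `sanitize` turns "X Sold" into "X_Sold"; after the rename, no field name is a prefix of any other.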