Well, this is a bit strange. There is a style coming from the Alteryx library include that is forcing radio buttons to be transparent (opacity of 0):
You can force the opacity back to 1 by adding a <style> tag to your header with the appropriate settings. Something like this:
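A minimal example of such an override (the exact selector depends on the rule in the Alteryx stylesheet, so this is a sketch, not a guaranteed fix):

```html
<style>
    /* Force radio buttons back to full opacity; !important is needed to
       win over the rule coming from the Alteryx library include. */
    input[type="radio"] {
        opacity: 1 !important;
    }
</style>
```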
I am not sure why Alteryx would do this, but you will want to check your GUI to make sure there are no unintended visual artifacts as a result. If there are, you could always assign your radio buttons to a class and style based on that class rather than on radio button inputs generally.
I have data with date and time scans recording when a sales order is scanned complete (please find the sample data attached). What I am trying to achieve is to pull the sales orders that were scanned in the last 24 hours from the time the report is executed. Meaning, if the report is scheduled to come out at 8 AM on 8/16/18, it should include all scanned sales orders from 8:00 AM on 8/15/18 through 7:59 AM on 8/16/18.
I tried [Scan_DteTime]=DateTimeAdd([Timenow],-24,'hours') and it didn't work.
I'd really appreciate any guidance.
Thank you.
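The window logic being asked for can be sketched in plain Python (timestamps illustrative). Note that the attempted expression compares with `=`, which only matches one exact instant; a rolling 24-hour window needs a range check, which in Alteryx terms would likely look like `[Scan_DteTime] >= DateTimeAdd(DateTimeNow(), -24, 'hours')`:

```python
from datetime import datetime, timedelta

def scanned_in_last_24h(scan_dt, now):
    """True if scan_dt falls in the 24 hours ending at now.

    Mirrors a filter like:
        [Scan_DteTime] >= DateTimeAdd(DateTimeNow(), -24, 'hours')
    The original attempt used '=', which almost never matches a timestamp.
    """
    return now - timedelta(hours=24) <= scan_dt < now

report_time = datetime(2018, 8, 16, 8, 0)
print(scanned_in_last_24h(datetime(2018, 8, 15, 9, 30), report_time))  # True
print(scanned_in_last_24h(datetime(2018, 8, 14, 9, 30), report_time))  # False
```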
I've finally tackled a behemoth of a project I put on hold for a while and automated a work report that took in several different Excel and CSV inputs. One issue I'm having now, though, is that the number of tools in the file is getting hard to maintain. Things are all over the place on my screen, and maintaining the workflow is starting to take a considerable level of effort.
What are some of your favorite tips and tricks when it comes to better organizing a 200+ tool file?
Found in the "Documentation" tool palette, Tool Containers allow you to organize your workflows.
I like to have a container to hold all of my main inputs, a container for output(s), and then usually containers for each logical step in a process.
The nice thing about containers is that you can collapse them so you don't have to see the details.
I usually include Comment tools (also in Documentation) in each container to explain what is happening in it. Typically I'll also put a comment at the beginning of the workflow to give an overview of the purpose.
Use colour to group your containers/comments by function or something else that makes sense to you. I like to use a red comment behind any tool that might require reconfiguration on the next run.
Are all those connections making it hard to see your tools? Set them to wireless and your workflow becomes more readable.
Select tools and use Ctrl/Shift/+ to vertically align and Ctrl/Shift/- to horizontally align the selected tools.
What's acceptable in this situation?
I started a thread yesterday that has since had no solutions. Since then I've worked on the problem and finally came up with a solution myself.
Do I mark my solution as the accepted one? On one hand, marking it as solved will help others with a tricky Excel-to-Alteryx problem. On the other, I could see how this could easily be abused.
I agree with @patrick_digan. Mark it. But I'll also add: if someone's post helped you get there, throw them a solve bone too.
@danilang I would absolutely mark it as a solution to benefit others!
If someone were to try and abuse the system, they would stick out pretty easily (and the moderators would handle the situation accordingly). I follow almost all of the posts and I can't come up with an example of somebody "over-accepting" their own solutions. This is one of the best communities out there!
I am having an issue with the Count tool, as well as the Filter tool, particularly in instances where the record count value is zero. At the bottom is an image of a small portion of my workflow (the part I am having trouble with), which I've labeled to show what I am trying to get Alteryx to do.
Goal: The short version is that I pull a bunch of data from multiple sources and then split that data out into separate files, one for each team that will correct any issues in the records pulled. Those files are then emailed to each corresponding team lead; the body of the email needs to contain the date of the data involved and a record count.
Issue: The Summarize tool does not pass 0 record counts; the Count tool does. However, I must do this multiple times in the workflow, so once records have been counted, the next time the Count value is zero the tool "chokes" and tries to pass the previous non-zero value too. For example, if "John's" file is processed first and has 2 records, and "Barry's" file is processed next and has 0, then one email goes out for John but two go out for Barry: one as if there were 0 records for Barry and one as if there were two, except the attachment file (which should only be included if there are records) has no records, because there were none. If the [Count] of records for Barry is at least one, then only one email is sent to Barry. If I reorder so Barry's file is processed first, everything works fine. Reordering is not possible, though, because the data is broken out into 6 teams and any of them could have zero records on any given day.
Already tried: I have already tried giving each Count tool a unique name, using the Formula tool to create a new value for each Count tool, and then using that new value name in the Filter tool. I have tried the same using the Summarize tool as well as the Running Total tool (after adding a column to the data to be counted). I have also had issues when using the Formula tool to identify whether records were found for the corresponding team: I have tried the Null, Not Null, and IsNumber functions, as well as adding 1 to the Count value, in order to get a non-null/non-zero/numeric value I can match on. I have also tried using the RecordID tool.
I have also observed this behavior in consecutive runs of a test workflow with only one branch (team filter) using the following dummy data:
Alteryx is pretty amazing, so I'm surprised this seems to be so difficult and that I could find no similar or related posts in these forums or through Google searches. The only two solutions I've thought of so far would be to create a dummy row of data and merge it with the real data to always get a count value of at least 1, then filter it out when I output the dataset to a file. It just seems this should be easier than doing all that for each report; it overly complicates the workflow. I also thought about attaching the blank file and having one path regardless of record count, but that also seems overly complicated, and it's silly to attach a file with no data just so only one email is generated for each team.
So how do I get Alteryx to run the workflow as labeled below?
I believe I have resolved the issue.
CharlieS, thank you for the confirmation about implementing a dummy data record. It did not work for my case; however, knowing it worked for someone else helped me look at things a little differently. My solution may work for your use case as well.
The issue was the Run Command tool. It appears that having that tool in the workflow stream as it initially was prevented the count information from being passed when there were no records, so the Report tool along that path did not clear and was holding previous values. I tried adding another Block Until Done tool, and everything now seems to be working as intended; at least I am only getting one email per team report.
I am including a cleaner, less annotated image of the new, working version of the workflow.
I need to split my data into different groups based on a category. For example, let's say I have sales data and want to select one sale transaction per store location. I need to take the full population, split it by store location, then select one transaction per store. I could accomplish this by using a series of Filter tools followed by the Random % Sample tool, but I am trying to create a workflow that keeps working as new stores get added. With the Filter tool, I would need to keep adding filters. Is there a way for Alteryx to do this on its own?
You can use the Sample Tool.
Select the option 'Random 1 in N Chance for each Record'
Then select 'Store Location' as a Grouping Field.
This will give you a random sample of records for each store location.
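The grouped-sample idea scales to any number of stores without per-store filters. As a plain-Python sketch of picking exactly one random transaction per store (field names are assumptions, not from the original data):

```python
import random
from collections import defaultdict

def one_per_store(transactions, rng=random):
    """Pick one random transaction per store location, however many stores exist."""
    by_store = defaultdict(list)
    for t in transactions:
        by_store[t["store"]].append(t)   # grouping replaces a chain of Filter tools
    return {store: rng.choice(rows) for store, rows in by_store.items()}

sales = [
    {"store": "A", "amount": 10},
    {"store": "A", "amount": 20},
    {"store": "B", "amount": 5},
]
sample = one_per_store(sales, random.Random(0))
print(sorted(sample))  # one entry per store: ['A', 'B']
```

New stores simply become new keys in the grouping, so nothing needs reconfiguring when they appear.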
My Grid.yxdb table consists of 1 million spatial grids. I also have a Country.yxdb table which is a country also split into a few hundred thousand grids. I want to spatially match these two tables so that the output is the intersection polygons only, but then also the whole grids and part grids from Grid.yxdb that didn't intersect.
My issue here is that, let's say, 25% of Grid_ID=123 (from Grid.yxdb) is intersected by a country grid. The other 75% lies in the sea, so it has no intersection. I want the output to be two records for Grid_ID=123: one is the intersection polygon, and the other is a polygon for the 75% that didn't intersect. That way, if I were to spatially combine these two records in the future, I would get back my original grid for Grid_ID=123.
Obviously if I tick the box in the Spatial Match tool to output intersection polygons, I wouldn't get the sections of grids from Grid.yxdb that didn't intersect. So how would I do this? Thanks in advance.
If I understood correctly, you want to keep the full grid that matches, divided into two parts:
the part that intersects and the part that does not.
What I do, after the Spatial Match, is group by the big grid in order to combine the small polygons into one.
Now, in the same row, I have two polygons: one is the full grid, the other is just the intersection.
After that I use the Spatial Process tool to cut the intersection out of the full grid. Now I have the part of the grid that does not intersect.
Then I just select the two desired grids and use a Transpose to put all the polygons in rows.
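The cut-out logic in those steps can be illustrated with plain sets standing in for polygons (a real workflow would use the spatial tools; this only shows the set arithmetic, with cell counts chosen to match the 25% example):

```python
# Grid cells stand in for polygon area; intersection + remainder rebuild the grid.
grid_123 = {(x, y) for x in range(4) for y in range(4)}   # the full 16-cell grid
country  = {(x, y) for x in range(1) for y in range(4)}   # overlaps 25% of it

intersection = grid_123 & country        # what Spatial Match outputs
remainder    = grid_123 - intersection   # what the Spatial Process cut-out gives

print(len(intersection), len(remainder))       # 4 12
print(intersection | remainder == grid_123)    # True: combining restores the grid
```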
I hope this is what you are looking for.
It sounds like a GIS UNION process?
Here is a macro that will do it.
I am attaching a workflow I'm having problems with. As background, it checks whether a data source has the same fields as a given list of fields. The part I'm wondering about: if the data source has no records, then I don't get field names from the Transpose (not a huge deal, but I wonder if there's a better way). I get a count, which is essentially the number of fields, and then compare the count from the data source against the field list. If the count is > 0, the check runs using a CReW macro; if it is equal to zero, it should stop. However, it processes on, and I wonder how I should stop after taking a certain path. Anybody got ideas? I'm using Designer 11.7.
The absolute easiest way to do this is an example I have attached.
This is not necessarily the most robust option, but it is an option that should work.
Basically, you can create a dummy column and populate one line of data, then perform a Union with your main dataset. This will force at least 1 line of data to exist. You can make sure this dummy column is deselected in the transpose tool, which should make the rest of the process work as expected.
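The dummy-column trick above can be sketched in plain Python (names illustrative): union one guaranteed placeholder row into the dataset so the fragile steps always see data, then keep that placeholder out of the rest of the process.

```python
def force_nonempty(records, dummy_key="__dummy__"):
    """Union a single placeholder row so at least one record always exists."""
    placeholder = {dummy_key: 1}
    return records + [placeholder]

def drop_placeholders(records, dummy_key="__dummy__"):
    """Remove the placeholder again once the fragile steps are past."""
    return [r for r in records if dummy_key not in r]

padded = force_nonempty([])          # an empty input still yields one row
print(len(padded))                   # 1
print(drop_placeholders(padded))     # []
```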
EDIT: One more note - I think the Count Records piece of this workflow is superfluous for the test you are trying to complete, and you could remove it, as it is causing unnecessary errors in the Expect Equal tool.
In order to validate the format of two files, I would use a FIELD INFO tool after reading each file. Then place a select after the Field info tool and only output the NAME & TYPE for each stream of data. Then use the expect equals. You'll find out if the two are configured the same.
Alternatively, if you add a UNION tool after each file you can set it to AUTO CONFIG BY NAME and ERROR if fields are missing. You'll see any issues quickly that way too.
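The Field Info comparison described above boils down to checking that both files yield the same (name, type) pairs; a hedged plain-Python sketch (field names and types are illustrative):

```python
def schemas_match(fields_a, fields_b):
    """Compare two schemas the way Field Info -> Select -> Expect Equal would:
    same field names and same types, in the same order."""
    return ([(f["name"], f["type"]) for f in fields_a]
            == [(f["name"], f["type"]) for f in fields_b])

source = [{"name": "ID", "type": "Int32"}, {"name": "Date", "type": "DateTime"}]
spec   = [{"name": "ID", "type": "Int32"}, {"name": "Date", "type": "String"}]
print(schemas_match(source, source))  # True
print(schemas_match(source, spec))    # False: the Date types differ
```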
PS, Ben give me a call sometime.
I am reading a list of filenames from a directory (including sub-directories), and somehow an additional ".csv" gets appended to the end of the filename when the Dynamic Input tool reads it. How do I fix this?
Here is the error description:
Designer x64 Started running C:\Users\jkaur12\Contract Renewal\test flow 1.yxmd at 08/14/2018 15:51:10
Directory (2) 95 records were generated
Dynamic Input (7) File not found "Y:\BI data dump\June 2017\Adhoc JK - Business MRR KEL-CGY-EDM-FTM - 160430.csv.csv"
Dynamic Input (7) The file "Y:\BI data dump\June 2017\Adhoc JK - Business MRR KEL-CGY-EDM-FTM - 160430.csv.csv" has a different number of fields than the 1st file in the set and will be skipped
Dynamic Input (7) 0 records were read from 2 files/queries
Designer x64 Finished running test flow 1.yxmd in 1.2 seconds with 1 error and 1 warning
I took a look at your workflow. It looks like the formula that removes the .csv suffix from the filename after the Directory input is set up incorrectly.
You are using the statement...
In fact it should be...
Alternatively, I would advise you to keep the full path field as-is and not overwrite it with the filename, then use the 'Change Entire File Path' option.
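The double-extension symptom usually means ".csv" is being re-appended to a name that already ends with it. A defensive sketch of the rebuild step (filenames illustrative):

```python
def with_csv_extension(path):
    """Append '.csv' only when the filename doesn't already end with it."""
    return path if path.lower().endswith(".csv") else path + ".csv"

print(with_csv_extension("Adhoc JK - 160430.csv"))  # unchanged
print(with_csv_extension("Adhoc JK - 160430"))      # '.csv' appended exactly once
```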
We have a JSON file with many nodes. As a pattern, the first three rows hold items for the first table, the next five rows are for a different table, and the list goes on for around 400 tables.
Is there a way Alteryx could separate out this JSON file and map it to each table based on a particular field?
For example, in my case: MOITYPE?
The reason I recommend Text to Columns is that it gives you nested arrays as indexes, e.g. "root.0.table.0.nestedtable.0.column" (there are 3 nested arrays in that structure). Anyway, using your example: if it has to be completely dynamic, it gets more interesting, but if you know your table name, then a simple Filter and Transpose/Cross Tab combo should do it. Updated attachment with the input.txt below.
Regarding being dynamic, the attached example uses a batch macro to dynamically write each table to its own output .csv. You should be able to modify that macro to output to a table instead.
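The grouping that the batch macro performs, splitting flattened JSON rows into per-table buckets keyed on a field like MOITYPE, can be sketched as follows (the field value and row layout are assumptions for illustration):

```python
import json
from collections import defaultdict

raw = json.loads("""[
    {"MOITYPE": "TableA", "value": 1},
    {"MOITYPE": "TableA", "value": 2},
    {"MOITYPE": "TableB", "value": 3}
]""")

# Group every row under its target table, as the batch macro does per MOITYPE;
# each bucket would then be written to its own output.
tables = defaultdict(list)
for row in raw:
    tables[row["MOITYPE"]].append(row)

print(sorted(tables))         # ['TableA', 'TableB']
print(len(tables["TableA"]))  # 2
```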