I have attached the batch macro but you will need to create the file paths to feed into it.
This will require a Directory tool to get a list of all your xlsx files. You will then need to use a Formula tool to add the specific sheet name to each xlsx file path.
If you go to the solution in this article, it shows the Directory and Formula steps to add the sheet name to the full path field.
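Under the hood, those Directory + Formula steps just build strings of the form `<full path>|||<sheet name>`, which is how Alteryx addresses a specific sheet inside an xlsx file. Here is a minimal Python sketch of that string-building logic, with a hypothetical folder and sheet name:

```python
from pathlib import Path

def build_full_paths(folder, sheet="Sheet1"):
    """Mimic the Directory + Formula steps: list every xlsx file in
    the folder and append the sheet name in <path>|||<sheet> style."""
    return [f"{p}|||{sheet}" for p in sorted(Path(folder).glob("*.xlsx"))]
```

The resulting list is what you would feed into the batch macro's control parameter, one full path per file.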
***Each file will need to have the same structure. For example, if you use a Select Records tool to take records 1-4 and one file only contains records 1-3, you will have to think of other dynamic logic for this batch macro to work for all files.
FYI: Download the macro workflow, save it somewhere, and then on a new canvas right-click >> Insert >> Macro. You can then use the Directory and Formula tools to build and process your xlsx files.
My fault for not interpreting correctly.
Again, similar techniques will be used, but the key is to pivot the data so your field headers and 'Avg Man' etc. are side by side. You can then concatenate the field headers and the 'Avg' etc. values together, and then pivot the data back to your ideal format.
Attached is a workflow you can use as a basis.
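If it helps to see the logic outside the workflow, here is a rough pandas sketch of the same pivot / concatenate / pivot-back sequence (the field and stat names are made up; Transpose maps to `melt` and Cross Tab to a pivot):

```python
import pandas as pd

# Hypothetical input: one row per statistic, one column per field header
df = pd.DataFrame({"Stat": ["Avg", "Max"],
                   "Man": [1.0, 3.0],
                   "Hours": [2.0, 5.0]})

# Transpose-style pivot: field headers and stat names now sit side by side
long = df.melt(id_vars="Stat", var_name="Field", value_name="Value")

# Concatenate the field header with the stat label
long["NewHeader"] = long["Field"] + " " + long["Stat"]

# Cross tab back out: one wide row keyed by the combined headers
wide = long.set_index("NewHeader")[["Value"]].T
```

The final `wide` frame has columns like "Man Avg" and "Hours Max", which is the combined-header layout described above.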
Definitely not the most elegant solution, but I think it should get the job done. I did not find a way to use the Report (Table) tool to do this, so I just reorganized the data using other tools.
Note: You may have to do some sorting to get it in the order you want.
"Okay, I just checked and this is a known issue. It is an issue with the underlying R package. They are working on the fix and hopefully will have something soon. You will have to install a newer version of the Predictive tools. Unfortunately, I don't have the exact date."
He then had me right-click on the Linear tool and change the version to the previous 1.0. Then the workflow worked with no errors.
The values that are Low Selectivity are found in [CYDB FILENAME]_Indexes.xml. It is not neatly formatted for the naked eye. You can also look at the directory where you wrote the cydb and you'll see all of the indexes (.cyidx) there. If the size is small, it is low selectivity; if the size is large, you can guess which one that is. I have a file with a 5 GB data file, and the indexes range from 30 KB (low) to 245 KB (high).
I hope that this helps,
I worked around this using a combination of a Multi-Row Formula and a Crosstab tool (see the attached workflow).
Step 1 was to give each of the [Values] a RowID restarting for each [Name] using the Multi-Row Formula. Once I had this, I could use this new RowID as the key field with which to crosstab my data back out.
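The same two steps can be prototyped in pandas, for anyone who wants to check the logic outside Alteryx (the Name/Value data is hypothetical; `groupby().cumcount()` plays the role of the Multi-Row Formula and `pivot` the role of the Crosstab):

```python
import pandas as pd

# Hypothetical long data: repeating Name with multiple Values
df = pd.DataFrame({
    "Name": ["A", "A", "B", "B", "B"],
    "Value": [10, 20, 30, 40, 50],
})

# Multi-Row Formula equivalent: RowID restarts at 1 for each Name
df["RowID"] = df.groupby("Name").cumcount() + 1

# Crosstab equivalent: Name down the side, RowID across the top
wide = df.pivot(index="Name", columns="RowID", values="Value")
```

Names with fewer values than the widest group simply get nulls in the extra columns, which matches how the Crosstab tool behaves.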
Hope this method works for your data.
Set up your outputs like above and replace the Text Input tools with the data to write to each Excel sheet.
My esteemed colleague @patrick_digan provided you with a solution that gets you to the answer of which product(s) exist on all purchase orders. I am going to take the liberty of answering the question in terms of the common products.
To find which products are on roughly 95% or 99.7% of the purchase orders, we can take the average count of purchase orders per product and add either 2 or 3 standard deviations to that average. An alternative is to use the Tile tool and select the Smart Tile option. It will provide you with a tile number and a sequence within that tile showing which products are the most prevalent.
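As a rough illustration of the mean-plus-standard-deviations idea (the product names and counts below are made up):

```python
import statistics

# Hypothetical data: how many purchase orders each product appears on
po_counts = {
    "staple_item": 100,
    "prod_a": 5, "prod_b": 6, "prod_c": 4, "prod_d": 5,
    "prod_e": 7, "prod_f": 3, "prod_g": 5, "prod_h": 6, "prod_i": 4,
}

counts = list(po_counts.values())
threshold = statistics.mean(counts) + 2 * statistics.stdev(counts)

# Products whose purchase-order count sits more than 2 standard
# deviations above the average are flagged as "common"
common = [p for p, c in po_counts.items() if c > threshold]
```

Swap the 2 for a 3 to tighten the cutoff toward the 99.7% end of the range.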
Many thanks for your patience while I tracked down this answer. I wish I had better news for you as, unfortunately, the tool only allows for the latest version of a list to be read. If you would, this would make a great suggestion in the Ideas Section on our Community. If the suggestion receives a lot of votes (or stars), this will get the attention of our developers and that functionality could possibly make it into a future release.
Thank you, and again, I'm sorry that we couldn't give you the answer that you were looking for.
Regex is fun because how you build it really depends on your data and your knowledge of how quirky it can be.
Based on your original post, it looks like you want to keep "HOMECARE2 Provide", and the numbers you want to get rid of are the dates.
So what I did first is replace the possible dates with nothing.
The first replace gets rid of dates in either 2/17 or 12/15 format, the second in either 2-17 or 12-15 format.
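The two replaces translate directly to Python's `re.sub`, if you want to test the expressions outside Alteryx (the sample string and exact patterns here are my own illustration, not the attached workflow's):

```python
import re

# Hypothetical sample: keep "HOMECARE2 Provide", strip the date fragments
s = "HOMECARE2 Provide 2/17 12/15 2-17"

# First replace: dates in m/yy or mm/yy form
s = re.sub(r"\b\d{1,2}/\d{2}\b", "", s)
# Second replace: dates in m-yy or mm-yy form
s = re.sub(r"\b\d{1,2}-\d{2}\b", "", s)

# Tidy up the whitespace left behind by the removals
s = " ".join(s.split())
```

The word boundaries (`\b`) keep the trailing 2 in "HOMECARE2" safe, since it is not followed by a slash or hyphen plus two digits.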
As I said, regex is a great tool, but how complex your expressions are depends on how extreme your data is.