Inspire 2019 is just around the corner and we're calling on you to help us with Tips and Tricks for the session.
We're looking for your "Aha!" moments, your "I can't believe I didn't know you could do that" realizations, and anything you do in Alteryx to save clicks. What are your go-to time savers that the Community could benefit from? Please post in the comments and share your tips with us.
Everyone who posts a tip in this thread will get the Tips and Tricks 2019 Badge!
If we use your tip in the book, your username will also be featured.
We will have special swag for Tip Meisters who are featured in the book and come out to attend our session in Nashville.
For tip inspiration, check out last year's post and book.
Adding a special thanks to everyone who submitted content in 2018 and came out to the session.
As you can see from the picture, we had a huge turnout in Anaheim! We had to add an encore session to make sure everyone who waited in line got to see the session.
As always, thanks for your support and we look forward to seeing y'all down in Nashville!!
- The Tips and Tricks Team ( @MargaritaW , @JessicaS and @HenrietteH )
If you create a workflow in a new version of Alteryx but need to share it with a user on an older version, you can open the workflow in a text editor like Notepad++ and change the workflow version on the first line. Just make sure you're not using any advanced features that aren't available in older versions...
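For example, near the top of a .yxmd file you'll see something like the line below (the exact layout can vary a little by release); change the yxmdVer value to the version your colleague is on:
<AlteryxDocument yxmdVer="2018.3">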
When pulling from a data source that may not be reliably updated, use the Test tool to do data validation and use workflow events to send you an email after a run with errors. This gives you enough peace of mind to schedule the workflow instead of running it manually to verify that the data is correct.
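If you'd rather script the check, a rough equivalent inside the R tool might look like the sketch below (the LoadDate column and the checks themselves are made up for illustration); an error raised this way should fail the run and trigger the after-run-with-errors event email:
library(data.table)
dt <- setDT(read.Alteryx("#1", mode = "data.frame"))
# hypothetical checks: the feed should never be empty, and it should
# contain at least one row stamped with today's date
if (nrow(dt) == 0) {
  stop("Validation failed: source returned no rows")
}
if (!any(as.Date(dt$LoadDate) == Sys.Date(), na.rm = TRUE)) {
  stop("Validation failed: no rows dated today")
}
write.Alteryx(dt, 1)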
Nice reconciliation pattern - pull out non-matching records, then non-matching fields within records.
Endlessly fettlable (treating complete non-matches differently, etc.), but very handy to have a test on the ultimate output that triggers when any records are output.
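Here's a minimal sketch of the same two-step idea written as data.table code in the R tool (just one way to express it, not necessarily how the original was built); it assumes two input streams that share a key column, and RecordID plus the output column names are made up for illustration:
library(data.table)
a <- setDT(read.Alteryx("#1", mode = "data.frame"))
b <- setDT(read.Alteryx("#2", mode = "data.frame"))
# Step 1: records that exist in only one of the two datasets (anti-joins)
only_in_a <- a[!b, on = "RecordID"]
only_in_b <- b[!a, on = "RecordID"]
# Step 2: for records present in both, flag fields whose values differ
both <- merge(a, b, by = "RecordID", suffixes = c(".a", ".b"))
value_cols <- setdiff(names(a), "RecordID")
mismatches <- rbindlist(lapply(value_cols, function(col) {
  bad <- both[both[[paste0(col, ".a")]] != both[[paste0(col, ".b")]]]
  if (nrow(bad) == 0) return(NULL)
  data.table(RecordID = bad$RecordID, Field = col,
             ValueA = as.character(bad[[paste0(col, ".a")]]),
             ValueB = as.character(bad[[paste0(col, ".b")]]))
}))
write.Alteryx(rbind(only_in_a, only_in_b, fill = TRUE), 1)  # unmatched records
write.Alteryx(mismatches, 2)                                # field-level differences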
Would love to see other people's takes on this - I'm sure there are more elegant ways of doing this.
Note - just bumped into @MarqueeCrew in the ginormous Gaylord, who told me about his new CReW Delta tool for evaluating the equivalence of datasets - I'm going to have a look at it later on!
An R trick for easily and efficiently applying an aggregate function to a dataset, grouped by a given column, and capturing the multiple rows of data generated by that function. This was demonstrated in the answer to this question.
Basic code is here:
library(tseries)      # provides adf.test()
library(data.table)
# read the incoming stream and convert it to a data.table
dt <- setDT(read.Alteryx("#1", mode="data.frame"))
# run adf.test() on column r within each WidgetID group;
# data.table spreads the returned list elements into columns
outDT <- dt[, adf.test(r), WidgetID]
write.Alteryx(dt, 1)      # pass the original data to output anchor 1
write.Alteryx(outDT, 2)   # send the test results to output anchor 2
What it does: applies adf.test() to column "r" of the dataset, grouping over "WidgetID".
Just a few lines of easily understood code, and data.table is fast.
And the output is multiple columns of info provided by the given function (the test statistic, p-value, and so on).
Cheers,
John
Hey everyone,
I think this feature was introduced in 11.7. When I'm building a long workflow that takes a while to run (perhaps it runs a long query against a database), I let it run once, then go to the output window, copy all records with headers, and hit Ctrl+N followed by Ctrl+V (or right-click and paste). This creates a Text Input tool containing the most recent results from the other workflow. I do this to cut down on run times while testing. When I'm done, I add the segment of workflow from the current canvas, which runs super fast, back into the one with the live database connections to save lots of time.
TIP: Take any data on the clipboard and paste it onto the canvas, and Alteryx will create a Text Input tool for you containing your data.
-Ed
Very nice! Thanks for sharing!