The post above asked about working with a huge dataset. One answer was to use the Calgary tools to process the data. I like me some Calgary, but it's a poor fit for some use cases.
I regularly work with national data and don't worry about operationalizing the data into subsets. I'm a fan of performance-tuning workflows after building them with a K.I.S.S. architecture. Is the complexity of performance enhancements worth risking the maintainability of the workflow?
If you'd be willing to co-author a use-case on Alteryx, I'd be happy to review your needs with you and make some suggestions specific to your requirements. If that sounds reasonable, please PM me with your details and I'll set something up with you.
Alteryx ACE & Top Community Contributor
Chaos reigns within. Repent, reflect and reboot. Order shall return.
@MarqueeCrew - I think that *.yxdb will be sufficient (or better) if the dataset is up to 10 million records (in ~250 fields), and the Calgary format (.cydb) is for exceptionally large data (good to know about the Calgary tools).