Working in finance, I’m somewhat bummed that I don’t often get to explore the possibilities within location-based data. That’s why I was thrilled when a project fell into my lap to visually analyze taxes paid across German villages.
Alteryx can work with data in Hadoop in multiple ways, including HDFS, Hive, Impala, and Spark, and many of these offer multiple configuration options. The goal of this article is to present the available options at a high level, share some performance observations, and encourage you to run similar analyses to understand what works best in your environment.
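To compare options in your own environment, the core idea is simply to run the same query through each connector and time it. A minimal sketch of that harness in Python is below; the `query_via_*` functions are hypothetical stand-ins (in practice each would open a real HDFS, Hive, Impala, or Spark connection and run an identical query).

```python
import time

# Hypothetical stand-ins for real connectors. Replace the bodies with
# actual connection + query logic for each option you want to compare.
def query_via_hive():
    time.sleep(0.01)   # placeholder for actual query latency
    return "ok"

def query_via_impala():
    time.sleep(0.005)  # placeholder for actual query latency
    return "ok"

def time_connector(name, fn, runs=3):
    """Run fn several times and return (name, average seconds per run)."""
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return name, (time.perf_counter() - start) / runs

results = [time_connector("Hive", query_via_hive),
           time_connector("Impala", query_via_impala)]
for name, avg in sorted(results, key=lambda r: r[1]):
    print(f"{name}: {avg:.4f}s avg")
```

Averaging over several runs smooths out warm-up effects such as connection setup and caching, which can dominate a single measurement.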
In this article, we’re going to explore how we can leverage Alteryx to run analytics on fishing reports in Southern California. You’ll learn how to use web scraping to pull in data from 976 Tuna, macros to automate the cleansing process, spatial analytics to better understand where the fish are biting, reporting tools to visualize our analysis, and the Alteryx Server to automate this on a weekly basis.
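Inside Alteryx the scraping step would use the Download tool, but the underlying idea is just parsing report rows out of the page's HTML. Here is a minimal standalone sketch in Python using only the standard library; the `SAMPLE_HTML` fragment is hypothetical, since the real 976 Tuna markup will differ.

```python
from html.parser import HTMLParser

# Hypothetical fragment of a fishing-report page; the real site's
# markup will differ, so this structure is illustrative only.
SAMPLE_HTML = """
<table>
  <tr><td>New Lo-An</td><td>32 Yellowtail</td></tr>
  <tr><td>Pacific Queen</td><td>18 Bluefin Tuna</td></tr>
</table>
"""

class ReportParser(HTMLParser):
    """Collect (boat, catch) pairs from table rows."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_cell = False
        elif tag == "tr" and self._row:
            self.rows.append(tuple(self._row))
            self._row = []

    def handle_data(self, data):
        if self._in_cell and data.strip():
            self._row.append(data.strip())

parser = ReportParser()
parser.feed(SAMPLE_HTML)
print(parser.rows)
# [('New Lo-An', '32 Yellowtail'), ('Pacific Queen', '18 Bluefin Tuna')]
```

Once rows like these are extracted, the cleansing, spatial, and reporting steps described above can take over.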
Many customers strive to create a culture of analytics at their organization. Part of this process is designing a system that allows different users to collaborate on a single platform, a common language, as we call it. More and more, customers are turning to Alteryx to be their common language for decision making.