Have you ever used the ConsumerView Analytical File in US Core Data and stared wide-eyed at the codes returned? There is now an alternative to looking in the documentation for the coded values! The ConsumerView Renaming Macro allows you to rename the codes into readable data.
If you need more geographical information on a coordinate, try converting it into a spatial object and using the Find Nearest Tool to find coinciding Experian geographical data from an Allocate Input Tool.
Update Allocate Append tool using XML
Works in 9.5 and 10.0
You’re running a process to select certain variables to be used within a model. You’ve built your process, but you’re getting tired of having to run it twice: once to pull thousands of variables to check for relevance, and a second time with just the variables you want to include in the final model based on the tests you’ve run.
There’s good news! You can use the Action tool within the Interface toolset to update the Raw XML of the Allocate Append tool to dynamically select the variables you want to use, and it’s not as hard as you might think.
The first thing we need to do is find out what the XML code is for the variables we want to use, and the format it needs to be in for the Allocate Append tool to recognize it. You can enable the XML view from the User Settings menu (Options > User Settings > Edit User Settings). On the Advanced tab, there is a check box to “Display XML in Properties Window”:
Once you’ve checked the box, return to your Allocate Append tool, or any tool on your canvas, and you’ll see a new option on the right hand side that will allow you to see the XML code the tool is creating.
From here you can get the format you need for the XML code that we’ll pass into the macro to be created later.
Once you know the variables you want to use, you can use the variable name (code, not description) to build out the XML string as shown above. If you select multiple variables, you’ll notice that each one appears on its own line under the “<Variables>” tag in the XML code. The list you make must follow the same format:
In the sample workflow attached, you’ll see that I am using a Text Input tool to simulate the data stream that contains the fully compiled XML strings needed. As you will most likely see in your data, I have one variable per record. The problem is I need all of the variables in the same cell, on their own line. So how do we combine the records into one, and add a new line?
The answer is to use the Summarize tool. Within the Summarize tool we can use the Concat action to combine the XML strings into a single cell, and in the Concatenate Properties section we can indicate that we want to use a new line as the separator by typing in \n.
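Outside of Designer, what the Summarize tool is doing here can be sketched in a few lines. The variable codes and the attribute name below are made-up placeholders; copy the exact element format from your own tool's XML view.

```python
# Sketch of the Summarize tool's Concat step: one XML line per record
# is combined into a single cell, with "\n" as the separator (as typed
# into the Concatenate Properties section).
records = [
    "<Variable name='POP_CY' />",
    "<Variable name='HH_CY' />",
    "<Variable name='MEDAGE_CY' />",
]

variable_list = "\n".join(records)
print(variable_list)
```

The result is a single string with each `<Variable>` element on its own line, matching the format the Allocate Append tool expects under its `<Variables>` tag.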
Now that the prep work is done, all we need to do is pass this new variable list into the Allocate Append tool through XML. This can be achieved with a simple Batch Macro. For the Control Parameter you want to use the Variable list that we just created. The Control Parameter gets connected to the Allocate Append tool which adds the Action tool as shown below.
In your Action tool, select the option to “Update Raw XML with Formula”, expand the options under Allocate Append until you see “Variables” and highlight that section. You’ll want to update the Inner XML, and the formula to use is the connection from the Control Parameter as shown below.
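To make the "Update Raw XML with Formula" step concrete, here is a minimal sketch of what updating the Inner XML of the Variables node amounts to: the node's child elements are thrown away and replaced by the string arriving from the Control Parameter. The tag and attribute names below are simplified placeholders, not the exact Allocate Append schema.

```python
import xml.etree.ElementTree as ET

# A stand-in for the tool's configuration XML (simplified).
config = "<Configuration><Variables><Variable name='OLD' /></Variables></Configuration>"

# The concatenated variable list coming off the Control Parameter.
new_inner = "<Variable name='POP_CY' />\n<Variable name='HH_CY' />"

root = ET.fromstring(config)
variables = root.find("Variables")

# Replace the inner XML: drop the old children, parse in the new ones.
for child in list(variables):
    variables.remove(child)
for elem in ET.fromstring(f"<Variables>{new_inner}</Variables>"):
    variables.append(elem)

print(ET.tostring(root, encoding="unicode"))
```

In Designer the Action tool performs this substitution for you on every batch, which is why the macro can pull a different variable set on each run.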
Once you have this set up, simply add your Macro Input (for your incoming data stream) and Macro Output (to feed back into your workflow) to complete the macro set up.
Return to your original workflow, insert your newly created Batch Macro and connect your inputs. Your variable list stream will feed into the ¿ input, and your main data stream to the other.
You’re now set to dynamically change the variables you are pulling! Simply run your process for selecting relevant variables, build your XML strings through the Formula tool and pass them into your macro.
Have you ever used the Allocate tools and received back some strange looking variable names? You're not alone! The Allocate Rename Fields Macro will allow you to rename your fields into readable variables.
The macro can be downloaded here. Note: This will navigate you to the Alteryx Gallery. Select "Download & Install the Allocate Rename Fields Macro" and follow the prompts to install.
USING THE TOOL
The Allocate tools allow users to enrich their workflows with third party data provided from Experian and the US Census. This data contains demographic and household information by geography. Allocate tools can be found under the “Demographic Analysis” tab in the Alteryx toolbar; they include the Allocate Input, Allocate Append, Allocate Report, and Allocate Metainfo.
Allocate Input and Allocate Append tools allow users to select variables to display by geography. Once configured, the fields returned look something like this:
Add the Allocate Rename macro after the Allocate Input/Append. In the Configuration window, select the Dataset that you are pulling from. Press Run for the magic!
Voila! Your field names are now human-readable.
What if my company blocks access to downloading new tools/macros from the Gallery?
In the case that you cannot download this macro, you can use Alteryx to dynamically rename the field names. See “Is it possible to get the variable name I see in the Allocate tool?”
Mosaic BG Dominant and Mosaic BG Household Distribution counts are balanced to Experian’s census estimates. ConsumerView is a marketing file and therefore doesn’t need to be balanced to the census estimates.
It is now easy to update your variable names using the Dynamic Rename tool and the Metainfo.

First, on the Allocate Metainfo tool, select the dataset you are using and leave Variables selected.

Next, bring in a Dynamic Rename tool and select the "Take Field Names from Right Input Rows" option (with your Metainfo coming in on the right). For "Old Field Name from Column" choose Name, and for "New Field Name from Column" choose Description. I would also recommend choosing Ignore for "If number of Field Names do not match". What the tool is doing is matching the data in the Name column of the Metainfo against the field names from the left input, and wherever it finds a match, replacing the name with the data in the Description column of the Metainfo. I choose Ignore because the tool will try to match every variable in the Metainfo data set to the left input data, and will throw a warning or error for anything that doesn't match.

Below is an example of what the process would look like. It's also been configured into a macro that can be downloaded below. The configuration of the macro is simple: just pick the data set you are using (this should automatically populate with what you have loaded on your computer). Below is an example of how your module would look using the macro; it needs to be placed somewhere below your Allocate tool to work. After using the macro (or process), the variables will appear as the description (which is what is shown in the Allocate tools).
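The rename logic above can be sketched in a few lines. The codes and descriptions here are illustrative placeholders, not real Allocate variable names.

```python
# Sketch of the Dynamic Rename step: each incoming field name is looked
# up in the Metainfo's Name column and, where found, swapped for its
# Description. Unmatched entries are left alone, mirroring the
# "Ignore" option.
metainfo = {
    "POPCY": "Population, Current Year",
    "HHCY": "Households, Current Year",
}

def rename_fields(field_names, lookup):
    # Keep the original name when there is no match in the Metainfo.
    return [lookup.get(name, name) for name in field_names]

print(rename_fields(["GeoKey", "POPCY", "HHCY"], metainfo))
```

Note that `GeoKey` passes through unchanged because it has no entry in the Metainfo, which is exactly the behavior the Ignore setting gives you.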
Household Level Analytics Module
Business Problem: Businesses investing in new customer acquisition will be more successful in reaching prospects if they know which consumer profiles best describe their current customers. Compiling customer databases through marketing or loyalty card programs allows businesses to know who their customers are, as well as where they are located. When correctly leveraged, this type of information enables strategic and focused spending of marketing funds.

Actionable Results:
Understand the demographic attributes of your customer base
Target new customers that fit the profile of your current customers
Ensure that your advertising and marketing funds are spent in the most effective way possible
Overview: Would you like to identify key demographic traits of your target customers? By appending household-level characteristics to a customer file, you can achieve the most accurate consumer profiling of both existing and prospective customers. This analysis allows business owners to target households that are not in their customer database, but are in their trade area and match the demographics of current customers. Customer acquisition using targeted households is a more efficient way to direct spending on advertising and marketing programs.

Vertical: Retail

Data Utilized: Customer file containing the following fields:
Customer Address containing street number, street name, city, state
Customer ZIP Code
Alteryx Data: Experian Household File

Application Process:
The selected customer file is run through the Calgary Join tool using Experian household data to isolate the Experian records that match the customer records.
Fuzzy Matching is then performed to eliminate all duplicate records.
Finally, the wizard outputs the customer file with appended household-level data.
The Q4 2018 Business and Location Insights data packages include analytics-ready data from a variety of vendors, as well as data-specific analysis tools to get the most from the packaged data. All data packages will be available via the Downloads & Licenses portal later today. US & Canada Business Insights customers should receive their hard drives later this week, but you are also welcome to download the data from the portal.
Here are a couple of important notes for this release. Please see the product-specific release notes for more information.
Product name change (does not affect product contents):
US and Canada "Data", which includes demographics, business lists, etc., are now called "Business Insights"
All "Spatial" packages are now called "Location Insights"
DigitalGlobe Satellite Maps base URL changed to whitelist.alteryx.com/v1/dgmaps/v1. See this post for more details.
All documentation packages included in the attached .zip file. Please let us know if you have any questions or concerns.
Team Lead, Data Products
Note that this post includes a corrected version of the US Data Variable List that was not included on the hard drive or via the Downloads & Licenses portal. The correction corresponds to TrueTouch variables within the ConsumerView dataset - there were no new variables with the Q4 2018 release.
As of the Q4 2018 data release, Experian's CAPE offering, accessed through Alteryx's Demographic Analysis toolset, will move to an annual update schedule instead of semi-annual. The change was made by the vendor, whose research determined there isn't enough variance to warrant updating from the Annual Update to the Thin Update (“A” to “B” in Alteryx terms). This affects the following data sets:

CAPE Demographics, “Current Year Estimates” and “Five Year Projections” (CYE & FYP)
CAPE Seasonal Population
CAPE Daytime Population
CAPE Consumer Expenditure, CYE & FYP
CAPE Retail Demand & Retail Supply (Scaled)

American Community Survey, Mosaic Workplace and Mosaic Residential are already on an annual update schedule and will remain on that release cycle. However, the annual Mosaic updates will be delivered with the rest of the CAPE updates beginning with the Q2 2019 release. This change does not affect the quarterly Alteryx Data delivery frequency, as geographies and the other data sets included within the offering follow their own release cycles; we will continue to deliver Alteryx Data quarterly.
The Q3 2018 Canada Data package includes analytics-ready data from TomTom, Dun & Bradstreet and Statistics Canada, as well as data-specific analysis tools to get the most from the packaged data.

The documentation package attached includes:
Release notes, variable list and change log
D&B Analytical file: data description, SIC lookup code, penetration report
Spatial products: documentation on drive time methodology and Alteryx map layers

What's new in this release?
1600+ Language Characteristics variables from the Statistics Canada 2016 Census regarding “mother tongue” and “language spoken at home”
Business Summary built for the Statistics Canada 2016 Dissemination Area inventory
Sample workflows included with the data installs are now grouped within a top-level “Data Install Samples” category

Please download and extract the attached 'Q32018_CA_Data.zip' for the complete documentation.
The Q3 2018 US Data package includes analytics-ready data from TomTom, Experian, Dun & Bradstreet and the US Census, as well as data-specific analysis tools to get the most from the packaged data.

The documentation package attached includes:
Release notes, variable list and change log
Experian CAPE demographic data methodology document, Mosaic segment descriptions
Experian ConsumerView Analytical file: user guide and penetration report
Simmons overview
D&B Analytical file: data description, SIC lookup code, penetration report
Kalibrate Technologies traffic count overview and metadata
Spatial products: documentation on drive time methodology and Alteryx map layers

What's new in this release?
Sample workflows included with the data installs are now grouped within a top-level "Data Install Samples" category
Annual updates for Places, Other Name Places, CBSAs, and CCDs/MCDs

Please download and extract the attached 'Q32018_US_Data.zip' for the complete documentation.
The ConsumerView Matching macro enables users to match their customer file to the Experian ConsumerView data. Starting with customer information such as name and address you can leverage the ConsumerView macro in Alteryx to append a variety of information about your customers such as household segmentation, home purchase price, presence of children in a home, estimated education and income levels, length of residence, and many more!
When using the Demographic Analysis tools in the Alteryx Designer there are two types of variables you’ll be able to utilize from the Allocate Engine: built-in and virtual variables. Built-in variables are the most granular demographic measures the engine offers and virtual (custom) variables are calculated from these built-in variables. If you’re interested in taking a look at the underlying formulas that constitute these variables, there are two ways to do so:
Through the Alteryx Data Products GUI
You can find the Alteryx Data Products Allocate Interface in your start menu
From here just locate your variable in the “Variables” tab
Then select “File” and “Variable Information”
The Allocate Metainfo tool
In the Allocate Metainfo tool select “Variables”
After running the workflow, the list of variables in your chosen dataset will be output, along with their constituent formulas (if they are virtual variables)
You can also create your own custom variables! For your reference, a list of demographic variables that are included in the Alteryx Data Packages can be found here.
This note is to alert users of Alteryx 8.6 to 10.1 of a potential message window that may appear after updating the US and Canadian CASS engine. Subsequent to completing the CASS installation and closing the installer, some clients have reported receiving a message window stating that the program may not have been installed correctly. This message can be disregarded and will not impact dataset functionality.
The source of the problem is being identified and a fix will be included in future Alteryx software releases.
Action: Simply choose ‘Cancel’ to close the Program Compatibility Assistant window. Alteryx can then be opened and workflows can be run as normal.
Was the installation of CASS successful? Yes, CASS was installed fully and all scripts have finished running.
Will this message impact my installation or existing workflows? This message is entirely cosmetic; there are no impacts to the installation or workflows.
When you drag and drop an Allocate Report tool onto the Alteryx canvas or are browsing reports in the Alteryx Gallery, do you feel overwhelmed by the number of Allocate reports visible? Your next question might be, "what's included in each report?" The report name helps somewhat but not always and there isn't a listing in Help to guide you.
Attached is a spreadsheet on the available reports grouped by Type (List, Rank, Summary, Comparison) and displaying report name, description and check marks in columns for key content items (Age, Employment, Income, Retail Demand, etc.).
If you have questions or comments, feel free to contact email@example.com.
Here we are in 2015. The 2010 Census is five years behind us and the 2020 Census is five years away. Have you wondered about the next Census? How will data be collected? Will the questionnaire catch up with current technology? What happens to non-responders? Since much of our demographic data is based upon the results from each Census (whether from the Census Bureau or demographic vendors like Experian), I went looking on the Census Bureau's web site for a preview of coming attractions. And I found a page, A cost-effective 2020 Census, answering my questions.

The decennial Census is mandated by the U.S. Constitution. If you answered the Census in 2000, you took black/blue pen to paper for either a short or long-form questionnaire. No Internet access back then. In 2010 you still used a black/blue pen on paper and answered 10 simple questions, even though the Internet was integrated in much of our day-to-day life. Are we relegated to a black/blue pen on paper to answer the 2020 Census questionnaire? Based on information at census.gov, the next Census will encourage self-response via the Internet. Nice! And for those who do not respond, other existing governmental data may be used as a supplement. This equates to cost reductions with fewer physical offices, fewer staff and less follow-up with non-responders. In 2010 there were 500+ Census offices and more than 750,000 staff on the ground. The 2020 Census may have as few as 150 Census offices and 200,000 staff on the ground.

Technology may also influence another component of the U.S. Census: the Topologically Integrated Geographic Encoding and Referencing (TIGER) database. These are reference maps, created for the Census, used to visualize geographic and statistical data. The maps are the basis for companies such as TomTom, who offer enhanced versions for licensing and inclusion in navigation products.
Alteryx users can find mapping layers in the Map Input, Reporting and Browse tools as backdrop references for spatial objects. As referenced on census.gov, existing maps and address lists may be updated using technology, data and GPS to collect interviews efficiently. In the past, enumerators walked EVERY block in EVERY neighborhood in the United States gathering responses and information. You can read more about the Census Bureau's 155-year history of mapping here: 155 years of mapping.

From what I read, these changes have the potential to save taxpayer dollars, maintain a high level of accuracy and make responding to the Census easier. So what happens next? Testing these new processes began this year on both a small-scale and national basis. On April 1, 2017, Congress will be delivered the 2020 Census "topics." On April 1, 2018, "question wording" will be delivered. April 1, 2020 is Census Day! On December 31, 2020, apportionment counts are delivered to the President. Results of the Census were historically not instantaneously available but were released over a period of a few years. But who knows what WILL be available in another 5 years. http://census.gov/ is an excellent resource for information on the Census, American Community Survey (ACS), geographies, news and events.
Question Are seasonal population figures included in total population counts?
Answer It is very important to note that the CAPE ‘Seasonal Population’ only refers to the proportion of the population that is temporarily living in housing units that are defined as ‘For seasonal, recreational, or occasional use’. The CAPE ‘Seasonal Population’ therefore needs to be combined with the permanent ‘Residual Population’ to estimate the overall level of the population in each area by quarter.
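A toy calculation (with made-up numbers) makes the relationship concrete: the overall quarterly population is the permanent Residual Population plus the quarter's Seasonal Population.

```python
# Illustrative numbers only: CAPE 'Seasonal Population' counts people
# temporarily in housing units defined as 'For seasonal, recreational,
# or occasional use', so it is added to the permanent 'Residual
# Population' to estimate the overall population by quarter.
seasonal_population = {"Q1": 1200, "Q2": 3400, "Q3": 5100, "Q4": 1800}
residual_population = 24000  # permanent residents (illustrative)

overall_by_quarter = {
    quarter: residual_population + seasonal
    for quarter, seasonal in seasonal_population.items()
}
print(overall_by_quarter)
```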
We are trying to understand the difference between employees and daytime population. It looks like some of the population may be double counted. Can you explain what rows are used for the 2014 Total Daytime Population number?
Methodologies are different for Employees and Daytime Population.
Employees & Establishments in Business Summary are sourced from the D&B business list and summarized to a geographic level, although delivered in the Experian CAPE release. The employee counts are as accurate as the D&B employee value, but are also subject to the block centroid allocation used for population.
Employment fields from the Occupation & Employment folder are based upon the American Community Survey, modeled to a current year value and are part of CAPE.
Daytime Population is sourced from Experian and is a compiled value built from several CAPE fields. The excerpt below is pulled from the Tech Overview delivered to clients.
Daytime Population – Current Year Estimates (CYE)
The Daytime Population database is created using a variety of methodologies applicable for different subsets of the Total Daytime Population. These subsets are then added together to create the Total Daytime Population.
The process starts by identifying key subsets of the residential population that are assumed to stay in or close to their home location during the day. In particular, the following subsets of population are assumed to remain in the same Block Group during the day as the Block Group in which they live (or reside):
Residential Population : Children aged less than or equal to 2
Residential Population : Civilian aged 16+ population that are unemployed
Residential Population : Civilian aged 16+ population that work at home
Residential Population : Population aged 65+ who are retired
Residential Population : Population aged 16+ who are homemakers
Residential Population : Population aged 16+ who are in the Armed Forces
All of the above variables can be directly obtained from previously calculated CAPE – Demographics – Current Year Estimate (CYE) residentially-based variables, except for the ‘Residential Population : Population aged 16+ who are homemakers’. This variable is calculated by applying suitable localized proportions to the existing ‘larger population’ variable of the ‘Civilian aged 16+ population who are ‘Not in Labor Force’. Applying these proportions determines the subset of this ‘larger population’ that are estimated to be homemakers.
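The homemaker estimate described above is a simple proportional allocation. The numbers below are illustrative, not real CAPE values.

```python
# Illustrative sketch: homemakers are not a directly available CYE
# variable, so a localized proportion is applied to the larger
# 'Civilian aged 16+ population Not in Labor Force' variable to
# estimate the homemaker subset for a Block Group.
not_in_labor_force = 5000     # Civilian 16+, Not in Labor Force (illustrative)
homemaker_proportion = 0.22   # localized proportion (illustrative)

homemakers = not_in_labor_force * homemaker_proportion
print(homemakers)
```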
Once these initial subsets of Daytime Population who are assumed to stay in their residential Block Group during the daytime are defined and accounted for, then the daytime location of other population types are modelled. It is assumed that these remaining population types are much more likely to travel out of their residential Block Group to reach their typical daytime location than is the case for the population groups previously accounted for. However, flows from home address to daytime address that occur completely within the same Block Group are also possible for these types.
First, the estimate of daytime population at place of work that has already been modelled for the Mosaic Workplace database is accounted for. This variable is:
Daytime Population, Civilian 16+, at Workplace

Within the work to create Mosaic Workplace, this variable is estimated using Census Tract-to-Tract flows of workers from residence to workplace, and National Business Database data to update these flows and allocate them from Tract level to Block Group level.

After the above, the main population groups left to be modelled are:
Daytime Population, Students : Prekindergarten to 8th grade
Daytime Population, Students : 9th grade to 12th grade
Daytime Population, Students : Post-secondary students
Daytime Population: Any remaining Civilian aged 16+ population that are ‘Not in Labor Force’ and have not yet been accounted for.
All of the three student populations are modelled using a variety of data from the National Center for Education Statistics (NCES) and also information from key institutions (i.e. universities/colleges) themselves. After making allowance for students registered at an institution but very unlikely to travel to that institution on a typical day (for example, students undertaking online courses), this information is compiled and modelled to create an initial estimate of the typical number of students that spend the day at the location (or campus) of each institution. These figures are then calibrated so that the initial estimates of students who spend a typical day at the location of each institution, and those who stay within their residential Block Group during a typical day, are balanced to equal the national number of students within each category (i.e. Prekindergarten to 8th grade, 9th grade to 12th grade, Post-secondary students).
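The calibration step described above is a proportional scaling: initial per-location estimates within a student category are multiplied by a common factor so that they sum to the national count for that category. The numbers below are made up for illustration.

```python
# Illustrative sketch of calibrating initial student estimates so a
# category's total matches the national figure for that category.
initial_estimates = [1800.0, 2600.0, 1600.0]  # per location (illustrative)
national_total = 7500.0                        # national count (illustrative)

scale = national_total / sum(initial_estimates)
calibrated = [value * scale for value in initial_estimates]
print(calibrated)
```

After scaling, the per-location figures preserve their relative shares while the category total agrees with the national number.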
Once all students have been accounted for, current estimates of each relevant daytime population sub-group are tallied and compared to the national estimate of ‘Residential Population: Civilian aged 16+ population that are Not in Labor Force’. The work above does not yet account for a proportion of this population group. The as-yet-unaccounted-for proportion of this group is therefore calculated and assumed to spend a typical day within the Block Group in which they live.
Having allocated all of the relevant subsets of residential population to either the Block Group in which they reside, or to another Block Group which they are estimated to travel to in order to spend a typical day, then the two final variables in the database are calculated:
Daytime Population Aged 16+
Total Daytime Population (i.e. all ages)