AlteryxSpatialPluginsEngine.dll could not be loaded: The specified module could not be found
When running a workflow using spatial data from an Oracle database with the Summarize tool on the canvas, you may see the following error:
AlteryxSpatialPluginsEngine.dll could not be loaded: The specified module could not be found
Product - Alteryx
Oracle (spatial data)
The specific cause is unknown, but it is known to be related to how Oracle database connections are set up.
Add the Alteryx bin path to your PATH variable in User Environment Variables. The path depends on whether you have Admin or Non-Admin Alteryx installed:
Admin: C:\Program Files\Alteryx\bin
Note: To access Environment Variables, search your Windows desktop for "Environment Variables", then click "Edit Your System Environment Variables" and follow the screenshots below:
Spatial Matching Best Practices
Although the Spatial Match tool is fast and efficient, its performance can often be improved further. This article provides suggestions for increasing it.
As defined in the Help documentation, the Spatial Match tool establishes the spatial relationship (contains, intersects, touches, etc.) between two sets of spatial objects. At least one input stream must contain polygon-type spatial objects; the other set can contain any other type of spatial object, such as points or lines. But wait - which set of objects should be used for the Universe (U) input and which for the Target (T)?
Under the hood, the Spatial Match tool puts the U input into a temporary YXDB file with a spatial index, a highly efficient format for spatial data. Instead of indexing the geometric features of each object (first image), the objects' bounding boxes are indexed (second and third images).
This effectively means that when calculating a spatial match, only the few spatial objects inside the relevant bounding box must be considered for the spatial calculation. Next, every object from the T input is spatially matched against the relevant objects of the U input.
In one line: The Spatial Match tool can ignore most Universe records that do not match the Target record. Using this fact to your advantage can greatly speed up your workflow.
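To make the bounding-box idea concrete, here is a minimal Python sketch (hypothetical data and function names, not Alteryx internals) of how a spatial index can discard most Universe polygons with a cheap box-overlap test before any detailed geometry work:

```python
# Hypothetical sketch of the bounding-box pruning a spatial index performs.
# A full geometry test is expensive; a box-overlap test is cheap, so most
# Universe objects can be rejected before any detailed matching.

def bbox(points):
    """Axis-aligned bounding box (min_x, min_y, max_x, max_y) of a polygon."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def bboxes_overlap(a, b):
    """Cheap test: True only if the two boxes intersect."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def candidate_universe(target_box, universe):
    """Keep only Universe polygons whose boxes overlap the Target's box."""
    return [name for name, poly in universe.items()
            if bboxes_overlap(target_box, bbox(poly))]

universe = {
    "A": [(0, 0), (2, 0), (2, 2), (0, 2)],
    "B": [(10, 10), (12, 10), (12, 12), (10, 12)],
}
target = [(1, 1), (3, 1), (3, 3)]
print(candidate_universe(bbox(target), universe))  # only "A" survives pruning
```

Only the surviving candidates would then go through the full (expensive) spatial-relationship calculation.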
Deselect spatial objects not needed
As with many other tools, the Spatial Match tool has a built-in Select tool enabling one to deselect columns that are not needed. While discarding unnecessary columns comes in handy to make data sets more readable, it can be a real performance improvement. Therefore, unnecessary spatial objects should be removed from the workflow. Unnecessary data consumes memory and takes away otherwise available resources.
In the example below, re-including the unneeded spatial object increases the tool's output from 7 kB to 757 kB.
Spatial tool output with unnecessary spatial object
Spatial tool output without unnecessary spatial object
Consider Using the Dynamic Input Tool
In certain circumstances, performing a Spatial Match with the Dynamic Input tool is quicker than using the native Spatial Match tool. Note: This can only be used for the spatial relationship 'Universe contains Target'.
To perform a spatial match using the Dynamic Input tool, select the spatial data file, then choose the second option: 'Modify SQL Query'. Select the latitude and longitude fields for the Universe object, and the spatial object field for the Target object. This SQL filter will only let through data that fall within the bounding rectangle of the polygon.
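Conceptually, the filter the modified query applies is just a bounding-rectangle test. Here is a small Python sketch of the same pre-filter (illustrative only; the values and helper names are hypothetical, and Alteryx generates actual SQL):

```python
# Illustrative sketch: the Dynamic Input tool's modified query effectively
# applies a WHERE clause like
#   WHERE lat BETWEEN min_lat AND max_lat AND lon BETWEEN min_lon AND max_lon
# Here the same pre-filter is applied to a list of (lat, lon) records.

def bounding_rect(polygon):
    """Bounding rectangle (min_lat, max_lat, min_lon, max_lon) of a polygon."""
    lats = [p[0] for p in polygon]
    lons = [p[1] for p in polygon]
    return (min(lats), max(lats), min(lons), max(lons))

def within_rect(point, rect):
    lat, lon = point
    min_lat, max_lat, min_lon, max_lon = rect
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

polygon = [(30.0, -98.0), (31.0, -98.0), (31.0, -97.0), (30.0, -97.0)]
points = [(30.5, -97.5), (29.0, -97.5), (30.5, -99.0)]
rect = bounding_rect(polygon)
print([p for p in points if within_rect(p, rect)])  # → [(30.5, -97.5)]
```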
Harness the Power of Calgary Data and YXDB
The YXDB (.yxdb) and Calgary DB (.cydb) data formats use spatial indexing. As explained above, this can give the workflow a major efficiency boost. Therefore, when possible it is strongly advised to import data from the above two DB types.
The second advantage is that they both enable you to leverage the spatial index. As defined in the Help documentation, for Calgary files use the Calgary Join tool. If specifying a Calgary file, be aware that the Calgary spatial index uses 5 decimal places of precision for compression and speed, while the YXDB spatial index uses 6 decimal places. This adds a round-off error of up to 1.8 feet to Calgary indexes. In other words, it is possible for a point to be 1.8 feet inside a polygon and still be found as "outside."
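The 1.8-foot figure can be sanity-checked with a quick back-of-the-envelope calculation (assuming roughly 111,320 meters per degree of latitude, a standard approximation):

```python
# Back-of-the-envelope check of the 1.8-foot figure. Rounding a latitude to
# 5 decimal places moves it by at most 0.000005 degrees, and one degree of
# latitude is roughly 111,320 meters, so:

DEG_LAT_METERS = 111_320          # approximate meters per degree of latitude
METERS_PER_FOOT = 0.3048

max_shift_deg = 0.5 * 10 ** -5    # worst-case rounding error at 5 decimals
max_shift_feet = max_shift_deg * DEG_LAT_METERS / METERS_PER_FOOT
print(round(max_shift_feet, 2))   # → 1.83, matching the stated maximum
```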
In summary, using the YXDB and Calgary DB data formats has the advantage of the highly efficient spatial indexing.
Use Integrated Tool Input in Spatial Tools
For larger data sets, the option to Use Records from File or Database can be used for added speed. This also uses the spatial index and has the advantage that the entire dataset will not have to be read into memory for the workflow to start, as I/O is usually the biggest performance bottleneck for Alteryx.
Spatial Match tool (Help documentation)
The Report Map tool (master it here) allows you to create thematic maps by selecting a "Theme" field on the Data tab and then further defining the theme on the Layers tab, for example:
The above example creates a map of Texas showing average annual rainfall totals where orange is the least rainfall and blue the most:
Pretty nice, right? But what if you want to change the map and instead of applying the theme to the fill color for the polygons, you want to apply the theme to the outline and just show that?
That is a little trickier: while the Report Map tool allows you to adjust the outline color and size of your polygons, it doesn't automatically apply the theme to them, so a workaround needs to be built.
You could feed in each polygon as an individual layer, but that is difficult to manage - keeping the color gradient consistent and making sure the layers are in the right order. And what if a new rainfall range is introduced? You might have to adjust a couple of layers to account for it.
A better option is to turn the polygon outlines into polylines themselves. That allows you to apply a theme directly to the outlines.
In order to do this, we will use the following tools:
A RecordID is assigned so that we can pull the data apart and put it back together again.
The polygons are split into detailed regions using the Poly-Split tool and rows flagged as 'holes' are removed.
The polygons are split into points.
Those points are reassembled as a sequence polyline. To create the polyline, the data is grouped by the RecordID to keep each polyline separate. (A polyline contains multiple line segments; a line has one start point and one end point but can have any number of points in between. A polyline can be quite complex, as in the case of road systems or rivers.)
The sequence polylines are joined back to the original data set.
The reporting tools are used to create the maps, with rainfall range as the thematic field.
With that workaround you can create a map that looks like this:
For details on tool configurations, see the attached workflow.
The Report Map tool allows the user to define theme settings/ranges and to modify the size, icon, and color of the display for each range, and this can be done rather easily.

First, in the Map tool on the Data tab, pick the column you would like to theme on in the Thematic Field selection area. Once this column is selected, go to the Layers tab in the Map tool and expand the layer options for your theme layer. Click on the Theme and options will appear on the right.

To define your own theme settings, set the Tile Method to Unique Value, which exposes the Specific Values area. The Specific Values area is where you list the values you want to theme on. For this example, we are theming on the DMA_Name, so you would enter each of the DMA names you would like to theme. If you have a lot of ranges, you could also use a Summarize tool in your module and Group By your theme column, giving you a list of your theme values. Run the module once to populate the Browse tool; you can then click and hold on the first row, drag down to the row of your choosing to select them all, copy the rows with Ctrl+C, and paste them into the Specific Values area with Ctrl+V.

Once the values are entered, click Refresh and a layer option will appear for each of the theme values you set. Now that the theme values are layers, you can go to the Style option under each layer and change the Point Style, bring in a Custom Point, change the Size and Color, and modify the Outline Color and Outline Size. If you don't define all the values contained in the data you are bringing through, the Map tool also provides options on what to do with these.

This can also be done with number ranges with a few small changes. For Tile Method, choose Manual Tile. Enter the cutoff for each range that you would like to theme. Hit Refresh and the new layers for the theme ranges will be displayed, allowing you to modify each one.
Also note that layers are created for the ranges below and above what you specify in the Cutoff Values area.
Inset maps, depending on whether they are larger (zoomed in) or smaller (zoomed out) in scale, can provide some valuable detail or point of reference information respectively, while also providing a little more interest to your map at the same time. Creating an inset map is relatively simple, but does require several steps to get it to look right. In the example below, we will create an overview inset map (smaller scale) which will allow the map reader to see a much broader area around the main focus of the map, in turn providing a greater area for spatial reference.
To create an inset map, follow these steps:
Create your main map.
Copy the Report Map tool used in the main map and paste it on the canvas. This will serve as the core of the inset map and prevent you from having to recreate most of the map elements.
In the Settings tab of the new Report Map tool:
Change the Map Size of the overview map to something in the neighborhood of 2 x 2 inches. You don't want this too big so that it covers important parts of your main map, or too small to be of use.
Adjust the Expand Extent Minimum Width to meet your particular needs. In this example, I have set it to 75 miles. You can leave the default of 10% as the Minimum Width will override this.
In the Legend tab, choose '[None]' for the Position as we won't need or have room for a legend on the inset map.
Connect all of the layers that you want to show on the inset map.
Add any additional layers as points of reference. County boundaries have been added in this example.
Since the inset map needs to be much smaller in both size and scale, it is recommended that you make adjustments to the map layers, including the standard TomTom base layers.
Reduce the size of points and width of lines and polygons. This will make them much more legible considering the smaller size and scale of the inset.
Depending on the scale of your inset map, you may also want to disable some of the standard TomTom base layers found in the Layers tab. For example, at a relatively small scale, you would not necessarily want to show city parks, smaller cities, golf courses, etc. Now, some of these might automatically turn off depending on the scale of your inset, but it is a good practice to go through the base layers and turn off any of them which you feel will only clutter the overview map.
Join the map to the main map. Important: If you are producing multiple maps at once, you will want to join the inset map to the main map by the 'Group' field, using the 'Specific Field' option. Otherwise, you can join by Record Position.
Using the Overlay tool, add the inset to the main map. I personally prefer to reduce the Padding to 0.2 inches all the way around as I feel that the default 0.5 inches allows the inset to intrude too much into the main map. Same goes for the legend.
One of the standard outputs from the Report Map tool is a field called 'BoundingRect' (Bounding Rectangle). Add the bounding rectangle to the inset map (and format appropriately) to show the extent of the main map. See the red rectangle in the inset of the final map below.
Final Map (bounding rectangle in red):
Things to Consider:
An inset map can also be of larger scale in order to show more detail.
The inset map itself may cover over important details of the main map, such as nearby stores or competitors, for example. Make sure that this is acceptable to your use case, while also keeping in mind that the overview inset map can show these objects that are covered by the inset itself.
A border was added to the inset map, as well as to the main map and legend. You can take a look at the attached workflow to see how the Layout tool was used to do this. Details regarding this will be discussed in a separate, soon-to-be-released Knowledge Base article.
This article is part of the CS Macro Development Series. The goal of this series is to communicate tips, tricks, and the thought process that goes into developing good, dynamic macros.
Many macros need to be especially dynamic to allow the user to select which field to use in a very complex process. Take the Make Grid tool for example. It asks the user to specify a spatial object field, and only carries that field through its calculations. What comes out of the tool are two new fields, GridName and Grid, and none of the original fields at all.
I set out to build a macro just like this tool, except to generate a hexagonal grid. I started by building a normal workflow that could do this process, and when I was ready to convert it to a macro, I realized that I wasn't sure of the best way to enable it to choose this field dynamically.
There are two main ways to get data into your macro. Here's a quick summary of how they work:
The Macro Input tool has a checkbox in its configuration that reads Show Field Map.
If this is unchecked, then your macro won't do anything with the data - it will just stream in everything as is and trust that the stuff inside knows how to account for anything you throw at it.
If it is checked, then your macro will create drop down menus in its configuration window that will ask for the fields you have present in the Template Input. These drop down menus will let you select which fields to stream in to the macro in place of the ones in its template.
The field map needs all those drop downs to be filled out for it to do its thing, but if you want to make one of these inputs optional, just add (Optional) to the field name in your macro template.
1. Easy to set up! One checkbox and your template are all that's needed.
2. Makes sure only mapped fields enter the macro. This is good when converting a workflow to a macro because you don't need to worry about every form the input data stream could be in. If your stream has other fields, they will get tacked on to the stuff coming out of the macro.
Drop Down menus are an alternative way you can bring fields into your macro that offers a bit more control of the process.
They're particularly useful when connected to the anchor of a Macro Input tool.
You can then update a Select tool with the selected field to choose which field is being passed along.
1. Allows you to specify which fields to show to the user from a list of field types. (In this example, I am only allowing spatial objects.)
2. You can have a default selection populate the interface. (Here we will have any field starting with "SpatialObj" get selected automatically in the configuration of the macro.)
3. If you want something to be optional, you can use the [None] option.
"The Select Tool Trick"
If you make use of the Drop Down tool to bring in your data you'll need to update a Select tool. Here's a little trick that will make converting your workflows a lot easier.
First you'll want to uncheck *Unknown in the Select tool, since this will bring in every field not explicitly unchecked here. Then, have only the field you're selecting for checked, and navigate over to your Action tool and point it at the selected field.
Instead of having this repeated for every tool using this field, just have the field renamed in the Select tool, and refer to it by that name in all your downstream fields.
This turned out to be just what I needed for the Make Hex Grid macro, where I have a ton of stuff happening downstream and I only wanted one field to get through my Select tool.
Check out the example for a simplified version of this.
Those of you who have used the Report Map tool to create thematic maps have likely been unimpressed with the way Alteryx outputs the thematic legend text. Alteryx added two little-known tools to help: the Map Legend Splitter and the Map Legend Builder. With a little finesse, you can take the legend from completely unformatted to fully customized.
Not only does this make for an easier-to-read legend, it can also save valuable space on your map or document. The example above simply involves taking the default thematic legend text and replacing it with user-defined text for those layers.
Here's How You Do It
The entire workflow is illustrated after all of the steps below.
In the Report Map tool on the Legend tab, change Position to "Separate Field". This will output the map and legend as separate objects, allowing you to work with just the legend.
Add two Select tools after the map. In the first Select tool, select only the legend. In the second, only the map (and BoundingRect, if needed).
Add the Map Legend Splitter tool after the Select tool that selects the legend, and select "Legend".
Add a Record ID tool which will be used later to re-sort the legend back to its original order.
Add a Filter tool using the [ThemeName] field in order to extract just the records which make up the thematic part of the legend. For this example: [ThemeName] = “Block Groups”.
Create a Lookup table containing the Record IDs and the new text for the legend rows that you want to replace.
Join the lookup table to the legend stream using RecordID. Deselect the original “Text” field and rename the “NewText” field to “Text”. Deselect the second RecordID.
Union the new modified legend rows back with the non-modified legend rows using “Auto Config by Name”.
Sort the records back to their original position.
Use the Map Legend Builder to rebuild the new legend. The default configuration is all that is necessary.
From this point, you can choose to either overlay the legend on the map (using the Overlay tool), or join the legend back to the map (using “Join by Record Position” in the Join tool) and position the legend adjacent to the map as desired using the Layout tool.
Below is the entire workflow numbered by the steps above. Attached is a sample workflow created in 10.0.
In a recent article (Create an Indexed Map), I mentioned the indexed maps found in the Rand McNally Road Atlas. Well, also found in the Road Atlas are mileage charts, or, distance matrices. These matrices can be easily created in Alteryx. The example below will provide the distance between every store in a dataset to every store in that same dataset. Here's how we did it.
Distance Matrix/Mileage Chart Example
Create a Cartesian Join of your data
Using the Append Tool, create a Cartesian Join of all of the records in the dataset.
This will give you a combination of every record to every record in your dataset.
Don't forget to "Allow All Appends" (learn more about creating Cartesian Joins here).
Measure the distances between all record combinations.
Using the Distance Tool, measure the distances between the point combinations.
Flip the data into columns using the Cross Tab Tool.
Use a Select Tool to change the Store Number column to a string.
Doing this will prevent the Table Tool from adding commas to this field.
Use a Table Tool to create a formatted table.
Add a Column Rule to the Store Number field to format the column as bold.
Create a Row Rule in order to force one decimal place on the distance data.
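The steps above can be sketched in plain Python with hypothetical store coordinates: a Cartesian join of every store with every store, a great-circle distance for each pair, and a cross-tab into a matrix:

```python
# Sketch of a distance matrix: pair every store with every store (the
# Cartesian join), measure each pair, and pivot into a matrix. The store
# numbers and coordinates are hypothetical.
import math

def haversine_miles(a, b):
    """Great-circle distance between two (lat, lon) points in miles."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 3958.8 * 2 * math.asin(math.sqrt(h))

stores = {"101": (32.78, -96.80), "102": (29.76, -95.37), "103": (30.27, -97.74)}

# Cartesian join + distance + cross-tab, all in one nested comprehension.
matrix = {a: {b: round(haversine_miles(pa, pb), 1)
              for b, pb in stores.items()}
          for a, pa in stores.items()}
print(matrix["101"]["101"])  # → 0.0 on the diagonal
```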
Here's the workflow which you can also find attached:
That's it. Feel free to leave any comments or ask any questions.
Gathering and using spatial data requires a way to record the location of whatever is being observed (e.g., buildings) that preserves the spatial properties of the data (e.g., the distance between points in a dataset). Spatial reference systems are used to record the spatial properties of data in a meaningful and translatable way.
Spatial reference systems are typically made up of two components, a 2-dimensional coordinate system, which defines the rules of how points are assigned to observations of spatial data, and a 3-dimensional datum, which defines the origin and scale of the coordinate system. These two components in combination allow spatial phenomena to be meaningfully translated and recorded as data.
The coordinate system of a spatial reference system is used to define the location of a spatial object using a set of numbers.
Think about a simple Cartesian coordinate system like the one you might have seen in math class. Typically, an x-coordinate represents the horizontal position of an object, and a y-coordinate represents the vertical position. Together, these two coordinates describe the location of a point in the 2-dimensional plane.
Geographic coordinate systems work the same way, but for positions on the surface of the Earth.
For example, latitude and longitude are a coordinate system where latitude represents the north/south location of a point, and longitude describes the east/west.
For analyzing spatial data in a relatively small region of interest, geographic coordinate systems can be accurate while approximating the shape of the Earth as a sphere. For an analysis that covers a larger area, the approximation of an ellipsoid (which has two radii instead of one) is much more accurate (±0.3%). However, the Earth's true shape is not an ellipsoid either. To maximize the accuracy of analysis where spatial data spans a large area of the Earth, variation across the Earth's surface needs to be considered.
A geodetic datum (also called a datum for short) defines a reference frame for spatial coordinates, measurements, and calculations. It consists of a selected ellipsoid and a definition of the position of the ellipsoid relative to the center of the geoid (the smooth but irregular surface of the Earth). Some of the more well-known datums include WGS84, NAD83, OSGB36.
If you have worked with spatial data before, you may have heard of projections. Projections are a special type of coordinate system that specifically accounts for the distortion that occurs when translating a 3-d object (the Earth) to a 2-d representation (e.g., a map). Depending on the projection chosen, distortion will occur in one or more of shape, area, distance, and direction. It is important to note that each map projection preserves only one or two of these four spatial properties.
A common map projection is the Mercator projection, though it has limitations.
The Mercator projection was designed as a navigational tool for sailors. It preserves the shape of coastlines and direction (rhumb lines, useful for navigation) while distorting area. Moving away from the Equator toward the poles, landmasses appear much larger than they really are. For instance, Greenland, owing to its closeness to the North Pole, appears roughly the same size as Africa.
This shows how important the choice and understanding of the projection can be to get accurate results.
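As a quick illustration of that distortion: the Mercator projection's linear scale grows as the secant of latitude, so apparent area is inflated by the secant squared. A short sketch:

```python
# The Mercator projection's area distortion grows with latitude: linear
# scale is proportional to sec(latitude), so area is inflated by its square.
import math

def mercator_area_inflation(lat_deg):
    """Factor by which Mercator exaggerates area at a given latitude."""
    return 1 / math.cos(math.radians(lat_deg)) ** 2

print(round(mercator_area_inflation(0), 1))   # → 1.0 at the Equator
print(round(mercator_area_inflation(70), 1))  # → 8.5 near Greenland's latitude
```

This is why Greenland, most of which lies above 60°N, looks comparable to Africa on a Mercator map despite being about 14 times smaller in true area.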
How does Alteryx Designer handle Spatial Data?
All spatial data read into Alteryx is automatically transformed to a WGS84 datum with a latitude and longitude coordinate system.
Spatial calculations in Designer are performed on a sphere with a radius between the polar and equatorial radii of the Earth. This does result in some distortion in calculations. Near the equator, distance calculations can be 0.2% smaller than their actual size, and near the poles, distance calculations can be 0.2% larger. Within the US and Europe, distance calculations are more accurate. Alteryx does not use any projected coordinates when performing spatial calculations.
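The roughly ±0.2% figure reflects the spread between the Earth's equatorial and polar radii. A quick sketch (standard radius values; this is not Alteryx's internal code) shows how the radius choice moves a distance calculation:

```python
# Sketch of why a single spherical radius yields small systematic errors:
# the same angular separation maps to different ground distances depending
# on which Earth radius you assume. Radii in kilometers (standard figures).
import math

EQUATORIAL_R = 6378.1
POLAR_R = 6356.8

def arc_km(angle_deg, radius_km):
    """Ground distance for an angular separation on a sphere of given radius."""
    return math.radians(angle_deg) * radius_km

one_degree_equatorial = arc_km(1.0, EQUATORIAL_R)
one_degree_polar = arc_km(1.0, POLAR_R)
spread = (one_degree_equatorial - one_degree_polar) / one_degree_polar
print(round(spread * 100, 2))  # → 0.34 (% spread between the two extremes)
```

A radius chosen between the two extremes splits that spread, which is consistent with errors of a few tenths of a percent in either direction.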
Projections can be specified when writing data out to spatial formats via the Edit Projection dialog box. This feature is supported for the following spatial file formats: MID/MIF, TAB, SHP, Oracle, and ESRI Personal Geodatabase.
All maps from the Report Map tool are drawn in Spherical Mercator projection.
Often in spatial analytics, you'll need to find the spatial object closest to another. The most intuitive way to do that is with the Find Nearest tool, which finds the shortest distance between spatial objects in one file (targets) and a user-specified number of objects in another file (universe objects). This tool does an amazing job of simplifying the process of finding the nearest object, but it can also add significant time to your workflow.
I often elect for an alternative method that has trimmed significant run time off many of my spatial workflows: use the Append Fields tool to duplicate your target spatial objects for each universe object, then use the Distance tool to calculate DriveTime. After that's done, simply add a Summarize tool, group by the target, and take the "Min" DriveTime for each. You could also sort ascending by DriveTime and sample the first record per target by grouping on that field. There is a caveat, however: the Append Fields tool drastically increases the number of records in your input and will only speed up the process if there are significantly more targets than universe objects.
These methods are distinct in that the Find Nearest Tool must do a DriveTime run from each target spatial object to each universe spatial object (200 DriveTime passes in Example 1) whereas the Distance Tool approach already has all the points available to it and recognizes that there are many more targets than universes. As a result, it runs the reverse-direction DriveTime calculation starting from each universe to all target spatial objects at once (5 DriveTime passes in Example 1). If it is quicker for you to use the Find Nearest Tool, be sure to shed the spatial objects you no longer need in your workflow as soon as possible, even inside the Find Nearest Tool’s configuration if possible. That could also reduce your run time due to the sheer size of the spatial object datatype. Below are some examples of the methods. They can also be seen in the attached workflow, AppendAlternative.yxzp.
Universe Objects: 5
Attempt 1: Find Nearest Tool
Run Time: 8 minutes 13 seconds
Attempt 2: Append Fields Tool and Summarize
Run Time: 11.9 seconds
Universe Objects: 52
Attempt 1: Find Nearest Tool
Run Time: 49.7 seconds
Attempt 2: Append Fields Tool and Summarize
Run Time: 12.6 seconds
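The Append Fields + Summarize pattern above can be sketched in plain Python. Euclidean distance stands in for DriveTime purely for illustration, and all names and coordinates are hypothetical:

```python
# Sketch of the Append Fields + Summarize pattern: pair every target with
# every universe object, compute a distance for each pair, then take the
# minimum per target. Euclidean distance stands in for DriveTime.
from itertools import product

targets = {"T1": (0, 0), "T2": (10, 10)}
universe = {"U1": (1, 1), "U2": (9, 9), "U3": (20, 20)}

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# "Append Fields": the full cross-product of targets and universe objects.
pairs = [(t, u, dist(tp, up))
         for (t, tp), (u, up) in product(targets.items(), universe.items())]

# "Summarize, group by target, take Min": keep the nearest per target.
nearest = {}
for t, u, d in pairs:
    if t not in nearest or d < nearest[t][1]:
        nearest[t] = (u, d)
print(nearest["T1"][0])  # → U1
print(nearest["T2"][0])  # → U2
```

Note that `pairs` holds every target-universe combination, which is why this approach only pays off when the cross-product stays manageable relative to the Find Nearest tool's per-target passes.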
Multi-line labels can be useful when you want to display multiple data points. Instead of using one long concatenated string, you can tell Alteryx to start a new line whenever it encounters a user-specified wrap character. To create a multi-line label, simply follow these steps:
Use a Formula tool to concatenate the fields that you want to display, adding a wrap character between each value. In this example, we are using the backslash as the wrap character.
In the Report Map tool in the 'Data' tab, select the new field you created in Step 1 for the 'Label Field'.
In the 'Layers' tab, check the 'Wrap Character' box and specify the wrap character used in the previous step.
The results should look something like this:
Things to Consider:
Make sure that the character you use as the wrap character is not present in any of the data that you are including in the label. Otherwise, you will get more lines than you bargained for, along with the possibility for some strange data. There is a section of the attached workflow that performs this check.
Though this would be complete overkill, I tested a 50 line label and it worked. In other words, there doesn't seem to be a practical limit to the number of lines you can create for your label.
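The wrap-character safety check described above can be sketched as follows (hypothetical field values; the attached workflow implements the same idea with Alteryx tools):

```python
# Before building the label, verify the chosen wrap character never appears
# in the source fields; otherwise the label will split in unexpected places.
WRAP = "\\"  # backslash wrap character, as in the example above

def build_label(fields, wrap=WRAP):
    for f in fields:
        if wrap in f:
            raise ValueError(f"wrap character {wrap!r} found in field {f!r}")
    return wrap.join(fields)

print(build_label(["Store 42", "Dallas, TX", "Open 9-5"]))
```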
Please see the attached completed workflow.
Thanks for reading!
Business Problem: Thematic maps are often used to display data geographically with colored or shaded themes, but sometimes users wish to see the data differently. For this purpose, dot density mapping has become a frequently requested feature for map rendering in Alteryx. Dot density creation is possible with the spatial functions included in the Formula tool. One of these functions, ST_RandomPoint, randomly disperses a point within a given polygon. Using it, anyone can create a macro to produce the data required to generate a dot density map.
Actionable Results: Easily create dot density thematic maps
Overview: It can often be convenient to view thematic maps as clustered points. This type of visual output is a logical and accurate representation of data occurring in a non-continuous distribution.
Vertical: Any
Key Tools Used: Formula Tool (ST_RandomPoint spatial function), Generate Rows
Required Input: As inputs, the Dot Density macro requires two fields: a geography with an associated value and a configuration of the number of dots per value. Determining the appropriate number of dots per value may require some trial and error to produce desirable results. Knowing the min, max, and median values associated with the base geographies will help you determine an optimal dots-per-value setting. This, coupled with the size of dots on the map, will greatly affect the aesthetic of the mapping.
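For intuition, here is a minimal Python sketch of what a random-point-in-polygon function must do: sample within the polygon's bounding box and reject points that fall outside. This is an illustrative rejection-sampling approach, not Alteryx's actual ST_RandomPoint implementation:

```python
# Rejection sampling of a random point inside a polygon: draw uniform
# points in the bounding box and keep the first one inside the polygon
# (membership tested with the standard ray-casting algorithm).
import random

def point_in_polygon(pt, poly):
    """Ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def random_point(poly, rng=random):
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    while True:  # rejection sampling inside the bounding box
        pt = (rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys)))
        if point_in_polygon(pt, poly):
            return pt

triangle = [(0, 0), (4, 0), (2, 3)]
print(point_in_polygon(random_point(triangle), triangle))  # → True
```

A dot density macro would call such a function once per dot, with the dot count for each polygon derived from its data value.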
We recently had a user who was looking to distinguish polygons from each other using dashed lines, a style not currently available in the Report Map tool. But that's alright - we can use the opportunity to showcase how you can be creative in Alteryx with a few tools.
To choose which polygons to modify, you can specifically filter out the record IDs you want to change, or use the Sample tool to pull random records or 1 of every N records. (FYI - if you use the record ID, you will want to remove that column after you have split the records, because a record ID is used again later in the mapping process.)
Once you have selected the polygons you want to use, you will need to break those polygons into individual points using the Poly-Split tool. Here, choose Polygon field and Split to Points. Splitting the polygon into points will allow you to adjust the polygon by each point.
Then, you'll want to remove some of the points to create the "dotted line" effect using the Sample tool. In the tool's settings, select 1 of every N records (you can change N=3 to any number that gives the spacing effect you want).
Now that you have removed some points, you'll want to do a few things to give the points a grouping effect. In order to do this, add another Record ID tool, then filter the record ID by odd and even numbers. You can do this using the Filter tool and using the expression mod([Record ID],2)>0. Then add Record ID tools to the T and F anchors to complete the grouping effect when you add them both to a Union tool.
Grouping the points allows you to build your Polylines. After the Union tool, add a Poly-Build tool. The Build method will be Sequence Polyline using the SpatialObj and the RecordID, as the Source and Group fields, respectively.
Your final step is to add a Map tool and pull in the data from your Poly-Build tool, as well as the original centroid points of the polygons from which you created the split lines, and finally the remaining polygons you want represented with solid outlines. When configuring the Map tool, remember that the objects coming from the Poly-Build tool are actually lines, not polygons. Your layering will need Points, Lines, and Polygons to complete the map.
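The point-thinning and odd/even grouping steps above can be sketched in plain Python (integers stand in for the split polygon points; the RecordID logic is the same):

```python
# Dashed-outline sketch: keep 1 of every N points, then split the remainder
# into alternating groups so consecutive runs form separate polylines
# (the gaps between groups are what create the dashes).
points = list(range(30))              # stand-ins for polygon vertex points

N = 3
kept = points[::N]                    # Sample tool: 1 of every N records

odd = [p for i, p in enumerate(kept) if i % 2 > 0]    # mod([RecordID], 2) > 0
even = [p for i, p in enumerate(kept) if i % 2 == 0]  # the Filter tool's F anchor

print(len(kept), len(odd), len(even))  # → 10 5 5
```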